Abstract:
The fundamental goal of this thesis is the determination of the isospin dependence of the Ar+Ni fusion-evaporation cross section. Three Ar isotope beams, with energies of about 13 AMeV, have been accelerated and impinged onto isotopically enriched Ni targets in order to produce Pd nuclei with mass numbers varying from 92 to 104. The measurements have been performed with the high-performance 4pi detector INDRA, coupled with the magnetic spectrometer VAMOS. Although the results are still preliminary, the behaviour of the measured fusion-evaporation cross sections hints at a possible isospin dependence.
Abstract:
This thesis is about three major aspects of the identification of top quarks. First comes the understanding of their production mechanism, their decay channels, and how to translate theoretical formulae into programs that can simulate such physical processes using Monte Carlo techniques. In particular, the author has been involved in the introduction of the POWHEG generator in the framework of the ATLAS experiment. POWHEG is now fully used as the benchmark program for the simulation of ttbar pair production and decay, along with MC@NLO and AcerMC: this is shown in chapter one. The second chapter illustrates the ATLAS detector and its sub-systems, such as the calorimeters and muon chambers. It is very important to evaluate their efficiency in order to fully understand what happens during the passage of radiation through the detector, and to use this knowledge in the calculation of final quantities such as the ttbar production cross section. The last part of this thesis concerns the evaluation of this quantity deploying the so-called "golden channel" of ttbar decays, yielding one energetic charged lepton, four particle jets and a significant amount of missing transverse energy due to the neutrino. The most important systematic errors arising from the various parts of the calculation are studied in detail. The jet energy scale, trigger efficiency, Monte Carlo models, reconstruction algorithms and the luminosity measurement are examples of what can contribute to the uncertainty on the cross section.
Abstract:
In this thesis we present a study of the D0 meson (through one of its two-body decay channels, D0 → Kπ) using data collected by the CDF II experiment at the Tevatron pp̄ collider at Fermilab. In particular, we measured the differential production cross section as a function of the transverse momentum, down to pT = 1.5 GeV/c.
A Phase Space Box-counting based Method for Arrhythmia Prediction from Electrocardiogram Time Series
Abstract:
Arrhythmia is a class of cardiovascular disease that accounts for a large number of deaths and can pose an untreatable danger. It is a life-threatening condition originating from the disorganized propagation of electrical signals in the heart, resulting in desynchronization among its different chambers. Fundamentally, synchronization means that the phase relationship of the electrical activities between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, the coherent phase relationship breaks down, resulting in a chaotic rhythm that affects the regular pumping mechanism of the heart. This phenomenon was explored using phase space reconstruction, a standard technique for analysing time series generated by nonlinear dynamical systems. In this project a novel index is presented for predicting the onset of ventricular arrhythmias. Continuously captured long-term ECG recordings were analysed up to the onset of arrhythmia with the phase space reconstruction method, obtaining 2-dimensional images that were then analysed by the box-counting method. The method was tested on ECG data of three different kinds, normal rhythm (NR), Ventricular Tachycardia (VT) and Ventricular Fibrillation (VF), extracted from the PhysioNet ECG database. Statistical measures such as the mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) of the box counts of the phase space diagrams are derived for a sliding window of 10 beats of the ECG signal. From these statistical analyses, a threshold was derived as an upper bound on the coefficient of variation (CV) of the box counts of the ECG phase portraits, capable of reliably predicting the impending arrhythmia long before its actual occurrence.
As future work, it is planned to validate this prediction tool on a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for them, in order to confirm its clinical applicability.
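The pipeline described above, delay embedding of the ECG into a 2-D phase portrait, box counting on the resulting image, and a coefficient of variation over sliding windows, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis code: the embedding delay, the grid resolution and the use of a fixed-length sample window (rather than the 10-beat window of the thesis) are all our choices, and the function names are hypothetical.

```python
import numpy as np

def phase_portrait(ecg, delay):
    # time-delay embedding into 2-D: points (x(t), x(t + delay))
    return np.column_stack((ecg[:-delay], ecg[delay:]))

def box_count(points, n_boxes=32):
    # count occupied cells on an n_boxes x n_boxes grid over the portrait
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against flat axes
    idx = np.floor((points - lo) / span * (n_boxes - 1)).astype(int)
    return len({tuple(p) for p in idx})

def cv_over_windows(ecg, window, delay=8):
    # coefficient of variation sigma/mu of box counts over sliding windows
    counts = np.array([box_count(phase_portrait(ecg[s:s + window], delay))
                       for s in range(0, len(ecg) - window + 1, window)],
                      dtype=float)
    return counts.std() / counts.mean()
```

A regular rhythm traces nearly the same closed curve in every window, so its box counts barely vary and the CV stays low; the index flags windows where the CV exceeds the derived threshold.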
Abstract:
This PhD thesis presents two measurements of the differential production cross section of top anti-top quark pairs (ttbar) decaying in the lepton+jets final state. The normalized cross section is measured as a function of the top transverse momentum and of the ttbar mass, transverse momentum and rapidity, using the full 2011 proton-proton (pp) dataset collected by ATLAS at a centre-of-mass energy of √s = 7 TeV and corresponding to an integrated luminosity of L = 4.6 fb^-1. The cross section is also measured at particle level as a function of the hadronic top transverse momentum for highly energetic events, using the full 2012 dataset at √s = 8 TeV with L = 20 fb^-1. The measured spectra are fully corrected for detector efficiency and resolution effects and are compared to several theoretical predictions, showing reasonably good agreement that varies across the spectra.
Abstract:
The main objective of this thesis is to explore the short- and long-run causality patterns in the finance-growth nexus and the finance-growth-trade nexus before and after the global financial crisis, in the case of Albania. To this end we use quarterly data on real GDP, 13 proxy measures of financial development and the trade openness indicator for the periods 1998Q1-2013Q2 and 1998Q1-2008Q3. Causality patterns are explored in a VAR-VECM framework. For this purpose we proceed as follows: (i) testing for the order of integration of the variables; (ii) cointegration analysis; and (iii) performing Granger causality tests in a VAR-VECM framework. In the finance-growth nexus, the empirical evidence suggests a positive long-run relationship between finance and economic growth, with causality running from financial development to economic growth. The global financial crisis seems not to have affected the direction of causality in the finance-growth nexus, thus supporting the finance-led growth hypothesis in the long run in the case of Albania. In the finance-growth-trade openness nexus, we find evidence of a positive long-run relationship between the variables, with the direction of causality depending on the proxy used for financial development. When the pre-crisis sample is considered, we find evidence of causality running from financial development and trade openness to economic growth. The global financial crisis seems to have somewhat affected the direction of causality in the finance-growth-trade nexus, which has become sensitive to the proxy used for financial development. In the short run, the empirical evidence suggests a clear unidirectional relationship between finance and growth, with causality mostly running from economic growth to financial development. When we consider the pre-crisis subsample, the results are mixed, depending on the proxy used for financial development. The same results are confirmed when trade openness is taken into account.
Abstract:
The dominant process in hard proton-proton collisions is the production of hadronic jets. These sprays of particles are produced by colored partons, which are struck out of their confinement within the proton. Previous measurements of inclusive jet cross sections have provided valuable information for the determination of parton density functions and allow for stringent tests of perturbative QCD at the highest accessible energies.

This thesis presents a measurement of inclusive jet cross sections in proton-proton collisions using the ATLAS detector at the LHC at a center-of-mass energy of 7 TeV. Jets are identified using the anti-kt algorithm with jet radii of R=0.6 and R=0.4. They are calibrated using a dedicated pT- and eta-dependent jet calibration scheme. The cross sections are measured for 40 GeV < pT <= 1 TeV and |y| < 2.8 in four bins of absolute rapidity, using data recorded in 2010 corresponding to an integrated luminosity of 3 pb^-1. The data is fully corrected for detector effects and compared to theoretical predictions calculated at next-to-leading order including non-perturbative effects. The theoretical predictions are found to agree with data within the experimental and theoretical uncertainties.

The ratio of cross sections for R=0.4 and R=0.6 is measured, exploiting the significant correlations of the systematic uncertainties, and is compared to recently developed theoretical predictions. The underlying event can be characterized by the amount of transverse momentum per unit rapidity and azimuth, called rho_ue. Using analytical approaches to the calculation of non-perturbative corrections to jets, rho_ue at the LHC is estimated using the ratio measurement. A feasibility study of a combined measurement of rho_ue and the average strong coupling in the non-perturbative regime, alpha_0, is presented, and proposals for future jet measurements at the LHC are made.
Abstract:
The aim of this work is to provide a precise and accurate measurement of the 238U(n,gamma) reaction cross section. This reaction is of fundamental importance for the design calculations of nuclear reactors, since it governs the behaviour of the reactor core. In particular, fast neutron reactors, which are experiencing growing interest for their ability to burn radioactive waste, operate in the high-energy region of the neutron spectrum. In this energy region, inconsistencies of up to 15% are present between existing measurements, and the most recent evaluations disagree with each other. In addition, the assessment of nuclear data uncertainty performed for innovative reactor systems shows that the uncertainty in the radiative capture cross section of 238U should be further reduced to 1-3% in the energy region from 20 eV to 25 keV. To this purpose, identified by the Nuclear Energy Agency as a priority nuclear data need, complementary experiments, one at the GELINA and two at the n_TOF facility, were scheduled within the ANDES project of the 7th Framework Programme of the European Commission. This work presents the results of one of the 238U(n,gamma) measurements performed at the n_TOF facility at CERN, carried out with a detection system consisting of two liquid scintillators. The very accurate cross section from this work is compared with the results obtained from the other measurement performed at the n_TOF facility, which exploits a different and complementary detection technique. The excellent agreement between the two data sets indicates that they can contribute to reducing the cross section uncertainty down to the required 1-3%.
Abstract:
Top quark studies play an important role in the physics program of the Large Hadron Collider (LHC). The energy and luminosity reached allow the acquisition of a large amount of data, especially in kinematic regions never studied before. This thesis presents the measurement of the ttbar differential production cross section on data collected by ATLAS in 2012 in proton-proton collisions at \sqrt{s} = 8 TeV, corresponding to an integrated luminosity of 20.3 fb^{-1}. The measurement is performed for ttbar events in the semileptonic channel in which the hadronically decaying top quark has a transverse momentum above 300 GeV. The hadronic top quark decay is reconstructed as a single large-radius jet and identified using jet substructure properties. The final differential cross section has been compared with several theoretical distributions, showing a discrepancy of about 25% between data and predictions, depending on the MC generator. Furthermore, the kinematic distributions of the ttbar production process are very sensitive to the choice of the parton distribution function (PDF) set used in the simulations and could provide constraints on the gluon PDF. In particular, this thesis performs a systematic study of the proton PDFs, varying several PDF sets and checking which one best describes the experimental distributions. The boosted techniques applied in this measurement will be fundamental in the next data taking at \sqrt{s} = 13 TeV, when a large number of heavy particles with high momentum will be produced.
Abstract:
Quantum chromodynamics is the underlying theory of the strong interaction and can be divided into two regimes. Hard scattering processes, such as dijet production at high invariant masses, can be treated and calculated perturbatively. In scattering processes with low momentum transfer, on the other hand, perturbation theory is no longer applicable and phenomenological models are used for predictions. The ATLAS experiment at the Large Hadron Collider at CERN makes it possible to study QCD processes at both high and low momentum transfer. This thesis presents two analyses, each focusing on one of the two regimes of QCD: The measurement of event-shape variables in inelastic proton-proton collisions at a centre-of-mass energy of √s = 7 TeV probes the transverse energy flow in hadronic events. The measurement of the double-differential dijet cross section, as a function of the invariant mass and of the rapidity difference of the two jets with the highest transverse momenta, can be used to test theoretical predictions. Proton-proton collisions at √s = 8 TeV recorded during the 2012 data taking, corresponding to an integrated luminosity of 20.3 fb^-1, were analysed.
Abstract:
Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether through automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in theory and practice. Consequently, this thesis deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make extensive use of these alignments, hence the need for fast implementations. This thesis comprises three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. This includes a new computation scheme for determining local alignment scores using the z-normalized Euclidean distance, which can be deployed on any parallel hardware with support for fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the lower-bound cascade of the UCR suite for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations allow computation one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a full relaxation of the penalty matrix. Further improvements include invariance against trends on the measurement axis and against uniform scaling on the time axis. Furthermore, an extension of GEM to multi-shape segmentation is discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually limited to real-valued measurement data. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Furthermore, memory-efficient representations and group-compatible extensions of elastic measures are discussed.
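The core primitive of the UCR suite mentioned above, subsequence alignment under the z-normalized Euclidean distance, can be stated in a few lines of NumPy. This brute-force sketch is exactly the computation that the FFT-based scheme and the CUDA port accelerate; the function names are illustrative, not those of the thesis code.

```python
import numpy as np

def znorm(x):
    # z-normalize: zero mean, unit variance (guard against flat windows)
    s = x.std()
    return (x - x.mean()) / s if s > 0 else np.zeros_like(x)

def subsequence_search(ts, query):
    """Return (index, distance) of the best-matching window of ts,
    comparing every z-normalized window against the z-normalized query."""
    m = len(query)
    q = znorm(query)
    dists = np.array([np.linalg.norm(q - znorm(ts[i:i + m]))
                      for i in range(len(ts) - m + 1)])
    return int(dists.argmin()), float(dists.min())
```

Because both the query and each window are z-normalized, the match is invariant under scaling and offset of the raw signal, which is why a scaled, shifted copy of the query is found with distance near zero.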
Abstract:
The total hadronic cross section plays a fundamental role in the LHC physics programme. A calculation of this parameter, fundamental within the theory of strong interactions, is not possible because the perturbative approach is not applicable. Nevertheless, the cross section can be estimated, or at least bounded, thanks to a number of relations, such as the Optical Theorem. In this context, the ALFA detector (An Absolute Luminosity For ATLAS) exploits the Optical Theorem to determine the total cross section by measuring the rate of elastic events in the forward direction. Such an approach requires an accurate luminosity measurement method under difficult experimental conditions, characterized by instantaneous luminosity values up to 7 orders of magnitude lower than normal LHC conditions. The aim of this thesis is the determination of the integrated luminosity of two high-β* runs, using several event-counting algorithms of the BCM and LUCID detectors. Particular attention has been devoted to background subtraction and to the study of systematic uncertainties. The integrated luminosity values obtained are L = 498.55 ± 0.31 (stat) ± 16.23 (sys) μb^(-1) and L = 21.93 ± 0.07 (stat) ± 0.79 (sys) μb^(-1) for the two runs, respectively. These values will be provided to the physics community in charge of the measurement of the elastic and total proton-proton cross sections. In Run II of the LHC, the total proton-proton cross section will be estimated at a centre-of-mass energy of 13 TeV, to better understand its energy dependence in such a regime. The tools used and the experience gained in this thesis will be fundamental for this purpose.
Abstract:
The aim of this thesis is the measurement of the top-antitop pair production cross section in the fully hadronic channel. The measurement uses data collected by the CMS experiment in proton-proton collisions at the LHC at a centre-of-mass energy of 13 TeV. The dataset corresponds to an integrated luminosity of 2.474 fb^-1. The analysis starts by selecting events that satisfy certain conditions (e.g. trigger, kinematic cuts, six or more jets, at least 2 jets from the hadronization of two bottom quarks) in order to increase the signal purity while rejecting as many background events as possible. Next, the top-quark mass is reconstructed using a kinematic fit, and the estimate of the signal and background yields is based on the distributions of this mass. Finally, a likelihood fit yields the value of the cross section: σ_ttbar = 893 ± 57 (stat) ± 104 (syst) pb. This result is in good agreement with the theoretical value of 832 pb and with other CMS measurements performed in different channels.
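The final step, extracting the cross section from a binned likelihood fit to the reconstructed top-mass distribution, can be illustrated with a toy model. Everything here (the templates, the luminosity-times-efficiency factor, the background yield) is invented for the sketch; only the structure of the fit mirrors the procedure in the abstract.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# toy binned templates of a reconstructed top-mass distribution (made up)
sig_shape = np.array([0.10, 0.30, 0.40, 0.20])  # signal fraction per bin
bkg_shape = np.array([0.40, 0.30, 0.20, 0.10])  # background fraction per bin
lumi_eff = 10.0   # hypothetical L * acceptance * efficiency, events per pb
n_bkg = 200.0     # hypothetical fixed background yield

rng = np.random.default_rng(1)
true_xsec = 50.0  # pb, used only to generate pseudo-data
data = rng.poisson(true_xsec * lumi_eff * sig_shape + n_bkg * bkg_shape)

def nll(xsec):
    """Poisson negative log-likelihood (constant terms dropped)."""
    mu = xsec * lumi_eff * sig_shape + n_bkg * bkg_shape
    return float(np.sum(mu - data * np.log(mu)))

fit = minimize_scalar(nll, bounds=(0.0, 200.0), method="bounded")
```

In the real analysis the background normalization and the systematic uncertainties are additional fit parameters or nuisance terms; the one-parameter scan above only shows how the cross section enters the expected bin contents.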
Abstract:
This thesis gives a general overview of time-series databases and their management systems. The focus then turns to the InfluxDB DBMS. Finally, a project built on InfluxDB is presented.