742 results for "Spectrométrie de masse"


Relevance:

10.00%

Publisher:

Abstract:

Within this work, an aircraft-based laser ablation single-particle mass spectrometer was designed from scratch, built, characterized, and deployed on several field campaigns. ALABAMA (Aircraft-based Laser ABlation Aerosol MAss Spectrometer) can probe the chemical composition and size of individual aerosol particles in the submicron range (135 - 900 nm).

After being focused in an aerodynamic lens, the aerodynamic diameter of each particle is first determined from a time-of-flight measurement between two continuous-wave lasers. The detected and classified particles are then individually evaporated and ionized by a triggered laser pulse. The ions are separated according to their mass-to-charge ratio and detected in a bipolar time-of-flight mass spectrometer. The resulting mass spectra provide detailed insight into the chemical structure of the individual particles.

The entire instrument was designed for deployment on the new high-altitude research aircraft HALO and other mobile platforms. To this end, all components were fitted into a rack of less than 0.45 m³. The complete instrument including the rack weighs less than 150 kg and meets the strict safety regulations for operation aboard research aircraft, making ALABAMA the smallest and lightest instrument of its kind.

After assembly, the properties and limits of all components were characterized in detail in the laboratory and during field campaigns. First, the properties of the particle beam, such as beam width and divergence, were investigated thoroughly. These results were important to validate the subsequent measurements of the detection and ablation efficiencies. The efficiency measurements showed that, depending on their size and composition, up to 86% of the sampled aerosol particles are successfully detected and size-classified. Up to 99.5% of the detected particles could be ionized and thus analyzed chemically. These very high efficiencies are crucial for measurements at high altitude, where particle concentrations can be very low.

The bipolar mass spectrometer achieves average mass resolutions of up to R = 331. In laboratory and field measurements, elements such as Au, Rb, Co, Ni, Si, Ti, and Pb could thereby be identified unambiguously from their isotope patterns.

First measurements aboard an ATR-42 research aircraft during the MEGAPOLI campaign in Paris yielded a comprehensive data set of aerosol particles within the planetary boundary layer. ALABAMA operated reliably and precisely under harsh physical conditions (temperatures > 40°C, accelerations of +/- 2 g). Based on characteristic signals in the mass spectra, the particles could be reliably divided into 8 chemical classes, some of which could be attributed to specific sources. For example, particles with strong sodium and potassium signatures were unambiguously traced back to biomass burning.

ALABAMA is thus a valuable instrument for characterizing particles in situ and for addressing a broad range of scientific questions, particularly in atmospheric research.
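The two-laser sizing step can be sketched as follows: the particle velocity follows from the time of flight between the two continuous-wave detection lasers, and the aerodynamic diameter is read off a velocity-diameter calibration curve. The laser spacing and calibration points below are invented for illustration; they are not the ALABAMA values.

```python
import numpy as np

# Assumed distance between the two cw detection lasers (hypothetical value)
LASER_SPACING_M = 0.05

# Hypothetical calibration: slower particles leaving the aerodynamic lens are larger
CAL_VELOCITY_M_S = np.array([220.0, 180.0, 150.0, 120.0, 100.0])
CAL_DIAMETER_NM = np.array([135.0, 250.0, 400.0, 650.0, 900.0])

def aerodynamic_diameter(time_of_flight_s):
    """Map a measured time of flight to an aerodynamic diameter in nm."""
    v = LASER_SPACING_M / time_of_flight_s
    # np.interp needs ascending x values, so reverse the calibration arrays
    return np.interp(v, CAL_VELOCITY_M_S[::-1], CAL_DIAMETER_NM[::-1])
```

In a real instrument the calibration curve is measured with size-selected test particles; here it is simply a lookup table.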


Today, high-precision mass measurements with Penning traps provide deep insight into the fundamental properties of nuclear matter. To this end, the free cyclotron frequency of an ion stored in a strong, homogeneous magnetic field is determined. At the ISOLTRAP mass spectrometer at ISOLDE / CERN, the masses of short-lived radioactive nuclides with half-lives down to a few tens of ms can be determined with an uncertainty on the order of 10^-8. ISOLTRAP consists of a radio-frequency quadrupole for accumulating the ions delivered by ISOLDE, plus two Penning traps for purifying the ion samples and for the mass measurement. Within this work, the masses of neutron-rich xenon and radon isotopes (138-146Xe and 223-229Rn) were measured. For eleven of them the mass was determined directly for the first time; 229Rn was even observed for the first time in the course of this experiment, and its half-life was determined to be approximately 12 s. Since the mass of a nuclide reflects all interactions within the nucleus, it is unique for each nuclide. One of these interactions, the interaction between protons and neutrons, gives rise, for example, to deformation. The aim of this work is to find a connection between collective effects such as deformation and double differences of binding energies, so-called deltaVpn values. Particularly in the regions investigated here, the deltaVpn values show a very unusual behavior that cannot be interpreted with simple arguments. One possible explanation is the occurrence of octupole deformation in these regions. Nevertheless, a quantitative description of deltaVpn values that takes the effect of such deformations into account is not yet possible with modern theories.
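The measurement principle rests on the free cyclotron frequency nu_c = qB / (2*pi*m) of the stored ion; the unknown mass then follows from a frequency comparison with a reference ion of well-known mass in the same field. A minimal sketch (the field strength and masses in the usage test are placeholders, not ISOLTRAP values):

```python
import math

E_CHARGE = 1.602176634e-19       # elementary charge in C (exact, SI 2019)
ATOMIC_MASS = 1.66053906660e-27  # atomic mass constant in kg

def cyclotron_frequency(charge_state, mass_u, b_field_t):
    """Free cyclotron frequency nu_c = q B / (2 pi m) of a stored ion, in Hz."""
    q = charge_state * E_CHARGE
    m = mass_u * ATOMIC_MASS
    return q * b_field_t / (2.0 * math.pi * m)

def mass_from_ratio(nu_ref, nu_ion, mass_ref_u):
    """Comparing nu_c of the ion of interest with that of a reference ion
    (same charge state, same B field) gives m = (nu_ref / nu_ion) * m_ref."""
    return (nu_ref / nu_ion) * mass_ref_u
```

The frequency ratio cancels the magnetic field, which is why the method reaches relative uncertainties at the 10^-8 level when the field is stable between the two measurements.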


Nuclear masses are an important quantity for studying nuclear structure, since they reflect the sum of all nucleonic interactions. Many experimental possibilities exist to measure masses precisely, among which the Penning trap is the tool that reaches the highest precision. Moreover, absolute mass measurements can be performed using carbon, the atomic-mass standard, as a reference. The new double-Penning-trap mass spectrometer TRIGA-TRAP has been installed and commissioned within this thesis work; it is the very first experimental setup of this kind located at a nuclear reactor. New technical developments have been carried out, such as a reliable non-resonant laser ablation ion source for the production of carbon cluster ions, and others are still in progress, such as a non-destructive ion detection technique for single-ion measurements. Neutron-rich fission products, which are important for nuclear astrophysics and especially the r-process, will be made available by the reactor. Prior to the on-line coupling to the reactor, TRIGA-TRAP has already performed off-line mass measurements on stable and long-lived isotopes and will continue this program. The main focus within this thesis was on certain rare-earth nuclides in the well-established region of deformation around N~90. Another field of interest is mass measurements on actinoids, both to test mass models and to provide direct links to the mass standard. Within this thesis, the mass of 241-Am could be measured directly for the first time.


One part of the life history of dinosaurs, reproduction, was investigated, focusing on sauropods, the largest known land animals ever to have existed on Earth, in order, among other things, to explore the relationship between gigantism and reproduction. To this end, a possible life history for sauropods was compiled from a literature survey, based on the current state of research in biology and paleontology. Furthermore, a model of reproduction in extinct oviparous amniotes was developed, based on the relationships between body size and several mass-specific reproductive traits (egg mass, clutch mass, annual clutch mass) in extant oviparous amniotes. Using this model and information from the fossil record, the question was addressed of what these reproductive traits probably looked like in dinosaurs. In addition, the hypothesis was tested that dinosaurs, especially sauropods, had a higher reproductive capacity than terrestrial mammals of equal size, which is supposed to have allowed the former to grow so much larger than the latter (Janis and Carrano 1992).

The analysis of the relationships between body mass and the mass-specific reproductive traits showed that body mass was always strongly correlated with the traits investigated. Large birds and large reptiles differ in their relative egg mass (egg mass/body mass): birds have relatively larger eggs. Considering the relative clutch mass or the relative annual clutch mass, the difference becomes smaller or even disappears between some reptile and bird groups. Dinosaurs had relative egg masses intermediate between those of reptiles and birds. Basal dinosaurs, such as prosauropods, were rather reptile-like in their reproduction, whereas bird-like theropods had a reproduction that is better described by a bird model. The reproduction of other dinosaurs, such as sauropods and hadrosaurs, cannot be unambiguously described by either model, and/or the best model varied depending on the trait considered. Nevertheless, it was possible to estimate clutch size and the number of eggs laid per year for all dinosaurs investigated. These estimates showed that the presumed high reproductive capacity of several hundred eggs per year is tenable only for extremely large sauropods (70 t).

With the exception of rodents, I found the differences in reproductive capacity between birds and mammals postulated by Janis and Carrano (1992) even at the level of orders. Dinosaur clutches were larger than the litters of (extrapolated) mammals of equal size, whereas the clutch size of (extrapolated) birds of equal size was similar to that of sauropods. Since extinction risk is frequently correlated with low reproductive capacity, this implies a lower extinction risk for large dinosaurs compared with large mammals. Populations of very large dinosaurs, such as the sauropods, could therefore presumably persist, viewed over evolutionary timescales, much longer than populations of large mammals.
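The modeling approach, regressing the logarithm of a reproductive trait on the logarithm of body mass in extant amniotes and then extrapolating to dinosaur body sizes, can be sketched with synthetic data. All coefficients and data points below are invented for illustration; they are not the values derived in the thesis.

```python
import numpy as np

# Synthetic "extant amniote" data following an assumed power law
# annual clutch mass = 0.05 * M^0.8  (made-up coefficients)
body_mass_kg = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])
annual_clutch_mass_kg = 0.05 * body_mass_kg ** 0.8

# A power law is a straight line in log-log space: fit slope and intercept
slope, intercept = np.polyfit(np.log10(body_mass_kg),
                              np.log10(annual_clutch_mass_kg), 1)
a = 10.0 ** intercept

def eggs_per_year(body_mass_kg_value, egg_mass_kg):
    """Extrapolate the fitted law to a given body mass and divide by egg mass."""
    return (a * body_mass_kg_value ** slope) / egg_mass_kg
```

With these made-up coefficients, extrapolating to a 70 t sauropod with an assumed 1.5 kg egg yields on the order of a few hundred eggs per year, which is the kind of estimate the model delivers once real regression coefficients are inserted.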


The production of the Z boson in proton-proton collisions at the LHC serves as a standard candle at the ATLAS experiment during early data-taking. The decay of the Z into an electron-positron pair gives a clean signature in the detector that allows for calibration and performance studies. The cross-section of ~ 1 nb allows first LHC measurements of parton density functions. In this thesis, simulations of 10 TeV collisions at the ATLAS detector are studied. The challenges for an experimental measurement of the cross-section with an integrated luminosity of 100 pb−1 are discussed. In preparation for the cross-section determination, the single-electron efficiencies are determined via a simulation-based method and in a test of a data-driven ansatz. The two methods show very good agreement and differ by ~ 3% at most. The ingredients of an inclusive and a differential Z production cross-section measurement at ATLAS are discussed, and their possible contributions to systematic uncertainties are presented. For a combined sample of signal and background, the expected uncertainty on the inclusive cross-section for an integrated luminosity of 100 pb−1 is determined to be 1.5% (stat) +/- 4.2% (syst) +/- 10% (lumi). The possibilities for single-differential cross-section measurements in rapidity and transverse momentum of the Z boson, which are important quantities because of their impact on parton density functions and the capability to check for non-perturbative effects in pQCD, are outlined. The issues of an efficiency correction based on electron efficiencies as a function of the electron's transverse momentum and pseudorapidity are studied. A possible alternative is demonstrated by expanding the two-dimensional efficiencies with the additional dimension of the invariant mass of the two leptons of the Z decay.
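The inclusive measurement described above reduces, schematically, to a counting experiment: the background-subtracted event count divided by efficiency and integrated luminosity. A minimal sketch with invented toy numbers (not the ATLAS values from the thesis):

```python
def cross_section_pb(n_observed, n_background, efficiency, int_lumi_pb):
    """Counting-experiment estimate: sigma = (N_obs - N_bkg) / (eff * L).

    int_lumi_pb is the integrated luminosity in pb^-1, so the result is in pb.
    All inputs here are toy numbers for illustration only.
    """
    return (n_observed - n_background) / (efficiency * int_lumi_pb)
```

For example, 20500 selected events with 500 expected background events, a 20% total efficiency, and 100 pb−1 would correspond to a 1000 pb (= 1 nb) cross-section, consistent with the order of magnitude quoted above.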


In this work we analyzed a sample of 22 early-type galaxies. Using a cross-correlation technique, we obtained radial profiles of rotation and velocity dispersion. These data allowed us to investigate many of the dynamical properties of our galaxies. We obtained indications of orbital anisotropy and estimated the masses and the M/L ratio of the sample. The measured masses range from 10^10 to 10^12 solar masses, while the M/L values, for which we found a dependence of the form Log M/L ∝ 0.28 Log L, are of order unity. We also reproduced the well-known scaling relations, and we used a set of Lick/IDS index data to search for relations between the chemical and dynamical properties. In particular, we found a correlation between many of the metallicity-dependent indices and the depth of the potential well. These indices also appear to correlate with M/L. Rotation and the shape of the velocity-dispersion profile appear to have no influence on the chemical properties. Finally, we considered the implications of our measurements for the nature of the stellar population and the X-ray emission of our galaxies. The color index and the M/L ratio seem to indicate that the stellar population of our galaxies is dominated by stars of the late-G and early-K spectral classes. There also appears to be a correlation between the X-ray emission of the members of our sample and the depth of their potential well.
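The quoted scaling can be turned into a one-line sketch; only the exponent 0.28 comes from the text, while the normalization is a placeholder.

```python
def mass_to_light(lum_lsun, norm=1.0, slope=0.28):
    """M/L ∝ L^0.28 as found for the sample (in solar units).

    norm is a hypothetical normalization; the thesis value is not given here.
    """
    return norm * lum_lsun ** slope
```

A dependence of this form means that going from a 10^10 to a 10^12 L_sun galaxy raises M/L by a factor of 10^0.56, roughly 3.6, which keeps M/L "of order unity" across the sample.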


This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.

To name four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity-management system; b) the automatic construction of concise lexical entries from a bulk of observed lexical facts requires a special data-alignment technique; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language may render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
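As a toy illustration of the acquisition scheme G + L + S → L' (not the ANALYZE-LEARN-REDUCE implementation itself), a learner might accumulate observations about a verb's valency from analyzed structures, withhold judgment until it has seen 'enough' input, and revise an entry when counter-evidence dominates. All class names and thresholds here are invented:

```python
from collections import defaultdict

class ToyValencyLearner:
    """Hypothetical sketch of G + L + S -> L': lexical entries are induced
    from analyzed structures S and revised when counter-evidence accumulates."""

    def __init__(self, min_evidence=3):
        self.counts = defaultdict(lambda: {"trans": 0, "intrans": 0})
        self.min_evidence = min_evidence  # the "seen enough input?" decision

    def observe(self, verb, has_direct_object):
        # one observation extracted from an analyzed structure in S
        key = "trans" if has_direct_object else "intrans"
        self.counts[verb][key] += 1

    def entry(self, verb):
        c = self.counts[verb]
        if c["trans"] + c["intrans"] < self.min_evidence:
            return None  # too little input: feature left indeterminate
        # majority vote; an earlier, falsely acquired value is implicitly revised
        return "transitive" if c["trans"] > c["intrans"] else "intransitive"
```

The `None` return models challenge (c) above, and the majority vote models the revision requirement of Learn-Alpha in the crudest possible way.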


Within this work, a particle-polymer surface system is studied with respect to the particle-surface interactions. The latter are governed by micromechanics and are an important aspect for a wide range of industrial applications. Here, a new methodology for understanding the adhesion process and measuring the relevant forces is developed, based on the quartz crystal microbalance (QCM).

The potential of the QCM technique for studying particle-surface interactions and reflecting the adhesion process is evaluated by carrying out experiments with a custom-made setup, consisting of the QCM with a 160 nm thick film of polystyrene (PS) spin-coated onto the quartz, and of glass particles of different diameters (5-20 µm) deposited onto the polymer surface. Shifts in the QCM resonance frequency are monitored as a function of the oscillation amplitude. The induced frequency shifts of the 3rd overtone are found to decrease or increase, depending on the particle-surface coupling type and the applied oscillation (frequency and amplitude). For strong coupling the 3rd harmonic decreased, corresponding to an "added mass" on the quartz surface. However, positive frequency shifts are observed in some cases and are attributed to weak coupling between particle and surface. Higher overtones, i.e. the 5th and 7th, were utilized to derive additional information about the interactions taking place. For small particles, the shift of specific overtones can increase after annealing, while for large particle diameters annealing causes a negative frequency shift. The lower overtones correspond to a generally strong-coupling regime, with mainly negative frequency shifts observed, while the 7th appears to be sensitive to the contact break-down, and the recorded shifts are positive.

During oscillation, the motion of the particles and the induced frequency shift of the QCM are governed by a balance between inertial forces and contact forces. The adherence of the particles can be increased by annealing the PS film at 150°C, which leads to the formation of a PS meniscus. For the interpretation, the Hertz, Johnson-Kendall-Roberts, and Derjaguin-Müller-Toporov theories and the Mindlin theory of partial slip are considered. The Mindlin approach is utilized to describe partial slip: when partial slip takes place under an oscillating load, part of the contact ruptures, which results in a decrease of the effective contact stiffness. Additionally, there are long-term memory effects due to consolidation, which along with the QCM vibrations induce an increase in coupling. However, the vibrations can also break the contact, lead to detachment, and even cause surface damage and deformation due to inertia. For strong coupling the particles appear to move with the vibrations and simply act as an added effective mass, leading to a decrease of the resonance frequency, in agreement with the Sauerbrey equation that is commonly used to calculate the added mass on a QCM. When the system enters the weak-coupling regime, the particles are not able to follow the fast movement of the QCM surface. Hence, they effectively act as an additional "spring" with a coupling constant of its own and increase the resonance frequency. The frequency shift, however, is not a unique function of the coupling constant. Furthermore, the critical oscillation amplitude is determined above which particles detach. No movement is detected at much lower amplitudes, while for intermediate values lateral particle displacement is observed.

In order to validate the QCM results and study the particle effects on the surface, atomic force microscopy (AFM) is additionally utilized to image surfaces and measure surface forces. By studying the surface of the polymer film after excitation and particle removal, AFM imaging helped in detecting three different meniscus types for the contact area: "full contact", "asymmetrical", and a third one including a "homocentric smaller meniscus". The different meniscus forms result in varying bond intensity between particles and polymer film, which could explain the deviation between the number of particles per surface area measured by imaging and the values provided by the QCM frequency-shift analysis. The asymmetric and the homocentric contact types are suggested to be responsible for the positive frequency shifts observed for all three measured overtones, i.e. for the weak-coupling regime, while the "full contact" type results in a negative frequency shift, by effectively contributing to the mass increase of the quartz.

The interplay between inertia and contact forces for the particle-surface system leads to strong or weak coupling, with the particles affecting the polymer surface in the three ways mentioned. This is manifested in the frequency shifts of the harmonics of the QCM system, which are used to differentiate between the two interaction types and reflect the overall state of adhesion for particles of different size.
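The "added mass" limit of the strong-coupling regime is quantified by the Sauerbrey equation. A minimal sketch, assuming a 5 MHz AT-cut crystal (the thesis does not state the fundamental frequency, so f0 here is an assumption; the quartz density and shear modulus are the standard AT-cut values):

```python
import math

def sauerbrey_shift_hz(mass_per_area_kg_m2, f0_hz=5.0e6,
                       rho_quartz=2648.0,    # density of quartz, kg/m^3
                       mu_quartz=2.947e10):  # AT-cut shear modulus, Pa
    """Sauerbrey frequency shift for a rigidly coupled ("strong coupling") mass:

        delta_f = -2 f0^2 (delta_m / A) / sqrt(rho_q * mu_q)

    Returns the (negative) shift in Hz for a given areal mass loading.
    """
    return (-2.0 * f0_hz ** 2 * mass_per_area_kg_m2
            / math.sqrt(rho_quartz * mu_quartz))
```

With these constants the classic sensitivity of a 5 MHz crystal is recovered: an areal loading of about 17.7 ng/cm² lowers the resonance frequency by roughly 1 Hz. The positive shifts of the weak-coupling regime lie outside this rigid-mass picture by construction.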


Breast cancer ranks first in mortality among the tumor pathologies affecting the female population worldwide. Several clinical studies have shown that the radiologist's diagnosis can be aided and improved by Computer Aided Detection (CAD) systems. Because of the large variability in shape and size of tumor masses, and their similarity to the tissue hosting them, their automated detection is an extremely complicated problem. A CAD system generally consists of two classification stages: detection, responsible for identifying the suspicious regions of interest (ROI) present on the mammogram and thus for the preliminary elimination of non-risk areas; and the actual classification of the ROIs into masses and healthy tissue. The main purpose of this thesis is the study of new detection methodologies that can improve on the performance obtained with traditional techniques. Detection is treated as a supervised learning problem and is tackled with Convolutional Neural Networks (CNNs), an algorithm belonging to deep learning, a new branch of machine learning. CNNs are inspired by the findings of Hubel and Wiesel concerning two basic cell types identified in the cat visual cortex: simple cells (S), which respond to edge-like stimuli, and complex cells (C), which are locally invariant to the exact position of the stimulus. In analogy with the visual cortex, CNNs use a deep architecture characterized by layers that alternately perform convolution and subsampling operations on the images. CNNs, which take two-dimensional input, are usually applied to classification and automatic image-recognition problems such as objects, faces, and logos, or to document analysis.
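The two alternating operations of a CNN layer can be sketched in plain NumPy: a 2-D convolution (the "simple cell" stage) followed by subsampling via max pooling (the "complex cell" stage). This is for illustration only; a real mammographic CAD system would use a full deep-learning stack with learned kernels.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding) of a grayscale image patch."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Subsample a feature map by taking the max over size x size blocks,
    giving local invariance to the exact stimulus position."""
    h = (feature_map.shape[0] // size) * size
    w = (feature_map.shape[1] // size) * size
    trimmed = feature_map[:h, :w]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

Stacking several such convolution/pooling pairs, followed by a classifier on the final feature maps, yields the deep architecture described above.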


In this thesis, a systematic analysis of the bar B → X_s gamma photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, m_b - 2E_gamma = O(Lambda_QCD), while the jet has a large energy but small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass m_b, the hadronic energy scale Lambda_QCD, and an intermediate scale sqrt{Lambda_QCD m_b} associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at O(1/m_b) whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty on the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the bar B → X_s gamma branching fraction, defined with a cut E_gamma > 1.6 GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5%.

As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of -0.6% < A_CP^SM < 2.8% is possible for the asymmetry if resolved photon contributions are taken into account.
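Schematically, the leading-power factorization discussed above can be written as

```latex
\mathrm{d}\Gamma(\bar B \to X_s\gamma) \;\sim\; H(\mu)\, J(\mu) \otimes S(\mu) ,
```

with the hard function H living at the scale m_b, the jet function J at the intermediate scale sqrt{Lambda_QCD m_b}, and the soft (shape) function S at Lambda_QCD; ⊗ denotes a convolution. The resolved photon contributions add further terms of the form J ⊗ (subleading shape functions) at O(1/m_b). Normalization factors and convolution variables are omitted in this schematic form.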


In this thesis we investigate several phenomenologically important properties of top-quark pair production at hadron colliders. We calculate double differential cross sections in two different kinematical setups, pair invariant-mass (PIM) and single-particle inclusive (1PI) kinematics. In pair invariant-mass kinematics we are able to present results for the double differential cross section with respect to the invariant mass of the top-quark pair and the top-quark scattering angle. Working in the threshold region, where the pair invariant mass M is close to the partonic center-of-mass energy sqrt{hat{s}}, we are able to factorize the partonic cross section into different energy regions. We use renormalization-group (RG) methods to resum large threshold logarithms to next-to-next-to-leading-logarithmic (NNLL) accuracy. On a technical level this is done using effective field theories, such as heavy-quark effective theory (HQET) and soft-collinear effective theory (SCET). The same techniques are applied when working in 1PI kinematics, leading to a calculation of the double differential cross section with respect to transverse-momentum pT and the rapidity of the top quark. We restrict the phase-space such that only soft emission of gluons is possible, and perform a NNLL resummation of threshold logarithms. The obtained analytical expressions enable us to precisely predict several observables, and a substantial part of this thesis is devoted to their detailed phenomenological analysis. Matching our results in the threshold regions to the exact ones at next-to-leading order (NLO) in fixed-order perturbation theory, allows us to make predictions at NLO+NNLL order in RG-improved, and at approximate next-to-next-to-leading order (NNLO) in fixed order perturbation theory. We give numerical results for the invariant mass distribution of the top-quark pair, and for the top-quark transverse-momentum and rapidity spectrum. 
We predict the total cross section, separately for both kinematics. Using these results, we analyze subleading contributions to the total cross section in 1PI and PIM kinematics originating from power corrections to the leading terms in the threshold expansions, and compare them to previous approaches. We then combine our PIM and 1PI results for the total cross section, thereby eliminating uncertainties due to these corrections. The combined predictions for the total cross section are presented as a function of the top-quark mass in the pole, the minimal-subtraction (MS-bar), and the 1S mass schemes. In addition, we calculate the forward-backward (FB) asymmetry at the Tevatron, in the laboratory and ttbar rest frames, as a function of the rapidity and the invariant mass of the top-quark pair at NLO+NNLL. We also give binned results for the asymmetry as a function of the invariant mass and the rapidity difference of the ttbar pair, and compare those to recent measurements. As a last application, we calculate the charge asymmetry at the LHC as a function of a lower rapidity cut-off for the top and anti-top quarks.


This thesis was carried out at the company Paglierani S.r.l., based in Torriana, to meet the need to study and dynamically optimize the "Antrop" manipulator. After a brief description of the company and its history, the second chapter collects the most important and useful background on robotics and electric drives, notions later exploited throughout the work itself. The robot, its functionality, and its mechanical characteristics are then described in detail. The kinematic analysis and the dynamic analysis follow, the latter divided into two parts because of the cylindrical geometry of the manipulator. The sixth chapter contains the mapping procedure and the logic with which it was written, and presents the test results and the considerations that follow from them. Chapter seven describes the analytical formulation of the stresses on the robot's levers and depicts the general results. The last chapter is devoted to static balancing with masses and springs.


Among all possible realizations of quark and antiquark assembly, the nucleon (the proton and the neutron) is the most stable of all hadrons and consequently has been the subject of intensive studies. Mass, shape, radius and more complex representations of its internal structure are measured since several decades using different probes. The proton (spin 1/2) is described by the electric GE and magnetic GM form factors which characterise its internal structure. The simplest way to measure the proton form factors consists in measuring the angular distribution of the electron-proton elastic scattering accessing the so-called Space-Like region where q2 < 0. Using the crossed channel antiproton proton <--> e+e-, one accesses another kinematical region, the so-called Time-Like region where q2 > 0. However, due to the antiproton proton <--> e+e- threshold q2th, only the kinematical domain q2 > q2th > 0 is available. To access the unphysical region, one may use the antiproton proton --> pi0 e+ e- reaction where the pi0 takes away a part of the system energy allowing q2 to be varied between q2th and almost 0. This thesis aims to show the feasibility of such measurements with the PANDA detector which will be installed on the new high intensity antiproton ring at the FAIR facility at Darmstadt. To describe the antiproton proton --> pi0 e+ e- reaction, a Lagrangian based approach is developed. The 5-fold differential cross section is determined and related to linear combinations of hadronic tensors. Under the assumption of one nucleon exchange, the hadronic tensors are expressed in terms of the 2 complex proton electromagnetic form factors. An extraction method which provides an access to the proton electromagnetic form factor ratio R = |GE|/|GM| and for the first time in an unpolarized experiment to the cosine of the phase difference is developed. Such measurements have never been performed in the unphysical region up to now. 
Extended simulations were performed to show how the ratio R and the cosine can be extracted from the positron angular distribution. Furthermore, a model is developed for the antiproton proton --> pi0 pi+ pi- background reaction, considered the most dangerous one. The background-to-signal cross-section ratio was estimated for different combinations of cuts on the particle identification information from the individual detectors and on the kinematic fits. The background contribution can be reduced to the percent level or below, with a corresponding signal efficiency ranging from a few percent to 30%. The precision of the determination of the ratio R and of the cosine is estimated from the expected counting rates via the Monte Carlo method. Part of this thesis is also dedicated to more technical work: the study of a prototype of the electromagnetic calorimeter and the determination of its resolution.
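If the per-detector pion misidentification probabilities are treated as independent, the surviving background fraction is simply their product, scaled by the raw cross-section ratio and the signal efficiency. A back-of-the-envelope sketch of this cut-combination logic (the subdetector names follow the PANDA design, but every number here is hypothetical, not a result from the thesis):

```python
# raw background/signal cross-section ratio and cut performance (all assumed)
sigma_ratio = 1.0e6            # sigma(pi0 pi+ pi-) / sigma(pi0 e+ e-)
pid_misid = {"EMC": 1e-2, "DIRC": 5e-2, "STT": 0.2}  # P(pion passes e-ID cut)
kin_fit_rejection = 1e-3       # background fraction surviving the kinematic fit
signal_eff = 0.30              # signal efficiency after all cuts

bg_survival = kin_fit_rejection
for p in pid_misid.values():
    bg_survival *= p ** 2      # both charged pions must be misidentified

b_over_s = sigma_ratio * bg_survival / signal_eff
print(f"background/signal after cuts: {b_over_s:.2e}")
```

The independence assumption is optimistic; in the thesis the combinations are evaluated with full detector simulation, which is why the quoted suppression to the percent level is the meaningful figure.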


Quantum Chromodynamics (QCD) is the theory of strong interactions, one of the four fundamental forces in our Universe. It describes the interaction of gluons and quarks, which build up hadrons like protons and neutrons. Most of the visible matter in our Universe is made of protons and neutrons. Hence, we are interested in their fundamental properties, such as their masses, their charge distributions and their shapes.

The only known theoretical, non-perturbative, ab initio method to investigate hadron properties at low energies is lattice Quantum Chromodynamics (lattice QCD). However, current simulations (especially of baryonic quantities) do not reach the accuracy of experiments; in fact, they do not even reproduce the experimental values of the form factors. The question arises whether these deviations can be explained by systematic effects in lattice QCD simulations.

This thesis is about the computation of nucleon form factors and other hadronic quantities from lattice QCD. So-called Wilson fermions are used, and the u- and d-quarks are treated fully dynamically. The simulations were performed using gauge ensembles with a range of lattice spacings, volumes and pion masses.

First of all, the lattice spacing was set in order to make contact between the lattice results and their experimental counterparts and to enable a continuum extrapolation. The light quark mass was computed and found to be $m_{ud}^{\overline{\text{MS}}}(2\text{ GeV}) = 3.03(17)(38)\text{ MeV}$. This value is in good agreement with experiment and with other lattice determinations. Electromagnetic and axial form factors of the nucleon were calculated, and from these form factors the nucleon radii and the coupling constants were computed.
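Masses on the lattice are extracted from the exponential decay of Euclidean two-point correlators, C(t) ~ A0 exp(-m0 t) at large t. A standard diagnostic is the effective mass m_eff(t) = ln[C(t)/C(t+1)], which plateaus at the ground-state mass once excited-state contributions have died away. A minimal sketch with a toy correlator (the amplitudes and masses, in lattice units, are invented for illustration):

```python
import numpy as np


def effective_mass(corr):
    # m_eff(t) = log(C(t) / C(t+1)); plateaus at the ground-state mass
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])


# toy correlator: ground state m0 plus one excited-state contamination
t = np.arange(20)
m0, m1, A0, A1 = 0.45, 1.1, 1.0, 0.4
corr = A0 * np.exp(-m0 * t) + A1 * np.exp(-m1 * t)

m_eff = effective_mass(corr)
print(m_eff[-1])  # approaches m0 = 0.45 at large t
```

In a real analysis the correlator carries statistical noise that grows with t, so the plateau must be identified and fitted within a window; form factors require the analogous analysis of three-point functions.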
The different ensembles enabled us to investigate systematically the dependence of these quantities on the volume, the lattice spacing and the pion mass. Finally, we perform a continuum extrapolation and chiral extrapolations to the physical point.

In addition, we investigated so-called excited-state contributions to these observables. A technique known as the summation method was used, which reduces these effects significantly and leads to much better agreement with experimental data. On the lattice, the Dirac radius and the axial charge are usually found to be much smaller than the experimental values. However, thanks to the careful investigation of all the aforementioned systematic effects, we obtain $\langle r_1^2\rangle_{u-d}=0.627(54)\text{ fm}^2$ and $g_A=1.218(92)$, in agreement with the experimental values within errors.

The first three chapters introduce the theoretical background of nucleon form factors and of lattice QCD in general. In chapter four the lattice spacing is determined. The computation of nucleon form factors is described in chapter five, where the systematic effects are investigated. All results are presented in chapter six. The thesis ends with a summary of the results and identifies options to complement and extend the calculations presented.
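The summation method replaces plateau fits of the three-point/two-point ratio R(t_s, tau) by its sum over the operator insertion time tau: S(t_s) = sum_tau R(t_s, tau) grows linearly in the source-sink separation t_s, with a slope equal to the desired matrix element and excited-state corrections suppressed as exp(-Delta t_s) rather than exp(-Delta t_s/2). A toy sketch (the matrix element, contamination amplitude and energy gap Delta are invented numbers):

```python
import numpy as np


def toy_ratio(g, b, delta, ts, tau):
    # plateau value g plus excited-state contamination decaying with gap delta
    return g + b * (np.exp(-delta * tau) + np.exp(-delta * (ts - tau)))


g_true, b, delta = 1.25, 0.3, 0.5
ts_values = np.arange(8, 17)
S = []
for ts in ts_values:
    taus = np.arange(1, ts)        # exclude the source and sink contact points
    S.append(toy_ratio(g_true, b, delta, ts, taus).sum())

# the slope of S(t_s) versus t_s approaches the bare matrix element g
slope, intercept = np.polyfit(ts_values, np.array(S), 1)
print(f"extracted g = {slope:.3f} (input {g_true})")
```

The linear fit over several source-sink separations is what makes the method robust: the residual contamination only shifts the intercept and curvature at small t_s, not the asymptotic slope.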


The goal of this thesis was to integrate a plasma-sodium feedback into an existing blood-volume biofeedback system. The advantage of the new system lies in its ability to respond even more accurately to the specific needs of individual patients. The system was implemented in Matlab/Simulink. 72 simulations of real hemodialysis treatments were carried out. The system proved capable of reaching the plasma-sodium and blood-volume targets, while removing an amount of sodium from the plasma similar to that achieved by the system without the plasma-sodium feedback.
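The thesis system is built in Matlab/Simulink and its actual control law is not reproduced here. Purely to illustrate the idea of a plasma-sodium feedback, the following sketch uses a hypothetical proportional controller that steers the dialysate sodium concentration toward a plasma target, coupled to a crude first-order model of plasma-dialysate sodium exchange (all gains, bounds and kinetic constants are invented):

```python
def dialysate_sodium_update(na_plasma, na_target, na_dialysate, gain=0.5,
                            na_min=130.0, na_max=145.0):
    # proportional feedback: nudge the dialysate sodium so that plasma sodium
    # is driven toward its end-of-treatment target (all values in mmol/L)
    correction = gain * (na_target - na_plasma)
    return min(na_max, max(na_min, na_dialysate + correction))


# toy first-order plasma response to the dialysate (assumed diffusive kinetics)
na_plasma, na_dialysate, na_target = 142.0, 140.0, 138.0
for minute in range(240):          # one update per minute, four-hour treatment
    na_dialysate = dialysate_sodium_update(na_plasma, na_target, na_dialysate)
    na_plasma += 0.05 * (na_dialysate - na_plasma)   # plasma-dialysate exchange
print(f"final plasma sodium: {na_plasma:.1f} mmol/L")
```

The clamp on the dialysate concentration mirrors the safety bounds any clinical controller must respect; the real system additionally tracks the blood-volume trajectory and the sodium mass balance, which this sketch omits.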