Abstract:
This thesis is on the flavor problem of Randall-Sundrum models and their strongly coupled dual theories. These models are particularly well motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. In order to put this into context, special attention is given to concepts underlying the theories which can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall-Sundrum model with fermions in the bulk and general bulk gauge groups are investigated. It will be shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of flavor-changing neutral currents generated by the exchange of the Kaluza-Klein excitations of these bulk fields. In the numerical analysis, different observables which are sensitive to corrections from the tree-level exchange of these resonances will be presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters followed by corrections to the Zbb vertex; flavor-changing observables with flavor changes at one vertex, viz. BR(Bd -> mu+mu-) and BR(Bs -> mu+mu-), and at two vertices, viz. S_psiphi and |eps_K|; as well as bounds from direct detection experiments. The analysis will show that all of these bounds can be brought into agreement with a new-physics scale Lambda_NP in the TeV range, except for the CP-violating quantity |eps_K|, which requires Lambda_NP = O(10) TeV in the absence of fine-tuning.
The numerous modifications of the Randall-Sundrum model in the literature which try to attenuate this bound are reviewed and categorized. Subsequently, a novel solution to this flavor problem, based on an extended color gauge group in the bulk, and its thorough implementation in the RS model will be presented, as well as an analysis of the observables mentioned above in the extended model. This solution is especially motivated from the point of view of the strongly coupled dual theory, and the implications for strongly coupled models of new physics which do not possess a holographic dual are examined. Finally, the top quark plays a special role in models with a geometric explanation of flavor hierarchies, and the predictions in the Randall-Sundrum model, with and without the proposed extension, for the forward-backward asymmetry A_FB^t in top pair production are computed.
Abstract:
In this thesis I present aspects of QCD calculations that are closely tied to the numerical evaluation of NLO QCD amplitudes, in particular the corresponding one-loop contributions, and to the efficient computation of associated collider observables. Two topics crystallized in the course of this work and constitute its main part. A large part focuses on the group-theoretic behavior of one-loop amplitudes in QCD, with the aim of finding a way to treat the associated color degrees of freedom correctly and efficiently. To this end, a new approach is introduced that can be used to express color-ordered one-loop partial amplitudes with several quark-antiquark pairs through shuffle sums over cyclically ordered primitive one-loop amplitudes. A second large part focuses on the local subtraction of divergent pole terms in primitive one-loop amplitudes. In particular, a method was developed to renormalize the primitive one-loop amplitudes locally, using local UV counterterms and efficient recursive routines. Together with suitable local soft and collinear subtraction terms, the subtraction method is thereby extended to the virtual part of the calculation of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions to NLO observables. The method was finally applied successfully to the calculation of NLO jet rates in electron-positron annihilation in the leading-color limit.
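The shuffle summation used to relate partial and primitive amplitudes rests on the shuffle product of ordered lists. As a minimal illustration of this combinatorial operation alone (the labels below are placeholders, not actual amplitude arguments), the shuffle product can be generated as:

```python
from itertools import combinations

def shuffles(a, b):
    """All interleavings of sequences a and b that preserve the internal
    ordering of each sequence -- the shuffle product."""
    n, m = len(a), len(b)
    result = []
    for positions in combinations(range(n + m), n):
        pos = set(positions)
        ai, bi = iter(a), iter(b)
        # place elements of a at the chosen positions, b at the rest
        result.append(tuple(next(ai) if i in pos else next(bi)
                            for i in range(n + m)))
    return result

# The shuffle of lists of lengths n and m has C(n+m, n) terms:
terms = shuffles(["q1", "q2"], ["g1", "g2"])  # 6 interleavings
```

Each of the six tuples keeps "q1" before "q2" and "g1" before "g2", which is the ordering constraint the shuffle sum over cyclic orderings exploits.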
Abstract:
Radial velocities measured from near-infrared (NIR) spectra are a potential tool to search for extrasolar planets around cool stars. High-resolution infrared spectrographs now reach the high precision of visible instruments, with constant improvement over time. GIANO is an infrared echelle spectrograph and a powerful tool for providing high-resolution spectra for accurate radial velocity measurements of exoplanets and for chemical and dynamical studies of stellar or extragalactic objects. No other IR instrument matches GIANO's capability to cover the entire NIR wavelength range. In this work we develop an ensemble of IDL procedures to measure high-precision radial velocities on a few GIANO spectra acquired during the commissioning run, using the telluric lines as wavelength reference. In Section 1.1 various exoplanet search methods are described; they exploit different properties of the planetary system. In Section 1.2 we describe the exoplanet population discovered through the different methods. In Section 1.3 we explain the motivations for NIR radial velocities and the challenges related to the main issue that has limited the pursuit of high-precision NIR radial velocity, namely the lack of a suitable calibration method. We briefly describe calibration methods in the visible and the solutions for IR calibration, for instance the use of telluric lines; the latter has advantages and problems, described in detail. In this work we use telluric lines as wavelength reference. In Section 1.4 the Cross Correlation Function (CCF) method, which is widely used to measure radial velocities, is described. In Section 1.5 we describe GIANO and its main science targets. In Chapter 2 the observational data obtained with the GIANO spectrograph are presented and the selection criteria are reported. In Chapter 3 we describe the details of the analysis and examine in depth the flow chart reported in Section 3.1.
In Chapter 4 we give the radial velocities measured with our IDL procedure for all available targets. We obtain an rms scatter in radial velocities of about 7 m/s. Finally, we conclude that GIANO can be used to measure radial velocities of late-type stars with an accuracy close to or better than 10 m/s, using telluric lines as wavelength reference. In September 2014 GIANO became operative at the TNG for Science Verification, and more observational data will allow us to further refine this analysis.
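The CCF method referred to above can be sketched in a toy form. The Gaussian absorption line, wavelength range, and velocity grid below are synthetic stand-ins rather than GIANO data, and the sketch is in Python rather than the IDL used in the thesis:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def ccf_radial_velocity(wave, flux, template_wave, template_flux, v_grid_kms):
    """Cross-correlate an observed spectrum against a template over a grid
    of trial velocities; return the velocity maximizing the CCF."""
    ccf = np.empty_like(v_grid_kms, dtype=float)
    for i, v in enumerate(v_grid_kms):
        # Doppler-shift the template to the trial velocity and resample
        shifted = np.interp(wave, template_wave * (1.0 + v / C_KMS),
                            template_flux)
        ccf[i] = np.sum(flux * shifted)
    return v_grid_kms[np.argmax(ccf)]

def gaussian_line(w, center):
    """Toy spectrum: unit continuum with one Gaussian absorption line."""
    return 1.0 - 0.5 * np.exp(-0.5 * ((w - center) / 0.05) ** 2)

# synthetic example: a single line shifted by +7 km/s
wave = np.linspace(15500.0, 15520.0, 2000)   # wavelength in Angstrom
v_true = 7.0
template = gaussian_line(wave, 15510.0)
observed = gaussian_line(wave, 15510.0 * (1.0 + v_true / C_KMS))
v_grid = np.linspace(-30.0, 30.0, 601)       # 0.1 km/s steps
v_meas = ccf_radial_velocity(wave, observed, wave, template, v_grid)
```

The measured velocity recovers the injected shift to within the grid step; in practice the CCF peak is refined by fitting rather than by taking the grid maximum.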
Abstract:
Over the last hundred years, the theoretical and experimental study of hydrogen-like systems has repeatedly driven decisive advances in both experimental and theoretical physics. The formulation and testing of quantum electrodynamics (QED) were, and remain, closely tied to the study of hydrogen-like systems. Currently, hydrogen-like systems of heavy ions are of particular interest for testing QED in the extremely strong fields close to the nucleus. Laser-spectroscopic measurements of the ground-state hyperfine splitting offer high accuracy, but their interpretation is complicated by the uncertainty in the size of nuclear-structure effects. These can be eliminated by combining the splittings in hydrogen-like and lithium-like ions of the same nuclide. Over the past two decades, several attempts motivated by this idea failed to find the HFS transition in lithium-like 209Bi80+. In this work, collinear laser spectroscopy was performed on 209Bi82+ and 209Bi80+ ions at about 70% of the speed of light in the Experimental Storage Ring at GSI in Darmstadt. The transition in lithium-like bismuth was observed for the first time, and its transition wavelength was determined to be 1554.74(74) nm. A fluorescence detection system optimized specifically for this experiment was the decisive improvement over the failed predecessor experiments. The wavelength uncertainty is dominated by the uncertainty in the ion velocity, which is crucial for the transformation into the rest frame of the ions. Three approaches were pursued to determine it: the velocity was derived from the electron-cooler voltage, from the product of orbit length and revolution frequency, and from the relativistic Doppler effect under the assumption that the previously determined transition in hydrogen-like bismuth is correct.
The voltage calibration of the electron cooler was critically evaluated for the first time in this work, revealing previously underestimated systematic uncertainties that currently prevent a meaningful QED test. Conversely, using the QED calculations, an ion velocity could be computed that yields a more accurate and more consistent result for the transition wavelengths of both ion species. This leads to a discrepancy with the previously determined value of the transition in hydrogen-like bismuth, which remains to be investigated.
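The role of the ion velocity can be made concrete with the relativistic Doppler transformation for collinear/anticollinear geometry. Here β = 0.71 and the 640.2 nm laboratory wavelength are illustrative values chosen so the result lands near the quoted rest-frame wavelength of 1554.74 nm; they are not numbers from the experiment:

```python
import math

def rest_frame_wavelength(lab_wavelength_nm, beta, anticollinear):
    """Transform a laboratory wavelength into the rest frame of an ion
    moving with velocity beta = v/c. anticollinear=True means the laser
    propagates against the ion beam direction."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    factor = gamma * (1.0 + beta) if anticollinear else gamma * (1.0 - beta)
    return lab_wavelength_nm * factor

beta = 0.71  # roughly 70% of the speed of light, as in the storage ring
# illustrative lab wavelength; an uncertainty in beta shifts lam0 directly
lam0 = rest_frame_wavelength(640.2, beta, anticollinear=True)
```

A useful cross-check built into this transformation: the product of the collinear and anticollinear factors is γ²(1 − β²) = 1, which is why measuring in both geometries can eliminate the velocity dependence.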
Abstract:
This dissertation deals with the development and application of an alternative sample-introduction technique for liquid samples in mass spectrometry. Although considerable effort has gone into improving them, the conventional pneumatic nebulizer and spray-chamber systems used as standard in trace-element analysis with inductively coupled plasma (ICP) have a low overall efficiency. Pneumatically generated aerosol is characterized by a broad droplet size distribution, which necessitates a spray chamber to match the aerosol characteristics to the operating conditions of the ICP. Generating droplets with a very narrow droplet size distribution, or even monodisperse droplets, could improve the efficiency of sample introduction. One aim of this work is therefore to use droplets generated by thermal inkjet printing for sample introduction in elemental mass spectrometry. In analytical chemistry, thermal inkjet printing has so far been used for the targeted, reproducible deposition of droplets onto surfaces for surface analysis by TXRF or laser ablation. To enable continuous droplet generation, an electronic microcontroller was developed that can drive a dosing unit independently of the printer's hardware and software, with all parameters relevant to droplet generation (frequency, heating-pulse energy) independently adjustable. The dosing unit, the "drop-on-demand" aerosol generator (DOD), was mounted on an aerosol transport chamber that carries the generated droplets into the ionization source.
In inorganic trace analysis, combining the DOD with an automatic sampler allowed 53 elements to be investigated; the achievable sensitivities were determined, along with the detection limits and background-equivalent concentrations for 15 representative elements. To make these advantages conveniently usable, couplings of the DOD system with miniaturized flow-injection analysis (FIA) and with miniaturized separation techniques such as µHPLC were developed. The flow-injection method was validated with a certified reference material, with the certified values well reproduced for vanadium and cadmium. Transient signals could be recorded when the dosing system, in combination with ICP-MS, was coupled to a µHPLC. Modifying the dosing unit for coupling to a continuous sample flow still requires a further reduction of the remaining dead volume; to this end, independence from the commercially available printer cartridges used so far should be pursued by fabricating the dosing unit in-house. The versatility of the dosing system was demonstrated by coupling it to a recently developed atmospheric-pressure ionization method, the "flowing atmospheric-pressure afterglow" desorption/ionization source (FAPA). Direct introduction of liquid samples into this source had not been possible before; only desorption from dried residues or directly from the liquid surface could be performed, with the precision of the analysis limited by the variable sample position. With the DOD system, liquid samples can now be introduced directly into the FAPA, which also enables calibration for quantitative analyses of organic compounds.
Besides illegal drugs and their metabolites, over-the-counter pharmaceuticals and an explosive analogue could be detected in correspondingly spiked pure solvent. The same was achieved in urine samples spiked with drugs and drug metabolites. Notably, no sample preparation was necessary, and no internal or isotopically labeled standards were used to determine the detection limits of the individual species. Nevertheless, the detection limits obtained are considerably lower than those achievable with the previous procedure for analyzing liquid samples. To accelerate solvent evaporation compared with the "pin-to-plate" geometry of the FAPA used so far, an alternative electrode arrangement was developed in which the sample stays in contact with the afterglow zone for longer. This glow-discharge source is ring-shaped and allows sample introduction via a central gas flow; because of the ring-shaped discharge, the name "halo-FAPA" (h-FAPA) is used for this discharge geometry. A basic physical and spectroscopic characterization showed that it is indeed a FAPA desorption/ionization source.
Potential vorticity and moisture in extratropical cyclones: climatology and sensitivity experiments
Abstract:
The development of extratropical cyclones can be seen as an interplay of three positive potential vorticity (PV) anomalies: an upper-level stratospheric intrusion, low-tropospheric diabatically produced PV, and a warm anomaly at the surface acting as a surrogate PV anomaly. In the mature stage they become vertically aligned and form a "PV tower" associated with strong cyclonic circulation. This paradigm of extratropical cyclone development provides the basis of this thesis, which uses a climatological dataset and numerical model experiments to investigate the amplitude of the three anomalies and the processes leading in particular to the formation of the diabatically produced low-tropospheric PV anomaly.

The first part of this study, based on the interim ECMWF Re-Analysis (ERA-Interim) dataset, quantifies the amplitude of the three PV anomalies in mature extratropical cyclones in different regions in the Northern Hemisphere on a climatological basis. A tracking algorithm is applied to sea level pressure (SLP) fields to identify cyclone tracks. Surface potential temperature anomalies ∆θ and vertical profiles of PV anomalies ∆PV are calculated at the time of the cyclones' minimum SLP, and during the intensification phase 24 hours before, in a vertical cylinder with a radius of 200 km around the surface cyclone center. To compare the characteristics of the cyclones, they are grouped according to their location (8 regions) and intensity, where the central SLP is used as the measure of intensity. Composites of ∆PV profiles and ∆θ are calculated for each region and intensity class at the time of minimum SLP and during the cyclone intensification phase.

During the cyclones' development stage the amplitudes of all three anomalies increase on average. In the mature stage all three anomalies are typically larger for intense than for weak winter cyclones [e.g., 0.6 versus 0.2 potential vorticity units (PVU) at lower levels, and 1.5 versus 0.5 PVU at upper levels]. The regional variability of the cyclones' vertical structure and profile evolution is prominent (cyclones in some regions are more sensitive to the amplitude of a particular anomaly than in other regions). Values of ∆θ and low-level ∆PV are on average larger in the western parts of the oceans than in the eastern parts. In addition, a large seasonal variability can be identified, with fewer and weaker cyclones especially in summer, associated with higher low-tropospheric PV values, but also with a higher tropopause and much weaker surface potential temperature anomalies (compared to winter cyclones).

The second part addresses the diabatic low-level part of PV towers. The evaporative sources of the moisture involved in PV production through condensation were identified. Lagrangian backward trajectories were calculated from the regions with high low-level PV values in the cyclones. PV production regions were identified along these trajectories; from these regions a new set of backward trajectories was calculated, and moisture uptakes were traced along them. The main contribution from surface evaporation to the specific humidity of the trajectories is collected 12-72 hours prior to the time of PV production. The uptake region for weaker cyclones with less PV in the centre is typically more localized, with reduced uptake values, compared to intense cyclones. However, in a qualitative sense, uptakes and other variables along single trajectories do not vary much between cyclones of different intensity in different regions.

A sensitivity study with the COSMO model comprises the last part of this work. The study investigates the influence of synthetic moisture modification in the cyclone environment at different stages of its development. Moisture was eliminated in three regions which were identified as important moisture source regions for PV production. Moisture suppression affected the cyclone most in its early phase, leading to cyclolysis shortly after genesis. Nevertheless, a new cyclone formed on the other side of the dry box and developed relatively quickly. Also in other experiments, moisture elimination led to strong intensity reduction of the surface cyclone, limited upper-level development, and delayed or missing interaction between the two.

In summary, this thesis provides novel insight into the structure of different intensity categories of extratropical cyclones from a PV perspective, which corroborates the findings from a series of previous case studies. It reveals that all three PV anomalies are typically enhanced for more intense cyclones, with important regional differences concerning the relative amplitude of the three anomalies. The moisture source analysis is the first of its kind to study the evaporation-condensation cycle related to the intensification of extratropical cyclones. Interestingly, most of the evaporation occurs during the 3 days prior to the time of maximum cyclone intensity and typically extends over fairly large areas along the track of the cyclone. The numerical model case study complements this analysis by analyzing the impact of regionally confined moisture sources on the evolution of the cyclone.
Abstract:
The thesis investigates the nucleon structure probed by the electromagnetic interaction. Among the most basic observables reflecting the electromagnetic structure of the nucleon are the form factors, which have been studied by means of elastic electron-proton scattering with ever increasing precision for several decades. In the timelike region, corresponding to proton-antiproton annihilation into an electron-positron pair, the present experimental information is much less accurate; in the near future, however, high-precision form factor measurements are planned. About 50 years after the first pioneering measurements of the electromagnetic form factors, polarization experiments stirred up the field, since their results were found to be in striking contradiction to the findings of previous form factor determinations from unpolarized measurements. Triggered by the conflicting results, a whole new field emerged studying the influence of two-photon exchange corrections to elastic electron-proton scattering, which appeared as the most likely explanation of the discrepancy. The main part of this thesis deals with theoretical studies of two-photon exchange, which is investigated particularly with regard to form factor measurements in the spacelike as well as the timelike region. An extraction of the two-photon amplitudes in the spacelike region through a combined analysis of unpolarized cross section measurements and polarization experiments is presented. Furthermore, predictions of the two-photon exchange effects on the e+p/e-p cross section ratio are given for several new experiments which are currently ongoing. The two-photon exchange corrections are also investigated in the timelike region, in the process pbar{p} -> e+ e-, by means of two factorization approaches; these corrections are found to be smaller than those obtained for the spacelike scattering process.
The influence of the two-photon exchange corrections on cross section measurements, as well as on asymmetries that give direct access to the two-photon exchange contribution, is discussed. Furthermore, one of the factorization approaches is applied to investigate two-boson exchange effects in parity-violating electron-proton scattering. In the last part of this work, the process pbar{p} -> pi0 e+ e- is analyzed with the aim of determining the form factors in the so-called unphysical timelike region below the two-nucleon production threshold. For this purpose, a phenomenological model is used which provides a good description of the available data on the real photoproduction process pbar{p} -> pi0 gamma.
Abstract:
In this thesis the measurement of the effective weak mixing angle wma in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined upon the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp to zgamma + X to ee + X, using a total integrated luminosity of 4.8 fb^(-1) of data recorded at a proton-proton center-of-mass energy of sqrt(s) = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of electromagnetism and the weak force. Higher-order corrections to wma are related to other SM parameters such as the mass of the Higgs boson.

Because of the symmetric initial state of colliding protons, there is no favoured forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen with respect to the longitudinal boost of the electron-positron final state. As a result, events at low absolute rapidity have a higher chance of being assigned to the opposite direction of the reference axis. This effect, called dilution, is reduced when events at higher rapidities are used; it can be studied by including electrons and positrons in the forward regions of the ATLAS calorimeters. Electrons and positrons are further referred to as electrons. To include the electrons from the forward region, the energy calibration for the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of a central and a forward electron together with the previously derived central electron energy calibration. The uncertainty is shown to be dominated by the systematic variations.

The extraction of wma is performed using chi^2 tests, comparing the measured distribution of AFB in data to a set of template distributions with varied values of wma. The templates are built in a forward-folding technique using modified generator-level samples and the official signal sample with full detector simulation and particle reconstruction and identification. The analysis is performed in two different channels: pairs of central electrons, or one central and one forward electron. The results of the two channels are in good agreement and are the first measurements of wma at the Z resonance using electron final states in proton-proton collisions at sqrt(s) = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties resulting from the knowledge of the parton distribution functions (PDF) and the systematic uncertainties of the energy calibration.

The extracted results of wma are combined and yield a value of wma_comb = 0.2288 +- 0.0004 (stat.) +- 0.0009 (syst.) = 0.2288 +- 0.0010 (tot.). The measurements are compared to the results of previous measurements at the Z boson resonance. The deviation with respect to the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
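The chi^2 template comparison can be illustrated with a toy model in which AFB depends linearly on the mixing angle in each mass bin. The slopes, offsets, and uncertainties below are invented for illustration; the real templates are built by forward folding fully simulated samples:

```python
import numpy as np

def chi2(measured, template, errors):
    """Chi-squared between a measured AFB distribution and a template."""
    return np.sum(((measured - template) / errors) ** 2)

rng = np.random.default_rng(1)
sin2_true = 0.2315                                   # value to recover
slopes = np.array([-8.0, -6.0, -5.0, -6.5, -7.5])    # per mass bin (invented)
offsets = np.array([1.85, 1.39, 1.16, 1.50, 1.74])   # per mass bin (invented)
errors = np.full(5, 0.005)                           # per-bin uncertainty

# pseudo-measurement: linear model plus Gaussian noise
measured = offsets + slopes * sin2_true + rng.normal(0.0, 0.005, 5)

# scan templates over a grid of mixing-angle values, pick the chi^2 minimum
scan = np.linspace(0.225, 0.235, 201)
chi2_values = [chi2(measured, offsets + slopes * s, errors) for s in scan]
sin2_fit = scan[int(np.argmin(chi2_values))]
```

In the real analysis the minimum is found by fitting a parabola to the chi^2 profile, and the curvature at the minimum gives the statistical uncertainty.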
Abstract:
In this thesis, we develop high precision tools for the simulation of slepton pair production processes at hadron colliders and apply them to phenomenological studies at the LHC. Our approach is based on the POWHEG method for the matching of next-to-leading order results in perturbation theory to parton showers. We calculate matrix elements for slepton pair production and for the production of a slepton pair in association with a jet perturbatively at next-to-leading order in supersymmetric quantum chromodynamics. Both processes are subsequently implemented in the POWHEG BOX, a publicly available software tool that contains general parts of the POWHEG matching scheme. We investigate phenomenological consequences of our calculations in several setups that respect experimental exclusion limits for supersymmetric particles and provide precise predictions for slepton signatures at the LHC. The inclusion of QCD emissions in the partonic matrix elements allows for an accurate description of hard jets. Interfacing our codes to the multi-purpose Monte Carlo event generator PYTHIA, we simulate parton showers and slepton decays in fully exclusive events. Advanced kinematical variables and specific search strategies are examined as a means for slepton discovery in experimentally challenging setups.
Abstract:
Future experiments in nuclear and particle physics are moving towards the high-luminosity regime in order to access rare processes. In this framework, particle detectors require high rate capability together with excellent timing resolution for precise event reconstruction. As a consequence, the development of dedicated front-end electronics (FEE) for detectors has become increasingly challenging and expensive, and a current trend in R&D is towards flexible FEE that can be easily adapted to a great variety of detectors without impairing the required high performance. This thesis reports on novel FEE for two different detector types: imaging Cherenkov counters and plastic scintillator arrays. The former requires high sensitivity and precision for the detection of single-photon signals, while the latter is characterized by the slower and larger signals typical of scintillation processes. The FEE design was developed using high-bandwidth preamplifiers and fast discriminators which provide Time-over-Threshold (ToT). The use of discriminators allows for low power consumption, minimal dead time, and self-triggering capability, all fundamental for high-rate applications. The output signals of the FEE are read out by a high-precision FPGA-based TDC system. A full characterization of the analogue signals under realistic conditions proved that the ToT information can be used in a novel way for charge measurements or walk corrections, thus improving the obtainable timing resolution, and detailed laboratory investigations confirmed the feasibility of the ToT method. The full readout chain was investigated in test experiments at the Mainz Microtron: counting rates of several MHz per channel were achieved, and a timing resolution better than 100 ps was obtained after the ToT-based walk correction. Applications to fast time-of-flight counters and further developments of the FEE have also recently been investigated.
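The ToT-based walk correction can be illustrated with a toy pulse model; the Gaussian pulse shape, threshold, and width below are assumptions for the sketch, not properties of the actual FEE. For a symmetric pulse, adding half the ToT to the leading-edge time removes the amplitude-dependent walk exactly; for real, asymmetric pulses the correction is a calibrated function of ToT:

```python
import math

THR = 0.1     # discriminator threshold (arbitrary amplitude units)
SIGMA = 1.0   # pulse width (ns)

def discriminator(amplitude, t0):
    """Leading-edge crossing time and Time-over-Threshold for a toy
    Gaussian pulse of the given amplitude centred at t0 (ns)."""
    half = SIGMA * math.sqrt(2.0 * math.log(amplitude / THR))
    t_lead = t0 - half   # rising-edge crossing: earlier for larger pulses
    tot = 2.0 * half     # time the pulse spends above threshold
    return t_lead, tot

# leading-edge times alone show an amplitude-dependent "walk" ...
t_small, tot_small = discriminator(0.2, t0=10.0)
t_big, tot_big = discriminator(5.0, t0=10.0)
walk = t_small - t_big   # nonzero: pure leading-edge timing is biased

# ... but the ToT-corrected time is amplitude independent for this shape
t_corr_small = t_small + tot_small / 2.0
t_corr_big = t_big + tot_big / 2.0
```

Both corrected times land exactly on the true pulse centre of 10 ns, while the raw leading-edge times differ by more than a nanosecond.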
Abstract:
Modern ESI-LC-MS/MS techniques, combined with bottom-up approaches, allow a qualitative and quantitative characterization of several thousand proteins in a single experiment. Data-independent acquisition methods such as MSE and the ion-mobility variants HDMSE and UDMSE are particularly suited to label-free protein quantification. Because of their high complexity, the data acquired in this way place particular demands on the analysis software, and quantitative analysis of MSE/HDMSE/UDMSE data has so far been limited to a few commercial solutions.

In this work, a strategy and a set of new methods for the cross-run quantitative analysis of label-free MSE/HDMSE/UDMSE data were developed and implemented as the software ISOQuant. The first steps of data analysis (feature detection, peptide and protein identification) use the commercial software PLGS. The independent PLGS results of all runs of an experiment are then merged in a relational database and reworked with dedicated algorithms (retention-time alignment, feature clustering, multidimensional intensity normalization, multi-stage data filtering, protein inference, redistribution of the intensities of shared peptides, protein quantification). This post-processing significantly increases the reproducibility of the qualitative and quantitative results.

To evaluate the performance of the quantitative data analysis and compare it with other solutions, a set of exactly defined hybrid-proteome samples was developed. The samples were acquired with the MSE and UDMSE methods, analyzed with Progenesis QIP, synapter, and ISOQuant, and compared. In contrast to synapter and Progenesis QIP, ISOQuant achieved both a high reproducibility of protein identification and a high precision and trueness of protein quantification.

In conclusion, the presented algorithms and the analysis workflow enable reliable and reproducible quantitative data analyses. The software ISOQuant is a simple and efficient tool for routine high-throughput analyses of label-free MSE/HDMSE/UDMSE data, and the hybrid-proteome samples together with the evaluation metrics constitute a comprehensive system for evaluating quantitative acquisition and data-analysis pipelines.
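One ingredient of the post-processing, intensity normalization across runs, can be sketched in a much-simplified median-ratio form; this is an illustrative stand-in, not ISOQuant's actual multidimensional normalization:

```python
import numpy as np

def median_ratio_normalize(intensities):
    """Scale each run so that the median ratio of its feature intensities
    to the per-feature median across runs equals one (a simplified
    stand-in for a cross-run intensity normalization)."""
    intensities = np.asarray(intensities, dtype=float)  # rows: features, cols: runs
    reference = np.median(intensities, axis=1, keepdims=True)
    factors = np.median(intensities / reference, axis=0)  # one factor per run
    return intensities / factors, factors

# toy data: run 2 loaded at 2x and run 3 at 0.5x of run 1
base = np.linspace(1.0, 100.0, 50)
data = np.column_stack([base, 2.0 * base, 0.5 * base])
normalized, factors = median_ratio_normalize(data)
```

After normalization the three runs carry identical intensities, so differential quantification reflects biology rather than loading differences; median-based scaling is used because it is robust to the minority of genuinely regulated features.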
Abstract:
The work presented in this thesis concerned the study and simulation of bistatic radar experiments for planetary exploration missions. In particular, the work focused on the use and improvement of a software simulator already developed by a consortium of companies and research institutions within a European Space Agency (ESA) study funded in 2008 and carried out between 2009 and 2010. The Spanish company GMV coordinated the study, in which research groups from the University of Rome "Sapienza" and the University of Bologna also took part. The work concentrated on determining the cause of some inconsistencies in the outputs of the part of the simulator, written in MATLAB, devoted to estimating the surface characteristics of Titan, in particular the dielectric constant and the mean surface roughness, by means of a downlink bistatic radar experiment performed by the Cassini-Huygens probe in orbit around Titan. Bistatic radar experiments for the study of celestial bodies have been part of space exploration since the 1960s, even though the equipment used, and the mission phases during which these experiments were carried out, were never specifically designed for the purpose; hence the need for a simulator to study the various possible modes of bistatic radar experiments in different types of mission. In a first phase of approaching the simulator, the work focused on studying the documentation accompanying the code, so as to gain a general idea of its structure and operation. A phase of detailed study followed, determining the purpose of every line of code used and verifying against the literature the formulas and models used to determine the various parameters.
In a second phase, the work involved direct intervention on the code, with a series of investigations aimed at determining the coherence and reliability of its results. Each investigation relaxed some of the simplifying assumptions imposed on the model, so as to identify with greater confidence the part of the code responsible for the inaccuracy of the simulator's outputs. The results obtained allowed some parts of the code to be corrected and the main source of error in the outputs to be determined, narrowing down the object of study for future targeted investigations.
Abstract:
Computed-tomography-based navigation for endoscopic sinus surgery is used at a rapidly growing rate, despite major public concern about the risk of iatrogenic radiation-induced cancer. Studies on dose reduction for CAS-CT are almost nonexistent. We validate the use of radiation-dose-reduced CAS-CT for clinically applied surface registration.
Abstract:
The objective of this pilot investigation was to evaluate the utility and precision of already existing limited cone-beam computed tomography (CBCT) scans in measuring the endodontic working length, and to compare this approach with standard clinical procedures.