876 results for Measured signals


Relevance:

20.00%

Publisher:

Abstract:

One of the main goals of the COMPASS experiment at CERN is the determination of the gluon polarisation in the nucleon. It is determined from spin asymmetries in the scattering of 160 GeV/c polarised muons on a polarised LiD target. The gluon polarisation is accessed by the selection of photon-gluon fusion (PGF) events. The PGF process can be tagged through hadrons with high transverse momenta or through charmed hadrons in the final state. The advantage of the open charm channel is that, in leading order, the PGF process is the only process for charm production, so no physical background contributes to the selected data sample. This thesis presents a measurement of the gluon polarisation from the COMPASS data taken in the years 2002-2004. In the analysis, charm production is tagged through a reconstructed $D^0$ meson decaying via $D^{0} \rightarrow K^{-}\pi^{+}$ (and charge conjugates). The reconstruction is done on a combinatorial basis. The background of wrong track pairs is reduced using kinematic cuts on the reconstructed $D^0$ candidate and the particle identification information from the Ring Imaging Cherenkov counter. In addition, the event sample is separated into $D^0$ candidates for which a soft pion from the decay of a $D^*$ meson to a $D^0$ meson is found, and $D^0$ candidates without this tag. Due to the small mass difference between the $D^*$ meson and the $D^0$ meson, the signal purity of the $D^*$-tagged sample is about 7 times higher than that of the untagged sample. The gluon polarisation is measured from the event asymmetries for the different spin configurations of the COMPASS target. To improve the statistical precision of the final results, the events in the final sample are weighted. This method yields an average value of the gluon polarisation in the $x$-range covered by the data. For the COMPASS data from 2002-2004, the resulting value of the gluon polarisation is $\langle \Delta g/g \rangle = -0.47 \pm 0.44\,(\mathrm{stat.}) \pm 0.15\,(\mathrm{syst.})$. The result is statistically compatible with the existing measurements of $\langle \Delta g/g \rangle$ in the high-$p_T$ channel. Compared to these, the open charm measurement has the advantage of a considerably smaller model dependence.
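As a toy illustration of the $D^*$ tagging described above, the sketch below selects candidates by the small mass difference $m(K\pi\pi_{\mathrm{soft}}) - m(K\pi) \approx 145.4$ MeV/$c^2$. The 3 MeV window and the synthetic masses are hypothetical choices for illustration, not the cuts used in the thesis.

```python
import numpy as np

M_DSTAR_MINUS_D0 = 0.1454  # GeV/c^2, nominal m(D*+) - m(D0) difference

def dstar_tag(m_kpi, m_kpipi_soft, window=0.003):
    """Select D* -> D0 pi_soft candidates by the mass difference.

    m_kpi        : invariant mass of the K pi pair (D0 candidate), GeV/c^2
    m_kpipi_soft : invariant mass of K pi plus the soft pion, GeV/c^2
    window       : half-width of the accepted Delta-m window (hypothetical)
    """
    delta_m = m_kpipi_soft - m_kpi
    return np.abs(delta_m - M_DSTAR_MINUS_D0) < window

# toy candidates: D0 mass ~1.865 GeV/c^2, smeared Delta-m around 145.4 MeV/c^2
rng = np.random.default_rng(0)
m_kpi = rng.normal(1.865, 0.020, 1000)
m_kpipi = m_kpi + rng.normal(0.1454, 0.002, 1000)
mask = dstar_tag(m_kpi, m_kpipi)
print(f"tagged {mask.sum()} of {mask.size} candidates")
```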

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates context-aware wireless networks, capable of adapting their behavior to the context and the application thanks to the ability to combine communication, sensing and localization. Problems of signal demodulation, parameter estimation and localization are addressed using analytical methods, simulations and experimentation, for the derivation of fundamental limits, the performance characterization of the proposed schemes and their experimental validation. Ultrawide-bandwidth (UWB) signals are considered in certain cases, and non-coherent receivers, which allow the exploitation of multipath channel diversity without adopting complex architectures, are investigated. Closed-form expressions for the achievable bit error probability of the novel proposed architectures are derived. The problem of time delay estimation (TDE), which enables network localization through ranging measurements, is addressed from a theoretical point of view. New fundamental bounds on TDE are derived for the case in which the received signal is partially known or unknown at the receiver side, as often occurs due to propagation or to the adoption of low-complexity estimators. Practical estimators, such as energy-based estimators, are revisited and their performance compared with the new bounds. The localization issue is addressed experimentally for the characterization of cooperative networks. Practical algorithms able to improve the accuracy in non-line-of-sight (NLOS) channel conditions are evaluated on measured data. With the purpose of enhancing the localization coverage in NLOS conditions, non-regenerative relaying techniques for localization are introduced and ad hoc position estimators are devised. An example of a context-aware network is given with the study of a UWB-RFID system for detecting and locating semi-passive tags. In particular, a detailed investigation of low-complexity receivers capable of dealing with multi-tag interference, synchronization mismatches and clock drift is presented. Finally, theoretical bounds on the localization accuracy of this and other passive localization networks (e.g., radar) are derived, also accounting for different configurations such as monostatic and multistatic networks.
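As a sketch of the kind of low-complexity, energy-based TDE scheme mentioned above (the block length and threshold are illustrative parameters, not those analyzed in the thesis), a coarse time-of-arrival estimate can be obtained by integrating the received energy over short blocks and picking the first threshold crossing:

```python
import numpy as np

def energy_toa(r, fs, block=16, threshold_frac=0.2):
    """Coarse energy-based time-of-arrival estimate.

    r     : received waveform samples
    fs    : sampling rate, Hz
    block : samples per energy-integration block (illustrative)
    """
    n = len(r) // block
    e = (r[:n * block] ** 2).reshape(n, block).sum(axis=1)  # block energies
    idx = np.argmax(e > threshold_frac * e.max())           # first crossing
    return idx * block / fs

# toy example: a short pulse buried in noise, true delay 2 microseconds
fs = 2e9
r = 0.05 * np.random.randn(int(10e-6 * fs))
i0 = int(2e-6 * fs)
r[i0:i0 + 40] += 1.0
print(f"estimated TOA = {energy_toa(r, fs) * 1e6:.2f} us")
```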

Relevance:

20.00%

Publisher:

Abstract:

Proxy data are essential for the investigation of climate variability on time scales longer than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter to the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied for their analysis, focusing on two specific types of proxies: lake sediment data and stable water isotopes.

In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied for the calibration of various climate proxies. A particular focus is laid on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented. Maxima in the quartz grain concentration from a lake sediment core are compared to recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index combining local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with a Monte Carlo method (a schematic version of such a test is sketched below). This correlation is fundamental for employing lake sediment data as a new proxy to reconstruct windstorm records of the geological past.

The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales. In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperature. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during the evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel) are used, provided by the Weizmann Institute of Science. First, a Lagrangian moisture source diagnostic is employed in order to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation between relative humidity in the source regions and measured deuterium excess is found. In contrast, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although it requires confirmation by isotope data from different regions and longer time scales, this weak correlation might be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints for uncertain model parameters can be obtained. A comparison of the simulated deuterium excess with the measurements reveals that a much better agreement can be achieved using a wind speed independent formulation of the non-equilibrium fractionation factor instead of the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs. Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared with measurements in an event-based manner using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run at high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
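The Monte Carlo significance test mentioned in the first part can be sketched generically as a permutation test: place the same number of sediment signals at random in the record many times and count how often chance alone produces at least the observed number of coincidences with recorded windstorms. The function below is a schematic version under that assumption; the thesis' actual test statistic may differ.

```python
import numpy as np

def mc_coincidence_pvalue(storm_years, signal_years, all_years,
                          n_mc=10_000, seed=1):
    """Monte Carlo p-value for the observed number of coincidences
    between windstorm years and sediment-signal years."""
    rng = np.random.default_rng(seed)
    observed = len(set(storm_years) & set(signal_years))
    hits = 0
    for _ in range(n_mc):
        random_signals = rng.choice(all_years, size=len(signal_years),
                                    replace=False)
        if len(set(storm_years) & set(random_signals.tolist())) >= observed:
            hits += 1
    return hits / n_mc

# toy usage: 3 of 5 signals coincide with storms in a 100-year record
years = list(range(1900, 2000))
p = mc_coincidence_pvalue([1910, 1928, 1953, 1972, 1990],
                          [1910, 1928, 1972, 1940, 1960], years)
print(f"p = {p:.4f}")
```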

Relevance:

20.00%

Publisher:

Abstract:

In this work, an aircraft-based laser-ablation single-particle mass spectrometer was designed from scratch, built, characterized and deployed on several field measurement campaigns. ALABAMA (Aircraft-based Laser ABlation Aerosol MAss Spectrometer) is able to analyze the chemical composition and size of individual aerosol particles in the submicrometre range (135-900 nm).

After the particles are focused in an aerodynamic lens, the aerodynamic diameter of each particle is first determined by a time-of-flight measurement between two continuous-wave lasers. The detected and classified particles are then individually evaporated and ionized by a triggered laser pulse. The ions are separated according to their mass-to-charge ratio and detected in a bipolar time-of-flight mass spectrometer. The resulting mass spectra provide a detailed insight into the chemical structure of the individual particles.

The entire instrument was designed so that it can be operated on the new high-altitude research aircraft HALO and on other mobile platforms. To make this possible, all components were accommodated in a rack of less than 0.45 m³ volume. The complete instrument including the rack weighs less than 150 kg and fulfils the strict safety regulations for operation aboard research aircraft. This makes ALABAMA the smallest and lightest instrument of its kind.

After assembly, the properties and limits of all components were characterized in detail in the laboratory and on measurement campaigns. First, the properties of the particle beam, such as beam width and divergence, were investigated thoroughly. These results were important for validating the subsequent measurements of the detection and ablation efficiencies.

The efficiency measurements showed that, depending on their size and composition, up to 86% of the available aerosol particles are successfully detected and size-classified. Up to 99.5% of the detected particles could be ionized and thus chemically analyzed. These very high efficiencies are decisive in particular for measurements at high altitude, where in some cases only very low particle concentrations are present.

The bipolar mass spectrometer achieves average mass resolutions of up to R = 331. In laboratory and field measurements, elements such as Au, Rb, Co, Ni, Si, Ti and Pb could thereby be unambiguously identified by their isotope patterns.

First measurements aboard an ATR-42 research aircraft during the MEGAPOLI campaign in Paris yielded a comprehensive data set of aerosol particles within the planetary boundary layer. ALABAMA could be operated reliably and precisely under harsh physical conditions (temperatures > 40 °C, accelerations of ±2 g). Based on characteristic signals in the mass spectra, the particles could be reliably divided into 8 chemical classes, and individual classes could be attributed to specific sources. For example, particles with a strong sodium and potassium signature could be traced unambiguously to biomass burning.

ALABAMA is thus a valuable instrument for characterizing particles in situ and thereby addressing a wide variety of scientific questions, in particular in the field of atmospheric research.
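The particle-sizing step, converting the transit time between the two detection lasers into an aerodynamic diameter, can be sketched as follows. The laser separation and the calibration table are purely hypothetical stand-ins; the thesis derives its own size-velocity relation.

```python
import numpy as np

LASER_SEPARATION = 0.05  # m, hypothetical distance between the two beams

# hypothetical calibration: aerodynamic diameter (nm) vs. particle
# velocity (m/s), e.g. measured with size standards
cal_d = np.array([135.0, 200.0, 350.0, 600.0, 900.0])
cal_v = np.array([150.0, 130.0, 105.0, 85.0, 70.0])

def aerodynamic_diameter(transit_time_s):
    """Particle velocity from the transit time, then diameter by
    interpolating the (monotonically decreasing) calibration curve."""
    v = LASER_SEPARATION / transit_time_s
    return np.interp(v, cal_v[::-1], cal_d[::-1])

print(f"d_a = {aerodynamic_diameter(4.2e-4):.0f} nm")
```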

Relevance:

20.00%

Publisher:

Abstract:

Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures and other biological and mechanical applications. Guided wave (GW) based inspections are an attractive means for structural health monitoring. In this thesis, the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures are presented. In guided wave inspections, it is necessary to address the problem of dispersion compensation. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material and is used to remove the dependence on the travelled distance from the acquired signals. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing has been developed and applied for data reduction. This procedure also has a beneficial effect in enhancing the accuracy of structural defect localization. The algorithm uses the convolutive model of the propagation of ultrasonic guided waves, which takes advantage of a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure which achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' time of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
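A minimal sketch of the frequency-warping operator, assuming a toy power-law warping map rather than one actually derived from a plate's group-velocity curve: the spectrum is resampled along the map $w(f)$, and the $\sqrt{|dw/df|}$ factor keeps the transform unitary.

```python
import numpy as np

def frequency_warp(x, fs, warp):
    """Resample the spectrum of x along the warping map w(f).

    x    : time-domain signal
    fs   : sampling rate, Hz
    warp : callable mapping the frequency axis, w(f)
    """
    n = len(x)
    f = np.fft.rfftfreq(n, 1 / fs)
    X = np.fft.rfft(x)
    wf = warp(f)                               # warped frequency axis
    jac = np.sqrt(np.abs(np.gradient(wf, f)))  # unitarity factor
    Xw = jac * (np.interp(wf, f, X.real) + 1j * np.interp(wf, f, X.imag))
    return np.fft.irfft(Xw, n)

# toy usage: a power-law map as a hypothetical stand-in for a map
# derived from the group-velocity dispersion curve
fs = 1e6
x = np.random.randn(2048)
y = frequency_warp(x, fs, lambda f: (f / fs) ** 1.5 * fs)
print(f"in/out energy: {np.sum(x**2):.1f} / {np.sum(y**2):.1f}")
```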

Relevance:

20.00%

Publisher:

Abstract:

Schroeder's backward integration method is the most widely used method to extract the decay curve of an acoustic impulse response and to calculate the reverberation time from this curve. The limits and possible improvements of this method are widely discussed in the literature. In this work, a new method is proposed for the evaluation of the energy decay curve. The new method has been implemented in a Matlab toolbox, and its performance has been tested against the most established method in the literature. The values of EDT and reverberation time extracted from the energy decay curves calculated with both methods have been compared, in terms of both the values themselves and their statistical representativeness. The main case study consists of nine Italian historical theatres in which acoustical measurements were performed. The comparison of the two extraction methods has also been applied to a critical case, i.e. the structural impulse responses of some building elements. The comparison shows that both methods return comparable values of T30. As the evaluation range decreases, they reveal increasing differences; in particular, the main differences appear in the first part of the decay, where the EDT is evaluated. This is a consequence of the fact that the new method returns a "locally" defined energy decay curve, whereas Schroeder's method accumulates energy from the tail to the beginning of the impulse response. Another characteristic of the new energy decay extraction method is its independence from the estimation of the background noise. Finally, a statistical analysis is performed on the T30 and EDT values calculated from the impulse response measurements in the Italian historical theatres. The aim of this evaluation is to determine whether a subset of measurements can be considered representative for a complete characterization of these opera houses.
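For reference, the classic Schroeder backward integration that the new method is compared against fits in a few lines (shown here in Python rather than the thesis' Matlab toolbox; the exponentially decaying impulse response is synthetic):

```python
import numpy as np

def schroeder_decay(h):
    """Energy decay curve by Schroeder's backward integration:
    EDC(t) = 10*log10( int_t^inf h^2 / int_0^inf h^2 )."""
    e = np.cumsum(h[::-1] ** 2)[::-1]
    return 10.0 * np.log10(e / e[0])

def decay_time(edc, fs, lo=-5.0, hi=-35.0):
    """Reverberation time from a line fitted to the EDC between lo and
    hi dB, extrapolated to 60 dB of decay (lo=-5, hi=-35 gives T30;
    lo=0, hi=-10 gives EDT)."""
    t = np.arange(len(edc)) / fs
    m = (edc <= lo) & (edc >= hi)
    slope, _ = np.polyfit(t[m], edc[m], 1)
    return -60.0 / slope

# synthetic impulse response with a 1.2 s reverberation time
fs, T = 48_000, 1.2
t = np.arange(int(2 * fs)) / fs
h = np.random.randn(t.size) * np.exp(-6.91 * t / T)
edc = schroeder_decay(h)
print(f"T30 = {decay_time(edc, fs):.2f} s, "
      f"EDT = {decay_time(edc, fs, lo=0.0, hi=-10.0):.2f} s")
```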

Relevance:

20.00%

Publisher:

Abstract:

This dissertation deals with the design and characterization of novel reconfigurable silicon-on-insulator (SOI) devices to filter and route optical signals on-chip. The design is carried out through circuit simulations based on basic circuit elements (Building Blocks, BBs), in order to prove the feasibility of an approach that allows the design of Photonic Integrated Circuits (PICs) to move toward the system level. CMOS compatibility and large integration scale make SOI one of the most promising materials for realizing PICs. The concepts of the generic foundry and of BB-based circuit simulations for design are emerging as a solution to reduce costs and increase circuit complexity. To validate the BB-based approach, some of the most important BBs are developed first. A novel tunable coupler is also presented and demonstrated to be a valuable alternative to the known solutions. Two novel multi-element PICs are then analysed: a narrow-linewidth single-mode resonator and a passband filter with widely tunable bandwidth. Extensive circuit simulations are carried out to determine their performance, taking fabrication tolerances into account. The first PIC is based on two Grating Assisted Couplers in a ring resonator (RR) configuration. It is shown that a trade-off between performance, resonance bandwidth and device footprint has to be made. The device could be employed to realize reconfigurable add-drop de/multiplexers. Sensitivity with respect to fabrication tolerances and spurious effects is, however, observed. The second PIC is based on an unbalanced Mach-Zehnder interferometer loaded with two RRs. Overall good performance and robustness to fabrication tolerances and nonlinear effects have confirmed its applicability to the realization of flexible optical systems. The simulated and measured device behaviour is shown to be in agreement, thus demonstrating the viability of a BB-based approach to the design of complex PICs.
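As an illustration of what a circuit-level Building Block encapsulates, the textbook all-pass ring-resonator response can be evaluated directly from the self-coupling and round-trip parameters. The values below are illustrative, not those of the thesis devices.

```python
import numpy as np

def all_pass_ring(wl, radius=10e-6, n_eff=2.4, t=0.95, a=0.98):
    """Field transmission of an all-pass ring resonator.

    wl : wavelength array, m
    t  : self-coupling coefficient of the directional coupler
    a  : single-pass (round-trip) amplitude transmission
    """
    L = 2 * np.pi * radius                 # ring circumference
    phi = 2 * np.pi * n_eff * L / wl       # round-trip phase
    return (t - a * np.exp(1j * phi)) / (1 - t * a * np.exp(1j * phi))

wl = np.linspace(1.54e-6, 1.56e-6, 4001)
T = np.abs(all_pass_ring(wl)) ** 2         # power transmission
print(f"min transmission {T.min():.3f} at {wl[T.argmin()] * 1e9:.2f} nm")
```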

Relevance:

20.00%

Publisher:

Abstract:

I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces, on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two observation volumes, laterally shifted in the flow direction and created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information on the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing on hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task, because the complexity of the problem makes it impossible to derive the analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data. I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method to retrieve the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PCs and massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of an aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while on a hydrophobic surface some slippage may exist. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
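The core readout idea, inferring a transit time from the lag of the cross-correlation peak of the two intensity traces and converting it to a velocity via the known shift between the observation volumes, can be sketched as below. This is schematic only (the actual analysis fits Brownian Dynamics simulations to the full correlation curves), and all traces and numbers are synthetic.

```python
import numpy as np
from scipy.signal import correlate

def velocity_from_xcorr(i1, i2, fs, volume_shift):
    """Flow velocity from the peak lag of the cross-correlation of two
    fluorescence traces recorded in laterally shifted volumes."""
    i1 = i1 - i1.mean()
    i2 = i2 - i2.mean()
    xc = correlate(i2, i1, mode="full", method="fft")
    lag = np.argmax(xc) - (len(i1) - 1)    # samples, >0 if i2 trails i1
    return volume_shift / (lag / fs)

# toy traces: the downstream volume sees the same bursts 2 ms later
fs, n = 100_000, 100_000
rng = np.random.default_rng(3)
bursts = (rng.random(n) < 1e-4).astype(float)
i1 = np.convolve(bursts, np.ones(50), mode="same") + 0.1 * rng.random(n)
i2 = np.roll(i1, int(0.002 * fs)) + 0.1 * rng.random(n)
v = velocity_from_xcorr(i1, i2, fs, volume_shift=1e-6)   # 1 um shift
print(f"v = {v * 1e6:.0f} um/s")
```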

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a possible method to calculate sea level variation using geodetic-quality Global Navigation Satellite System (GNSS) receivers. Three antennas are used, two small antennas and a choke ring antenna, analyzing only Global Positioning System signals. The main goal of the thesis is to test a modified antenna set-up. In particular, measurements obtained by tilting one antenna to face the horizon are compared to measurements obtained from antennas looking upward. The location of the experiment is a coastal environment near the Onsala Space Observatory in Sweden. Sea level variations are obtained using periodogram analysis of the SNR signal and compared to a synthetic gauge record generated from two independent tide gauges. The choke ring antenna provides poor results, with an RMS around 6 cm and a correlation coefficient of 0.89. The smaller antennas provide correlation coefficients around 0.93. The antenna pointing upward presents an RMS of 4.3 cm and the one pointing at the horizon an RMS of 6.7 cm. Notable variation in the statistical parameters is found when the length of the analyzed interval is modified. In particular, doubts are raised about the reliability of certain scattered data. No relation is found between the accuracy of the method and weather conditions. Possible methods to enhance the available data are investigated, and correlation coefficients above 0.97 can be obtained with the small antennas when sacrificing data points. Hence, the results provide evidence of the suitability of SNR signal analysis for sea level variation in coastal environments, even in the case of adverse weather conditions. In particular, tilted configurations provide results comparable with upward-looking geodetic antennas. An SNR signal simulator is also tested to investigate its performance and usability. Various configurations are analyzed in combination with the periodogram procedure used to calculate the height of the reflectors. Consistency between the simulated and received data is found, and the overall accuracy of the height calculation program is found to be around 5 mm for input heights below 5 m. The procedure is thus found to be suitable for analyzing the data provided by the GNSS antennas at Onsala.
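The periodogram step rests on a standard GNSS reflectometry relation: for a reflecting surface at vertical distance h below the antenna, the detrended SNR oscillates versus sin(elevation) with spatial frequency 2h/λ, so the periodogram peak gives the reflector height. A minimal sketch on synthetic data (not Onsala measurements):

```python
import numpy as np
from scipy.signal import lombscargle

lam = 0.1903                                  # GPS L1 wavelength, m
h_true = 4.3                                  # reflector height, m

# synthetic detrended SNR arc over 5-25 degrees elevation
e = np.deg2rad(np.linspace(5.0, 25.0, 800))
x = np.sin(e)
snr = np.cos(4.0 * np.pi * h_true / lam * x + 0.7)

# scan candidate heights; angular frequency vs. sin(e) is 4*pi*h/lambda
heights = np.linspace(0.5, 10.0, 2000)
power = lombscargle(x, snr, 4.0 * np.pi * heights / lam)
print(f"estimated height: {heights[np.argmax(power)]:.2f} m")
```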

Relevance:

20.00%

Publisher:

Abstract:

In this work, thin films of Heusler compounds were prepared and their transport properties investigated.

The anomalous Hall effect (AHE) is of particular interest here. It is a transport effect that has been known for a long time but is still not completely understood. Most published theoretical works concentrate on the influence of one particular contribution to the AHE, yet actually measured experimental data often cannot be reconciled with idealized assumptions. The present work discusses the results obtained from measurements on materials with low residual resistivity.

Heusler compounds were investigated here as prototypical materials. As materials with a complex topology of the Fermi surface, they clearly exhibit the influence of defects and of disorder in the crystal structure.

By using films with different degrees of disorder, different scattering mechanisms can be distinguished. For Co$_{2}$FeSi$_{0.6}$Al$_{0.4}$ and Co$_{2}$FeGa$_{0.5}$Ge$_{0.5}$, a positive AHE appears for B2-type disorder and for induced temperature-dependent scattering, whereas DO$_{3}$-type disorder, together with other possible intrinsic contributions, produces a negative effect.

In addition, the magneto-optical Kerr effect (MOKE) of these compounds was investigated. For this purpose, first-order contributions were analyzed qualitatively as a function of the intrinsic and extrinsic parameters. The influence of crystalline order on second-order contributions to the MOKE signal is also addressed.

Furthermore, thin films of the Heusler compound Co$_{2}$MnAl were produced on MgO and Si substrates (both (100)) by radio-frequency magnetron sputtering. The composition as well as the magnetic and transport properties were investigated systematically with respect to different deposition conditions.

In particular, the AHE resistivity shows an extraordinary temperature-independent behaviour in a range of moderate magnetic field strengths from 0 to 0.6 T. For this purpose, the off-diagonal transport was analyzed at temperatures up to 300 °C. The data demonstrate the suitability of the material for Hall sensors even above room temperature.

Recently, the spin Seebeck effect (SSE) was discovered. This effect from the field of spin caloritronics generates a "spin voltage" due to a temperature gradient in magnetic materials. Preliminary measurements of the SSE in Ni$_{80}$Fe$_{20}$ and in Heusler compounds are presented here.
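A standard step in analysing the off-diagonal transport is the separation of ordinary and anomalous Hall contributions: above magnetic saturation, ρ_xy(B) = R₀B + ρ_AHE, so a linear fit to the high-field data yields the ordinary Hall coefficient R₀ (slope) and the anomalous term (intercept). The sketch below runs on synthetic data with illustrative magnitudes.

```python
import numpy as np

# synthetic Hall resistivity above saturation (Ohm*m), with noise
B = np.linspace(1.0, 5.0, 50)                       # field, T
rng = np.random.default_rng(7)
rho_xy = 2e-11 * B + 1.5e-9 + 1e-11 * rng.standard_normal(B.size)

R0, rho_ahe = np.polyfit(B, rho_xy, 1)              # slope, intercept
print(f"R0 = {R0:.2e} Ohm*m/T, rho_AHE = {rho_ahe:.2e} Ohm*m")
```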

Relevance:

20.00%

Publisher:

Abstract:

Understanding liquid flow in the vicinity of solid surfaces is crucial to the development of technologies to reduce drag. One possibility to infer flow properties at the liquid-solid interface is to compare experimental results to solutions of the Navier-Stokes equations assuming either the no-slip boundary condition (BC) or the slip BC. There is no consensus in the literature about which BC should be used to model the flow of aqueous solutions over hydrophilic surfaces. Here, the colloidal probe technique is used to systematically address this issue, measuring the forces acting during the drainage of water over a surface. The results show that experimental variables, especially the cantilever spring constant, lead to the discrepancy observed in the literature. Two different parameters, calculated from experimental variables, could be used to separate the data obtained in this work and those reported in the literature into two groups: one explained by the no-slip BC, and another by the slip BC. The observed residual slippage is a function of instrumental variables, showing a trend incompatible with the available physical justifications. Consequently, no-slip is the more appropriate BC. The parameters can be used to avoid situations where the no-slip BC is not satisfied.
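The comparison described above is typically made against the lubrication-theory drainage force: the no-slip Reynolds/Taylor force, optionally multiplied by Vinogradova's slip correction f*. The sketch below assumes equal slip length b on both surfaces and uses illustrative parameter values; it is a model sketch, not the thesis' fitting code.

```python
import numpy as np

def drainage_force(h, R, eta, v, b=0.0):
    """Sphere-plate hydrodynamic drainage force in the lubrication
    regime. b = 0 reproduces the no-slip BC; b > 0 applies
    Vinogradova's correction for equal slip on both surfaces."""
    f_noslip = 6.0 * np.pi * eta * R**2 * v / h
    if b == 0.0:
        return f_noslip
    x = 6.0 * b / h
    f_star = (2.0 / x) * ((1.0 + 1.0 / x) * np.log1p(x) - 1.0)
    return f_noslip * f_star

eta, R, v = 1e-3, 10e-6, 1e-6        # water, 10 um probe, 1 um/s approach
for h in (10e-9, 50e-9, 200e-9):
    F0 = drainage_force(h, R, eta, v)
    Fb = drainage_force(h, R, eta, v, b=15e-9)
    print(f"h = {h*1e9:5.0f} nm   no-slip {F0*1e12:8.2f} pN"
          f"   b = 15 nm {Fb*1e12:8.2f} pN")
```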

Relevance:

20.00%

Publisher:

Abstract:

In hadronic collisions, most events with large momentum transfer produce pairs of high-energy jets. Their production and properties can be predicted with high accuracy by perturbation theory in quantum chromodynamics (QCD). The production of bottom quarks in such collisions can be used as a benchmark to test the predictions of QCD, since these quarks reflect the dynamics of the production process at scales where a perturbative calculation is possible without restrictions. Owing to the large mass of particles containing a bottom quark, the measured hadronic final state retains most of the information about the quark production process. Because of their large production rate, bottom quarks and their decay products play an important role as background in many analyses, in particular in searches for new physics. Given their prominent position in the third quark generation, signs of new phenomena could show up more strongly than for the lighter quarks. The ratio of the production of jets containing such bottom quarks, known as $b$-jets, to all detected jets is therefore an important indicator for new massive objects. In this work, the production rate and the correlations of pairs of $b$-jets are measured, and the invariant mass spectrum of the $b$-jet pairs is searched for first hints of a new massive particle not contained in the Standard Model. At the Large Hadron Collider (LHC), two proton beams collide at a centre-of-mass energy of $\sqrt{s} = 7$ TeV, producing many such $b$-jet pairs. This analysis uses the collisions recorded by the ATLAS detector, corresponding to an integrated luminosity of 34 pb$^{-1}$ of usable data. $b$-jets are identified by means of their long lifetime and their reconstructed charged decay products. For this analysis, the differences in behaviour between $b$-jets and jets originating from light objects, such as gluons and light quarks, must be taken into account. The energy scale of the $b$-jets is studied and the additional uncertainty in the jet energy measurement is determined. Detector effects in the jet reconstruction that are unique to $b$-jets are studied in order to evaluate this measurement independently of the detector, at hadron level. The measurement is then compared to next-to-leading-order predictions. The predictions turn out to be in agreement with the recorded data, from which one can conclude that the underlying production mechanism remains valid in this newly accessible energy regime at the LHC. However, first hints of shortcomings in the description of the properties of these events are also found. Furthermore, no evidence for a new resonance decaying into pairs of $b$-jets is found in the invariant mass spectrum up to about 1.7 TeV. Model-independent limits are calculated for the appearance of such a resonance with a Gaussian mass distribution.
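The quantity scanned for a resonance is the invariant mass of the $b$-jet pair, $m^2 = (E_1+E_2)^2 - |\vec{p}_1+\vec{p}_2|^2$. A toy computation from two jet four-momenta (illustrative values in GeV):

```python
import numpy as np

def dijet_mass(p1, p2):
    """Invariant mass of a jet pair from (E, px, py, pz) four-momenta."""
    e = p1[0] + p2[0]
    p = np.array(p1[1:]) + np.array(p2[1:])
    return np.sqrt(max(e**2 - p @ p, 0.0))

jet1 = (450.0, 420.0, 60.0, 140.0)    # hypothetical b-jet, GeV
jet2 = (380.0, -350.0, -40.0, 120.0)
print(f"m_bb = {dijet_mass(jet1, jet2):.0f} GeV")
```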

Relevance:

20.00%

Publisher:

Abstract:

The behaviour of a polymer depends strongly on the length and time scale as well as on the temperature at which it is probed. In this work, I describe investigations of polymer surfaces using scanning probe microscopy with heatable probes. With these probes, surfaces can be heated within seconds down to microseconds. I introduce experiments for the local and fast determination of glass transition and melting temperatures. I developed a method which allows the determination of glass transition and melting temperatures on films with thicknesses below 100 nm: a background measurement on the substrate is performed, and the resulting curve is subtracted from the measurement on the polymer film. The differential measurement on polystyrene films with thicknesses between 35 nm and 160 nm showed characteristic signals at 95 ± 1 °C, in accordance with the glass transition of polystyrene. Pressing heated probes into polymer films causes plastic deformation. Nanometre-sized deformations are currently being investigated in novel concepts for high-density data storage. A suitable medium for such a storage system has to be easily indentable on the one hand, but on the other hand it also has to be very stable towards surface-induced wear. For developing such a medium I investigated a new approach: a comparably soft material, namely polystyrene, was protected with a thin but very hard layer made of plasma-polymerized norbornene. The resulting bilayered media were tested for surface stability and deformability. I showed that the bilayered material combines the deformability of polystyrene with the surface stability of the plasma polymer, and that the material therefore is a very good storage medium. In addition, we investigated the glass transition temperature of polystyrene at timescales of 10 µs and found it to be approx. 220 °C. The increase of this characteristic temperature of the polymer results from the short time at which the polymer was probed and reflects the well-known time-temperature superposition principle. Heatable probes were also used for the characterization of silver azide filled nanocapsules. The use of heatable probes allowed the decomposition temperature of the capsules to be determined from a few nanograms of material. The measured decomposition temperatures ranged from 180 °C to 225 °C, in accordance with literature values. The investigation of small amounts of sample was necessary due to the limited availability of the material. Furthermore, investigating larger amounts of the capsules using conventional thermal gravimetric analysis could lead to contamination or even damage of the instrument. Besides the analysis of material parameters, I used the heatable probes for the local thermal decomposition of pentacene precursor material in order to form nanoscale conductive structures. Here, the thickness of the precursor layer was important for complete thermal decomposition. Another aspect of my work was the investigation of a redox-active polymer, poly-10-(4-vinylbenzyl)-10H-phenothiazine (PVBPT), for data storage. Data is stored by changing the local conductivity of the material through a voltage applied between tip and surface. The generated structures were stable for more than 16 h. It was shown that the presence of water is essential for successful patterning.
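The differential measurement described above can be illustrated schematically: subtract the curve recorded on the bare substrate from the curve recorded on the film and locate the step at the glass transition. The curves below are synthetic; the real observable is a probe signal versus probe temperature.

```python
import numpy as np

T = np.linspace(40.0, 160.0, 600)            # probe temperature, deg C
substrate = 0.002 * T + 0.05                 # featureless background (toy)
film = substrate + 0.12 / (1.0 + np.exp(-(T - 95.0) / 3.0))  # step at Tg

diff = film - substrate                      # differential measurement
tg = T[np.argmax(np.gradient(diff, T))]      # steepest rise of the step
print(f"estimated Tg ~ {tg:.0f} deg C")
```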

Relevance:

20.00%

Publisher:

Abstract:

The availability of a high-intensity antiproton beam with momentum up to 15 GeV/c at the future FAIR will open a unique opportunity to investigate wide areas of nuclear physics with the $\overline{P}$ANDA (anti$\overline{P}$roton ANnihilations at DArmstadt) detector. Part of these investigations concerns the electromagnetic form factors of the proton in the time-like region and the study of the Transition Distribution Amplitudes, for which feasibility studies have been performed in this thesis.

Moreover, simulations to study the efficiency and the energy resolution of the backward endcap of the electromagnetic calorimeter of $\overline{P}$ANDA are presented. This detector is crucial especially for the reconstruction of processes like $\bar{p}p \rightarrow e^+ e^- \pi^0$, investigated in this work. Different arrangements of dead material were studied. The results show that both the efficiency and the energy resolution of the backward endcap of the electromagnetic calorimeter fulfill the requirements for the detection of backward particles, and that this detector is necessary for the reconstruction of the channels of interest.

The study of the annihilation channel $\bar{p}p \rightarrow e^+ e^-$ will improve the knowledge of the electromagnetic form factors in the time-like region and will help to understand their connection with the electromagnetic form factors in the space-like region. In this thesis, the feasibility of a measurement of the $\bar{p}p \rightarrow e^+ e^-$ cross section with $\overline{P}$ANDA is studied using Monte Carlo simulations. The major background channel $\bar{p}p \rightarrow \pi^+ \pi^-$ is taken into account. The results show a $10^9$ background suppression factor, which assures a sufficiently clean signal with less than 0.1% background contamination. The signal can be measured with an efficiency greater than 30% up to $s = 14$ (GeV/c)$^2$. The electromagnetic form factors are extracted from the reconstructed signal and the corrected angular distribution. Above this $s$ limit, the low cross section will not allow the direct extraction of the electromagnetic form factors. However, the total cross section can still be measured, and an extraction of the electromagnetic form factors is possible under certain assumptions on the ratio between the electric and magnetic contributions.

The Transition Distribution Amplitudes are new non-perturbative objects describing the transition between a baryon and a meson. They are accessible in hard exclusive processes like $\bar{p}p \rightarrow e^+ e^- \pi^0$. The study of this process with $\overline{P}$ANDA will test the Transition Distribution Amplitude approach. This work includes a feasibility study for measuring this channel with $\overline{P}$ANDA. The main background reaction here is $\bar{p}p \rightarrow \pi^+ \pi^- \pi^0$. A background suppression factor of $10^8$ has been achieved while keeping the signal efficiency above 20%.

Part of this work has been published in the European Physical Journal A 44, 373-384 (2010).
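The form-factor extraction relies on the one-photon-exchange angular distribution $d\sigma/d\cos\theta \propto |G_M|^2(1+\cos^2\theta) + (4m_p^2/s)\,|G_E|^2\sin^2\theta$. The toy fit below assumes $s = 7$ (GeV/c)$^2$ and synthetic data; it only illustrates the extraction principle.

```python
import numpy as np
from scipy.optimize import curve_fit

MP2 = 0.938**2                                # proton mass squared, GeV^2
S = 7.0                                       # assumed s, (GeV/c)^2

def ang_dist(cos_t, gm2, ge2):
    """One-photon-exchange angular distribution (arbitrary norm)."""
    return gm2 * (1.0 + cos_t**2) + (4.0 * MP2 / S) * ge2 * (1.0 - cos_t**2)

rng = np.random.default_rng(5)
cos_t = np.linspace(-0.8, 0.8, 20)
data = ang_dist(cos_t, 1.0, 1.5) * (1.0 + 0.05 * rng.standard_normal(20))

(gm2, ge2), _ = curve_fit(ang_dist, cos_t, data, p0=(1.0, 1.0))
print(f"|GM|^2 = {gm2:.2f}, |GE|^2 = {ge2:.2f}, "
      f"R = |GE|/|GM| = {np.sqrt(ge2 / gm2):.2f}")
```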

Relevance:

20.00%

Publisher:

Abstract:

Atmospheric aerosol particles affect humans and the environment in many ways. A precise characterization of the particles helps to understand their effects and to assess the consequences. Particles can be characterized by their size, their shape and their chemical composition. Laser ablation mass spectrometry makes it possible to determine the size and chemical composition of individual aerosol particles. In this work, SPLAT (Single Particle Laser Ablation Time-of-flight mass spectrometer) was further developed for the improved analysis of atmospheric aerosol particles in particular. The aerosol inlet was optimized to transfer the widest possible particle size range (80 nm - 3 µm) into SPLAT and to focus the particles into a narrow beam. A new description of the relation between particle size and particle velocity in vacuum was found. The alignment of the inlet was automated using stepper motors. The optical detection of the particles was improved so that particles smaller than 100 nm can be registered. Building on the optical detection and the automatic tilting of the inlet, a new method for characterizing the particle beam was developed. The control electronics of SPLAT were improved so that the maximum analysis frequency is limited only by the ablation laser, which can ablate at a rate of at most about 10 Hz. Optimizing the vacuum system reduced the ion loss in the mass spectrometer by a factor of 4.

Besides the hardware development of SPLAT, a large part of this work consisted of designing and implementing a software solution for the analysis of the raw data acquired with SPLAT. CRISP (Concise Retrieval of Information from Single Particles) is a software package based on IGOR PRO (Wavemetrics, USA) that allows an efficient evaluation of the single-particle raw data. CRISP contains a newly developed algorithm for the automatic mass calibration of every single mass spectrum, including the suppression of noise and of problems with signals that exhibit intense tailing. CRISP provides methods for the automatic classification of the particles. Implemented are k-means, fuzzy-c-means and a form of hierarchical clustering based on a minimum spanning tree. CRISP offers the possibility to pre-process the data so that the automatic classification of the particles runs faster and the results are of higher quality. In addition, CRISP can easily sort particles according to predefined criteria. The data structure and infrastructure underlying CRISP were designed with maintenance and extensibility in mind.

During this work, SPLAT was successfully deployed in several campaigns, and the capabilities of CRISP were demonstrated on the acquired data sets.

SPLAT is now able to operate efficiently in the field for the characterization of the atmospheric aerosol, while CRISP enables a fast and targeted evaluation of the data.
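Of the classification schemes listed above, k-means is the simplest. The sketch below clusters toy "spectra" in Python purely to illustrate the principle; CRISP itself is an IGOR PRO package with additional pre-processing and calibration steps.

```python
import numpy as np

def kmeans(spectra, k, n_iter=50, seed=0):
    """Plain k-means on an (n_particles, n_channels) spectrum matrix."""
    rng = np.random.default_rng(seed)
    centers = spectra[rng.choice(len(spectra), k, replace=False)]
    for _ in range(n_iter):
        # assign each spectrum to the nearest class centre
        d = np.linalg.norm(spectra[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = spectra[labels == j].mean(axis=0)
    return labels, centers

# toy data: two particle classes with marker peaks at different m/z
rng = np.random.default_rng(1)
a = rng.random((100, 100)) + 3.0 * np.eye(100)[23]   # peak at channel 23
b = rng.random((100, 100)) + 3.0 * np.eye(100)[39]   # peak at channel 39
labels, _ = kmeans(np.vstack([a, b]), k=2)
print(np.bincount(labels))                           # ideally [100 100]
```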