940 results for Lagrangian particle tracking method
Abstract:
Reactive halogen compounds are known to play an important role in a wide variety of atmospheric processes, such as controlling the atmospheric oxidation capacity and driving coastal new particle formation. In this work, novel analytical approaches combining diffusion denuder/impinger sampling techniques with gas chromatographic-mass spectrometric (GC-MS) determination are developed to measure activated chlorine compounds (HOCl and Cl2), activated bromine compounds (HOBr, Br2, BrCl, and BrI), activated iodine compounds (HOI and ICl), and molecular iodine (I2). The denuder/GC-MS methods have been applied to field measurements in the marine boundary layer (MBL). High mixing ratios (of the order of 100 ppt) of activated halogen compounds and I2 are observed in the coastal MBL in Ireland, which explains the observed ozone destruction. The emission of I2 is found to correlate inversely with tidal height and positively with the levels of O3 in the surrounding air. In addition, the release is found to be dominated by the algae species composition and biomass density, supporting the hot-spot hypothesis of atmospheric iodine chemistry. The observations of elevated I2 concentrations substantially support the existence of higher concentrations of littoral iodine oxides and thus their connection to the strong ultra-fine particle formation events in the coastal MBL.
Abstract:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as were found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system are in qualitative agreement with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random-close-packing diagram. Concerning dipolar systems, a comparison between the experimental partial structure factors and those from computer simulations demonstrates that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D. For such mixtures of point particles it is shown that MCT always predicts a plasticization effect, i.e. a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. The predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
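For orientation, the schematic structure of the equations being generalized can be written, for a one-component liquid, in the standard textbook form (the thesis works with the matrix-valued multicomponent analogue in arbitrary spatial dimension):

```latex
\ddot{\phi}_q(t) + \nu_q\,\dot{\phi}_q(t) + \Omega_q^2\,\phi_q(t)
  + \Omega_q^2 \int_0^t m_q(t-t')\,\dot{\phi}_q(t')\,\mathrm{d}t' = 0,
\qquad
m_q(t) = \mathcal{F}_q\big[\{\phi_k(t)\}\big],
```

where phi_q(t) is the normalized density correlator at wave number q and the memory kernel m_q is a bilinear functional of the correlators; glass transition lines are located where the long-time limit of phi_q jumps discontinuously from zero to a finite value.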
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes, to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed and computed observables. Disturbances in either the observed or the computed observables degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, under some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise are proposed. Chapter 3 describes the development of the multiarc library, which makes it possible to perform a multi-arc orbit determination with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments at Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution statistically incompatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
Abstract:
The Standard Model of elementary particle physics was developed to describe the fundamental particles which constitute matter and the interactions between them. The Large Hadron Collider (LHC) at CERN in Geneva was built to solve some of the remaining open questions in the Standard Model and to explore physics beyond it, by colliding two proton beams at world-record centre-of-mass energies. The ATLAS experiment is designed to reconstruct particles and their decay products originating from these collisions. The precise reconstruction of particle trajectories plays an important role in the identification of particle jets which originate from bottom quarks (b-tagging). This thesis describes the stepwise commissioning of the ATLAS track reconstruction and b-tagging software and one of the first measurements of the b-jet production cross section in pp collisions at sqrt(s)=7 TeV with the ATLAS detector. The performance of the track reconstruction software was studied in great detail, first using data from cosmic ray showers and then collisions at sqrt(s)=900 GeV and 7 TeV. The good understanding of the track reconstruction software allowed a very early deployment of the b-tagging algorithms. First studies of these algorithms and the measurement of the b-tagging efficiency in the data are presented; the results agree well with predictions from Monte Carlo simulations. The b-jet production cross section was measured with the 2010 dataset recorded by the ATLAS detector, employing muons in jets to estimate the fraction of b-jets. The measurement is in good agreement with the Standard Model predictions.
Abstract:
To date, forensic investigations of explosions offer only limited means of tracing the explosives used, since the material is as a rule destroyed in the explosion. The tracing of explosives is to be facilitated by means of identification tagging substances. These constitute a unique code that can be recovered and identified even after a detonation. The unique information assigned to the code can thus be read out, providing the police with further leads in their investigation.
The aim of the present work is to investigate the behavior of selected rare earth elements (REE) during an explosion. An identification tag based on lanthanide phosphates offers the possibility of combining different lanthanides within a single particle, whereby a large number of codes can be generated. A change in the original composition of the code can thus be traced very well, even after an explosion, by analyzing a single particle, allowing the suitability of the tagging substance to be assessed. A further objective is to verify the applicability of inductively coupled plasma mass spectrometry (ICP-MS) and of particle analysis by scanning electron microscopy (SEM) for the analysis of the detonated identification tagging substances.
In summary, the results of the ICP-MS analysis and the SEM particle analysis point to a fractionation of the investigated lanthanides, or of their reaction products, after the explosion, depending on their thermal stability. The findings show an enrichment of the lanthanides with higher temperature resistance in larger particles, which implies an enrichment of lanthanides with lower temperature resistance in smaller particles. This can be partly explained by a fractionation process governed by the temperature stability of the lanthanides or their reaction products. The mechanisms underlying the fractionation, and their mutual influence during an explosion, could not be conclusively clarified within the scope of this work.
The general applicability, and the in some circumstances necessary complementary use, of the two methods ICP-MS and SEM particle analysis is demonstrated in this work. ICP-MS, with its large investigated sample area and high accuracy, is a good method for characterizing the concentration ratios of the investigated lanthanides. SEM particle analysis, in contrast, enables an unambiguous differentiation of the element association per particle in the case of contamination of the samples with other lanthanide-containing particles. Unlike ICP-MS, it can thus provide information about the nature and composition of the contamination.
Within the investigations performed, the sampling technique applied for ICP-MS represented an ideal kind of sampling. On other surfaces, however, it could lead to systematically biased results as a consequence of the fractionation into different particle sizes. To ensure the general applicability of ICP-MS for the analysis of detonated lanthanides, further detonations should be carried out on different sample surfaces and, if necessary, further sampling, digestion, and enrichment procedures should be evaluated.
Abstract:
Among all possible realizations of quark and antiquark assemblies, the nucleon (the proton and the neutron) is the most stable of all hadrons and has consequently been the subject of intensive studies. Its mass, shape, radius, and more complex representations of its internal structure have been measured for several decades using different probes. The proton (spin 1/2) is described by the electric (GE) and magnetic (GM) form factors, which characterize its internal structure. The simplest way to measure the proton form factors consists in measuring the angular distribution of electron-proton elastic scattering, accessing the so-called space-like region where q² < 0. Using the crossed channel antiproton proton <--> e+e-, one accesses another kinematical region, the so-called time-like region where q² > 0. However, due to the antiproton proton <--> e+e- threshold q²_th, only the kinematical domain q² > q²_th > 0 is available. To access the unphysical region, one may use the antiproton proton --> pi0 e+e- reaction, where the pi0 takes away part of the system energy, allowing q² to be varied between q²_th and almost 0. This thesis aims to show the feasibility of such measurements with the PANDA detector, which will be installed on the new high-intensity antiproton ring at the FAIR facility in Darmstadt. To describe the antiproton proton --> pi0 e+e- reaction, a Lagrangian-based approach is developed. The 5-fold differential cross section is determined and related to linear combinations of hadronic tensors. Under the assumption of one-nucleon exchange, the hadronic tensors are expressed in terms of the two complex proton electromagnetic form factors. An extraction method is developed which provides access to the proton electromagnetic form factor ratio R = |GE|/|GM| and, for the first time in an unpolarized experiment, to the cosine of the phase difference. Such measurements have never been performed in the unphysical region up to now. Extended simulations were performed to show how the ratio R and the cosine can be extracted from the positron angular distribution. Furthermore, a model is developed for the antiproton proton --> pi0 pi+ pi- background reaction, considered the most dangerous one. The background-to-signal cross section ratio was estimated under different combinations of cuts on the particle identification information from the different detectors and on the kinematic fits. The background contribution can be reduced to the percent level or even less, with a corresponding signal efficiency ranging from a few % to 30%. The precision of the determination of the ratio R and of the cosine is estimated from the expected counting rates via the Monte Carlo method. A part of this thesis is also dedicated to more technical work: the study of a prototype of the electromagnetic calorimeter and the determination of its resolution.
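For context, in the physical time-like region the one-photon-exchange angular distribution of antiproton proton --> e+e- depends only on the moduli of the form factors; a standard form (general to the field, not specific to this thesis) is

```latex
\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta} \;\propto\;
|G_M|^{2}\,\left(1+\cos^{2}\theta\right) \;+\; \frac{1}{\tau}\,|G_E|^{2}\,\sin^{2}\theta,
\qquad \tau = \frac{q^{2}}{4 m_p^{2}},
```

so that R = |GE|/|GM| follows from the shape of the angular distribution. In the antiproton proton --> pi0 e+e- channel, the 5-fold differential cross section becomes sensitive also to the relative phase of GE and GM, which is why the cosine of the phase difference can be extracted from the positron angular distribution even without polarization.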
Abstract:
Detection, localization, and tracking of non-collaborative objects moving inside an area is of great interest to many surveillance applications. An ultra-wideband (UWB) multistatic radar is a good infrastructure for such anti-intruder systems, due to the high range resolution provided by the UWB impulse radio and the spatial diversity achieved with a multistatic configuration. Detection of targets, which are typically human beings, is a challenging task due to reflections from unwanted objects in the area, shadowing, antenna cross-talk, low transmit power, and the blind zones arising from intrinsic peculiarities of UWB multistatic radars. Hence, we propose more effective detection, localization, and clutter removal techniques for these systems. However, the majority of the thesis effort is devoted to the tracking phase, which is essential for improving the localization accuracy, predicting the target position, and filling in missed detections. Since UWB radars are not linear Gaussian systems, the widely used tracking filters, such as the Kalman filter, are not expected to provide satisfactory performance. Thus, we propose the Bayesian filter as an appropriate candidate for UWB radars. In particular, we develop tracking algorithms based on particle filtering, which is the most common approximation of Bayesian filtering, for both single- and multiple-target scenarios. We also propose some effective detection and tracking algorithms based on image processing tools. We evaluate the performance of our proposed approaches by numerical simulations. Moreover, we provide experimental results from channel measurements for tracking a person walking in an indoor area in the presence of significant clutter. We discuss the existing practical issues and address them by proposing more robust algorithms.
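To illustrate the particle-filtering approach, here is a minimal bootstrap (sampling-importance-resampling) filter for 2D position tracking from noisy range measurements. This is a sketch under stated assumptions: the receiver geometry, random-walk motion model, and noise levels are illustrative, not the thesis's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative multistatic geometry: three receiver positions (assumed, not from the thesis)
RX = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
SIGMA_R = 0.3   # range measurement noise std (m), assumed
SIGMA_Q = 0.2   # process noise std (m), assumed
N = 2000        # number of particles

def predict(particles):
    """Random-walk motion model: diffuse each particle by process noise."""
    return particles + rng.normal(0.0, SIGMA_Q, particles.shape)

def update(particles, weights, z):
    """Weight particles by the Gaussian likelihood of the observed ranges z."""
    d = np.linalg.norm(particles[:, None, :] - RX[None, :, :], axis=2)  # (N, n_rx)
    log_lik = -0.5 * np.sum(((z - d) / SIGMA_R) ** 2, axis=1)
    weights = weights * np.exp(log_lik - log_lik.max())                 # stable in log space
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic resampling to combat weight degeneracy."""
    idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(N)) / N)
    return particles[idx], np.full(N, 1.0 / N)

# Track a target walking across the area
truth = np.array([2.0, 3.0])
particles = rng.uniform(0, 10, (N, 2))
weights = np.full(N, 1.0 / N)
for step in range(50):
    truth = truth + np.array([0.1, 0.05])                      # true motion
    z = np.linalg.norm(RX - truth, axis=1) + rng.normal(0, SIGMA_R, len(RX))
    particles = predict(particles)
    weights = update(particles, weights, z)
    estimate = weights @ particles                              # posterior mean
    particles, weights = resample(particles, weights)
print("final estimate:", estimate, "truth:", truth)
```

Resampling after every update combats weight degeneracy, the standard failure mode of sequential importance sampling; this nonparametric posterior representation is what makes the filter suitable for the non-linear, non-Gaussian UWB measurement model.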
Abstract:
Since the beginning of human history, people have influenced their environment. Anthropogenic emissions change the composition of the atmosphere, with a growing impact on, among other things, atmospheric chemistry, the health of humans, flora, and fauna, and the climate. The rising number of huge, growing metropolises goes hand in hand with a spatial concentration of air pollutant emissions, affecting above all the air quality of the rural regions downwind. In this doctoral thesis, carried out within the MEGAPOLI project, the emission plume of the megacity Paris was investigated using the mobile aerosol research laboratory MoLa. MoLa is equipped with modern instruments of high time resolution for measuring the chemical composition and size distribution of aerosol particles as well as several trace gases. Mobile measurement strategies particularly suited to characterizing urban emissions were developed and applied. Cross-section drives through the plume and through atmospheric background air masses allowed both the determination of the structure and homogeneity of the plume and the calculation of the contribution of the urban emissions to the total atmospheric burden. Quasi-Lagrangian radial drives served to explore the spatial extent of the plume and the transformation processes of the advected pollutants. In combination with model calculations, the structure of the plume could be studied in greater depth. Flexible stationary measurements complemented the data set and also allowed comparison measurements with other measurement stations. Data from a fixed measurement station were additionally used to describe the aging of the organic particle fraction. The analysis of the mobile measurement data required the development of a new method for cleaning the data set of local interference. Furthermore, the possibilities, limits, and errors in the application of complex analysis programs for calculating the O/C ratio of the particles and for classifying the organic aerosol fraction were examined. A validation of different methods for determining the origin of air masses was also necessary for the analysis. The detailed investigation of the Paris emission plume showed that it can be identified by the enhancement of the concentrations of indicators of unprocessed air pollution relative to background values. Its rather homogeneous structure can mostly be described by a Gaussian shape in cross-section, with an exponential decay of the unprocessed pollutant concentrations with increasing distance from the city. Turbulent mixing with ambient air masses is mainly responsible for this. It could be shown that a clear oxidation of the organic aerosol takes place in the advected plume in summer; in winter, by contrast, this process was not observed during the measurements performed. In both seasons the plume consists mainly of soot and organic particle components in the PM1 size range, dominated by the sources traffic and cooking, with heating in addition during the cold season. Due to the urban emissions, the PM1 particle mass in the plume was on average 30% higher than the background value in summer and 10% higher in winter. Particularly strong enhancements were observed for polyaromatics, with a mean increase of 194% in summer and 131% in winter.
Seasonal differences were also found in the size distribution of the plume particles: in winter, in contrast to summer, no additional freshly nucleated small particles appeared, but only particles between about 10 nm and 200 nm that had grown by condensation and coagulation. The trace gas concentrations also differed, since chemical reactions depend on temperature and in some cases on radiation. Further possible applications of MoLa were demonstrated during a transfer drive from Germany to the Spanish Atlantic coast, resulting in a mapping of air quality along the route. It emerged that mainly urban agglomerations are affected by unprocessed air pollutants, whereas advected aged substances can influence any region. The investigation of air quality at sites with different exposure to anthropogenic sources extended this statement with an insight into the variation of air quality depending on, among other things, the weather situation and the proximity to emission sources. It could thus be shown that the measurement strategies and analysis methods developed are applicable not only to the investigation of the emission plume of a megacity but also to a variety of other scientific and environmental monitoring questions.
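In schematic terms (an illustrative parametrization, not a formula stated in the thesis), the plume shape described above corresponds to a background-plus-plume model of the form

```latex
C(x, y) \;\approx\; C_{\mathrm{bg}} \;+\; A\, e^{-x/\lambda}\,
\exp\!\left(-\frac{y^{2}}{2\sigma_y^{2}}\right),
```

with downwind distance x, crosswind coordinate y, plume amplitude A, decay length lambda, and a Gaussian width sigma_y set by turbulent mixing with ambient air.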
Abstract:
The fabrication of polymer solar cells from the aqueous phase is an attractive alternative to the conventional solvent-based formulation. The advantages of solar cells produced from aqueous solution lie in particular in the environmentally friendly production process and in the possibility of generating printable optoelectronic devices. The processability of hydrophobic semiconductors in an aqueous environment is achieved by dispersing the materials in the form of nanoparticles. The semiconductors are transferred into a dispersion via the solvent evaporation method. The idea of particle-based solar cells has been realized before, but a precise characterization of the particles and a comprehensive understanding of the entire fabrication process were lacking. The aim of this work is therefore to gain detailed insight into the production process of particle-based solar cells, to uncover possible weaknesses, and to eliminate them in order to improve future applications. Poly(3-hexylthiophene-2,5-diyl)/[6,6]-phenyl-C61-butyric acid methyl ester (P3HT/PCBM) was used as the donor/acceptor system for producing solar cells from aqueous dispersions. The investigations focused on the particle morphology on the one hand and on the generation of a suitable particle layer on the other; both parameters affect the solar cell efficiency. The morphology was determined both spectroscopically via photoluminescence measurements and visually by electron microscopy. In this way the particle morphology could be fully elucidated, revealing parallels to the structure of solvent-based solar cells. In addition, a dependence of the morphology on the preparation temperature was observed, which allows simple control of the particle structure. For the formation of the particle layer, direct as well as interface-mediated coating methods were employed. Of these techniques, however, only spin coating proved to be a viable method for transferring particles from the dispersion into a homogeneous film. Furthermore, the post-treatment of the particle layer by ethanol washing and thermal annealing was a focus of this work. Both measures had a positive effect on the efficiency of the solar cells and contributed decisively to improving the cells. Overall, the findings provide a detailed overview of the challenges that arise when using water-based dispersions. The requirements of particle-based solar cells could be laid out, enabling the fabrication of a solar cell with an efficiency of 0.53%. This result does not yet represent the optimum, however, and leaves room for further improvement.
Abstract:
This thesis deals with the development of a novel simulation technique for macromolecules in electrolyte solutions, with the aim of improving performance over current molecular-dynamics-based simulation methods. In solutions containing charged macromolecules and salt ions, it is the complex interplay of electrostatic interactions and hydrodynamics that determines the equilibrium and non-equilibrium behavior. However, the treatment of the solvent and dissolved ions makes up the major part of the computational effort, so an efficient modeling of both components is essential for the performance of a method. In the novel method we treat the solvent in a coarse-grained fashion and replace the explicit-ion description by a dynamic mean-field treatment. Hence we combine particle- and field-based descriptions in a hybrid method and thereby effectively solve the electrokinetic equations. The developed algorithm is tested extensively in terms of accuracy and performance, and suitable parameter sets are determined. As a first application we study charged polymer solutions (polyelectrolytes) in shear flow, with a focus on their viscoelastic properties; here we also include semidilute solutions, which are computationally demanding. Second, we study the electro-osmotic flow on superhydrophobic surfaces, where we perform a detailed comparison to theoretical predictions.
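For reference, the electrokinetic equations that such a hybrid scheme effectively solves are, in one standard form (the notation here is assumed: ion concentrations c±, diffusivities D±, mobilities mu±, electrostatic potential Phi, solvent velocity u), the coupled Nernst-Planck, Poisson, and Stokes equations:

```latex
\partial_t c_\pm = -\nabla\cdot\big(-D_\pm\,\nabla c_\pm \mp \mu_\pm\, c_\pm\,\nabla\Phi + c_\pm\,\mathbf{u}\big), \\
\nabla^{2}\Phi = -\frac{e\,(c_+ - c_-)}{\varepsilon}, \\
\eta\,\nabla^{2}\mathbf{u} = \nabla p + e\,(c_+ - c_-)\,\nabla\Phi,
\qquad \nabla\cdot\mathbf{u} = 0.
```

That is, diffusive-electrophoretic-advective ion transport, Poisson's equation for the potential, and low-Reynolds-number Stokes flow driven by the electrostatic body force; the dynamic mean-field treatment replaces explicit ions by the concentration fields c±.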
Abstract:
Since its discovery, the top quark has been one of the most intensively investigated subjects in particle physics. The aim of this thesis is the reconstruction of hadronically decaying top quarks with high transverse momentum (boosted tops) with the Template Overlap Method (TOM). Because of the high energy, the decay products of boosted tops partially or totally overlap and are thus contained in a single large-radius jet (fat-jet). TOM compares the internal energy distribution of the candidate fat-jet to a sample of top decays obtained from MC simulation (templates). The algorithm is based on the definition of an overlap function, which quantifies the level of agreement between the fat-jet and the template, allowing an efficient discrimination of the signal from background contributions. A working point was chosen to obtain a signal efficiency close to 90% and a corresponding background rejection of 70%. The performance of TOM has been tested on MC samples in the muon channel and compared with previous methods from the literature. All the methods will be merged into a multivariate analysis to give a global top tagging, which will be included in a ttbar production differential cross section measurement performed on the data acquired in 2012 at sqrt(s)=8 TeV in the high-momentum region of phase space, where new physics processes could appear. Since its performance improves with increasing pT, the Template Overlap Method will play a crucial role in the next data taking at sqrt(s)=13 TeV, where almost all top quarks will be produced at high energy, making the standard reconstruction methods inefficient.
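In schematic form, such an overlap function can be built as a Gaussian comparison between the energies of the template partons and the fat-jet energy collected in cones around the template parton directions; a common form (assumed here for illustration, not necessarily the thesis's exact definition) is

```latex
Ov(j,\{f\}) \;=\; \max_{f \in \{f\}} \exp\!\left[ -\sum_{a=1}^{n}
\frac{1}{2\sigma_a^{2}} \left( E_a^{(f)} -
\sum_{i \in j} E_i\,\Theta\!\left(r - \Delta R_{i,a}\right) \right)^{2} \right],
```

where the maximization runs over the template sample {f}, E_a^(f) are the energies of the n template partons, the inner sum collects the energies E_i of the fat-jet constituents within a cone of radius r around template parton a, and sigma_a sets the tolerated energy mismatch. A fat-jet is tagged if Ov exceeds the working-point threshold.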
Abstract:
Future experiments in nuclear and particle physics are moving towards the high-luminosity regime in order to access rare processes. In this framework, particle detectors require high rate capability together with excellent timing resolution for precise event reconstruction. To achieve this, the development of dedicated Front-End Electronics (FEE) for detectors has become increasingly challenging and expensive. Thus, a current trend in R&D is towards flexible FEE that can be easily adapted to a great variety of detectors without impairing the required high performance. This thesis reports on a novel FEE for two different detector types: imaging Cherenkov counters and plastic scintillator arrays. The former requires high sensitivity and precision for the detection of single-photon signals, while the latter is characterized by the slower and larger signals typical of scintillation processes. The FEE design was developed using high-bandwidth preamplifiers and fast discriminators which provide Time-over-Threshold (ToT). The use of discriminators allowed for low power consumption, minimal dead time, and self-triggering capability, all fundamental aspects for high-rate applications. The output signals of the FEE are read out by a high-precision FPGA-based TDC system. A full characterization of the analogue signals under realistic conditions showed that the ToT information can be used in a novel way for charge measurements or walk corrections, thus improving the obtainable timing resolution. Detailed laboratory investigations proved the feasibility of the ToT method. The full readout chain was investigated in test experiments at the Mainz Microtron: counting rates of several MHz per channel were achieved, and a timing resolution of better than 100 ps after the ToT-based walk correction was obtained. Ongoing applications to fast Time-of-Flight counters and future developments of the FEE have also been investigated recently.
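Since the ToT-based walk correction is central here, a minimal sketch of how such a correction works may be useful. All calibration values and the 1/sqrt(ToT) ansatz below are illustrative assumptions, not the actual calibration of this FEE.

```python
import numpy as np

# Calibration data: measured leading-edge times (ns) vs. Time-over-Threshold (ns)
# for signals of known, fixed true arrival time. Values are purely illustrative:
# small signals cross the threshold late (time walk), so t_meas falls with ToT.
tot_cal = np.array([5.0, 8.0, 12.0, 18.0, 25.0, 35.0, 50.0])
t_meas_cal = np.array([2.10, 1.62, 1.31, 1.08, 0.93, 0.82, 0.74])

# Fit a simple empirical walk model t_walk(ToT) = a + b / sqrt(ToT)
# (a common ansatz for leading-edge discriminators, assumed here).
A = np.column_stack([np.ones_like(tot_cal), 1.0 / np.sqrt(tot_cal)])
a, b = np.linalg.lstsq(A, t_meas_cal, rcond=None)[0]

def walk_correct(t_meas, tot):
    """Subtract the ToT-predicted time walk from a measured leading-edge time."""
    return t_meas - (a + b / np.sqrt(tot))

# Example: two hits of very different amplitude line up after correction.
for t, tot in [(2.05, 5.5), (0.76, 48.0)]:
    print(f"raw={t:.2f} ns  ToT={tot:4.1f} ns  corrected={walk_correct(t, tot):+.3f} ns")
```

The key point is that ToT serves as a charge proxy delivered by the discriminator itself, so the correction needs no separate ADC branch.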
Abstract:
An Internet survey demonstrated the existence of problems related to intraoperative tracking camera set-up and alignment. It is hypothesized that these problems are a result of the limited field of view of today's optoelectronic camera systems, which is usually insufficiently large to keep the entire site of surgical action in view during an intervention. A method is proposed to augment a camera's field of view by actively controlling camera orientation, enabling it to track instruments as they are used intraoperatively. In an experimental study, an increase of almost 300% was found in the effective volume in which instruments could be tracked.
Abstract:
The penetration, translocation, and distribution of ultrafine and nanoparticles in tissues and cells are challenging issues in aerosol research. This article describes a set of novel quantitative microscopic methods for evaluating particle distributions within sectional images of tissues and cells by addressing the following questions: (1) Is the observed distribution of particles between spatial compartments random? (2) Which compartments are preferentially targeted by particles? (3) Does the observed particle distribution shift between different experimental groups? Each of these questions can be addressed by testing an appropriate null hypothesis. The methods all require observed particle distributions to be estimated by counting the number of particles associated with each defined compartment. For studying preferential labeling of compartments, the size of each of the compartments must also be estimated by counting the number of points of a randomly superimposed test grid that hit the different compartments. The latter provides information about the particle distribution that would be expected if the particles were randomly distributed, that is, the expected number of particles. From these data, we can calculate a relative deposition index (RDI) by dividing the observed number of particles by the expected number of particles. The RDI indicates whether the observed number of particles corresponds to that predicted solely by compartment size (for which RDI = 1). Within one group, the observed and expected particle distributions are compared by chi-squared analysis. The total chi-squared value indicates whether an observed distribution is random. If not, the partial chi-squared values help to identify those compartments that are preferential targets of the particles (RDI > 1). Particle distributions between different groups can be compared in a similar way by contingency table analysis. We first describe the preconditions and the way to implement these methods, then provide three worked examples, and finally discuss the advantages, pitfalls, and limitations of these methods.
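Since the procedure above is directly computational, a minimal sketch of the within-group RDI and chi-squared analysis may help (the compartment names, particle counts, and grid-point counts are invented for illustration and are not data from the article):

```python
import numpy as np
from scipy.stats import chi2

# Illustrative data: particle counts and test-grid point counts per compartment
compartments = ["cytosol", "mitochondria", "nucleus", "vesicles"]
observed = np.array([120, 45, 10, 80])   # particles counted per compartment
points = np.array([400, 60, 100, 40])    # grid points hitting each compartment

# Expected counts if particles were distributed at random, i.e. by compartment size
expected = observed.sum() * points / points.sum()

# Relative deposition index: RDI = observed / expected (RDI > 1 => preferential target)
rdi = observed / expected

# Chi-squared test of the null hypothesis "the particle distribution is random"
partial_chi2 = (observed - expected) ** 2 / expected
total_chi2 = partial_chi2.sum()
dof = len(compartments) - 1
p_value = chi2.sf(total_chi2, dof)

for name, r, c in zip(compartments, rdi, partial_chi2):
    print(f"{name:12s} RDI={r:5.2f}  partial chi2={c:7.2f}")
print(f"total chi2={total_chi2:.2f}, dof={dof}, p={p_value:.3g}")
```

A between-group comparison would analogously arrange the per-compartment counts of the groups in a contingency table (e.g. via scipy.stats.chi2_contingency).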
Abstract:
The tremendous application potential of nanosized materials stands in sharp contrast to a growing number of critical reports on their potential toxicity. Applications of in vitro methods to assess nanoparticles are severely limited by the difficulty of exposing cells of the respiratory tract directly to airborne engineered nanoparticles. We present a completely new approach in which lung cells are exposed to particles generated in situ by flame spray synthesis. Cerium oxide nanoparticles from a single run were produced and simultaneously delivered to the surface of cultured lung cells inside a glovebox. Separately collected samples were used to measure the hydrodynamic particle size distribution, shape, and agglomerate morphology. Cell viability was not impaired by the conditions of the glovebox exposure. The tightness of the lung cell monolayer, the mean total lamellar body volume, and the generation of oxidative DNA damage revealed a dose-dependent cellular response to the airborne engineered nanoparticles. The direct combination of production and exposure allows particle toxicity to be studied in a simple and reproducible way under environmental conditions.