12 results for REACTOR PHYSICS

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Relevance: 30.00%

Publisher:

Abstract:

Research in fundamental physics with the free neutron is one of the key tools for testing the Standard Model at low energies. The most prominent goals in this field are the search for a neutron electric dipole moment (EDM) and the measurement of the neutron lifetime. Significant improvements of the experimental performance using ultracold neutrons (UCN) require a reduction of both systematic and statistical errors. The development and construction of new UCN sources based on the superthermal concept is therefore an important step for the success of future fundamental physics with ultracold neutrons. A significant enhancement of the UCN densities available today depends strongly on an efficient use of a UCN converter material. The UCN converter is to be understood here as a medium which reduces the velocity of cold neutrons (CN, velocity of about 600 m/s) to the velocity of UCN (velocity of about 6 m/s). Several large research centers around the world are presently planning or constructing new superthermal UCN sources, mainly based on the use of either solid deuterium or superfluid helium as UCN converter. Thanks to an idea of Yu. Pokotilovsky, there is the opportunity to build competitive UCN sources also at small research reactors of the TRIGA type. These smaller facilities do not, of course, promise high UCN densities of several 1000 UCN/cm³, but they are able to provide densities around 100 UCN/cm³ for experiments. In the context of this thesis, it was possible to successfully demonstrate the feasibility of a superthermal UCN source at the tangential beamport C of the research reactor TRIGA Mainz. Based on a prototype for the future UCN source at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II) in Munich, which was planned and built in collaboration with the Technical University of Munich, further investigations and improvements were carried out and are presented in this thesis. In parallel, a second UCN source for the radial beamport D was designed and built. The commissioning of this new source is foreseen for spring 2010. At beamport D, with its higher thermal neutron flux, it should be possible to increase the available UCN density of 4 UCN/cm³ by at least one order of magnitude.
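
The quoted velocities correspond to kinetic energies in the meV range for cold neutrons and roughly 190 neV for UCN. A quick check, purely illustrative and with rounded constants:

```python
# Quick check of the quoted velocities (illustrative only; constants rounded).
M_NEUTRON = 1.675e-27   # neutron mass, kg
EV = 1.602e-19          # J per eV

def kinetic_energy_ev(v):
    """Kinetic energy E = 1/2 m v^2 of a neutron moving at v (m/s), in eV."""
    return 0.5 * M_NEUTRON * v**2 / EV

print(kinetic_energy_ev(600.0))   # ~1.9e-3 eV: cold neutrons
print(kinetic_energy_ev(6.0))     # ~1.9e-7 eV (~190 neV): ultracold neutrons
```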

Relevance: 20.00%

Publisher:

Abstract:

In the present work, laser atomic spectroscopy was performed for the first time on an element for which no atomic levels were previously known. The experiments were carried out on the element fermium (atomic number Z=100) using resonance ionization spectroscopy (RIS) in a buffer-gas cell. The isotope 255Fm with a half-life of 20.1 h was used, which was produced in the high-flux reactor at ORNL, Oak Ridge, USA. The Fm atoms, evaporated from an electrochemically prepared filament into the argon buffer gas at a temperature of 960(20)°C, were resonantly ionized by lasers in a two-step process. For this purpose, the light of an excimer-laser-pumped dye laser was scanned around a wavelength of 400 nm for the first excitation step, and part of the excimer (XeF) pump light at the wavelengths 351/353 nm was used for the non-resonant ionization step. The ions were extracted from the optical cell by electric fields and, after passing a quadrupole mass filter, detected mass-selectively with a channeltron detector. Despite the small sample of 2.7 x 10^10 atoms, two atomic resonances were found at energies of 25099.8(2) cm-1 and 25111.8(2) cm-1, and the saturation behaviour of these lines was measured. A theoretical model was developed which describes both the spectral profile of the saturation-broadened lines and the saturation curves. By fitting this model to the measured data, the partial transition rates into the 3H6 ground state were determined as Aki=3.6(7) x 10^6/s and Aki=3.6(6) x 10^6/s. A comparison of the level energies and transition rates with multi-configuration Dirac-Fock calculations suggests the spectroscopic classification of the observed levels as 5f12 7s7p 5I6 and 5G6 terms. Furthermore, a transition at 25740 cm-1 was found which, owing to its observed linewidth of 1000 GHz, was interpreted as a Rydberg state with a level energy of 51480 cm-1 that can be excited via a two-photon process. Based on this assumption, an upper limit for the ionization energy of IP = 52140 cm-1 = 6.5 eV was estimated. In the measurements, shifts in the time-distribution spectra between the monoatomic ions Fm+ and Cf+ and the molecular ion UO+ were observed and attributed to drift-time differences in the electric field of the gas-filled optical cell. Under simple model assumptions, relative differences in the ionic radii of Delta_r(Fm+,Cf+)/r(Cf+) ≈ -0.2% and Delta_r(UO+,Cf+)/r(Cf+) ≈ 20% were deduced. From the decrease of the Fm alpha activity of the filament on the one hand and the measured resonance count rate on the other, the detection efficiency of the apparatus was determined to be 4.5(3) x 10^-4. The apparatus was further developed with the goal of performing laser spectroscopy on the isotope 251Fm, which is to be produced directly in the optical cell via the reaction 249Cf(a,2n)251Fm. The procedure was tested on the chemical homologue erbium: the isotope 163Er was produced via the reaction 161Dy(a,2n)163Er and detected after resonance ionization. The detection efficiency of this method was determined to be 1 x 10^-4.
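
The quoted energies are easy to cross-check: the two-photon interpretation simply doubles the photon energy, and the standard conversion between wavenumbers and eV reproduces the 6.5 eV estimate. A minimal, purely illustrative sketch:

```python
# Cross-check of the quoted level energies (illustrative only).
CM_PER_EV = 8065.54          # wavenumbers (cm^-1) per eV

photon = 25740.0             # cm^-1, observed transition
rydberg_level = 2 * photon   # two-photon excitation
ip_upper_limit = 52140.0     # cm^-1, quoted upper limit of the ionization energy

print(rydberg_level)                 # 51480.0 cm^-1, as stated above
print(ip_upper_limit / CM_PER_EV)    # ~6.46 eV, i.e. about 6.5 eV
```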

Relevance: 20.00%

Publisher:

Abstract:

In this work, the method of metastable optical pumping is used to achieve high nuclear spin polarization. The technique is based on transferring the angular momentum of absorbed photons to the valence electron excited in the process, which in turn transfers the angular momentum to the ³He nucleus via hyperfine coupling. Since the polarization process works efficiently only at pressures of about 1 mbar, whereas most applications require polarized ³He at a pressure of ≥ 1 bar, the gas has to be compressed after polarization. Our group operates a machine (the "polarizer") which polarizes the gas and subsequently compresses it. The aim of this dissertation is, on the one hand, to improve the performance of the polarizer with respect to maximum polarization and gas flow and, on the other hand, to investigate the metastable pumping process itself in more detail. By using new fibre-based lasers and systematically optimizing the optical components, record polarization degrees of (91 ± 2)% were achieved in sealed pumping cells. With the implementation of novel optics and lasers at the Mainz polarizer, its performance characteristics were decisively improved: at an identical production rate, the achievable polarization was increased by 20 percentage points, and maximum polarization degrees of more than 75% are currently reached in the optical pumping volume. A polarization measurement carried out at the Mainz TRIGA reactor yielded a value of (72.7 ± 0.7)%, which illustrates the small polarization losses due to gas compression, transport, and storage over several hours. A model for the dynamics of velocity-changing collisions and for the determination of the mean photon absorption rate was developed and confirmed experimentally; with it, the measured absorption behaviour of a spectrally narrow-band laser diode could be described correctly for the first time. Moreover, the extremely high polarization values measured in sealed pumping cells agree with theoretical predictions, provided that the pressure in the optical pumping volume is below 1 mbar and the ³He is not contaminated by foreign gases. For such pumping cells, the measured dependence of the polarization on laser power, metastable density, and the wrong circular polarization component is compatible with theory.
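
The build-up of polarization during optical pumping is often summarized by a simple rate equation; the sketch below is a generic illustration with made-up numbers, not the absorption model developed in this work:

```python
import math

def polarization(t, p_eq, tau, p0=0.0):
    """Solution of the generic build-up equation dP/dt = (p_eq - P)/tau."""
    return p_eq + (p0 - p_eq) * math.exp(-t / tau)

# Purely illustrative numbers, not values measured in this work:
for t in (60.0, 200.0, 600.0):
    print(t, round(polarization(t, p_eq=0.75, tau=200.0), 3))
```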

Relevance: 20.00%

Publisher:

Abstract:

A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first-order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
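
The sedimentation scheme itself is not given in the abstract; as a point of reference, a minimal first-order (donor-cell/upwind) step for a vertical column could look as follows. This is a generic sketch under the stated assumptions, not the tailor-made algorithm of the thesis, which is designed to produce less numerical diffusion than such a simple step:

```python
import numpy as np

def sediment_step(q, w, dz, dt):
    """One explicit donor-cell (first-order upwind) step for downward sedimentation.

    q  : mixing ratio per layer, index 0 = top of the column (array)
    w  : sedimentation velocity per layer, m/s, positive downward (may vary with height)
    dz : layer thickness per layer, m (array)
    dt : time step, s; stability requires w*dt <= dz everywhere
    """
    flux = w * q                            # amount leaving each layer downward
    q_new = q - dt * flux / dz              # loss to the layer below
    q_new[1:] += dt * flux[:-1] / dz[1:]    # gain from the layer above
    return q_new
```

Because the velocity enters layer by layer, such a step can also handle divergent or convergent sedimentation velocity fields, at the price of the diffusion mentioned above.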

Relevance: 20.00%

Publisher:

Abstract:

In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models the flow of charged particles, that is, of negatively charged electrons and of so-called holes, which are quasi-particles of positive charge, as well as their energy distributions are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling and by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the continuous discretization of the normal fluxes is the most important property of this discretization. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm, which is based on a posteriori error estimators; at that stage, a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we present how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
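
The adaptive algorithm follows the usual solve, estimate, mark, refine cycle. The skeleton below is a generic sketch with Dörfler-type bulk marking; `solve`, `estimate`, and `refine` are hypothetical placeholders for the mixed-FEM solver, the local a posteriori indicators, and the mesh refinement routine, not the thesis implementation:

```python
def adaptive_refinement(mesh, solve, estimate, refine, tol=1e-3, theta=0.5, max_cycles=20):
    """Generic solve -> estimate -> mark -> refine loop with Doerfler (bulk) marking."""
    for _ in range(max_cycles):
        u = solve(mesh)                      # e.g. mixed finite-element solve
        eta = estimate(mesh, u)              # one local error indicator per cell
        total_sq = sum(e * e for e in eta)
        if total_sq**0.5 < tol:
            break
        # Mark the smallest set of cells carrying a theta-fraction of the squared error.
        order = sorted(range(len(eta)), key=lambda i: eta[i], reverse=True)
        marked, acc = [], 0.0
        for i in order:
            marked.append(i)
            acc += eta[i] * eta[i]
            if acc >= theta * total_sq:
                break
        mesh = refine(mesh, marked)
    return mesh, u
```

For goal-oriented refinement of "functional outputs", the indicators `eta` would be replaced by dual-weighted residual indicators; the loop structure stays the same.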

Relevance: 20.00%

Publisher:

Abstract:

The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
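
As an illustration of the linearisation approach, a single Tikhonov-regularized update can be computed from a sensitivity (Jacobian) matrix and a measured voltage change. The matrix `J` is a hypothetical input here, e.g. precomputed from a forward model, and this is a generic sketch rather than either of the reconstruction algorithms of the thesis:

```python
import numpy as np

def linearized_eit_update(J, dv, lam=1e-3):
    """One linearized reconstruction step with Tikhonov regularization.

    J   : sensitivity (Jacobian) matrix, shape (n_measurements, n_pixels)
    dv  : measured boundary-voltage change relative to a reference state
    lam : regularization parameter controlling the smoothness/noise trade-off
    Returns the conductivity update, one value per pixel.
    """
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ dv)
```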

Relevance: 20.00%

Publisher:

Abstract:

Nuclear masses are an important quantity for studying nuclear structure since they reflect the sum of all nucleonic interactions. Many experimental possibilities exist to measure masses precisely, among which the Penning trap is the tool reaching the highest precision. Moreover, absolute mass measurements can be performed using carbon, the atomic-mass standard, as a reference. The new double-Penning-trap mass spectrometer TRIGA-TRAP has been installed and commissioned within this thesis work; it is the very first experimental setup of this kind located at a nuclear reactor. New technical developments have been carried out, such as a reliable non-resonant laser ablation ion source for the production of carbon cluster ions, and are still continuing, like a non-destructive ion detection technique for single-ion measurements. Neutron-rich fission products, which are important for nuclear astrophysics, especially the r-process, will be made available by the reactor. Prior to the on-line coupling to the reactor, TRIGA-TRAP has already performed off-line mass measurements on stable and long-lived isotopes and will continue this program. The main focus within this thesis was on certain rare-earth nuclides in the well-established region of deformation around N~90. Another field of interest is mass measurements on actinoids to test mass models and to provide direct links to the mass standard. Within this thesis, the mass of 241-Am could be measured directly for the first time.
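
In a Penning trap the ion mass follows from the cyclotron frequency, nu_c = q*B/(2*pi*m), so an unknown mass is obtained from a frequency ratio against a carbon-cluster reference ion measured in the same magnetic field. A minimal sketch with an invented frequency ratio, for illustration only:

```python
def mass_from_frequency_ratio(nu_ref, nu_ion, m_ref_ion, q_ref=1, q_ion=1):
    """Ion mass from cyclotron frequencies measured in the same magnetic field.

    nu_c = q*B / (2*pi*m)  =>  m_ion = (q_ion/q_ref) * (nu_ref/nu_ion) * m_ref_ion
    """
    return (q_ion / q_ref) * (nu_ref / nu_ion) * m_ref_ion

M_ELECTRON_U = 5.485799e-4                # electron mass in atomic mass units
m_ref_ion = 20 * 12.0 - M_ELECTRON_U      # singly charged 12C20 cluster ion, in u
                                          # (binding energies neglected)

# The frequency ratio below is made up purely for illustration.
print(mass_from_frequency_ratio(1.0000000, 0.9950000, m_ref_ion))
```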

Relevance: 20.00%

Publisher:

Abstract:

Collinear laser spectroscopy has been used as a tool for nuclear physics for more than 30 years. The unique possibility to extract nuclear properties like spins, radii and nuclear moments in a model-independent manner leads to important physics results to test the predictive power of existing nuclear models. This work presents the construction and the commissioning of a new collinear laser spectroscopy experiment TRIGA-LASER as a part of the TRIGA-SPEC facility at the TRIGA research reactor at the University of Mainz. The goal of the experiment is to study the nuclear structure of radioactive isotopes which will be produced by neutron-induced fission near the reactor core and transported to an ion source by a gas jet system. The versatility of the collinear laser spectroscopy technique will be exploited in the second part of this thesis. The nuclear spin and the magnetic moment of the neutron-deficient isotope Mg-21 will be presented, which were measured by the detection of the beta-decay asymmetry induced by nuclear polarization after optical pumping. A combination of this detection method with the classical fluorescence detection is then used to determine the isotope shifts of the neutron-rich magnesium isotopes from Mg-24 through Mg-32 to study the transition to the "island of inversion".
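
In collinear geometry the Doppler shift is fixed by the acceleration voltage of the ion beam. The function below evaluates the standard relativistic relation (a textbook formula, not code from the TRIGA-LASER experiment), neglecting small corrections such as binding energies:

```python
import math

def collinear_laser_frequency(nu0, mass_u, charge, voltage, collinear=True):
    """Laboratory laser frequency needed to drive a transition of rest-frame
    frequency nu0 (Hz) in an ion accelerated through `voltage` volts.

    Collinear geometry requires a blue-shifted laser, anticollinear a red-shifted one.
    """
    u_rest_energy_ev = 931.494e6                      # rest energy of 1 u, in eV
    gamma = 1.0 + charge * voltage / (mass_u * u_rest_energy_ev)
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    sign = 1.0 if collinear else -1.0
    return nu0 * gamma * (1.0 + sign * beta)
```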

Relevance: 20.00%

Publisher:

Abstract:

This thesis reports on the creation and analysis of many-body states of interacting fermionic atoms in optical lattices. The realized system can be described by the Fermi-Hubbard Hamiltonian, which is an important model for correlated electrons in modern condensed matter physics. In this way, ultra-cold atoms can be utilized as a quantum simulator to study solid state phenomena. The use of a Feshbach resonance in combination with a blue-detuned optical lattice and a red-detuned dipole trap enables independent control over all relevant parameters in the many-body Hamiltonian. By measuring the in-situ density distribution and doublon fraction it has been possible to identify both metallic and insulating phases in the repulsive Hubbard model, including the experimental observation of the fermionic Mott insulator. In the attractive case, the appearance of strong correlations has been detected via an anomalous expansion of the cloud that is caused by the formation of non-condensed pairs. By monitoring the in-situ density distribution of initially localized atoms during the free expansion in a homogeneous optical lattice, a strong influence of interactions on the out-of-equilibrium dynamics within the Hubbard model has been found. The reported experiments pave the way for future studies on magnetic order and fermionic superfluidity in a clean and well-controlled experimental system.
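
For reference, the single-band Fermi-Hubbard Hamiltonian realized in such experiments has the textbook form sketched below; t is the tunnelling amplitude set by the lattice depth, U the on-site interaction tuned via the Feshbach resonance, and the V_i term (a notation assumed here) represents the external confinement of the dipole trap:

```latex
H = -t \sum_{\langle i,j\rangle,\sigma}
        \bigl( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \bigr)
    + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
    + \sum_{i,\sigma} V_i \, \hat{n}_{i\sigma}
```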

Relevance: 20.00%

Publisher:

Abstract:

In this thesis, the phenomenology of the Randall-Sundrum setup is investigated. In this context models with and without an enlarged SU(2)_L x SU(2)_R x U(1)_X x P_{LR} gauge symmetry, which removes corrections to the T parameter and to the Z b_L \bar b_L coupling, are compared with each other. The Kaluza-Klein decomposition is formulated within the mass basis, which allows for a clear understanding of various model-specific features. A complete discussion of tree-level flavor-changing effects is presented. Exact expressions for five dimensional propagators are derived, including Yukawa interactions that mediate flavor-off-diagonal transitions. The symmetry that reduces the corrections to the left-handed Z b \bar b coupling is analyzed in detail. In the literature, Randall-Sundrum models have been used to address the measured anomaly in the t \bar t forward-backward asymmetry. However, it will be shown that this is not possible within a natural approach to flavor. The rare decays t \to cZ and t \to ch are investigated, where in particular the latter could be observed at the LHC. A calculation of \Gamma_{12}^{B_s} in the presence of new physics is presented. It is shown that the Randall-Sundrum setup allows for an improved agreement with measurements of A_{SL}^s, S_{\psi\phi}, and \Delta\Gamma_s. For the first time, a complete one-loop calculation of all relevant Higgs-boson production and decay channels in the custodial Randall-Sundrum setup is performed, revealing a sensitivity to large new-physics scales at the LHC.

Relevance: 20.00%

Publisher:

Abstract:

In this thesis I present aspects of QCD calculations which are closely tied to the numerical evaluation of NLO QCD amplitudes, in particular of the corresponding one-loop contributions, and to the efficient computation of the associated collider observables. Two topics have emerged as the main parts of this work. A large part focuses on the group-theoretical behaviour of one-loop amplitudes in QCD, with the aim of finding a way to treat the associated colour degrees of freedom correctly and efficiently. To this end, a new approach is introduced which can be used to express colour-ordered one-loop partial amplitudes with several quark-antiquark pairs through shuffle sums over cyclically ordered primitive one-loop amplitudes. A second large part focuses on the local subtraction of pole terms that lead to divergences in primitive one-loop amplitudes. In particular, a method was developed to renormalize the primitive one-loop amplitudes locally, using local UV counterterms and efficient recursive routines. Together with suitable local soft and collinear subtraction terms, the subtraction method is thereby extended to the virtual part of the calculation of NLO observables, which makes a fully numerical evaluation of the one-loop integrals in the virtual contributions to NLO observables possible. The method was finally applied successfully to the calculation of NLO jet rates in electron-positron annihilation in the leading-colour limit.
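
The shuffle sum mentioned above is built on the shuffle product of ordered sequences. The generator below illustrates this combinatorial operation in isolation; it is not the amplitude code of the thesis:

```python
from itertools import combinations

def shuffles(a, b):
    """All interleavings of the sequences a and b that preserve the internal
    ordering of each sequence (the shuffle product)."""
    n, m = len(a), len(b)
    for slots in combinations(range(n + m), n):
        chosen = set(slots)
        out, ia, ib = [], 0, 0
        for k in range(n + m):
            if k in chosen:
                out.append(a[ia]); ia += 1
            else:
                out.append(b[ib]); ib += 1
        yield tuple(out)

# Example: the shuffle product of (1, 2) with (3,) contains 3 orderings.
print(list(shuffles((1, 2), (3,))))   # [(1, 2, 3), (1, 3, 2), (3, 1, 2)]
```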

Relevance: 20.00%

Publisher:

Abstract:

In this study, a novel method, MicroJet reactor technology, was developed to enable the custom preparation of nanoparticles. Danazol/HPMCP HP50 and Gliclazide/Eudragit S100 nanoparticles were used as model systems to investigate the effects of process parameters and of the MicroJet reactor setup on the nanoparticle properties during the reactor construction. Following the feasibility study of the MicroJet reactor system, three different nanoparticle formulations were prepared using fenofibrate as model drug: fenofibrate nanoparticles stabilized with poloxamer 407 (FN), fenofibrate nanoparticles in a hydroxypropyl methyl cellulose phthalate (HPMCP) matrix (FHN), and fenofibrate nanoparticles in an HPMCP and chitosan matrix (FHCN), all prepared by controlled precipitation using MicroJet reactor technology. Particle sizes of all nanoparticle formulations were adjusted to 200-250 nm. Changes in the experimental parameters altered the system thermodynamics, resulting in the production of nanoparticles between 20 and 1000 nm (PDI<0.2) with high drug loading efficiencies (96.5% at a 20:1 polymer:drug ratio). Drug release from all nanoparticle formulations was fast and complete after 15 minutes in both FaSSIF and FeSSIF medium, whereas in mucoadhesiveness tests only the FHCN formulation was found to be mucoadhesive. Results of the Caco-2 studies revealed that the % dose absorbed values were significantly higher (p<0.01) for FHCN in both cases where FaSSIF and FeSSIF were used as transport buffer.
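
For orientation, the quoted loading efficiency and the polymer:drug ratio refer to two different quantities. The helpers below use standard definitions assumed here, not taken from the study, and purely illustrative numbers:

```python
def nominal_drug_content_percent(polymer_to_drug_ratio):
    """Drug fraction of the total solids for a given polymer:drug mass ratio."""
    return 100.0 / (polymer_to_drug_ratio + 1.0)

def loading_efficiency_percent(drug_recovered_mg, drug_added_mg):
    """Fraction of the added drug actually recovered in the nanoparticles."""
    return 100.0 * drug_recovered_mg / drug_added_mg

print(nominal_drug_content_percent(20))          # ~4.8% of the solids is drug
print(loading_efficiency_percent(96.5, 100.0))   # 96.5% of the added drug recovered
```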