681 results for thermodynamical observables


Relevance:

10.00%

Publisher:

Abstract:

A polarimetric X-band radar was deployed for one month (April 2011) during a field campaign in Fortaleza, Brazil, together with three additional laser disdrometers. The disdrometers measure raindrop size distributions (DSDs), making it possible to forward-model theoretical polarimetric X-band radar observables at the point where the instruments are located. This setup allows a thorough test of the accuracy of the X-band radar measurements, as well as of the algorithms used to correct the radar data for radome and rain attenuation. For the Fortaleza campaign it was found that radome attenuation dominantly affects the measurements. With an algorithm based on the self-consistency of the polarimetric observables, the radome-induced reflectivity offset was estimated. Offset-corrected measurements were then further corrected for rain attenuation with two different schemes. The performance of the post-processing steps was analyzed by comparing the data with disdrometer-inferred polarimetric variables measured at a distance of 20 km from the radar. Radome attenuation reached values of up to 14 dB, which was found to be consistent with an empirical radome attenuation vs. rain intensity relation previously developed for the same radar type. In contrast to previous work, our results suggest that radome attenuation should be estimated individually for every view direction of the radar in order to obtain homogeneous reflectivity fields.
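The rain-attenuation correction step can be illustrated with a minimal sketch, assuming the linear specific-attenuation relation A_H = a·K_dp often used at X band; the coefficient `a`, the gate spacing, and the function itself are illustrative placeholders, not the campaign's actual correction schemes.

```python
import numpy as np

# Hypothetical sketch: add two-way path-integrated rain attenuation back
# to measured reflectivity along a ray, assuming A_H = a * K_dp (dB/km).
# The coefficient a and gate spacing dr_km are illustrative values only.
def correct_rain_attenuation(z_meas_dbz, kdp_deg_km, dr_km, a=0.28):
    specific_att = a * np.maximum(kdp_deg_km, 0.0)  # one-way attenuation, dB/km
    pia = 2.0 * np.cumsum(specific_att) * dr_km     # two-way path-integrated, dB
    return z_meas_dbz + pia
```

In this toy form, the correction simply accumulates along the ray, so gates behind heavy rain receive the largest adjustment.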

Relevance:

10.00%

Publisher:

Abstract:

To interpret the mean depth of cosmic-ray air-shower maximum and its dispersion, we parametrize these two observables as functions of the first two moments of the lnA distribution. We examine the goodness of this simple method through simulations of test mass distributions. Applying the parametrization to Pierre Auger Observatory data allows one to study the energy dependence of the mean lnA and of its variance under the assumption of selected hadronic interaction models. We discuss possible implications of these dependences in terms of interaction models and astrophysical cosmic-ray sources.
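The moment parametrization can be sketched as a simple inversion, assuming generic linear forms ⟨Xmax⟩ = ⟨Xmax⟩_p + f_E·⟨lnA⟩ and σ²(Xmax) = σ²_sh + f_E²·V(lnA); the symbols and the numbers below are placeholders for model predictions, not the fitted values of the paper.

```python
# Hedged sketch of the lnA moment inversion: xmax_p, f_e and sigma_sh2
# stand for model predictions (proton mean depth, mass-sensitivity slope
# and shower-to-shower variance); all are placeholders, not fitted values.
def ln_a_moments(xmax_mean, xmax_var, xmax_p, f_e, sigma_sh2):
    mean_lna = (xmax_mean - xmax_p) / f_e       # f_e < 0: heavier => shallower
    var_lna = (xmax_var - sigma_sh2) / f_e**2   # subtract intrinsic fluctuations
    return mean_lna, var_lna
```

A round trip with synthetic moments recovers the input ⟨lnA⟩ and V(lnA) exactly, which is the self-consistency one would check on simulated test mass distributions.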

Relevance:

10.00%

Publisher:

Abstract:

Excitonic dynamics in a hybrid dot-well system composed of InAs quantum dots (QDs) and an InGaAs quantum well (QW) is studied by means of femtosecond pump-probe reflection and continuous-wave (cw) photoluminescence (PL) spectroscopy. The system is engineered to bring the QW ground exciton state into resonance with the third QD excited state. The resonant tunneling rate is varied by changing the effective barrier thickness between the QD and QW layers. This strongly affects the exciton dynamics in these hybrid structures as compared to isolated QW or QD systems. Optically measured decay times of the coupled system show a dramatically different response to temperature changes depending on the strength of the resonant tunneling, i.e. the coupling strength. This reflects a competition between purely quantum mechanical and thermodynamical processes.
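The competition between tunneling and thermally activated escape can be illustrated with a simple two-channel rate model; this is a generic sketch under assumed parameters (rates in 1/ps, an activation energy in meV), not the authors' fit to their data.

```python
import math

# Illustrative two-channel rate model: the measured decay rate is the sum
# of a temperature-independent tunnelling rate and a thermally activated
# escape term. All parameter values are made up for illustration.
def decay_time_ps(t_kelvin, gamma_tunnel=0.02, gamma0_thermal=0.5, e_act_mev=30.0):
    k_b_mev = 0.08617  # Boltzmann constant in meV/K
    gamma = gamma_tunnel + gamma0_thermal * math.exp(-e_act_mev / (k_b_mev * t_kelvin))
    return 1.0 / gamma  # decay time in ps if rates are in 1/ps
```

When `gamma_tunnel` dominates (strong coupling) the decay time is nearly temperature independent; when it is small, the Arrhenius term takes over and the decay time drops steeply with temperature.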

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis is to quantify the effects that the spatial variability of a porous medium has on the evolution of a geochemical system. Mineral dissolution and precipitation reactions modify the microscopic structure of the medium and, with it, the hydrodynamic properties of the system, permeability in particular. Can the initial spatial variability of the medium cause the formation of fingering or channeling? The first part of the thesis deals with upscaling, which is needed to pass from a geostatistical simulation on a fine grid to a transport calculation on a coarser tessellation. For the Hytec code, which implements a finite-volume scheme based on a discretization into Voronoï polygons, several methods for computing the equivalent permeability were compared according to different criteria. The second part concerns reactive-transport calculations carried out on families of geostatistical simulations of the medium; the influence of the initial spatial variability on the evolution of the systems is quantified by means of suitable observables. Two distinct reactions were studied: a dissolution case, in greater depth, and, more briefly, a precipitation case, whose overall effect is to re-equilibrate the system.
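As a minimal illustration of the upscaling step, the sketch below compares three classical averaging rules for the equivalent permeability of a set of fine-grid values; the function and numbers are illustrative and do not reproduce the Hytec-specific comparison made in the thesis.

```python
import numpy as np

# Minimal sketch of three classical upscaling rules: arithmetic, geometric
# and harmonic means of fine-grid permeabilities bound the equivalent value
# from above, in between, and from below, respectively.
def equivalent_permeability(k_fine):
    k = np.asarray(k_fine, dtype=float)
    k_arith = k.mean()                    # exact for flow parallel to layers
    k_harm = 1.0 / (1.0 / k).mean()       # exact for flow across layers
    k_geom = np.exp(np.log(k).mean())     # common choice for log-normal media
    return k_arith, k_geom, k_harm
```

For strongly contrasted media the three estimates diverge widely, which is precisely why the choice of rule matters when coarsening a geostatistical realization.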

Relevance:

10.00%

Publisher:

Abstract:

Until recently the debate on the ontology of spacetime had only philosophical significance, since, from a physical point of view, General Relativity had been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of Einstein's equations which are mathematically different but related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one was corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models they consider, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realist assumptions about theories and entities, the two authors conclude that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame), and unlike the points of homogeneous Newtonian space, they are embedded in a rich and complex non-local holistic structure provided by the "ontic part" of the metric field. 
Therefore, given the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality"; but the inevitably relational elements involved in the physical identification of point-events in the absence of matter (highlighted by the "ontic part" of the metric field, the DO) depend closely on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (the NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, with features common both to the absolutist and substantivalist tradition and to the relationalist one. The intention of this thesis is to propose a way of approaching the problem that is, at least at the outset, independent of the previous ones, namely an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping in mind the results achieved by Lusanna and Pauri and following their underlying philosophical assumptions, we intend to converge partially on their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered independently of the empirical necessities of measurement theory. Observing General Relativity from this perspective, we find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure. 
The gravitational field is then "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact have neither a tensorial nor, therefore, a geometric nature, but is analyzable at three levels: a first, potential level (which the theory identifies with the components of the metric tensor); a second level, that of the connections (which in the theory determine the forces acting on masses and, as such, offer a level of description analogous to the one Newtonian gravitation provides in terms of components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the outset on this "third level" seems to offer an immediate advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatments analyzed, leads to the identification of the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exerts on extended domains of the spacetime geometric structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part, the Weyl tensor. The convergence towards Lusanna and Pauri's claim that there exists a holistic, non-local and relational structure on which the quantitatively identified properties of point-events depend (in addition to their intrinsic identification), even though it is obtained from different considerations, is realized, in our opinion, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri). 
In the end, matter, regarded as the physical counterpart of spacetime curvature whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, going back to the approach of Lusanna and Pauri, it affects the evolution of the DO and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think it is possible to see the holistic, relational and non-local structure of spacetime also through the "behaviour" of the Weyl tensor within the Riemann tensor. This "behaviour", which leads to geometric effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although the values of the Weyl tensor change from point to point), by virtue of the fact that the action of matter reaches indefinitely far. Finally, we think that the characteristic relationality of the spacetime structure should be identified with this "primitive level of organization" of spacetime.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work is to put forward a statistical mechanics theory of social interaction, generalizing econometric discrete choice models. After showing the formal equivalence linking econometric multinomial logit models to equilibrium statistical mechanics, a multi-population generalization of the Curie-Weiss model for ferromagnets is considered as a starting point in developing a model capable of describing sudden shifts in aggregate human behaviour. Existence of the thermodynamic limit for the model is shown by an asymptotic sub-additivity method, and factorization of correlation functions is proved almost everywhere. The exact solution of the model is provided in the thermodynamic limit by finding converging upper and lower bounds for the system's pressure, and the solution is used to prove an analytic result regarding the number of possible equilibrium states of a two-population system. The work stresses the importance of linking regimes predicted by the model to real phenomena, and to this end it proposes two possible procedures to estimate the model's parameters from micro-level data. These are applied to three case studies based on census-type data: though these studies prove ultimately inconclusive on an empirical level, considerations are drawn that encourage further refinement of the chosen modelling approach, to be considered in future work.
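The mean-field mechanism behind the model's sudden shifts can be sketched with the single-population Curie-Weiss self-consistency equation m = tanh(β(Jm + h)); the multi-population model of the text couples several such equations. The fixed-point solver below is a generic illustration, not the estimation procedure used in the work.

```python
import math

# Sketch of the Curie-Weiss mean-field equation m = tanh(beta*(J*m + h)),
# solved by fixed-point iteration. Below the critical point (beta*J < 1,
# h = 0) the only solution is m = 0; above it, a nonzero m appears.
def mean_field_magnetisation(beta, j=1.0, h=0.0, m0=0.5, tol=1e-12, max_iter=10000):
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(beta * (j * m + h))
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```

The abrupt appearance of a nonzero solution as β crosses 1/J is the mathematical analogue of the "sudden shifts in aggregate behaviour" the model aims to describe.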

Relevance:

10.00%

Publisher:

Abstract:

The subject of this dissertation is the analytic calculation of first-order QCD radiative corrections to the polarization of heavy and light quarks in $e^+e^-$ annihilation, and to the polarization of gluons radiated in the production of light or heavy quark pairs in $e^+e^-$ annihilation. The first part of the thesis (Chapters 1 and 2) deals with the complete analysis of the gluon polarization for the processes $e^+e^- \to q\bar{q}G$ and $Q\bar{Q}G$. The QCD bremsstrahlung corrections to the gluon polarization in these processes are calculated to first order in $\alpha_s$. Furthermore, the linear and circular gluon polarization in the hadron plane and in the lepton plane are investigated, followed by an analysis of the polar-angle dependence and the beam-polarization dependence of the gluon polarization. The second part of the thesis (Chapters 3 and 4) contains the calculation of the first-order QCD radiative corrections for massive quarks to the longitudinal-longitudinal spin correlation for the processes $e^+e^- \to q\bar{q}$ and $Q\bar{Q}$. In Chapter 3 an average over the polar angles is performed, while in Chapter 4 the polar-angle dependence is investigated explicitly. Chapters 3 and 4 also discuss the effect of the $O(\alpha_s)$ correction on the spin-flip configuration of the various components of the hadron tensors. This work is of particular relevance to the high-precision experiments planned for the future, since it provides predictions for spin observables that can be measured in experiments at the planned $e^+e^-$ linear colliders.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in redshift and spectral-property determinations, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), to exploit the bimodal properties of galaxies (spectral, photometric and morphological) separately, and then combine these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to define the galaxy population by exploiting its natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows one to define the classification of galaxies with smaller uncertainties, adding the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification, such as the classification cube presented in the first part of the work. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high. 
``Early'' and ``late'' type galaxies are well defined by their spectral, photometric and morphological properties, both when these are considered separately and the classifications then combined (classification cube) and when they are treated as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
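As a toy illustration of the PCA+UFP idea, the sketch below projects data onto its principal components via SVD and then runs a basic fuzzy c-means in the reduced space; this is a generic stand-in (standard fuzzy c-means), not necessarily the exact UFP algorithm or parameters used in the thesis.

```python
import numpy as np

# Project data onto its leading principal components (via SVD of the
# centred data matrix), a generic stand-in for the PCA step.
def pca_project(x, n_components=2):
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:n_components].T

# Basic fuzzy c-means: every object gets a soft membership in each cluster
# (rows of u sum to 1), so borderline galaxies are not forced into a bin.
def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```

The soft memberships are exactly what distinguishes this approach from a hard classification: an object halfway between the two clusters keeps memberships near 0.5/0.5 instead of being assigned (possibly wrongly) to one bin.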

Relevance:

10.00%

Publisher:

Abstract:

The ALICE experiment at the LHC has been designed to cope with the experimental conditions and observables of a Quark-Gluon Plasma reaction. One of the main assets of the ALICE experiment with respect to the other LHC experiments is its particle identification. The large Time-Of-Flight (TOF) detector is the main particle identification detector of the ALICE experiment. The overall time resolution, better than 80 ps, allows particle identification over a large momentum range (up to 2.5 GeV/c for pi/K and 4 GeV/c for K/p). The TOF makes use of the Multi-gap Resistive Plate Chamber (MRPC), a detector with high efficiency, fast response and an intrinsic time resolution better than 40 ps. The TOF detector embeds a highly segmented trigger system that exploits the fast rise time and the relatively low noise of the MRPC strips in order to identify several event topologies. This work aims to provide a detailed description of the TOF trigger system. The results achieved in the 2009 cosmic-ray run at CERN are presented to show the performance and readiness of the TOF trigger system. The proposed trigger configurations for proton-proton and Pb-Pb beams are detailed as well, with estimates of the efficiencies and sample purities.
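The momentum ranges quoted above follow from the time-of-flight mass determination, which can be sketched as follows; the path length and particle values in the example are illustrative, not ALICE geometry or calibration numbers.

```python
import math

# Illustrative time-of-flight identification: from momentum p (GeV/c),
# flight path L (m) and measured time t (ns), the mass follows from
# m^2 = p^2 (1/beta^2 - 1) with beta = L / (c t). Values are made up.
C_M_PER_NS = 0.299792458  # speed of light in m/ns

def tof_mass_gev(p_gev, path_m, time_ns):
    beta = path_m / (C_M_PER_NS * time_ns)
    return p_gev * math.sqrt(max(1.0 / beta**2 - 1.0, 0.0))
```

Because the time difference between species shrinks with momentum, the ~80 ps resolution sets the upper momentum limits for pi/K and K/p separation quoted in the abstract.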

Relevance:

10.00%

Publisher:

Abstract:

This work describes an experiment on the photoproduction of neutral pions off the proton in the threshold region. Using linearly polarized photons, the photon asymmetry near threshold was measured for the first time, in addition to the total and differential cross sections. Of particular interest were the s-wave multipole E0+, which can be determined from these physical observables, and the first determination of all three p-wave combinations P1, P2 and P3 in the threshold region. The experiment was carried out in 1995/1996 at the electron accelerator MAMI (Mainz Microtron) of the University of Mainz. Using a diamond as bremsstrahlung target for the electrons, linearly polarized photons were produced via coherent bremsstrahlung. The photon energy was determined by measuring the energy of the scattered electrons in the Mainz photon tagging facility. The detector TAPS, an array of 504 BaF2 modules, was set up around a liquid-hydrogen target; the neutral pions produced in the target were detected in these modules via their decay into two photons. The total and differential cross sections were measured in the energy range between the threshold at 144.7 MeV and 168 MeV. The photon asymmetry, measured for the first time at 159.5 MeV, is positive, with a value of +0.217+/-0.046 at a polar angle of 90 degrees. The multipole E0+ and the three p-wave combinations were fitted to the physical observables using two different methods, which gave consistent results. The predictions of the low-energy theorems of chiral perturbation theory for P1 and P2 agree with the experimental values within the statistical and systematic uncertainties.

Relevance:

10.00%

Publisher:

Abstract:

In recent years, new precision experiments have become possible with the high-luminosity accelerator facilities at MAMI and JLab, supplying physicists with precision data sets for different hadronic reactions in the intermediate energy region, such as pion photo- and electroproduction and real and virtual Compton scattering. By means of the low-energy theorem (LET), the global properties of the nucleon (its mass, charge, and magnetic moment) can be separated from the effects of the internal structure of the nucleon, which are effectively described by polarizabilities. The polarizabilities quantify the deformation of the charge and magnetization densities inside the nucleon in an applied quasistatic electromagnetic field. The present work is dedicated to developing a tool for the extraction of the polarizabilities from these precise Compton data with minimum model dependence, making use of the detailed knowledge of pion photoproduction by means of dispersion relations (DR). Due to the presence of t-channel poles, the dispersion integrals for two of the six Compton amplitudes diverge. Therefore, we have suggested subtracting the s-channel dispersion integrals at zero photon energy ($\nu=0$). The subtraction functions at $\nu=0$ are calculated through DR in the momentum transfer t at fixed $\nu=0$, subtracted at t=0. For this calculation, we use the information about the t-channel process, $\gamma\gamma \to \pi\pi \to N\bar{N}$. In this way, four of the polarizabilities can be predicted using the unsubtracted DR in the $s$-channel. 
The other two, $\alpha-\beta$ and $\gamma_\pi$, are free parameters in our formalism and can be obtained from a fit to the Compton data. We present the results for unpolarized and polarized RCS observables in the kinematics of the most recent experiments, and indicate an enhanced sensitivity to the nucleon polarizabilities in the energy range between the pion production threshold and the $\Delta(1232)$ resonance. Furthermore, we extend the DR formalism to virtual Compton scattering (radiative electron scattering off the nucleon), in which the concept of the polarizabilities is generalized to the case of a virtual initial photon by introducing six generalized polarizabilities (GPs). Our formalism provides predictions for the four spin GPs, while the two scalar GPs, $\alpha(Q^2)$ and $\beta(Q^2)$, have to be fitted to the experimental data at each value of $Q^2$. We show that at energies between the pion threshold and the $\Delta(1232)$ resonance position, the sensitivity to the GPs can be increased significantly compared to low energies, where the LEX is applicable. Our DR formalism can be used for analysing VCS experiments over a wide range of energy and virtuality $Q^2$, which allows one to extract the GPs from VCS data in different kinematics with a minimum of model dependence.
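The subtracted-dispersion-relation strategy can be illustrated on a toy amplitude. For a damped-oscillator response chi(nu) = 1/(nu0^2 - nu^2 - i*gamma*nu), whose imaginary part is known in closed form, the once-subtracted relation Re chi(nu) = Re chi(0) + (2 nu^2/pi) P∫ Im chi(nu')/(nu'(nu'^2 - nu^2)) dnu' can be checked numerically; this is a generic textbook analogue, not the nucleon Compton calculation.

```python
import numpy as np

# Toy check of a once-subtracted dispersion relation for the damped
# oscillator chi(nu) = 1/(nu0^2 - nu^2 - i*gamma*nu). The imaginary part
# on the real axis is gamma*nu / ((nu0^2-nu^2)^2 + (gamma*nu)^2).
def im_chi(nu, nu0=1.0, gamma=0.5):
    return gamma * nu / ((nu0**2 - nu**2) ** 2 + (gamma * nu) ** 2)

def re_chi_subtracted_dr(nu, nu0=1.0, gamma=0.5, cutoff=50.0, h=1e-3):
    # Midpoint grid: avoids nu'=0 and straddles the pole at nu'=nu
    # symmetrically, so the sum approximates the principal value.
    grid = np.arange(h / 2, cutoff, h)
    integrand = im_chi(grid, nu0, gamma) / (grid * (grid**2 - nu**2))
    re_chi0 = 1.0 / nu0**2  # exact subtraction constant Re chi(0)
    return re_chi0 + (2.0 * nu**2 / np.pi) * np.sum(integrand) * h
```

For nu0 = 1, gamma = 0.5, the exact real part at nu = 0.5 is (1 - 0.25)/((0.75)^2 + (0.25)^2) = 1.2, which the subtracted integral reproduces; the subtraction trades the divergent piece for one constant fixed at nu = 0, mirroring the strategy described in the abstract.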

Relevance:

10.00%

Publisher:

Abstract:

Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at the estimation of planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. Once solar corona and interplanetary plasma noise is removed from the Doppler data, the Earth troposphere remains one of the main error sources in tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the goal of an end-to-end Allan deviation of the radio link of the order of 3×10^-15 at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute 8×10^-15 to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. 
In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapor content must be calibrated at ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide a continuous troposphere calibration by processing GNSS data acquired at each complex by dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique to support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, in Chapter 2 the troposphere path delay is investigated in depth, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 deals with an analysis of the status and performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of a GNSS software (S/W) package developed to estimate the troposphere calibration for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken in order to evaluate its performance. A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments. The analysis has been carried out considering radiometric measurements of the ESA/ESTEC instruments installed in Cabauw (NL), compared with the requirements of MORE. 
Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
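Since the requirements above are phrased as Allan deviations at 1000 s integration time, a minimal sketch of the overlapping Allan deviation estimator may be useful; the function and sampling setup are illustrative, and a real mission analysis would use a validated tool.

```python
import numpy as np

# Minimal overlapping Allan deviation sketch for fractional-frequency
# samples y taken at a fixed interval: sigma_y^2(tau) is half the mean
# squared difference of consecutive (overlapping) m-sample averages.
def allan_deviation(y, m):
    y = np.asarray(y, dtype=float)
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample averages
    diffs = ybar[m:] - ybar[:-m]                          # overlapping pairs
    return np.sqrt(0.5 * np.mean(diffs**2))
```

Evaluating this at the m corresponding to tau = 1000 s on the calibrated Doppler residuals is, in spirit, how a figure like 3×10^-15 would be verified against the requirement.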

Relevance:

10.00%

Publisher:

Abstract:

In this work, the first-order QCD radiative corrections in the strong coupling constant are calculated for various polarization observables in semileptonic decays of a bottom quark into a charm quark and a lepton pair. In the first part, the decay of an unpolarized b quark into a polarized c quark, a charged lepton and an antineutrino is analyzed in the rest frame of the b quark. The radiative corrections to the unpolarized and polarized contributions to the decay rate, differential in the c-quark energy, are calculated; the charged lepton is treated as light and its mass is therefore neglected. The inclusive differential rate is expressed analytically in terms of two structure functions, which are then evaluated numerically together with the polarization of the c quark. After the introduction of helicity projectors, the second part deals with the cascade decay of a polarized b quark into an unpolarized c quark and a virtual W boson, which decays further into a pair of light leptons. The inclusive radiative corrections to three unpolarized and five polarized helicity structure functions are calculated in analytic form; these describe the angular distribution of the decay rate, differential in the four-momentum squared of the W boson. The structure functions contain the information both on the polar angular distribution between the spin vector of the b quark and the momentum vector of the W boson, and on the spatial angular distribution between the momenta of the W boson and of the lepton pair. The momentum and spin vector of the b quark and the momentum of the W boson are analyzed in the b rest frame, while the lepton-pair momenta are evaluated in the W rest frame. 
In addition to these structure functions, the unpolarized and polarized scalar structure functions are given, which play a role in applications to hadronic decays. A numerical evaluation of all computed structure functions follows. In the third part, the nonperturbative HQET corrections to inclusive semileptonic decays of heavy hadrons containing a b quark are discussed. They describe hadronic corrections caused by the binding of the b quark inside hadrons. Altogether, five unpolarized and nine polarized helicity structure functions are given in analytic form, which also take into account a finite mass and the spin of the charged lepton. The structure functions are presented both in differential form, as functions of the squared four-momentum of the W boson, and in integrated form. Finally, the results obtained are applied to the semi-inclusive hadronic decays of a polarized Lambda_b baryon or a B meson into a D_s or D_s^* meson, taking the D_s^* polarization into account. For the corresponding angular distributions, the inclusive QCD corrections and the nonperturbative HQET corrections to the helicity structure functions are given in analytic form and then evaluated numerically.

Relevance:

10.00%

Publisher:

Abstract:

Stars with an initial mass between roughly 8 and 25 solar masses end their existence in a violent explosion, a Type II supernova. The high-entropy bubble that forms in this event is a region at the edge of the nascent neutron star and is considered a possible site of the r-process. Because of the high temperature T inside the bubble, the matter there is completely photodisintegrated. The ratio of neutrons to protons is described by the electron fraction Ye, and the thermodynamic evolution of the system is characterized by the entropy S. Since the expansion of the bubble proceeds rapidly, it can be treated as adiabatic; the entropy S is then proportional to T^3/rho, where rho denotes the density. The explicit time evolution of T and rho, as well as the duration of the process, depend on Vexp, the expansion velocity of the bubble. The first part of this dissertation deals with the charged-particle reaction phase, the alpha-process. This process ends at temperatures of about 3 x 10^9 K, the so-called "alpha-rich" freeze-out, producing predominantly alpha particles and free neutrons together with a small fraction of intermediate-mass "seed" nuclei in the mass region around A=100. The ratio of free neutrons to seed nuclei, Yn/Yseed, is decisive for whether an r-process can take place. The second part of this work addresses the r-process itself, which occurs at neutron number densities of up to 10^27 neutrons per cm^3 and, within at most 400 ms, builds very neutron-rich "progenitor" isotopes of elements up to thorium and uranium. During the subsequent freeze-out of the neutron-capture reactions, at 10^9 K and 10^20 neutrons per cm^3, the original r-process nuclei beta-decay back to the valley of stability. This non-equilibrium phase is examined in detail in a parameter study in the present work. 
Finally, astrophysical conditions are defined under which the entire distribution of solar r-process isotopic abundances can be reproduced.
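The adiabatic relation S ∝ T^3/rho implies T ∝ (S·rho)^(1/3), so a density history fixes the temperature history for a given entropy. A minimal illustrative sketch, not taken from the thesis: the exponential density decline with a timescale tied to Vexp, and all numerical values, are schematic assumptions for demonstration only.

```python
import math

# Illustrative sketch (not from the thesis): adiabatic temperature evolution
# in units where S = T^3 / rho, i.e. T = (S * rho)**(1/3).
def temperature(S, rho):
    """Temperature from the adiabatic relation S = T^3 / rho."""
    return (S * rho) ** (1.0 / 3.0)

def rho_of_t(rho0, t, tau):
    """Assumed exponential density decline rho(t) = rho0 * exp(-t / tau)."""
    return rho0 * math.exp(-t / tau)

S = 150.0      # entropy per baryon (schematic units)
rho0 = 1.0e8   # initial density (schematic units)
tau = 0.05     # expansion timescale in s, loosely ~ R / Vexp (assumed)

for t in (0.0, 0.1, 0.2):
    rho = rho_of_t(rho0, t, tau)
    print(f"t = {t:4.2f} s  rho = {rho:.3e}  T = {temperature(S, rho):.3e}")
```

As the bubble expands and rho drops, T falls as rho^(1/3) at fixed entropy, which is the qualitative behaviour behind the freeze-out temperatures quoted above.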

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Charged hydrogels are basic ingredients of numerous everyday products of considerable industrial importance, as well as fundamental building blocks for biomaterials, yet they continue to pose a series of unanswered challenges for scientists even after decades of practical application and intensive research. Despite a rather simple internal structure, it is mainly the unique combination of short- and long-range forces that makes scientific investigation of their characteristic properties quite difficult. Computer simulations were therefore used early on to link analytical theory and experiment, bridging the gap between the simplifying assumptions of the models and the complexity of real-world measurements. Because of the immense numerical effort involved, even for high-performance supercomputers, accessible system sizes and time scales remained rather restricted; only recently has it become possible to simulate an entire network of charged macromolecules. This is the topic of the present thesis, which investigates one of the fundamental and at the same time highly fascinating phenomena of polymer research: the swelling behaviour of polyelectrolyte networks. For this purpose an extensible simulation package for research on soft-matter systems, ESPResSo for short, was created, with particular emphasis on mesoscopic bead-spring models of complex systems. Highly efficient algorithms and consistent parallelization reduce the computation time needed to solve the equations of motion, even in the presence of long-range electrostatics and large particle numbers, making otherwise expensive calculations and applications tractable. Nevertheless, the program has a modular and simple structure, enabling a continuous process of adding new potentials, interactions, degrees of freedom, ensembles, and integrators, while remaining easily accessible to newcomers thanks to a Tcl-script steering level that controls the C-implemented simulation core. 
Numerous analysis routines provide the means to investigate system properties and observables on the fly. Although analytical theories have converged on a common modelling of networks in recent years, our numerical MD simulations show that even for simple model systems the fundamental theoretical assumptions hold only in a small parameter regime, preventing correct predictions of observables. By applying a "microscopic" analysis of the isolated contributions of individual system components, one of the particular strengths of computer simulations, it was then possible to describe the behaviour of charged polymer networks at swelling equilibrium in good solvent and close to the Theta point by introducing appropriate model modifications. This was achieved by augmenting known simple scaling arguments with the components identified as crucial in our detailed study, from which a generalized model could be constructed. With this model, the final system volume of swollen polyelectrolyte gels was shown to agree with the computer-simulation results over the entire investigated range of parameters, for different network sizes, charge fractions, and interaction strengths. In addition, the "cell under tension" was presented as a self-regulating approach for predicting the degree of swelling from the system parameters alone: without measured observables as input, minimizing the free energy already suffices to determine the equilibrium behaviour. In poor solvent the shape of the network chains changes considerably, as their hydrophobicity now counteracts the repulsion of like-charged monomers and drives the polyelectrolytes toward collapse. Depending on the chosen parameters a fragile balance emerges, giving rise to fascinating geometrical structures such as the so-called pearl necklaces. 
This behaviour, known from single-chain polyelectrolytes under similar environmental conditions and also predicted theoretically, could here be detected for networks for the first time. An analysis of the total structure factors confirmed initial experimental evidence for the existence of such structures.
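The total structure factor used to detect such ordering is the standard quantity S(q) = (1/N) |sum_j exp(i q·r_j)|^2 evaluated over a set of wavevectors. A minimal stand-alone sketch of this computation, not ESPResSo's built-in analysis routine, with an assumed toy configuration of particles on a regular line:

```python
import numpy as np

# Illustrative sketch: total structure factor S(q) for particle positions,
# the quantity used to detect pearl-necklace-like ordering.
def structure_factor(positions, q_vectors):
    """S(q) = (1/N) * |sum_j exp(i q . r_j)|^2 for each wavevector q.

    positions: array of shape (N, 3); q_vectors: array of shape (n_q, 3).
    """
    n = len(positions)
    phases = np.exp(1j * positions @ q_vectors.T)  # shape (N, n_q)
    amplitudes = phases.sum(axis=0)                # coherent sum over particles
    return (np.abs(amplitudes) ** 2) / n

# Toy example (assumed, for demonstration): 20 particles on a line with
# spacing a. A commensurate wavevector gives a Bragg-like peak S(q) = N,
# an incommensurate one is strongly suppressed.
a = 1.0
positions = np.array([[i * a, 0.0, 0.0] for i in range(20)])
qs = np.array([[2 * np.pi / a, 0.0, 0.0],
               [np.pi / a, 0.0, 0.0]])
print(structure_factor(positions, qs))
```

Averaging S(q) over wavevectors of equal magnitude |q| yields the isotropic structure factor that is compared against scattering experiments.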