954 results for PHYSICS


Relevance: 20.00%

Publisher:

Abstract:

A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been demonstrated. A sedimentation scheme based on first-order approximations of vertical mixing-ratio profiles has been developed. It produces relatively little numerical diffusion and copes well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on the vertical redistribution of H2O and HNO3 are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes, in accordance with the most widely used approaches, have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of individual PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g.
a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
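The sedimentation scheme itself is characterized above only by its design goals (first-order vertical profiles, little numerical diffusion, robustness to divergent velocity fields). As a minimal sketch of the mass-conserving, flux-form bookkeeping such a scheme must perform, and not of the actual ECHAM5/MESSy implementation, a first-order upwind settling step could look like:

```python
import numpy as np

def sediment_upwind(q, w, dz, dt):
    """One explicit flux-form upwind sedimentation step (illustrative).

    q  : mixing ratio per layer, ordered top to bottom
    w  : downward settling velocity per layer (m/s); may vary with height,
         so convergent or divergent velocity fields are handled naturally
    dz : layer thickness (m)
    dt : time step (s); stability requires w*dt <= dz
    Returns the updated profile and the flux leaving the bottom of the column.
    """
    flux = q * w * dt / dz           # fraction of each layer settling out
    flux = np.minimum(flux, q)       # never remove more than is present
    q_new = q - flux
    q_new[1:] += flux[:-1]           # settled material enters the layer below
    return q_new, flux[-1]           # flux[-1] leaves the column entirely
```

The scheme is conservative by construction: whatever leaves one layer either arrives in the layer below or exits through the column bottom.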


In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles (negatively charged electrons and so-called holes, quasi-particles of positive charge), as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations arises from the nonlinear coupling, and from the fact that local phenomena such as "hot electron effects" are only partially assessable from the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the most important property of this discretization is the continuous approximation of the normal fluxes. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators. At this stage, different estimators are compared. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements.
For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
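The solve-estimate-mark-refine cycle of such an adaptive algorithm can be sketched in a toy 1D setting, with piecewise-linear interpolation error standing in for a genuine a posteriori estimator (illustrative code under our own assumptions, not the mixed-FEM implementation of the thesis):

```python
import numpy as np

def adapt_mesh(f, a, b, tol=1e-3, theta=0.5, max_iter=60):
    """Estimate-mark-refine loop on [a, b]; eta plays the role of a local
    a posteriori error indicator (here: midpoint interpolation error of f)."""
    x = np.linspace(a, b, 5)
    for _ in range(max_iter):
        mid = 0.5 * (x[:-1] + x[1:])
        eta = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))   # estimate
        if eta.max() < tol:
            break
        # Doerfler marking: refine the largest indicators first, until the
        # marked set carries at least a fraction theta of the total error
        order = np.argsort(eta)[::-1]
        marked, acc = [], 0.0
        for i in order:
            marked.append(i)
            acc += eta[i]
            if acc >= theta * eta.sum():
                break
        x = np.sort(np.concatenate([x, mid[np.array(marked)]]))  # refine
    return x
```

The mesh ends up concentrated where the indicator is large, which is exactly the behavior one wants when resolving local charge-transport phenomena.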


The subject of this thesis lies in the area of applied mathematics known as inverse problems. Inverse problems are those where a set of measured data is analysed in order to obtain as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data, and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, under rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
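Each step of a linearisation-based reconstruction reduces to a regularized linear least-squares problem. The sketch below shows such a generic step; the Tikhonov regularization and all names are illustrative choices under our own assumptions, not necessarily those used in the thesis:

```python
import numpy as np

def linearized_step(J, dv, alpha):
    """One linearized reconstruction step.

    J     : sensitivity (Jacobian) matrix mapping conductivity perturbations
            to changes in boundary-voltage measurements
    dv    : measured voltage-change vector
    alpha : Tikhonov parameter (> 0); stabilizes the severely ill-posed
            inversion at the cost of some bias
    Solves the regularized normal equations (J^T J + alpha I) ds = J^T dv.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dv)
```

In practice the step is iterated, re-linearizing around the updated conductivity estimate, and alpha trades resolution against noise amplification.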


In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. Using a database containing GPS measurements of individual paths (position, velocity and covered distance at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we determine several empirical statistical laws that point to "universal" characteristics of human mobility. By developing simple stochastic models that suggest possible explanations of the empirical observations, we are able to indicate which key quantities and cognitive features rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of the times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities that are performed, and those of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We find that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of the average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
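The Benford's-law observation can be checked on any positive-valued data set (such as inter-trip times) by comparing leading-digit frequencies with log10(1 + 1/d). The helper below is an illustrative sketch, not code from the thesis:

```python
import numpy as np

def first_digit_freqs(values):
    """Empirical frequency of leading digits 1..9 for positive values."""
    v = np.asarray(values, dtype=float)
    d = (v / 10.0 ** np.floor(np.log10(v))).astype(int)   # leading digit
    return np.array([(d == k).mean() for k in range(1, 10)])

def benford_freqs():
    """Benford's law: P(first digit = d) = log10(1 + 1/d)."""
    d = np.arange(1, 10)
    return np.log10(1.0 + 1.0 / d)
```

Data whose logarithm is roughly uniformly spread over several decades, as broad inter-event time distributions tend to be, reproduce the Benford frequencies closely.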


This thesis reports on the creation and analysis of many-body states of interacting fermionic atoms in optical lattices. The realized system can be described by the Fermi-Hubbard Hamiltonian, which is an important model for correlated electrons in modern condensed matter physics. In this way, ultracold atoms can be utilized as a quantum simulator to study solid-state phenomena. The use of a Feshbach resonance in combination with a blue-detuned optical lattice and a red-detuned dipole trap enables independent control over all relevant parameters in the many-body Hamiltonian. By measuring the in-situ density distribution and the doublon fraction, it has been possible to identify both metallic and insulating phases in the repulsive Hubbard model, including the experimental observation of the fermionic Mott insulator. In the attractive case, the appearance of strong correlations has been detected via an anomalous expansion of the cloud that is caused by the formation of non-condensed pairs. By monitoring the in-situ density distribution of initially localized atoms during free expansion in a homogeneous optical lattice, a strong influence of interactions on the out-of-equilibrium dynamics within the Hubbard model has been found. The reported experiments pave the way for future studies of magnetic order and fermionic superfluidity in a clean and well-controlled experimental system.
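For reference, the Fermi-Hubbard Hamiltonian realized here has the standard form (t the tunnelling amplitude, U the on-site interaction; in the experiment both are independently tunable via the lattice depth and the Feshbach resonance, and the dipole trap adds a site-dependent potential not written out below):

```latex
\hat H \;=\; -t \sum_{\langle i,j\rangle,\sigma}
      \left( \hat c^{\dagger}_{i\sigma} \hat c^{\phantom{\dagger}}_{j\sigma}
             + \text{h.c.} \right)
  \;+\; U \sum_i \hat n_{i\uparrow}\, \hat n_{i\downarrow}
```

Here \hat c^{\dagger}_{i\sigma} creates a fermion with spin \sigma on lattice site i, the first sum runs over nearest-neighbor pairs, and \hat n_{i\sigma} = \hat c^{\dagger}_{i\sigma}\hat c^{\phantom{\dagger}}_{i\sigma}.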


In this thesis, the phenomenology of the Randall-Sundrum setup is investigated. In this context models with and without an enlarged SU(2)_L x SU(2)_R x U(1)_X x P_{LR} gauge symmetry, which removes corrections to the T parameter and to the Z b_L \bar b_L coupling, are compared with each other. The Kaluza-Klein decomposition is formulated within the mass basis, which allows for a clear understanding of various model-specific features. A complete discussion of tree-level flavor-changing effects is presented. Exact expressions for five dimensional propagators are derived, including Yukawa interactions that mediate flavor-off-diagonal transitions. The symmetry that reduces the corrections to the left-handed Z b \bar b coupling is analyzed in detail. In the literature, Randall-Sundrum models have been used to address the measured anomaly in the t \bar t forward-backward asymmetry. However, it will be shown that this is not possible within a natural approach to flavor. The rare decays t \to cZ and t \to ch are investigated, where in particular the latter could be observed at the LHC. A calculation of \Gamma_{12}^{B_s} in the presence of new physics is presented. It is shown that the Randall-Sundrum setup allows for an improved agreement with measurements of A_{SL}^s, S_{\psi\phi}, and \Delta\Gamma_s. For the first time, a complete one-loop calculation of all relevant Higgs-boson production and decay channels in the custodial Randall-Sundrum setup is performed, revealing a sensitivity to large new-physics scales at the LHC.
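For orientation, the Randall-Sundrum setup referred to throughout is built on the warped five-dimensional metric (written in one common sign convention; k is the AdS_5 curvature scale, y the coordinate of the compact extra dimension, and r_c the compactification radius):

```latex
ds^2 \;=\; e^{-2k|y|}\, \eta_{\mu\nu}\, dx^{\mu} dx^{\nu} \;-\; dy^2 ,
\qquad 0 \le |y| \le \pi r_c
```

The exponential warp factor is what generates large hierarchies of effective scales, and hence the model-building freedom in flavor physics exploited in the text.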


Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent particle-in-cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied. For a range of parameters of interest for laser-plasma acceleration, the dependence of the threshold for self-injection in the non-evolving wake on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser-wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on clusters of Graphics Processing Units (GPUs), is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
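PIC codes such as those mentioned above advance macro-particles in the electromagnetic fields every time step. Their actual pushers are relativistic and heavily optimized; as a generic illustration only, the classic non-relativistic Boris scheme that underlies most PIC particle pushers can be sketched as:

```python
import numpy as np

def boris_push(v, E, B, q_m, dt):
    """One Boris step for a particle velocity (non-relativistic form).

    v, E, B : 3-vectors (velocity, electric field, magnetic field)
    q_m     : charge-to-mass ratio
    The scheme splits the Lorentz force into a half electric kick,
    an exact magnetic rotation, and a second half electric kick,
    so the magnetic field alone never changes the particle's speed.
    """
    v_minus = v + 0.5 * q_m * dt * E              # first half electric kick
    t = 0.5 * q_m * dt * B                        # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)       # magnetic rotation
    return v_plus + 0.5 * q_m * dt * E            # second half electric kick
```

Energy conservation in a pure magnetic field is exact by construction, which is one reason the Boris scheme is the de facto standard in PIC simulation.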


In this work I present aspects of QCD calculations that are closely tied to the numerical evaluation of NLO QCD amplitudes, in particular the corresponding one-loop contributions, and to the efficient computation of the associated collider observables. Two topics have emerged that constitute the main part of this work. A large part focuses on the group-theoretic behavior of one-loop amplitudes in QCD, with the aim of finding a way to treat the associated color degrees of freedom correctly and efficiently. To this end, a new approach is introduced that can be used to express color-ordered one-loop partial amplitudes with multiple quark-antiquark pairs through shuffle sums over cyclically ordered primitive one-loop amplitudes. A second large part focuses on the local subtraction of pole terms that lead to divergences in primitive one-loop amplitudes. In particular, a method has been developed to locally renormalize the primitive one-loop amplitudes using local UV counterterms and efficient recursive routines. Together with suitable local soft and collinear subtraction terms, the subtraction method is thereby extended to the virtual part of the calculation of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions to NLO observables. The method was finally applied successfully to the calculation of NLO jet rates in electron-positron annihilation in the leading-color limit.
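The shuffle sums mentioned above combine cyclically ordered primitive amplitudes over all order-preserving interleavings of two ordered sets of external legs. The combinatorial core, the shuffle product of two ordered sequences, can be sketched as follows (illustrative code, not from the thesis):

```python
def shuffles(a, b):
    """Yield all interleavings of sequences a and b that preserve the
    internal ordering of each (the shuffle product). For lists of
    lengths m and n there are binomial(m + n, m) such interleavings."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for tail in shuffles(a[1:], b):       # take the next element from a
        yield [a[0]] + tail
    for tail in shuffles(a, b[1:]):       # or take it from b
        yield [b[0]] + tail
```

In the amplitude context, each interleaving labels one primitive amplitude contributing to a given color-ordered partial amplitude.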


The aim of this thesis is to highlight the connections between monoidal categories, the Yang-Baxter equation, and the integrability of certain models. The main object of our work has been the Frobenius monoid and its connection to C*-algebras. In this context, all of the proofs make use of the machinery of diagrammatic algebra. In the course of this thesis, those proofs have been reproduced in the more familiar language of multilinear algebra, so as to make these results accessible to a wider range of potential readers.
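For reference, the Yang-Baxter equation that links the monoidal-category picture to integrability reads, for an invertible linear map R: V ⊗ V → V ⊗ V acting on the triple tensor product V ⊗ V ⊗ V:

```latex
(R \otimes \mathrm{id})\,(\mathrm{id} \otimes R)\,(R \otimes \mathrm{id})
 \;=\;
(\mathrm{id} \otimes R)\,(R \otimes \mathrm{id})\,(\mathrm{id} \otimes R)
```

Solutions R (R-matrices) provide braidings on monoidal categories and, in the physics reading, guarantee the consistency of factorized scattering in integrable models.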


Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from diffusion models based on partial differential equations to rule-based cellular-level simulators, aiming both at improving our quantitative understanding of the underlying biological processes and, in the mid and long term, at constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling that takes into account both the cellular and the macroscopic mechanical level. To this end, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free-growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of the smallest to the largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% from coupling biomechanics to the cellular simulator, as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. It might therefore be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.
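The shape measure used above, the ratio of the smallest to the largest principal moment of inertia, can be computed from simulated positions of tumor material as in this illustrative sketch (the function name and the equal-mass default are our assumptions, not the paper's):

```python
import numpy as np

def inertia_ratio(points, masses=None):
    """Ratio of smallest to largest principal moment of inertia of a 3D
    point cloud; close to 1.0 for a spherical mass distribution, small
    for an elongated one."""
    p = np.asarray(points, dtype=float)
    m = np.ones(len(p)) if masses is None else np.asarray(masses, dtype=float)
    r = p - np.average(p, axis=0, weights=m)       # center-of-mass frame
    r2 = (r ** 2).sum(axis=1)
    # inertia tensor: I_ab = sum_i m_i (r_i^2 delta_ab - r_ia r_ib)
    I = np.einsum('i,ab->ab', m * r2, np.eye(3)) \
        - np.einsum('i,ia,ib->ab', m, r, r)
    w = np.linalg.eigvalsh(I)                      # principal moments
    return w[0] / w[-1]
```

Being built from eigenvalues of the inertia tensor, the ratio is invariant under rotations and translations of the tumor, which makes it a convenient scalar summary of shape anisotropy.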
