964 results for Mean-field Theory
Abstract:
We apply a stochastic dynamics method to a differential equation model, proposed by Marc Lipsitch and collaborators (Proc. R. Soc. Lond. B 260, 321, 1995), in which the transmission of parasites occurs both from a parent to its offspring (vertical transmission) and by contact with infected hosts (horizontal transmission). Herpes, hepatitis, and AIDS are examples of diseases for which horizontal and vertical transmission occur simultaneously as the virus spreads. Understanding the role of each type of transmission in the prevalence of infection in a susceptible host population may provide information about the factors that contribute to the eradication and/or control of these diseases. We present a pair mean-field approximation obtained from the master equation of the model. The pair approximation consists of the differential equations for the susceptible and infected population densities together with the differential equations for the pair densities that enter the former. In terms of the model parameters, we obtain the conditions that lead to eradication of the disease, and we set up the phase diagram based on the local stability analysis of the fixed points. We also perform Monte Carlo simulations of the model on complete graphs and Erdős-Rényi graphs in order to investigate the influence of population size and neighborhood on the mean-field results; in this way, we also expect to assess the contributions of vertical and horizontal transmission to the elimination of the parasite. Pair Approximation for a Model of Vertical and Horizontal Transmission of Parasites.
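For orientation, a minimal Monte Carlo sketch of mixed vertical/horizontal transmission on an Erdős-Rényi graph might look as follows; the synchronous update rules and the parameters beta (horizontal transmission probability per infected neighbour), p_vert (vertical transmission probability), b (birth), and mu (death), as well as the use of networkx, are illustrative stand-ins and not the rates or code of the work described above.

```python
import random
import networkx as nx

def step(G, state, beta=0.3, p_vert=0.5, b=0.2, mu=0.05):
    """One synchronous update; states are 'S', 'I' or 'E' (empty site)."""
    new = dict(state)
    for n in G.nodes:
        if state[n] == 'S':
            # horizontal transmission: independent trial per infected neighbour
            k = sum(1 for m in G[n] if state[m] == 'I')
            if random.random() < 1.0 - (1.0 - beta) ** k:
                new[n] = 'I'
            elif random.random() < mu:
                new[n] = 'E'          # death leaves an empty site
        elif state[n] == 'I':
            if random.random() < mu:
                new[n] = 'E'
        else:
            # empty site: an occupied neighbour may place an offspring here;
            # the offspring of an infected parent is infected w.p. p_vert
            parents = [m for m in G[n] if state[m] in ('S', 'I')]
            if parents and random.random() < b:
                parent = random.choice(parents)
                infected = state[parent] == 'I' and random.random() < p_vert
                new[n] = 'I' if infected else 'S'
    return new

G = nx.erdos_renyi_graph(1000, 0.01, seed=1)
state = {n: 'I' if random.random() < 0.1 else 'S' for n in G.nodes}
for _ in range(500):
    state = step(G, state)
print('infected fraction:', sum(s == 'I' for s in state.values()) / G.number_of_nodes())
```

Running the same sketch on a complete graph instead of the Erdős-Rényi graph isolates the neighborhood effect that the abstract refers to.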
Abstract:
We consider the Shannon mutual information of subsystems of critical quantum chains in their ground states. Our results indicate a universal leading behavior for large subsystem sizes. Moreover, as happens with the entanglement entropy, its finite-size behavior yields the conformal anomaly c of the conformal field theory governing the long-distance physics of the quantum chain. We study analytically a chain of coupled harmonic oscillators and numerically the Q-state Potts models (Q = 2, 3, and 4), the XXZ quantum chain, and the spin-1 Fateev-Zamolodchikov model. The Shannon mutual information is an easily computed quantity, and our results indicate that, already for relatively small lattice sizes, its finite-size behavior detects the universality class of the quantum critical behavior.
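For reference, the quantity in question is built from the ground-state probabilities of configurations measured in a fixed local basis: writing Sh(X) for the Shannon entropy of the marginal distribution on a subsystem X,

\[
\mathrm{Sh}(X) \;=\; -\sum_{x_X} p(x_X)\,\ln p(x_X), \qquad
I(A{:}B) \;=\; \mathrm{Sh}(A) + \mathrm{Sh}(B) - \mathrm{Sh}(A \cup B).
\]

The finite-size analysis then fits I for a subsystem of \(\ell\) sites in a periodic chain of L sites to a CFT-inspired chord-length ansatz, schematically \(I(\ell,L) \simeq a \ln[(L/\pi)\sin(\pi\ell/L)] + \mathrm{const}\) with a prefactor a proportional to c; the value of a is extracted from the fits, not assumed here.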
Abstract:
There is very strong evidence that ordinary matter in the Universe is outweighed by almost ten times as much so-called dark matter. Dark matter neither emits nor absorbs light, and we do not know what it is. One of the theoretically favoured candidates is the neutralino of the supersymmetric extension of the Standard Model of particle physics. A theoretical calculation of the expected cosmic neutralino density must include the so-called coannihilations: particle processes in the early Universe with any two supersymmetric particles in the initial state and any two Standard Model particles in the final state. In this thesis we discuss the importance of these processes for the calculation of the relic density. We go through some details of the calculation of coannihilations with one or two so-called sfermions in the initial state. This includes a discussion of Feynman diagrams with clashing arrows, a calculation of colour factors, and a discussion of ghosts in non-Abelian field theory. Supersymmetric models contain a large number of free parameters on which the masses and couplings depend. The requirement that the predicted density of cosmic neutralinos agree with the density observed for the unknown dark matter constrains these parameters. Other constraints come from experiments that are not related to cosmology; for instance, the supersymmetric loop contribution to the rare b -> sγ decay should agree with the measured branching fraction. The principles of the calculation of this rare decay are discussed in the thesis. Ongoing and planned searches for cosmic neutralinos can also constrain the parameters. In one of the accompanying papers of the thesis we compare the detection prospects of several current and future searches for neutralino dark matter.
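For context, a standard way of organizing coannihilations (due to Griest and Seckel) is through an effective, thermally weighted annihilation cross section; below, \(\Delta_i = (m_i - m_\chi)/m_\chi\) are the mass splittings relative to the neutralino, \(g_i\) the internal degrees of freedom, and \(x = m_\chi/T\):

\[
\sigma_{\mathrm{eff}}(x) \;=\; \sum_{ij} \sigma_{ij}\,
\frac{g_i g_j}{g_{\mathrm{eff}}^2}\,
(1+\Delta_i)^{3/2}(1+\Delta_j)^{3/2}\,
e^{-x(\Delta_i+\Delta_j)},
\qquad
g_{\mathrm{eff}} \;=\; \sum_i g_i\,(1+\Delta_i)^{3/2}\, e^{-x\Delta_i}.
\]

The Boltzmann factor \(e^{-x\Delta_i}\) makes plain why only supersymmetric particles nearly degenerate with the neutralino, such as light sfermions, contribute appreciably to the relic density.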
Abstract:
To study phase separation in binary polymer blends, two dynamical extensions of self-consistent field theory (SCFT) are developed. The first method uses a temporal evolution of the densities and is called dynamic self-consistent field theory (DSCFT), while the second method exploits the temporal propagation of the effective external fields of SCFT and is referred to as External Potential Dynamics (EPD). For DSCFT, kinetic coefficients are used that mimic either the local dynamics of point particles or the nonlocal dynamics of Rouse polymers. With a constant kinetic coefficient, the EPD method reproduces the dynamics of Rouse chains and requires less computation time than DSCFT. These methods are applied to various systems. First, spinodal decomposition in the bulk is studied, focusing on the difference between local and nonlocal dynamics. To check the validity of the results, Monte Carlo simulations are performed. In polymer blends confined by two walls that both prefer the same polymer species, the formation of enrichment layers at the walls is investigated. For thin polymer films between antisymmetric walls, i.e. where each wall prefers a different polymer species, the tension of an interface formed parallel to the walls is analyzed and the phase transition from an initially homogeneous mixture to the localized phase is considered. Furthermore, the dynamics of capillary wave modes is investigated.
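A minimal sketch of the kind of density evolution underlying DSCFT, in the spirit of conserved (model-B) dynamics and with the noise and the specific kinetic kernels of the thesis omitted: the local composition φ relaxes along gradients of the SCFT chemical potential,

\[
\frac{\partial \phi(\mathbf r,t)}{\partial t}
\;=\; \nabla \cdot \int d^3r'\;
\Lambda(\mathbf r,\mathbf r')\,\nabla'\,
\frac{\delta F[\phi]}{\delta \phi(\mathbf r',t)},
\]

where a local, delta-like Onsager coefficient Λ mimics the dynamics of point particles and a nonlocal kernel mimics Rouse dynamics; EPD instead propagates the conjugate external fields of SCFT with a constant kinetic coefficient, which is what makes it cheaper.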
Abstract:
The thesis begins with a comparison of specific regularization methods in quantum field theory with the Epstein-Glaser procedure for the perturbative construction of the S-matrix. Since the Epstein-Glaser procedure can itself be used as a regularization scheme and, moreover, is based exclusively on physically motivated postulates, this comparison provides a criterion for the admissibility of other regularization methods. In addition to establishing this admissibility, the comparison yields a further essential result: a new, practically applicable, and consistent regularization scheme, the modified BPHZ scheme. It is demonstrated on one-loop diagrams from QED (electron self-energy, vacuum polarization, and vertex correction). In contrast to the widely used dimensional regularization, this scheme is applicable without restriction to chiral theories as well; as an example, the U(1) anomaly arising in an axial extension of the QED Lagrangian is computed. At the level of multi-loop diagrams, the comparison of the Epstein-Glaser construction with the well-known BPHZ scheme, using several examples from Phi^4 theory including the so-called sunrise diagram, shows that the subdiagrams contributing to the regularization via the forest formula of the BPHZ scheme can be restricted to a smaller class. This result is likewise significant for the practice of regularization, since it leads to a simplification already at the level of the subdiagrams that must be taken into account.
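For reference, the BPHZ subtraction referred to here is Zimmermann's forest formula: for a diagram Γ with integrand \(I_\Gamma\), renormalization parts γ, and Taylor subtraction operators \(t_\gamma^{d(\gamma)}\) up to the superficial degree of divergence d(γ),

\[
R_\Gamma \;=\; \sum_{U \in \mathcal F(\Gamma)} \;\prod_{\gamma \in U} \big(- t_\gamma^{\,d(\gamma)}\big)\, I_\Gamma ,
\]

where the sum runs over all forests of Γ, i.e. families of non-overlapping renormalization parts including the empty one, with inner subtractions applied first. The result quoted above shrinks the class of subdiagrams that must appear in this sum.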
Abstract:
Scaling arguments are used to study rod-coil copolymers with a fixed composition of stiff rods and flexible chains. In a selective solvent that dissolves only the chains, a rod-coil multiblock forms cylindrical micelles of aggregated rods connected by chain segments. The rods aggregate to gain energy, and this process is balanced by the entropy loss of the flexible chains. The adsorption behavior of aggregates of individual rod-coil diblocks stacked in parallel in a selective solvent is discussed using extended scaling considerations. When such an aggregate adsorbs with its rods parallel to the surface, the rods shift relative to one another. In addition, the stability of the adsorbed aggregates and other possible configurations are examined. To study a rod-coil multiblock with variable composition, a field theory is developed in which each segment can be either stiff or flexible. The system exhibits three phase states: open chain, amorphous globule, and liquid-crystalline globule. At the transition from the amorphous to the liquid-crystalline globule, the fraction of stiff segments rises rapidly. This transition is driven by the isotropic interaction between the stiff segments and the anisotropic surface energy of the globule.
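Purely for illustration (the actual exponents and prefactors are derived in the work itself), the competition described above can be written as a schematic free energy per rod in a micelle of m aggregated rods,

\[
\frac{F(m)}{m\,k_B T} \;\sim\; -\,\varepsilon\Big(1 - \frac{1}{m}\Big) \;+\; f_{\mathrm{coil}}(m),
\]

where ε > 0 is a dimensionless binding gain per aggregated rod (saturating at large m, since boundary rods gain less) and \(f_{\mathrm{coil}}(m)\) is the entropic stretching penalty of the attached chains, which grows with the aggregation number; minimizing over m fixes the equilibrium micelle size.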
Abstract:
The aim of this study was to develop a model capable of capturing the different contributions that characterize the nonlinear behaviour of reinforced concrete structures. In particular, especially for non-slender structures, the contribution of bending to the nonlinear deformation may not be sufficient to determine the structural response. Two different models based on a fibre beam-column element are proposed here. These models can reproduce the flexure-shear interaction in the nonlinear range, with the aim of improving the analysis of shear-critical structures. The first element discussed is based on a flexibility formulation combined with the Modified Compression Field Theory (MCFT) as the material constitutive law. The other model described in this thesis is based on a three-field variational formulation combined with a 3D generalized plastic-damage model as the constitutive relationship. The first model was developed by combining a fibre beam-column element based on the flexibility formulation with the MCFT as the constitutive relationship; the flexibility formulation, in fact, appears to be particularly effective for analysis in the nonlinear range. It is precisely the coupling between the fibre element, which models the structure, and the shear panel, which models the individual fibres, that makes it possible to describe the nonlinear response associated with flexure and shear, and especially their interaction in the nonlinear range. The model was implemented in an original MATLAB® computer code for describing the response of generic structures. The simulations carried out made it possible to verify the model's range of applicability. Comparisons with available experimental results for reinforced concrete shear walls were performed in order to validate the model; these results have the peculiarity of distinguishing the separate contributions of flexure and shear. The simulations presented were carried out, in particular, for monotonic loading. The model was also tested through numerical comparisons with other computer programs. Finally, it was applied in a numerical study of the influence of the nonlinear shear response on non-slender reinforced concrete (RC) members. Another approach to the problem was studied during a period of research at the University of California, Berkeley. The beam formulation follows the assumptions of the Timoshenko shear beam theory for the displacement field and uses a three-field variational formulation in the derivation of the element response. A generalized plasticity model is implemented for structural steel, and a 3D plastic-damage model is used for the simulation of concrete. The transverse normal stress is used to satisfy the transverse equilibrium equations at each control section; this criterion is also used for the condensation of degrees of freedom from the 3D material model to the beam element. This thesis presents the beam formulation and the constitutive relationships; various analyses and comparisons between the two proposed models are still being carried out.
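For orientation, a three-field variational statement of the Hu-Washizu type underlying the second element treats displacements u, strains ε, and stresses σ as independent fields; a schematic small-strain form (not the thesis' exact functional) is

\[
\Pi(u,\varepsilon,\sigma) \;=\; \int_\Omega \Big[\, W(\varepsilon) \;+\; \sigma : \big(\nabla^{s} u - \varepsilon\big) \Big]\, d\Omega \;-\; \Pi_{\mathrm{ext}}(u),
\]

where W is the stored energy of the (here plastic-damage) material law and \(\nabla^{s}u\) the compatible strain; stationarity with respect to σ enforces compatibility, with respect to ε the constitutive law, and with respect to u equilibrium.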
Abstract:
The Spin-Statistics theorem states that the statistics of a system of identical particles is determined by their spin: particles of integer spin are bosons (i.e. obey Bose-Einstein statistics), whereas particles of half-integer spin are fermions (i.e. obey Fermi-Dirac statistics). Since the original proof by Fierz and Pauli, it has been known that the connection between spin and statistics follows from the general principles of relativistic quantum field theory. In spite of this, there are different approaches to spin-statistics, and it is not clear whether the theorem holds under assumptions that are different from, and even less restrictive than, the usual ones (e.g. Lorentz covariance). Additionally, in quantum mechanics there is a deep relation between indistinguishability and the geometry of the configuration space, clearly illustrated by Gibbs' paradox. Therefore, for many years efforts have been made to find a geometric proof of the connection between spin and statistics. Recently, various proposals have been put forward that attempt to derive the spin-statistics connection from assumptions different from those used in the relativistic, quantum field theoretic proofs. Among these is the one due to Berry and Robbins (BR), based on the postulation of a certain single-valuedness condition, which has caused renewed interest in the problem. In the present thesis, we consider the problem of indistinguishability in quantum mechanics from a geometric-algebraic point of view. An approach is developed for studying configuration spaces Q having a finite fundamental group, which allows us to describe different geometric structures of Q in terms of spaces of functions on the universal cover of Q. In particular, it is shown that the space of complex continuous functions over the universal cover of Q admits a decomposition into C(Q)-submodules, labelled by the irreducible representations of the fundamental group of Q, which can be interpreted as the spaces of sections of certain flat vector bundles over Q. With this technique, various results pertaining to the problem of quantum indistinguishability are reproduced in a clear and systematic way. Our method is also used to give a global formulation of the BR construction. As a result of this analysis, it is found that the single-valuedness condition of BR is inconsistent. Additionally, a proposal aiming at establishing the Fermi-Bose alternative within our approach is made.
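In symbols, the decomposition mentioned above reads as follows (with \(\tilde Q\) the universal cover, \(G = \pi_1(Q)\) finite, and \(E_\rho = \tilde Q \times_\rho \mathbb C^{d_\rho}\) the flat vector bundle associated with the irreducible representation ρ; multiplicities are suppressed here):

\[
C(\tilde Q) \;\cong\; \bigoplus_{\rho \in \hat G} M_\rho , \qquad
M_\rho \;\cong\; \Gamma(E_\rho) \quad \text{as } C(Q)\text{-modules.}
\]

The trivial representation recovers \(C(Q)\) itself, while the other summands carry the inequivalent quantizations (e.g. the bosonic and fermionic sectors when \(G\) is the permutation group).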
Abstract:
The present work was motivated by the recognition that a theory of intentionality is incomplete without a theory of implicit intentionality. The outline of such a theory rests on the assumption that implicit ("supplementary" or "co-conscious") experiential contents are intentionally effective: that they contribute to the "constitution" of intentional objects, in the sense of Husserl and Gurwitsch. The main aim of the present investigations is to work out the conditions and circumstances of this effectiveness. To this end, (1) a phenomenological theory of implicit content was critically explicated, and (2) this theory was tested against several current approaches in analytic philosophy. In the phenomenological part of the work, the methodological presuppositions of Gurwitsch's gestalt-theoretic reformulation of Husserl's project were first examined critically, with particular attention to the so-called constancy hypothesis. Furthermore, Husserl's conception of the noema and his doctrine of horizons were explicated from the perspective of Gurwitsch's field theory of consciousness, and Gurwitsch's threefold articulation of the field of consciousness, the copresence-coherence-relevance schema, was then extended by the phenomenological concepts of "potentiality", "typicality", and "motivation". The relations underlying these concepts proved to be "more than merely contingent, but less than logical or necessary" (Mulligan). Using examples from the analytic philosophy of perception (Dretske, Peacocke, Dennett, Kelly) and of language (Sperber, Wilson, Searle), the phenomenological concept of implicit content was critically assessed and developed further. Among other things, (1) the connection between the phenomenological concept of "prepredicative content" and the analytic concept of "nonconceptual content" was demonstrated, and (2) criteria for the ascription of implicit beliefs in typical cases of predicative intentionality were compiled and systematized.
Abstract:
The Mathai-Quillen (MQ) formalism is a method for constructing the Thom class of a vector bundle through a differential form with a Gaussian profile. The aim of this thesis is to formulate a new representation of the Thom class using geometric aspects of Batalin-Vilkovisky (BV) quantization. In the first part of the work, the BV and MQ formalisms are reviewed, both in the finite-dimensional case. Finally, we exploit the odd Fourier transform, regarding the MQ form as a function defined on a suitable graded space.
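For reference, the finite-dimensional Gaussian representative at the heart of the MQ formalism can be written, for an oriented rank-2m bundle with connection ∇ and curvature Ω, as a Berezin integral over odd fibre variables χ (conventions, in particular factors of i and the normalization, vary across the literature):

\[
U \;=\; (2\pi)^{-m}\, e^{-|x|^2/2} \int d\chi\;
\exp\!\Big( \tfrac{1}{2}\, \chi_a\, \Omega^{ab}\, \chi_b \;+\; i\, \nabla x^a\, \chi_a \Big),
\]

which is closed, of total degree 2m, and integrates to 1 along the fibres; it is precisely this Gaussian profile that the thesis rephrases in BV terms via the odd Fourier transform.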
Abstract:
We study the effective interaction, mediated by thermal fluctuations of the interface, between two ellipsoidal particles trapped at the interface between two fluid phases. Within a coarse-grained picture, the properties of fluid interfaces are very well described by an effective capillary wave Hamiltonian which governs both the equilibrium interface configuration and the thermal fluctuations (capillary waves) around this equilibrium (or mean-field) position. In accordance with the Goldstone theorem, the capillary waves are long-range correlated: the interface breaks the continuous translational symmetry of the system, and in the limit of vanishing external fields, like gravity, this breaking must be accompanied by easily excitable long-wavelength (Goldstone) modes, namely the capillary waves. In this system the restriction of the long-ranged interface fluctuations by the particles gives rise to fluctuation-induced forces which are equivalent to interactions of Casimir type and which are anisotropic in the interface plane. Since the position and the orientation of the colloids with respect to the interface normal may also fluctuate, this system is an example of the Casimir effect with fluctuating boundary conditions. In the approach taken here, the Casimir interaction is rewritten as the interaction between fluctuating multipole moments of an auxiliary charge-density-like field defined on the area enclosed by the contact lines. These fluctuations couple to fluctuations of the multipole moments of the contact line position (due to the possible positional and orientational fluctuations of the colloids). We obtain explicit expressions for the behavior of the Casimir interaction at large distances for arbitrary ellipsoid aspect ratios. If the colloid fluctuations are suppressed, the Casimir interaction at large distances is isotropic, attractive, and long-ranged (double-logarithmic in the distance). If, however, the colloid fluctuations are included, the Casimir interaction at large distances changes to a power law in the inverse distance and becomes anisotropic: the leading power is 4 if only vertical fluctuations of the colloid center are allowed, and it becomes 8 if orientational fluctuations are also included.
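The coarse-grained energy referred to above is, for small deviations u(x) of the interface from its mean plane,

\[
\mathcal H_{\mathrm{cw}}[u] \;=\; \frac{\sigma}{2} \int_{A} d^2x
\left[ \left(\nabla u\right)^2 + \frac{u^2}{\lambda_c^2} \right],
\]

with σ the interface tension and \(\lambda_c\) the capillary length set by gravity, \(\lambda_c^2 = \sigma/(\Delta\rho\, g)\); in the limit of vanishing external field, \(\lambda_c \to \infty\), the \(u^2\) term drops out and the long-wavelength Goldstone modes become arbitrarily soft, which is the origin of the long-range correlations.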
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying, in part, graph theory, algebraic geometry, and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. By the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. Using a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces, which leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method to the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in the Laurent coefficients of certain dimensionally regularized Feynman integrals. Using the extended sector decomposition algorithm, we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing an arbitrary number of polynomials in the integrand.
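For reference, the two Symanzik polynomials of a graph with Feynman parameters \(x_e\) are defined over spanning trees T and spanning 2-forests \((T_1, T_2)\) (Euclidean conventions):

\[
\mathcal U \;=\; \sum_{T} \prod_{e \notin T} x_e ,
\qquad
\mathcal F \;=\; \sum_{(T_1,T_2)} q(T_1)^2 \prod_{e \notin T_1 \cup T_2} x_e
\;+\; \mathcal U \sum_{e} x_e\, m_e^2 ,
\]

where \(q(T_1)\) is the total external momentum entering the component \(T_1\) and \(m_e\) are the internal masses. The matrix-tree theorem expresses \(\mathcal U\) as a minor of the graph Laplacian, which is the starting point of the all-minors generalization mentioned above.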
Abstract:
It is well known that many realistic mathematical models of biological systems, such as cell growth, cellular development and differentiation, gene expression, gene regulatory networks, enzyme cascades, synaptic plasticity, aging, and population growth, need to include stochasticity. These systems are not isolated but subject to intrinsic and extrinsic fluctuations, which leads to a quasi-equilibrium state (homeostasis). The natural framework is provided by Markov processes, and the Master equation (ME) describes the temporal evolution of the probability of each state, specified by the number of units of each species. The ME is a relevant tool for modeling realistic biological systems and also allows one to explore the behavior of open systems. Such systems may exhibit not only the classical thermodynamic equilibrium states but also nonequilibrium steady states (NESS). This thesis deals with biological problems that can be treated with the Master equation, and with its thermodynamic consequences. It is organized into six chapters containing four new scientific works, grouped in two parts. (1) Biological applications of the Master equation: we study the stochastic properties of a toggle switch, involving a protein compound and a miRNA cluster, known to control the eukaryotic cell cycle and possibly involved in oncogenesis, and we propose a one-parameter family of master equations for the evolution of a population having the logistic equation as its mean-field limit. (2) Nonequilibrium thermodynamics in terms of the Master equation: we study the dynamical role of the chemical fluxes that characterize the NESS of a chemical network, and we propose a one-parameter parametrization of BCM learning, originally introduced to describe synaptic plasticity processes, in order to study the differences between systems in detailed balance (DB) and in a NESS.
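In the notation used throughout, the ME for the probability \(P_n(t)\) of state n, with transition rates \(W_{nm}\) from m to n, reads

\[
\frac{d P_n(t)}{dt} \;=\; \sum_{m \neq n} \Big[ W_{nm}\, P_m(t) \;-\; W_{mn}\, P_n(t) \Big].
\]

A NESS is a stationary solution \(P^*\) for which some of the net fluxes \(J_{nm} = W_{nm} P^*_m - W_{mn} P^*_n\) remain nonzero, i.e. detailed balance is broken; it is precisely these persistent fluxes whose dynamical role is studied in the second part.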
Abstract:
While the Standard Model of elementary particle physics constitutes a consistent, renormalizable quantum field theory of three of the four known interactions, the quantization of gravity remains an unsolved problem. In recent years, however, evidence has accumulated that metric gravity is asymptotically safe. This means that a quantum field theory can be constructed for this interaction as well, one that is renormalizable in a generalized sense which no longer refers explicitly to perturbation theory. Moreover, this approach, which rests on the Wilsonian renormalization group, predicts the correct microscopic action of the theory. Classically, metric gravity is equivalent, at the level of the vacuum field equations, to Einstein-Cartan theory, which uses the vielbein and the spin connection as fundamental variables. This theory, however, has more degrees of freedom, a larger gauge group, and an underlying action of first order, all of which complicate a treatment analogous to that of metric gravity.

In this thesis, a three-dimensional truncation of the type of a generalized Hilbert-Palatini action is studied, which captures, besides the running of the Newton constant and the cosmological constant, also the renormalization of the Immirzi parameter. Despite the difficulties indicated above, it was possible to compute the spectrum of the free Hilbert-Palatini propagator analytically. On this basis, a flow equation of proper-time type is constructed. Suitable gauge conditions are chosen and analyzed in detail; the structure of the gauge group requires a covariantization of the gauge transformations. The resulting flow is studied for various regularization schemes and gauge parameters. This provides convincing evidence for asymptotic safety in the Einstein-Cartan approach as well, and thus for the possible existence of a mathematically consistent and predictive fundamental quantum theory of gravitation. In particular, one finds a pair of non-Gaussian fixed points exhibiting anti-screening. At these fixed points the Newton constant and the cosmological constant are both relevant couplings, whereas the Immirzi parameter is irrelevant at one fixed point and relevant at the other; moreover, the beta function of the Immirzi parameter has a remarkably simple form. The results are robust under variations of the regularization scheme, although future investigations should reduce the remaining gauge dependences.
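The generalized Hilbert-Palatini truncation mentioned above is of Holst type; in terms of the vielbein e and spin connection ω, with Newton constant \(G_N\), cosmological constant Λ, and Immirzi parameter γ, the ansatz reads schematically (sign and normalization conventions vary):

\[
S[e,\omega] \;=\; \frac{1}{16\pi G_N} \int d^4x \; e
\left[ e^\mu_a e^\nu_b \left( R_{\mu\nu}{}^{ab}(\omega)
\;-\; \frac{1}{2\gamma}\, \epsilon^{ab}{}_{cd}\, R_{\mu\nu}{}^{cd}(\omega) \right)
\;-\; 2\Lambda \right],
\]

so that the three running couplings of the truncation are precisely \((G_N, \Lambda, \gamma)\); on shell, the γ-dependent term does not affect the classical vacuum field equations, which is why the theory is classically equivalent to metric gravity there.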