927 results for Random finite set theory
Abstract:
In this paper we propose a generalization of density functional theory. The theory leads to single-particle equations of motion with a quasilocal mean-field operator, which contains a quasiparticle position-dependent effective mass and a spin-orbit potential. The energy density functional is constructed using the extended Thomas-Fermi approximation, and the ground-state properties of doubly magic nuclei are considered within the framework of this approach. Calculations were performed using the finite-range Gogny D1S force, and the results are compared with exact Hartree-Fock calculations.
Abstract:
Semiclassical theories such as the Thomas-Fermi and Wigner-Kirkwood methods give a good description of the smooth average part of the total energy of a Fermi gas in some external potential when the chemical potential is varied. However, in systems with a fixed number of particles N, these methods overbind the actual average of the quantum energy as N is varied. We describe a theory that accounts for this effect. Numerical illustrations are discussed for fermions trapped in a harmonic oscillator potential and in a hard-wall cavity, and for self-consistent calculations of atomic nuclei. In the latter case, the influence of deformations on the average behavior of the energy is also considered.
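As an illustrative aside (not a calculation from the cited work), the interplay between the smooth average energy and its shell fluctuations can already be seen in a non-interacting toy model: N fermions filling the levels of a 3D isotropic harmonic oscillator, for which the leading Thomas-Fermi estimate of the total energy is (3N)^(4/3)/4 in units of the oscillator quantum. The following sketch is hypothetical and only meant to make the notion of a "smooth part" concrete:

```python
# Illustrative sketch (not from the paper): exact vs. smooth (Thomas-Fermi)
# total energy of N non-interacting fermions in a 3D isotropic harmonic trap.
# Energies in units of hbar*omega; shell n has single-particle energy n + 3/2
# and degeneracy (n + 1) * (n + 2), including the spin factor of 2.

def closed_shell_energy(n_max):
    """Exact particle number and total energy with shells 0..n_max filled."""
    N = sum((n + 1) * (n + 2) for n in range(n_max + 1))
    E = sum((n + 1) * (n + 2) * (n + 1.5) for n in range(n_max + 1))
    return N, E

def smooth_energy(N):
    """Leading Thomas-Fermi (smooth) estimate of the total energy."""
    return (3 * N) ** (4.0 / 3.0) / 4.0

for n_max in range(5):
    N, E = closed_shell_energy(n_max)
    # the difference E - smooth_energy(N) is the shell correction
    print(N, E, smooth_energy(N), E - smooth_energy(N))
```

The printed difference oscillates with N; it is this fluctuating shell part, on top of the smooth average, that semiclassical methods such as Thomas-Fermi and Wigner-Kirkwood do not capture.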
Abstract:
Routine activity theory, introduced by Cohen & Felson in 1979, states that criminal acts occur due to the presence of criminals and victims and the absence of guardians in time and place. As the number of collisions of these elements in place and time increases, criminal acts will also increase, even if the number of criminals or civilians within the vicinity of a city remains the same. Street robbery is a typical example of routine activity theory, and its occurrence can be predicted using the theory. Agent-based models allow simulation of diversity among individuals; therefore, agent-based simulation of street robbery can be used to visualize how chronological aspects of human activity influence the incidence of street robbery. The conceptual model identifies three classes of people: criminals, civilians and police, with certain activity areas for each. Police exist only as agents of formal guardianship. Criminals with a tendency for crime search for their victims. Civilians without criminal tendency can be either victims or guardians. In addition to criminal tendency, each civilian in the model has a unique set of characteristics such as wealth, employment status and ability for guardianship. These agents are subjected to a random walk through a street environment guided by a Q-learning module, and the possible outcomes are analyzed.
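A minimal sketch of the kind of Q-learning-guided walk mentioned above (hypothetical: the one-dimensional street, reward scheme and parameters are illustrative and not those of the cited model):

```python
import random

# Minimal tabular Q-learning sketch (illustrative only): an agent walks a
# 1D street of `n_cells` cells; reaching the last cell (an attractive
# target) ends an episode with reward 1. Actions are step left (-1) or
# step right (+1).
def train(n_cells=10, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_cells) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != n_cells - 1:
            # epsilon-greedy choice between the two step directions
            if rng.random() < eps:
                a = rng.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_cells - 1)
            r = 1.0 if s2 == n_cells - 1 else 0.0
            best_next = max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
```

After training, the learned Q-values steer the walk toward the rewarding location; in the conceptual model above the same mechanism would bias criminals' movement toward promising activity areas.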
Abstract:
In many situations probability models are more realistic than deterministic models. Several phenomena occurring in physics are studied as random phenomena changing with time and space. Stochastic processes originated from the needs of physicists. Let X(t) be a random variable, where t is a parameter assuming values from a set T. Then the collection of random variables {X(t), t ∈ T} is called a stochastic process. We denote the state of the process at time t by X(t), and the collection of all possible values X(t) can assume is called the state space.
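A concrete instance of this definition (a made-up illustration, not from the abstract): the simple random walk, where T = {0, 1, 2, ...} and the state space is the set of integers.

```python
import random

# Illustrative sketch: a simple random walk {X(t), t = 0, 1, ..., n} as a
# stochastic process with state space Z. X(0) = 0 and each step is +1 or -1
# with equal probability.
def random_walk(n_steps, seed=0):
    rng = random.Random(seed)
    x = 0
    path = [x]
    for _ in range(n_steps):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

path = random_walk(10)
print(path)  # one realisation of the process, indexed by t = 0..10
```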
Abstract:
The many-electron aspect is taken into account in single-particle-like formulations, either in the Hartree-Fock approximation or by including the electron-electron correlations through density functional theory. Since the physics of electronic systems (atoms, molecules, clusters, condensed matter, plasmas) is relativistic, I have used the relativistic 4-spinor Dirac theory from the outset; recently, however, and this will be the main advance in the relativistic description achieved by my doctoral thesis, I have also implemented a fully relativistic 2-spinor theory based on the so-called minimax principle. The following is a short description of my dissertation: A substantial gain in efficiency in the relativistic 4-spinor Dirac calculations was achieved through novel singular coordinate transformations, so that even for the superheavy Th2 179+ the highest solution accuracies were obtained with moderate computational effort, leading to two further interesting publications (see list of publications). Although this already made relativistic calculations of molecules and clusters far more efficient, these calculations remained orders of magnitude more expensive than corresponding non-relativistic ones. The latter treat the actual (relativistic) behaviour of electronic systems only approximately correctly, but do so the better the lighter the atoms involved are (small nuclear charge Z). I therefore searched for a new formalism that takes this into account as far as possible while still describing the physics correctly relativistically. This is achieved by a 2-spinor-based minimax principle: systems with light atoms can now be described fully relativistically almost as efficiently as non-relativistically, which naturally raises great hopes for accurate (i.e. relativistic) calculations. This resulted in a first fundamental publication (see list of publications).
The accuracy in strongly relativistic systems such as Th2 179+ is similar to or slightly better than in the 4-spinor Dirac formulation. The advantages of the new formulation, however, go decisively further: A. The new minimax formulation of the Dirac equation is free of spurious states and has no positronic contaminations. B. The computational effort is greatly reduced, since only one third of the matrix elements have to be computed compared with the 4-spinor case, and all matrix dimensions are smaller by a factor of 2. C. Numerically, the new formulation behaves similarly well to the non-relativistic Schrödinger equation (although it is an exact formulation and not an approximation of the Dirac equation), and thus has better convergence properties than the 4-spinor approach. In particular, the error weighting (singular and smooth parts) is different in the 2-spinor case, and it shows the good extrapolation properties familiar from the non-relativistic Schrödinger equation. The extension of the range of application of the (relativistic) 2-spinor approach has already been carried out successfully in FEM Dirac-Fock-Slater calculations, with the two examples CO and N2. Further extensions are readily possible; see the minimax LCAO approximation.
Abstract:
In determining the irreducible characters of a group of Lie type, Lusztig developed a theory in which a so-called Fourier transform appears. This is a matrix that depends only on the Weyl group of the group of Lie type. Based on the properties such a Fourier matrix must satisfy, Geck and Malle set up a system of axioms. This enabled Broué, Malle and Michel to determine Fourier matrices for the Spetses, about which much is still unknown. The aim of this thesis is an investigation and new interpretation of these Fourier matrices, which will hopefully provide further information on the Spetses. The tools developed along the way are very versatile, since these matrices correspond to certain Z-algebras that essentially have the properties of table algebras. These play an important role in representation theory because, for example, representation rings are table algebras. In the theory of Kac-Moody algebras there is the so-called Kac-Peterson matrix, which also has the properties of our Fourier matrices. An important result of this thesis is that the Fourier matrices that G. Malle defined for the imprimitive complex reflection groups have the property that the structure constants of the associated algebras are integers. For this, exterior products of group rings of cyclic groups have to be investigated. Moreover, there is a connection to the Kac-Peterson matrices: we prove that by forming exterior products we pass from the matrices of type A(1)1 to those of type C(1)l. Lusztig recognised that some of his Fourier matrices belong to the representation ring of the quantum double of a finite group. It is therefore natural to try to identify the still unexplained matrices as such. Coste, Gannon and Ruelle investigate this representation ring. They pose a series of important questions.
We answer one of these questions, namely to what extent one can reconstruct to which finite group given matrices belong. We compute the representation ring of the twisted quantum double for many examples on the computer. Among other things, this requires explicitly computing elements of the third cohomology group H3(G,C×), which apparently has not yet been implemented in any computer algebra system. Unfortunately, no connection to the matrices arising from the Spetses emerges here. The tools developed in this thesis make possible a structural decomposition of Z-rings with basis into known parts. In this way we can give constructions for most of the matrices of the Spetses: the associated Z-algebras are quotient rings of tensor products of affine character rings and of representation rings of quantum doubles.
Abstract:
The object of the research presented here is Vessiot's theory of partial differential equations: for a given differential equation one constructs a distribution that is both tangential to the differential equation and contained within the contact distribution of the jet bundle. Within it, one then seeks n-dimensional subdistributions which are transversal to the base manifold, the integral distributions. These consist of integral elements, which in turn are adapted so that they form a subdistribution that closes under the Lie bracket. Such a subdistribution is called a flat Vessiot connection. Solutions of the differential equation may be regarded as integral manifolds of these distributions. In the first part of the thesis, I give a survey of the present state of the formal theory of partial differential equations: one regards differential equations as fibred submanifolds in a suitable jet bundle and considers formal integrability, and the stronger notion of involutivity, for analyzing their solvability. An arbitrary system may (locally) be represented in reduced Cartan normal form. This leads to a natural description of its geometric symbol. The Vessiot distribution can then be split into the direct sum of the symbol and a horizontal complement (which is not unique). The n-dimensional subdistributions which close under the Lie bracket and are transversal to the base manifold are the sought tangential approximations of the solutions of the differential equation. Their existence can be shown by analyzing the structure equations, which places Vessiot's theory on a rigorous foundation. Furthermore, the relation between Vessiot's approach and the crucial notions of the formal theory (such as formal integrability and involutivity of differential equations) is clarified, and the possible obstructions to involution of a differential equation are deduced explicitly.
In the second part of the thesis it is shown that Vessiot's step-by-step construction of the wanted distributions succeeds if, and only if, the given system is involutive. First, an existence theorem for integral distributions is proven; then an existence theorem for flat Vessiot connections. The differential-geometric structure of the underlying systems is analyzed and simplified compared with other approaches, in particular the structure equations used in the proofs of the existence theorems: here they are a set of linear equations and an involutive system of differential equations. The definition of integral elements given here links Vessiot theory and the dual Cartan-Kähler theory of exterior systems. The analysis of the structure equations not only yields theoretical insight but also produces an algorithm which can be used to derive explicitly the coefficients of the vector fields that span the integral distributions. It is therefore now possible to implement the algorithm in the computer algebra system MuPAD.
Abstract:
A fully relativistic four-component Dirac-Fock-Slater program for diatomics, with numerically given AOs as basis functions, is presented. We discuss the problem of the errors due to the finite basis set and due to the influence of the negative-energy solutions of the Dirac Hamiltonian. The negative continuum contributions are found to be very small.
Abstract:
In many real-world contexts individuals find themselves in situations where they have to decide between options of behaviour that serve a collective purpose and behaviours which satisfy one's private interests, ignoring the collective. In some cases the underlying social dilemma (Dawes, 1980) is solved and we observe collective action (Olson, 1965). In others social mobilisation is unsuccessful. The central topic of social dilemma research is the identification and understanding of mechanisms which lead to the observed cooperation and therefore resolve the social dilemma. It is the purpose of this thesis to contribute to this research field for the case of public good dilemmas. To do so, existing work that is relevant to this problem domain is reviewed and a set of mandatory requirements is derived which guide theory and method development of the thesis. In particular, the thesis focuses on dynamic processes of social mobilisation which can foster or inhibit collective action. The basic understanding is that success or failure of the required process of social mobilisation is determined by the heterogeneous individual preferences of the members of a providing group, the social structure in which the acting individuals are contained, and the embedding of the individuals in economic, political, biophysical, or other external contexts. To account for these aspects and for the involved dynamics, the methodical approach of the thesis is computer simulation, in particular agent-based modelling and simulation of social systems. Particularly conducive are agent models which ground the simulation of human behaviour in suitable psychological theories of action. The thesis develops the action theory HAPPenInGS (Heterogeneous Agents Providing Public Goods) and demonstrates its embedding into different agent-based simulations.
The thesis substantiates the particular added value of this methodical approach: starting out from a theory of individual behaviour, the emergence of collective patterns of behaviour becomes observable in simulations. In addition, the underlying collective dynamics may be scrutinised and assessed by scenario analysis. The results of such experiments reveal insights into processes of social mobilisation that go beyond classical empirical approaches and, in particular, yield policy recommendations on promising intervention measures.
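A toy sketch of the kind of public-good dilemma studied here (hypothetical and deliberately simple, not the HAPPenInGS model): agents with heterogeneous cooperation preferences decide whether to contribute, and the good is provided only if contributions reach a threshold.

```python
import random

# Toy threshold public-goods game (illustrative only, not HAPPenInGS):
# each agent contributes if its private preference exceeds the personal
# cost of contributing; the good is provided when enough agents contribute.
def provision_rate(n_agents=100, threshold=30, cost=0.5, trials=200, seed=0):
    rng = random.Random(seed)
    provided = 0
    for _ in range(trials):
        # heterogeneous preferences, drawn uniformly for illustration
        prefs = [rng.random() for _ in range(n_agents)]
        contributors = sum(1 for p in prefs if p > cost)
        if contributors >= threshold:
            provided += 1
    return provided / trials

print(provision_rate())           # mobilisation usually succeeds at low cost
print(provision_rate(cost=0.8))   # at high cost, mobilisation mostly fails
```

Even this crude sketch reproduces the qualitative point of the abstract: whether collective action emerges depends on the distribution of individual preferences relative to the cost of contributing.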
Abstract:
To study the behaviour of beam-to-column composite connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research a generic finite element model for composite beam-to-column joints with welded connections is developed using current state-of-the-art local modelling. By applying a mechanically consistent scaling method, it can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows for the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit state checks. So that the macro element for the scaling method can be used in a practical manner, a generic geometry program, proposed in this study as a new idea, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The proposed finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples of applying this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element. Under certain assumptions for a typical composite frame, the constitutive relationships can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. In the second example a scaling concept that combines sophisticated local models with a frame analysis using a macro element approach is presented as a practical application of this numerical model.
Abstract:
Within density functional theory, orbital functionals such as B3LYP have been developed. These can be evaluated self-consistently with the optimized effective potential (OEP) method. While the OEP could previously be computed exactly only in the 1D case, Kümmel and Perdew developed a method in which the OEP problem can be solved self-consistently using a differential equation. In this work, a finite-element multigrid method is used to solve the resulting equations and thus to compute energies, densities and ionisation energies for atoms and diatomic molecules. The exact exchange is used as the orbital functional, but the program can easily be extended to any functional. For the Be atom, 8th-order FEM allowed the total energies to be computed about two orders of magnitude more accurately than with the finite-difference code of Makmal et al. For the eigenvalues and the properties of the atoms N and Ne, the accuracy of other numerical methods was reached. The computing time grew linearly with the number of points, as expected. Despite rather slow SCF convergence, accuracies comparable to FD were achieved for the molecule LiH, and for HF accuracies 2-3 orders of magnitude better than with basis-set methods. This shows that benchmark calculations can be carried out in this way. Owing to the fast convergence with the number of points and the low computational cost, these should also be extendable to heavier systems.
Abstract:
We show that optimizing a quantum gate for an open quantum system requires the time evolution of only three states, irrespective of the dimension of Hilbert space. This represents a significant reduction in computational resources compared to the complete basis of Liouville space that is commonly believed necessary for this task. The reduction is based on two observations: the target is not a general dynamical map but a unitary operation, and the time evolution of two properly chosen states is sufficient to distinguish any two unitaries. We illustrate gate optimization employing a reduced set of states for a controlled phase gate with trapped atoms as qubit carriers and an iSWAP gate with superconducting qubits.
Abstract:
One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy---a mapping of observations into actions---based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience re-use. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
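The gradient-ascent idea can be sketched in a minimal REINFORCE-style example (a hypothetical two-armed bandit, not one of the dissertation's domains and without the memory-equipped controllers it studies): a stochastic softmax policy over two actions is updated along the gradient of expected reward.

```python
import math
import random

# Minimal REINFORCE sketch on a two-armed bandit (illustrative only):
# policy pi(a) = softmax(theta), updated by theta += lr * r * grad log pi(a).
def train(rewards=(0.2, 0.8), lr=0.1, episodes=2000, seed=0):
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    for _ in range(episodes):
        z = [math.exp(t) for t in theta]
        total = sum(z)
        probs = [v / total for v in z]
        a = 0 if rng.random() < probs[0] else 1
        r = 1.0 if rng.random() < rewards[a] else 0.0  # stochastic reward
        for i in (0, 1):
            grad = (1.0 if i == a else 0.0) - probs[i]  # d log pi(a) / d theta_i
            theta[i] += lr * r * grad
        # a baseline would reduce gradient variance; omitted for brevity
    return theta

theta = train()
print(theta)  # the parameter of the better arm should dominate
```

Controllers with memory extend this scheme by making the policy a function of an internal state as well as the current observation; the gradient update has the same log-likelihood form.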
Abstract:
In order to estimate the motion of an object, the visual system needs to combine multiple local measurements, each of which carries some degree of ambiguity. We present a model of motion perception whereby measurements from different image regions are combined according to a Bayesian estimator --- the estimated motion maximizes the posterior probability assuming a prior favoring slow and smooth velocities. In reviewing a large number of previously published phenomena we find that the Bayesian estimator predicts a wide range of psychophysical results. This suggests that the seemingly complex set of illusions arises from a single computational strategy that is optimal under reasonable assumptions.
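Under Gaussian assumptions the Bayesian combination has a closed form, which the following hypothetical one-dimensional sketch illustrates (it is not the paper's full model): each local measurement contributes a Gaussian likelihood for the velocity, and the prior is a zero-mean Gaussian favoring slow speeds.

```python
# Illustrative sketch: MAP velocity estimate combining noisy local
# measurements v_i (noise variance sigma_i^2) with a zero-mean Gaussian
# prior of variance sigma_prior^2 that favors slow speeds. With all terms
# Gaussian, the posterior mode is a precision-weighted average shrunk
# toward zero.
def map_velocity(measurements, sigmas, sigma_prior):
    num = sum(v / s ** 2 for v, s in zip(measurements, sigmas))
    den = sum(1.0 / s ** 2 for s in sigmas) + 1.0 / sigma_prior ** 2
    return num / den

# Noisier (e.g. low-contrast) measurements are shrunk more strongly toward
# zero, i.e. perceived as slower, a pattern of this class of models.
clean = map_velocity([2.0, 2.0], sigmas=[0.5, 0.5], sigma_prior=1.0)
noisy = map_velocity([2.0, 2.0], sigmas=[2.0, 2.0], sigma_prior=1.0)
print(clean, noisy)  # the noisy estimate lies closer to zero
```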
Abstract:
The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of the closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from the Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, will be illustrated.
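One standard log-contrast construction in Aitchison's framework is the centred log-ratio (clr) transform, which moves closed data off the simplex before classical multivariate tools such as PCA are applied. A minimal sketch with made-up values (not the Chiavenna Valley data):

```python
import math

# Illustrative sketch: centred log-ratio (clr) transform of a composition.
# clr maps a closed vector to log-contrast coordinates: the log of each
# part over the geometric mean of all parts.
def closure(x):
    """Rescale positive parts so they sum to 1 (the closure operation)."""
    total = sum(x)
    return [v / total for v in x]

def clr(x):
    """Centred log-ratio: log of each part over the geometric mean."""
    g = math.exp(sum(math.log(v) for v in x) / len(x))
    return [math.log(v / g) for v in x]

# A made-up five-ion water sample, closed to proportions.
sample = closure([40.0, 5.0, 30.0, 10.0, 15.0])
z = clr(sample)
print(z)  # log-contrast coordinates; by construction they sum to zero
```

Because clr coordinates are invariant under closure, statistics computed on them do not suffer from the spurious correlation that affects raw closed data.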