948 results for Boltzmann's H theorem
Abstract:
Large values of the mass-to-light ratio (ϒ) in self-gravitating systems are among the most important pieces of evidence for dark matter. We propose an expression for the mass-to-light ratio in spherical systems using MOND. Results for the Coma cluster reveal that a modification of gravity, as proposed by MOND, can significantly reduce this value.
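As a schematic, back-of-envelope illustration of why a MOND-based estimate lowers ϒ (the paper's expression for spherical systems will differ): for a test particle on a circular orbit of radius r with speed v, the Newtonian dynamical mass is M_N = v²r/G, while in the deep-MOND regime (accelerations well below a₀) one has v⁴ = G M a₀, so

\[ M_{\mathrm{MOND}} = \frac{v^{4}}{G a_{0}}, \qquad \frac{M_{\mathrm{MOND}}}{M_{N}} = \frac{v^{2}}{a_{0}\, r} = \frac{g}{a_{0}} < 1, \]

and the dynamical mass, hence ϒ = M/L, inferred under MOND is smaller than the Newtonian estimate whenever g ≪ a₀.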
Abstract:
We show that a self-generated set of combinatorial games, S, may not be hereditarily closed, but strong self-generation and hereditary closure are equivalent in the universe of short games. In [13], the question "Is there a set which will give a non-distributive but modular lattice?" appears. A useful necessary condition for the existence of a finite non-distributive modular L(S) is proved. We show the existence of an S such that L(S) is modular and not distributive, exhibiting the first known example. Moreover, we prove a Representation Theorem with Games that allows the generation of all finite lattices in a game context. Finally, a computational tool for drawing lattices of games is presented.
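To illustrate the lattice-theoretic notions at stake (this is not the paper's computational tool), the Python sketch below brute-force checks the modular and distributive laws on a finite lattice given by explicit meet and join operations, and runs the check on the diamond lattice M3, the standard example of a modular but non-distributive lattice.

from itertools import product

def is_modular(elems, meet, join):
    # Modular law: x <= z implies x v (y ^ z) == (x v y) ^ z,
    # where x <= z is encoded as meet(x, z) == x.
    return all(join(x, meet(y, z)) == meet(join(x, y), z)
               for x, y, z in product(elems, repeat=3)
               if meet(x, z) == x)

def is_distributive(elems, meet, join):
    # Distributive law: x ^ (y v z) == (x ^ y) v (x ^ z).
    return all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
               for x, y, z in product(elems, repeat=3))

# Diamond lattice M3: bottom "0", top "1", three pairwise incomparable atoms.
elems = ["0", "a", "b", "c", "1"]

def meet(x, y):
    if x == y:
        return x
    if x == "0" or y == "0":
        return "0"
    if x == "1":
        return y
    if y == "1":
        return x
    return "0"  # two distinct atoms meet at the bottom

def join(x, y):
    if x == y:
        return x
    if x == "1" or y == "1":
        return "1"
    if x == "0":
        return y
    if y == "0":
        return x
    return "1"  # two distinct atoms join at the top

print(is_modular(elems, meet, join))       # expected: True
print(is_distributive(elems, meet, join))  # expected: False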
Abstract:
This paper presents the applicability of a reinforcement learning algorithm based on the application of Bayes' theorem. The proposed reinforcement learning algorithm is an advantageous and indispensable tool for ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system whose purpose is to provide decision support to electricity market negotiating players. ALBidS uses a set of different strategies for providing decision support to market players. These strategies are used according to their probability of success in each different context. The approach proposed in this paper uses a Bayesian network to decide on the action most likely to succeed at each time, depending on past events. The performance of the proposed methodology is tested using electricity market simulations in MASCEM (Multi-Agent Simulator of Competitive Electricity Markets). MASCEM provides the means for simulating a realistic electricity market environment, based on data from real electricity market operators.
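As a minimal sketch of context-dependent, probability-weighted strategy selection, the Python fragment below keeps a Beta-Bernoulli success model per (context, strategy) pair and picks strategies by Thompson sampling; this is a stand-in for, not a reproduction of, ALBidS's Bayesian-network mechanism, and all names are illustrative.

import random
from collections import defaultdict

class BayesianStrategySelector:
    """Keeps a Beta(successes + 1, failures + 1) posterior per (context, strategy)."""

    def __init__(self, strategies):
        self.strategies = strategies
        self.alpha = defaultdict(lambda: 1.0)  # prior pseudo-count of successes
        self.beta = defaultdict(lambda: 1.0)   # prior pseudo-count of failures

    def choose(self, context):
        # Thompson sampling: draw one success probability per strategy
        # from its posterior and play the strategy with the largest draw.
        draws = {s: random.betavariate(self.alpha[(context, s)],
                                       self.beta[(context, s)])
                 for s in self.strategies}
        return max(draws, key=draws.get)

    def update(self, context, strategy, success):
        # Record whether the chosen strategy's bid succeeded in this context.
        if success:
            self.alpha[(context, strategy)] += 1.0
        else:
            self.beta[(context, strategy)] += 1.0

# Usage: pick a strategy for a "peak-hours" context, then report the outcome.
selector = BayesianStrategySelector(["aggressive", "conservative", "forecast-based"])
s = selector.choose("peak-hours")
selector.update("peak-hours", s, success=True)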
Abstract:
The verification and analysis of programs with probabilistic features is a necessary task in current scientific and technological practice. The success, and subsequent widespread adoption, of hardware-level implementations of communication protocols and of probabilistic solutions to distributed problems make the use of stochastic agents as programming elements more than interesting. In many of these cases the use of randomized agents produces better and more efficient solutions; in others it provides solutions where none can be found by traditional methods. These algorithms are usually embedded in multiple hardware mechanisms, so an error in them can lead to an unwanted multiplication of their harmful effects.

Currently, the main effort in the analysis of probabilistic programs goes into the study and development of tools known as probabilistic model checkers. Given a finite model of the stochastic system, these tools automatically compute various performance measures of it. Although this can be quite useful when verifying programs, for general-purpose systems it becomes necessary to check more complete specifications bearing on the correctness of the algorithm. It would even be desirable to obtain the properties of the system automatically, in the form of invariants and counterexamples.

This project aims to address the problem of static analysis of probabilistic programs through deductive tools such as theorem provers and SMT solvers, which have shown their maturity and effectiveness on problems of traditional programming. In order not to lose automation in the methods, we will work within the framework of Abstract Interpretation, which provides a guideline for our theoretical development. At the same time, we will put these foundations into practice through concrete implementations that make use of those tools.
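As a toy illustration of the deductive-tools side of such an approach (not this project's actual method), the sketch below uses the Z3 SMT solver through its Python bindings to check that a candidate invariant of a simple, non-probabilistic loop is inductive; the loop, the invariant, and the variable names are invented for the example, and a probabilistic analysis would additionally reason about distributions or expectations.

from z3 import Ints, Solver, And, Implies, Not, unsat

# Loop: while x < 10: x += 1; y += 1   (with x <= y assumed initially)
x, y, xp, yp = Ints("x y xp yp")
inv = lambda a, b: a <= b            # candidate invariant: x <= y

s = Solver()
# Inductiveness: if the invariant holds and the loop body runs once,
# the invariant must hold again for the primed (updated) variables.
# We assert the negation of that implication and expect it to be unsatisfiable.
s.add(Not(Implies(And(inv(x, y), x < 10, xp == x + 1, yp == y + 1),
                  inv(xp, yp))))
print("invariant is inductive" if s.check() == unsat else "not inductive")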
Abstract:
The classical central limit theorem states the uniform convergence of the distribution functions of the standardized sums of independent and identically distributed square integrable real-valued random variables to the standard normal distribution function. While first versions of the central limit theorem are already due to Moivre (1730) and Laplace (1812), a systematic study of this topic started at the beginning of the last century with the fundamental work of Lyapunov (1900, 1901). Meanwhile, extensions of the central limit theorem are available for a multitude of settings. This includes, e.g., Banach space valued random variables as well as substantial relaxations of the assumptions of independence and identical distributions. Furthermore, explicit error bounds are established and asymptotic expansions are employed to obtain better approximations. Classical error estimates like the famous bound of Berry and Esseen are stated in terms of absolute moments of the random summands and therefore do not reflect a potential closeness of the distributions of the single random summands to a normal distribution. Non-classical approaches take this issue into account by providing error estimates based on, e.g., pseudomoments. The latter field of investigation was initiated by work of Zolotarev in the 1960's and is still in its infancy compared to the development of the classical theory. For example, non-classical error bounds for asymptotic expansions seem not to be available up to now ...
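For reference, the classical statement and the Berry-Esseen bound mentioned above read as follows for i.i.d. random variables X_i with mean μ, variance σ² > 0, and finite third absolute moment ρ = E|X₁ − μ|³:

\[ \sup_{x \in \mathbb{R}} \left| \mathbb{P}\!\left( \frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} (X_i - \mu) \le x \right) - \Phi(x) \right| \;\le\; \frac{C\,\rho}{\sigma^{3}\sqrt{n}}, \]

where Φ is the standard normal distribution function and C is an absolute constant; the left-hand side tending to zero is exactly the uniform convergence asserted by the central limit theorem.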
Abstract:
We quantify the long-time behavior of a system of (partially) inelastic particles in a stochastic thermostat by means of the contractivity of a suitable metric in the set of probability measures. Existence, uniqueness, boundedness of moments and regularity of a steady state are derived from this basic property. The solutions of the kinetic model are proved to converge exponentially as t → ∞ to this diffusive equilibrium in this distance metrizing the weak convergence of measures. Then, we prove a uniform bound in time on Sobolev norms of the solution, provided the initial data has a finite norm in the corresponding Sobolev space. These results are then combined, using interpolation inequalities, to obtain exponential convergence to the diffusive equilibrium in the strong L¹-norm, as well as various Sobolev norms.
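Schematically (with the specific metric and spaces of the paper replaced by generic Sobolev indices), the interpolation step combines the two ingredients as follows: writing h(t) = f(t) − f_∞, if ‖h(t)‖_{H^{s₀}} ≤ C e^{−λt} and ‖h(t)‖_{H^{s₁}} ≤ M uniformly in time, with s₀ < s₁, then for s = (1 − θ)s₀ + θs₁ with 0 < θ < 1,

\[ \|h(t)\|_{H^{s}} \;\le\; \|h(t)\|_{H^{s_0}}^{1-\theta}\, \|h(t)\|_{H^{s_1}}^{\theta} \;\le\; C^{1-\theta} M^{\theta}\, e^{-(1-\theta)\lambda t}, \]

so exponential convergence propagates to the stronger norms at a reduced rate; in the paper the role of the weak ingredient is played by the contractive metric rather than a Sobolev norm.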
Abstract:
The main aim of this short paper is to advertise the Koosis theorem in the mathematical community, especially among those who study orthogonal polynomials. We (try to) do this by proving a new theorem about asymptotics of orthogonal polynomials for which the Koosis theorem seems to be the most natural tool. Namely, we consider the case when a Szegő measure on the unit circumference is perturbed by an arbitrary measure inside the unit disk and an arbitrary Blaschke sequence of point masses outside the unit disk.
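For context (standard terminology rather than anything specific to this paper): a measure dμ = w dθ/(2π) + dμ_s on the unit circle is a Szegő measure when its absolutely continuous part satisfies the Szegő condition

\[ \int_{0}^{2\pi} \log w(\theta)\, \frac{d\theta}{2\pi} > -\infty, \]

and a sequence of points z_k with |z_k| > 1 is a Blaschke sequence when ∑_k (|z_k| − 1) < ∞.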
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
We present Shelah’s famous theorem in a version for modules, together with a self-contained proof and some examples. This exposition is based on lectures given at CRM in October 2006.
Abstract:
We prove a double commutant theorem for hereditary subalgebras of a large class of C*-algebras, partially resolving a problem posed by Pedersen [8]. Double commutant theorems originated with von Neumann, whose seminal result evolved into an entire field now called von Neumann algebra theory. Voiculescu proved a C*-algebraic double commutant theorem for separable subalgebras of the Calkin algebra. We prove a similar result for hereditary subalgebras, which holds for arbitrary corona C*-algebras. (It is not clear how generally Voiculescu's double commutant theorem holds.)
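For reference, von Neumann's original double commutant theorem states that for a unital *-subalgebra M ⊆ B(H),

\[ M'' = \overline{M}^{\,\mathrm{SOT}} = \overline{M}^{\,\mathrm{WOT}}, \]

where M′ = {T ∈ B(H) : TS = ST for all S ∈ M} is the commutant and the closures are taken in the strong and weak operator topologies; the C*-algebraic results above replace B(H) by the Calkin algebra or, more generally, by a corona algebra.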
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
We examine the proof of a classical localization theorem of Bousfield and Friedlander and we remove the assumption that the underlying model category be right proper. The key to the argument is a lemma about factoring morphisms in the arrow category of a model category.
Abstract:
In this paper, we consider an exchange economy à la Shitovitz (1973), with atoms and an atomless set. We associate with it a strategic market game of the kind first proposed by Lloyd S. Shapley and known as the Shapley window model. We analyze the relationship between the set of the Cournot-Nash equilibrium allocations of the strategic market game and the Walras equilibrium allocations of the exchange economy with which it is associated. We show, with an example, that even when atoms are countably infinite, any Cournot-Nash equilibrium allocation of the game is not a Walras equilibrium of the underlying exchange economy. Accordingly, in the original spirit of Cournot (1838), we partially replicate the mixed exchange economy by increasing the number of atoms, without affecting the atomless part, and ensuring that the measure space of agents remains finite. We show that any sequence of Cournot-Nash equilibrium allocations of the strategic market games associated with the partially replicated exchange economies approximates a Walras equilibrium allocation of the original exchange economy.
Abstract:
We present an envelope theorem for establishing first-order conditions in decision problems involving continuous and discrete choices. Our theorem accommodates general dynamic programming problems, even with unbounded marginal utilities. And, unlike classical envelope theorems that focus only on differentiating value functions, we accommodate other endogenous functions such as default probabilities and interest rates. Our main technical ingredient is how we establish the differentiability of a function at a point: we sandwich the function between two differentiable functions from above and below. Our theory is widely applicable. In unsecured credit models, neither interest rates nor continuation values are globally differentiable. Nevertheless, we establish an Euler equation involving marginal prices and values. In adjustment cost models, we show that first-order conditions apply universally, even if optimal policies are not (S,s). Finally, we incorporate indivisible choices into a classic dynamic insurance analysis.
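The sandwiching step rests on an elementary fact, stated here schematically (the paper's version is adapted to its decision problems): if g ≤ f ≤ h on a neighbourhood of x₀, g(x₀) = f(x₀) = h(x₀), and g and h are differentiable at x₀ with g′(x₀) = h′(x₀), then f is differentiable at x₀ with

\[ f'(x_0) = g'(x_0) = h'(x_0), \]

since the difference quotients of f are squeezed between those of g and h on both sides of x₀.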