982 results for von Neumann-Schatten Class


Relevance:

100.00%

Publisher:

Summary:

A biographical work on John von Neumann that aims to cover all of his contributions to the scientific community (as a logician, mathematician, physicist, and economist) in addition to his better-known contributions as a computer scientist.

Duality can be viewed as the soul of each von Neumann growth model. This is not at all surprising because von Neumann (1955), a mathematical genius, extensively studied quantum mechanics, which involves a “dual nature” (electromagnetic waves and discrete corpuscles, or light quanta). This may have had some influence on the development of his own economic duality concept. The main object of this paper is to restore the spirit of economic duality in the investigation of the multiple von Neumann equilibria. By means of the (ir)reducibility taxonomy in Móczár (1995), the author transforms the primal canonical decomposition given by Bromek (1974) in the von Neumann growth model into the synergistic primal and dual canonical decomposition. This enables us to obtain all the information about the steadily maintainable states of growth sustained by the compatible price constellations at each distinct expansion factor.
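For orientation, the duality referred to above can be written out in the standard notation of the von Neumann growth model (A the input matrix, B the output matrix, x ≥ 0 an intensity vector, p ≥ 0 a price vector, α the expansion factor, β the interest factor; the notation here is generic, not necessarily that of the paper):

```latex
\begin{align*}
  Bx &\ge \alpha A x      &&\text{(primal: output covers input expanded by } \alpha\text{)}\\
  pB &\le \beta\, pA      &&\text{(dual: no process earns more than the interest factor } \beta\text{)}\\
  p\,(B-\alpha A)\,x &= 0, \qquad p\,(B-\beta A)\,x = 0 &&\text{(complementary slackness)}\\
  \alpha &= \beta         &&\text{(at a von Neumann equilibrium)}
\end{align*}
```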

This paper can be regarded as a result of basic research on the technological characteristics of the von Neumann models and their consequences. It introduces a new taxonomy of reducible technologies, explores their key distinguishing features, and specifies which ones ensure the uniqueness of the von Neumann equilibrium. A comprehensive comparison is also given between the familiar (in)decomposability ideas and the reducibility concepts suggested here. All of this is carried out with a modern approach; along the way, the reader also gains a complete picture of, and guidance on, the fundamental von Neumann models.

Topics include: Free groups and presentations; Automorphism groups; Semidirect products; Classification of groups of small order; Normal series: composition, derived, and solvable series; Algebraic field extensions, splitting fields, algebraic closures; Separable algebraic extensions, the Primitive Element Theorem; Inseparability, purely inseparable extensions; Finite fields; Cyclotomic field extensions; Galois theory; Norm and trace maps of an algebraic field extension; Solvability by radicals, Galois' theorem; Transcendence degree; Rings and modules: examples and basic properties; Exact sequences, split short exact sequences; Free modules, projective modules; Localization of (commutative) rings and modules; The prime spectrum of a ring; Nakayama's lemma; Basic category theory; The Hom functors; Tensor products, adjointness; Left/right Noetherian and Artinian modules; Composition series, the Jordan–Hölder Theorem; Semisimple rings; The Artin–Wedderburn Theorem; The Density Theorem; The Jacobson radical; Artinian rings; von Neumann regular rings; Wedderburn's theorem on finite division rings; Group representations, character theory; Integral ring extensions; Burnside's p^a q^b Theorem; Injective modules.

The box scheme proposed by H. B. Keller is a numerical method for solving parabolic partial differential equations. We give a convergence proof of this scheme for the heat equation, for a linear parabolic system, and for a class of nonlinear parabolic equations. Von Neumann stability is shown to hold for the box scheme combined with the method of fractional steps to solve the two-dimensional heat equation. Computations were performed on Burgers' equation with three different initial conditions, and Richardson extrapolation is shown to be effective.
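As a concrete illustration of the von Neumann stability analysis mentioned above, here is a short sketch for the simpler explicit FTCS discretization of the one-dimensional heat equation (not Keller's box scheme; the function names are invented for the example):

```python
import numpy as np

def ftcs_amplification(lmbda, theta):
    """Amplification factor g(theta) of the explicit FTCS scheme
    u_j^{n+1} = u_j^n + lmbda*(u_{j+1}^n - 2*u_j^n + u_{j-1}^n)
    for u_t = u_xx, obtained by substituting the von Neumann ansatz
    u_j^n = g**n * exp(i*j*theta)."""
    return 1.0 - 4.0 * lmbda * np.sin(theta / 2.0) ** 2

def is_von_neumann_stable(lmbda, n_samples=1000):
    """The scheme is von Neumann stable iff |g(theta)| <= 1 for every mode."""
    thetas = np.linspace(0.0, np.pi, n_samples)
    return bool(np.all(np.abs(ftcs_amplification(lmbda, thetas)) <= 1.0 + 1e-12))
```

For FTCS this analysis reproduces the classical stability bound λ = Δt/Δx² ≤ 1/2; the box scheme discussed in the abstract is usually reported as unconditionally stable, which is one of its advantages.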

In the case of a simple quantum system, we investigate the possibility of defining meaningful probabilities for a quantity that cannot be represented by a Hermitian operator. We find that the consistent-histories approach, recently applied to the case of quantum traversal time [N. Yamada, Phys. Rev. Lett. 83, 3350 (1999)], does not provide a suitable criterion and we dispute Yamada's claim of finding a simple solution to the tunneling-time problem. Rather, we define the probabilities for certain types of generally nonorthogonal decomposition of the system's quantum state. These relate to the interaction between the system and its environment, can be observed in a generalized von Neumann measurement, and are consistent with a particular class of positive-operator-valued measures.

We consider a probabilistic approach to the problem of assigning k indivisible identical objects to a set of agents with single-peaked preferences. Using the ordinal extension of preferences, we characterize the class of uniform probabilistic rules by Pareto efficiency, strategy-proofness, and no-envy. We also show that in this characterization no-envy cannot be replaced by anonymity. When agents are strictly risk-averse von Neumann–Morgenstern utility maximizers, we reduce the problem of assigning k identical objects to a problem of allocating the amount k of an infinitely divisible commodity.

In a previous paper (J. of Differential Equations, Vol. 249 (2010), 3081-3098) we examined a family of periodic Sturm-Liouville problems with boundary and interior singularities which are highly non-self-adjoint but have only real eigenvalues. We now establish Schatten class properties of the associated resolvent operator.

In this article we propose an exact, efficient simulation algorithm for the generalized von Mises circular distribution of order two. It is an acceptance-rejection algorithm with a piecewise linear envelope based on the local extrema and the inflexion points of the generalized von Mises density of order two. We show that these points can be obtained from the roots of polynomials of degrees four and eight, which can be easily found by the methods of Ferrari and Weierstrass. A comparative study with the von Neumann acceptance-rejection, ratio-of-uniforms, and Markov chain Monte Carlo algorithms shows that this new method is generally the most efficient.
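For comparison, the plain von Neumann acceptance-rejection method used as a benchmark above can be sketched for the ordinary (order-one) von Mises density; this is a minimal illustration under a constant envelope, not the paper's piecewise-linear-envelope algorithm, and the names are invented:

```python
import math
import random

def sample_von_mises(kappa, n, seed=0):
    """Draw n samples from the von Mises density f(x) ~ exp(kappa*cos(x))
    on [-pi, pi) by von Neumann acceptance-rejection: propose uniformly on
    the circle and accept with probability exp(kappa*(cos(x) - 1)), i.e.
    using the constant envelope exp(kappa) (the unnormalized maximum)."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        x = rng.uniform(-math.pi, math.pi)  # uniform proposal on the circle
        u = rng.random()
        if u * math.exp(kappa) <= math.exp(kappa * math.cos(x)):
            samples.append(x)
    return samples
```

The expected acceptance rate drops as kappa grows, which is precisely why sharper envelopes, such as the piecewise linear one proposed in the article, pay off.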

Theoretical computer science is a fundamental discipline, since most advances in computing rest on solid results from that field. In recent years, owing both to the increase in the power of computers and to the approaching physical limit on the miniaturization of electronic components, interest has revived in formal models of computation that are alternatives to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems. Most of them are computationally complete and inherently parallel. For this reason they are coming to be regarded as new paradigms of computation (natural computing). A range of abstract architectures is therefore available that are as powerful as conventional computers and sometimes more efficient: some of them improve the performance, at least in time, on NP-complete problems by providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, a complete formal representation of a NEP generally involves both syntactic and semantic restrictions, meaning that many apparently (syntactically) correct representations of particular instances of these devices would be meaningless because they might violate other semantic constraints. Applying semantic grammatical evolution to NEPs requires choosing a subset of them within which to search for those that solve a specific problem. This work studies a model inspired by cell biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one kind of point mutation (insertion, deletion, or substitution of a symbol).
These nodes are associated with a filter defined by some random-context or membership condition. The networks consist of at most six nodes and, with filters defined by membership in regular languages, are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, since similar results have been documented in the literature. If one considers networks with nodes and filters defined by random contexts (which seem closer to biological implementations), then more complex languages can be generated, such as non-context-free languages. Nevertheless, these very simple mechanisms are able to solve complex problems in polynomial time. A linear solution has been presented for an NP-complete problem, the 3-colorability problem. As a first significant contribution, a new dynamics for networks of evolutionary processors has been proposed, with non-deterministic and massively parallel behavior [55]; all the research work in the area of processor networks can therefore be carried over to massively parallel networks. For example, massively parallel networks can be modified according to certain rules so as to move the filters onto the connections. Each connection is seen as a bidirectional channel in which the input and output filters coincide. Even so, these networks are computationally complete. Other kinds of rules can also be implemented to extend this computational model: the point mutations associated with each node are replaced by the splicing operation. This new type of processor is called a splicing processor. This computational model, the network of splicing processors (ANSP), is in some respects similar to distributed test-tube systems based on splicing.
In addition, a new model has been defined [56], networks of evolutionary processors with filters on the connections, in which the processors have only rules and the filters have been moved to the connections. Under certain restrictions this model is equivalent to classical networks of evolutionary processors; without those restrictions the proposed model is a superset of the classical NEPs. The main advantage of moving the filters to the connections lies in the simplicity of the modeling. Another contribution of this work is the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this thesis. Regarding the term "evolutionary processor" used in this thesis: the computational process described here is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations considered can be interpreted as mutations, and the filtering processes can be viewed as selection. Moreover, this work does not address the possible biological implementation of these networks, despite its great importance. Throughout this thesis, the complexity measure adopted for ANSPs is one we call size (the number of nodes of the underlying graph). It has been shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognizing L. Following the concept of universal ANSPs introduced by Manea [65], it has been shown that an ANSP with a fixed graph structure can accept any recursively enumerable language.
An ANSP can be regarded as a problem-solving device with a further property of practical relevance: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depends on the language. This property can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. This means that the solution of any NP problem is uniform in the sense that the network, apart from the universal subnetwork, can be seen as a program; to adapt it to the instance of the problem to be solved, one chooses the filters and rules that do not belong to the universal subnetwork. An interesting open problem, from our point of view, is how to choose the optimal size of this network.---ABSTRACT---This thesis deals with recent research in the area of natural computing (bio-inspired models), more precisely networks of evolutionary processors, first developed by Victor Mitrana and based on the P systems introduced by Gheorghe Păun. In these models there is a set of processors connected in an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, called evolution rules, that transform the objects inside the processor [55, 53]. These objects can be sent and received over the graph connections provided they satisfy the constraints defined by the processors' input and output filters. This symbolic model, which is non-deterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There are many variants, such as hybrid networks and splicing processors, that give the model computational power equivalent to that of Turing machines.
The origin of networks of evolutionary processors (NEPs for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as to the Logic Flow paradigm, consisting of several processors, each placed in a node of a virtual complete graph and able to handle data associated with that node. All the nodes send their data simultaneously, and the receiving nodes likewise handle all the arriving messages simultaneously, according to some strategy. In a series of papers, each node is viewed as a cell whose genetic information is encoded in DNA sequences that may evolve by local evolutionary events, that is, point mutations. Each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node is organized in the form of multisets of words (each word appearing in an arbitrarily large number of copies), and all the copies are processed in parallel so that all the events that can take place do actually take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense. But the rewriting operations we have considered might be interpreted as mutations, and the filtering process might be viewed as a selection process. Recombination is missing, but it has been asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. It is clear that the filters associated with each node allow strong control of the computation. Indeed, every node has an input and an output filter; two nodes can exchange data if it passes the output filter of the sender and the input filter of the receiver. Moreover, if some data is sent out by a node and is not able to enter any node, it is lost. In this work we simplify the ANSP model considered earlier by moving the filters from the nodes to the edges.
Each edge is viewed as a two-way channel such that the input and output filters coincide. Clearly, the possibility of controlling the computation in such networks seems diminished; for instance, there is no way to lose data during the communication steps. In spite of this, and of the fact that splicing is not a powerful operation (recall that splicing systems generate only regular languages), we prove that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space. Since any word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each of them having an exponential number of copies. However, this does not seem to be a major drawback, since by PCR (polymerase chain reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables. Therefore, the solution is uniform in the sense that the network, excepting the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words, and the rules; then we assign all possible values to the variables and evaluate the formula. We proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If not, what other problems can be efficiently solved by these ANSPFCs?
Moreover, the complexity class NP is exactly the class of all languages decided by ANSPs in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
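The point-mutation-plus-filter dynamics described above can be sketched as a toy simulation. This is a deliberately simplified, hypothetical illustration: real NEPs keep multisets with arbitrarily many copies of each word and apply every applicable rule in parallel, and the node names, rules, and filters below are invented for the example:

```python
def substitute(word, a, b):
    """All words obtainable by substituting one occurrence of a by b."""
    return {word[:i] + b + word[i + 1:] for i, c in enumerate(word) if c == a}

def step(contents, rules, out_filters, in_filters, edges):
    """One evolutionary step followed by one communication step in a
    toy NEP-like network. contents maps node -> set of words; rules maps
    node -> (a, b) substitution; filters are predicates on words."""
    # Evolutionary step: each node applies its substitution rule.
    evolved = {}
    for node, words in contents.items():
        a, b = rules[node]
        new = set()
        for w in words:
            subs = substitute(w, a, b)
            new |= subs if subs else {w}
        evolved[node] = new
    # Communication step: a word passing its node's output filter leaves;
    # it enters a neighbor only if it passes that neighbor's input filter
    # (otherwise it is lost, as in the model described above).
    result = {n: {w for w in ws if not out_filters[n](w)}
              for n, ws in evolved.items()}
    for (u, v) in edges:
        for (s, r) in ((u, v), (v, u)):
            for w in evolved[s]:
                if out_filters[s](w) and in_filters[r](w):
                    result[r].add(w)
    return result
```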

We introduce a general class of su(1|1) supersymmetric spin chains with long-range interactions which includes as particular cases the su(1|1) Inozemtsev (elliptic) and Haldane-Shastry chains, as well as the XX model. We show that this class of models can be fermionized with the help of the algebraic properties of the su(1|1) permutation operator and take advantage of this fact to analyze their quantum criticality when a chemical potential term is present in the Hamiltonian. We first study the low-energy excitations and the low-temperature behavior of the free energy, which coincides with that of a (1+1)-dimensional conformal field theory (CFT) with central charge c=1 when the chemical potential lies in the critical interval (0,E(π)), E(p) being the dispersion relation. We also analyze the von Neumann and Rényi ground state entanglement entropies, showing that they exhibit the logarithmic scaling with the size of the block of spins characteristic of a one-boson (1+1)-dimensional CFT. Our results thus show that the models under study are quantum critical when the chemical potential belongs to the critical interval, with central charge c=1. From the analysis of the fermion density at zero temperature, we also conclude that there is a quantum phase transition at both ends of the critical interval. This is further confirmed by the behavior of the fermion density at finite temperature, which is studied analytically (at low temperature), as well as numerically for the su(1|1) elliptic chain.

We introduce a new class of generalized isotropic Lipkin–Meshkov–Glick models with su(m+1) spin and long-range non-constant interactions, whose non-degenerate ground state is a Dicke state of su(m+1) type. We evaluate in closed form the reduced density matrix of a block of L spins when the whole system is in its ground state, and study the corresponding von Neumann and Rényi entanglement entropies in the thermodynamic limit. We show that both of these entropies scale as a log L when L tends to infinity, where the coefficient a is equal to (m − k)/2 in the ground state phase with k vanishing magnon densities. In particular, our results show that none of these generalized Lipkin–Meshkov–Glick models are critical, since when L → ∞ their Rényi entropy R_q becomes independent of the parameter q. We have also computed the Tsallis entanglement entropy of the ground state of these generalized su(m+1) Lipkin–Meshkov–Glick models, finding that it can be made extensive by an appropriate choice of its parameter only when m − k ≥ 3. Finally, in the su(3) case we construct in detail the phase diagram of the ground state in parameter space, showing that it is determined in a simple way by the weights of the fundamental representation of su(3). This is also true in the su(m+1) case; for instance, we prove that the region for which all the magnon densities are non-vanishing is an (m + 1)-simplex in R^m whose vertices are the weights of the fundamental representation of su(m+1).

The paper investigates the potential theoretical and methodological sources of inspiration of the von Neumann model, in view of the fact that both the neoclassical and the neo-Ricardian economists claim heritage to it. In the course of that, the author assesses the main differences between the classical and neoclassical conceptions of the economy and between the ex post and ex ante modeling approaches, as well as the methodological change of revolutionary significance that led to the emergence of modern mathematical economics, which can justifiably be criticized in many respects. He also confronts the von Neumann model with the price-imputation theory of the Austrian school, with the Walras–Cassel and Schlesinger–Wald models, and with models worked out in the classical tradition à la Ricardo, Marx, Dmitriev, and Leontief. He concludes that the von Neumann model is, in fact, nothing but a reformulation of a very old belief in a "just and reasonable economic system", cast in the mathematical modeling idiom of the contemporary physics of his time.

The paper places the classical von Neumann growth model on a new footing. In the original von Neumann model firms do not appear explicitly; there are only technologies or processes. The paper examines a von Neumann-type model in which each technology corresponds to a firm, and asks whether, under this assumption, such an economy admits solutions at which the firms maximize their profits. The analysis shows that, for this case, the nonpositive profit assumed by the classical von Neumann model (a fundamental duality-based assumption of classical mathematical economics) must be reexamined.