983 results for mesure de von Neumann équatoriale
Abstract:
We study the von Neumann and Rényi entanglement entropy of long-range harmonic oscillators (LRHO) by both theoretical and numerical means. We show that the entanglement entropy of massless harmonic oscillators increases logarithmically with the sub-system size as S ~ (c_eff/3) log l. Although the entanglement entropy of LRHOs shares some similarities with the entanglement entropy at conformal critical points, we show that the Rényi entanglement entropy presents some deviations from the expected conformal behaviour. In the massive case we demonstrate that the behaviour of the entanglement entropy with respect to the correlation length is also logarithmic, as in the short-range case. Copyright (c) EPLA, 2012
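The logarithmic scaling quoted in this abstract can be illustrated with a short sketch (not the authors' code): given entropy values S(l), the effective central charge c_eff is recovered from a linear fit against log l. The data below are synthetic, generated with c_eff = 1 purely for illustration.

```python
import numpy as np

# Fit S(l) ~ (c_eff / 3) * log(l) + const and recover c_eff.
# Synthetic data with c_eff = 1 (hypothetical, for illustration only).
l = np.arange(4, 200)
S = (1.0 / 3) * np.log(l) + 0.7   # stand-in entropy values

slope, const = np.polyfit(np.log(l), S, 1)
c_eff = 3 * slope
print(round(c_eff, 3))  # recovers c_eff = 1.0
```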
Abstract:
In a previous paper, we connected the phenomenological noncommutative inflation of Alexander, Brandenberger and Magueijo [Phys. Rev. D 67, 081301 (2003)] and Koh and Brandenberger [J. Cosmol. Astropart. Phys. 2007, 21] with the formal representation theory of groups and algebras, and analyzed minimal conditions that the deformed dispersion relation should satisfy in order to lead to successful inflation. In that paper, we showed that elementary tools of algebra allow a group-like procedure in which even Hopf algebras (roughly, the symmetries of noncommutative spaces) could lead to the equation of state of inflationary radiation. Nevertheless, in this paper we show that there exists a conceptual problem with the kind of representation that leads to the fundamental equations of the model. The problem comes from an incompatibility between one of the minimal conditions for successful inflation (the momentum of individual photons being bounded from above) and the Fock-space structure of the representation that leads to the fundamental inflationary equations of state. We show that the Fock structure, although mathematically allowed, would compromise the overall consistency of physics, for example by leading to a problematic scattering theory. We suggest replacing the Fock space by one of two possible structures that we propose. One of them relates to the general theory of Hopf algebras (here explained at an elementary level), while the other is based on a representation theorem of von Neumann algebras (a generalization of the Clebsch-Gordan coefficients), a proposal we have already made to take interactions into account in the inflationary equation of state.
Abstract:
In the first part of the thesis, we propose an exactly solvable one-dimensional model for fermions with long-range p-wave pairing decaying with distance as a power law. We study the phase diagram by analyzing the critical lines, the decay of correlation functions and the scaling of the von Neumann entropy with the system size. We find two gapped regimes, where correlation functions decay (i) exponentially at short range and algebraically at long range, or (ii) purely algebraically. In the latter, the entanglement entropy is found to diverge logarithmically. Most interestingly, along the critical lines long-range pairing also breaks the conformal symmetry. This can be detected via the dynamics of entanglement following a quench. In the second part of the thesis we study the time evolution of the entanglement entropy for the Ising model in a transverse field varying linearly in time at different velocities. We find different regimes: an adiabatic one (small velocities), where the system evolves according to the instantaneous ground state; a sudden quench (large velocities), where the system is essentially frozen in its initial state; and an intermediate one, where the entropy starts growing linearly but then displays oscillations (also as a function of the velocity). Finally, we discuss the Kibble-Zurek mechanism for the transition between the paramagnetic and the ordered phase.
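The correlation-matrix technique behind entropy calculations of this kind can be sketched for the simplest quadratic chain. The code below is an illustration, not the thesis's own computation: it uses a plain nearest-neighbour hopping chain as a stand-in for the long-range pairing Hamiltonian, and computes the von Neumann entropy of a boundary block from the eigenvalues of the ground-state correlation matrix.

```python
import numpy as np

# Von Neumann entropy of a block of l sites in a free-fermion chain,
# from the ground-state correlation matrix C_ij = <c_i^dag c_j>.
N, l = 100, 20
H = -(np.eye(N, k=1) + np.eye(N, k=-1))   # nearest-neighbour hopping
eps, U = np.linalg.eigh(H)
occ = U[:, eps < 0]                        # fill negative-energy modes
C = occ @ occ.conj().T                     # correlation matrix
nu = np.linalg.eigvalsh(C[:l, :l])         # block eigenvalues in (0, 1)
nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
S = -np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu))
print(S)  # grows logarithmically with l for this critical chain
```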
Abstract:
In this thesis we studied the behaviour of the entanglement entropies and of the entanglement spectrum in the XYZ model by means of numerical simulations. Formulas for the von Neumann and Rényi entropies in the case of an infinite bipartite chain already existed, but detailed numerical tests were still lacking. Moreover, compared with the formula of J. Cardy and P. Calabrese for the entanglement entropy of non-critical systems, these relations contain corrections that do not yet have an analytical explanation: the results of the numerical simulations confirmed their presence. We also tested the hypothesis that the Schmidt gap is proportional to one of the order parameters of the theory, and finally we numerically simulated the behaviour of the entanglement entropies and spectrum as a function of the length of the spin chain. This was possible only by introducing ad hoc magnetic fields in the chain, with the property that the behaviour of these quantities varies according to how the fields are arranged. We then discussed the various results obtained.
Abstract:
Consider a non-empty set X on which a sigma-algebra F is constructed; an F-measurable transformation T from X to itself is said to preserve the measure if, for any element of the sigma-algebra, the measure of its preimage equals the measure of the element itself. With this notion one can construct various examples of measure-preserving maps; the thesis presents the Gauss map. Transformations of this kind are used in ergodic theory, where it makes sense to consider the discrete-time dynamical system T^j x, where x = T^0 x is an initial datum, and to study how the dynamics depends on the initial condition x. The von Neumann ergodic theorem states that, given a Hilbert space H on which an isometry U is defined, for every element f of the Hilbert space the time average of f converges to an element of the eigenspace of U with eigenvalue 1. Birkhoff's theorem asserts instead that, given a sigma-finite space X and a transformation T, not necessarily invertible, one can consider the time average of a summable function f; this always converges to a measurable function f*, and if the measure of X is finite, f* is distributed as f. In particular, if the transformation T is ergodic, the time and space averages coincide.
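A minimal numerical illustration of Birkhoff's theorem (not from the thesis; it uses an irrational rotation, one of the simplest ergodic maps, rather than the Gauss map discussed above): the time average of f(x) = x along an orbit approaches the Lebesgue space average, which is 1/2.

```python
import math

# Birkhoff time average for the irrational rotation T(x) = x + alpha mod 1,
# with alpha the golden mean.  T preserves Lebesgue measure and is ergodic,
# so the orbit average of f(x) = x tends to the space average 1/2.
alpha = (math.sqrt(5) - 1) / 2
x = 0.1234            # arbitrary initial point
n = 100_000
total = 0.0
for _ in range(n):
    total += x        # accumulate f(T^j x) with f(x) = x
    x = (x + alpha) % 1.0
time_avg = total / n
print(time_avg)       # close to the space average 0.5
```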
Abstract:
This thesis illustrates the measure decomposition theorem and how it is applied to measure-preserving transformations. After giving the definitions of σ-algebra and measure and stating some theorems of measure theory, two different notions of separability are introduced, strict separability and separability, connected by a lemma. The relative density function and its properties are then described and, after defining the direct sum of measure spaces, the measure decomposition theorem is proved, which under certain hypotheses allows a measure space to be expressed as a direct sum of measure spaces. Finally, after explaining what it means for a transformation to preserve the measure and to be ergodic, von Neumann's theorem is proved, by which measure-preserving transformations can be decomposed into ergodic parts.
Abstract:
This thesis presents some applications of elliptic integrals in Hamiltonian mechanics, with the aim of solving integrable systems. Elliptic functions are described, in particular the Weierstrass elliptic function, and the types of elliptic integrals are listed, constructing them from the Weierstrass functions. After reviewing the foundations of Hamiltonian mechanics and the Arnold-Liouville theorem, we study an example taken from Moser's book "Integrable Hamiltonian Systems and Spectral Theory", which considers the integrable systems given by the geodesic flow on an ellipsoid, and the von Neumann system. In particular, we show that in the case n = 2 one obtains an elliptic integral.
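As a small illustration of how complete elliptic integrals arise in mechanics (a generic sketch, not the thesis's computation): the exact period of a pendulum is a complete elliptic integral of the first kind, K(k), which can be evaluated with the arithmetic-geometric mean via K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))).

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellipk(k):
    """Complete elliptic integral of the first kind, modulus k."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

# Pendulum of length L, amplitude theta0: T = 4*sqrt(L/g)*K(sin(theta0/2)),
# always longer than the small-angle period 2*pi*sqrt(L/g).
g, L, theta0 = 9.81, 1.0, math.radians(60)
T = 4 * math.sqrt(L / g) * ellipk(math.sin(theta0 / 2))
print(T)
```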
Abstract:
"National Socialism": 1. Announcement of a lecture series, November/December 1941, by: Herbert Marcuse, A.R.L. Gurland, Franz Neumann, Otto Kirchheimer, Frederick Pollock. a) mimeographed typescript, 1 sheet; b) typescript, 1 sheet; 2. Replies to invitations to the lecture series, from Neilson, William A.; Packelis, Alexander H.; Michael, Jerome; McClung Lee, Alfred; Youtz, R.P.; Ginsburg, Isidor; Ganey, G.; Nunhauer, Arthur. 8 sheets; "Autoritarian doctrines and modern European institutions" (1924): 1. Lecture announcement, typescript, 2 sheets; 2. Announcements of the lectures by Neumann, Franz L.: "Stratification and Dominance in Germany"; "Bureaucracy as a Social and Political Institution", typescript, 2 sheets; 3. Evans, Austin P.: 1 letter (copy) to Frederick Pollock, New York, 26 February 1924; "Eclipse of Reason", five lectures, 1943/44: 1. I. Lecture. a) typescript with autograph corrections, 38 sheets; b) typescript, 29 sheets; c) typescript with autograph and handwritten corrections, 31 sheets; d) fragment, typescript with autograph corrections, 2 sheets; e) drafts, typescript with autograph corrections, 6 sheets; 2. II. Lecture. a) typescript with autograph corrections, 27 sheets; b) typescript with handwritten corrections, 37 sheets; 3. III. Lecture. Typescript with autograph corrections, 27 sheets; 4. IV. Lecture. Typescript with autograph corrections, 23 sheets; 5. V. Lecture. a) typescript with autograph corrections, 25 sheets; b) fragments, typescript with autograph and handwritten corrections, 3 sheets.
Abstract:
Three long-term temperature data series measured in Portugal were studied to detect and correct non-climatic homogeneity breaks, and are now available for future studies of climate variability. Series of monthly minimum (Tmin) and maximum (Tmax) temperatures measured at the three Portuguese meteorological stations of Lisbon (1856 to 2008), Coimbra (1865 to 2005) and Porto (1888 to 2001) were studied to detect and correct non-climatic homogeneity breaks. These series, together with the monthly series of average temperature (Taver) and temperature range (DTR) derived from them, were tested for homogeneity breaks using, firstly, metadata, secondly, visual analysis and, thirdly, four widely used homogeneity tests: the von Neumann ratio test, the Buishand test, the standard normal homogeneity test and the Pettitt test. The homogeneity tests were used in absolute mode (on the temperature series themselves) and in relative mode (using as reference series either sea-surface temperature anomaly series from HadISST2 close to the Portuguese coast or already corrected temperature series). We considered the Tmin, Tmax and DTR series the most informative for the detection of homogeneity breaks, because Tmin and Tmax can respond differently to changes in the position of a thermometer or other changes in the instrument's environment; the Taver series were used mainly as a control. The homogeneity tests show strong inhomogeneity of the original data series, which could have both internal climatic and non-climatic origins. Homogeneity breaks identified by the last three tests were compared with available metadata recording events such as instrument changes, changes in station location and environment, observing procedures, etc. Significant homogeneity breaks (at the 95% level or higher) that coincide with known dates of instrumental changes were corrected using standard procedures.
It was also noted that some significant homogeneity breaks that could not be connected to known dates of changes in instrumentation or in station location and environment could be caused by large volcanic eruptions. The corrected series were tested again for homogeneity: they were considered free of non-climatic breaks when, for most monthly series, the tests showed no significant (95% or higher) homogeneity breaks coinciding with dates of known instrument changes. The corrected series are now available within the ERA-CLIM FP7 project for future studies of climate variability.
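The von Neumann ratio test mentioned above has a very compact form: N = sum((x[i+1] - x[i])^2) / sum((x[i] - mean)^2). For a homogeneous, uncorrelated series the expected value of N is 2, while a break in the mean pushes N below 2. A sketch (an illustration, not the authors' code), with synthetic data and an artificial 2-degree break:

```python
import random, statistics

def von_neumann_ratio(x):
    """Von Neumann ratio N; E[N] = 2 for a homogeneous series."""
    m = statistics.fmean(x)
    num = sum((b - a) ** 2 for a, b in zip(x, x[1:]))
    den = sum((v - m) ** 2 for v in x)
    return num / den

random.seed(1)
homogeneous = [random.gauss(15.0, 1.0) for _ in range(200)]
# Same series with an artificial 2-degree break half-way through:
broken = homogeneous[:100] + [v + 2.0 for v in homogeneous[100:]]
print(von_neumann_ratio(homogeneous))  # close to 2
print(von_neumann_ratio(broken))       # well below 2
```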
Abstract:
The mathematician Bronowski wrote that John von Neumann was, in his opinion, the most intelligent of all the men and women he had ever met. This opinion carries real weight, because Bronowski knew almost all of the important mathematicians and physicists from the 1930s to the 1970s, and he ranked none other than Enrico Fermi, Nobel laureate and genius of physics, in second place.
Abstract:
Theoretical computer science is a fundamental discipline, since most advances in computing rest on solid results from it. In recent years, owing both to the increase in computing power and to the approaching physical limits of miniaturization of electronic components, interest has revived in formal models of computation alternative to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems. Most are computationally complete and intrinsically parallel, which is why they are coming to be regarded as new paradigms of computation (natural computing). We thus have a range of abstract architectures as powerful as conventional computers and, at times, more efficient: some of them improve the running time, at least, of NP-complete problems, providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, a complete formal representation of a NEP in general involves both syntactic and semantic restrictions, i.e. many apparently (syntactically) correct representations of particular instances of these devices would make no sense because they might violate other semantic restrictions. Applying semantic grammatical evolution to NEPs involves choosing a subset of them within which to search for those that solve a specific problem. In this work we have studied a model inspired by cell biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one kind of point mutation (insertion, deletion or substitution of a symbol).
These nodes are associated with a filter defined by some random-context or membership condition. Networks made up of at most six nodes, with filters defined by membership in regular languages, are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, as similar results have been documented in the literature. If one considers networks with nodes and filters defined by random contexts (which seem closer to biological implementations), then more complex languages can be generated, such as non-context-free languages. Nevertheless, these very simple mechanisms are able to solve complex problems in polynomial time. A linear solution has been presented for an NP-complete problem, the 3-colouring problem. As a first significant contribution, we have proposed a new dynamics for networks of evolutionary processors with non-deterministic, massively parallel behaviour [55], so that all research work in the area of networks of processors carries over to massively parallel networks. For example, massively parallel networks can be modified according to certain rules so as to move the filters onto the connections. Each connection is seen as a bidirectional channel in which the input and output filters coincide. Even so, these networks are computationally complete. Other kinds of rules can also be implemented to extend this computational model: the point mutations associated with each node are replaced by the splicing operation. This new type of processor is called a splicing processor. This computational model, the network of splicing processors (ANSP), is somewhat similar to distributed test-tube systems based on splicing.
In addition, a new model has been defined [56], networks of evolutionary processors with filters on the connections, in which the processors only have rules and the filters have been moved to the connections. Under certain conditions this model is equivalent to classical networks of evolutionary processors; without those restrictions, the proposed model is a superset of classical NEPs. The main advantage of moving the filters to the connections lies in the simplicity of the modelling. Another contribution of this work has been the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this thesis. Regarding the term "evolutionary processor" used in this thesis: the computational process described here is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations considered can be interpreted as mutations, and the filtering processes can be seen as selection processes. Moreover, this work does not address the possible biological implementation of these networks, despite its great importance. Throughout this thesis, the complexity measure adopted for ANSPs is one we call size (the number of nodes of the underlying graph). It has been shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognizing L. Following the concept of universal ANSPs introduced by Manea [65], it has been proved that an ANSP with a fixed graph structure can accept any recursively enumerable language.
An ANSP can be regarded as a problem-solving device with a further property of practical relevance: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depends on the language. This feature can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. This means that the solution of any NP problem is uniform in the sense that the network, apart from the universal subnetwork, can be seen as a program; to adapt it to the instance of the problem to be solved, one chooses the filters and rules that do not belong to the universal subnetwork. An interesting open problem, from our point of view, is how to choose the optimal size of this network.---ABSTRACT---This thesis deals with recent research in the area of natural computing (bio-inspired models), more precisely networks of evolutionary processors, first developed by Victor Mitrana and based on the P systems of Gheorghe Paun. These models consist of a set of processors connected in an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, called evolution rules, that transform the objects inside it [55, 53]. Objects can be sent and received along the graph connections provided they satisfy the constraints defined by the input and output filters of the processors. This symbolic model, non-deterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There are a great number of variants, such as hybrid networks, splicing processors, etc., that give the model a computational power equivalent to that of Turing machines.
The origin of networks of evolutionary processors (NEPs for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as to the Logic Flow paradigm. It consists of several processors, each placed in a node of a virtual complete graph, which are able to handle data associated with the respective node. All the nodes send their data simultaneously, and the receiving nodes likewise handle all the arriving messages simultaneously, according to some strategy. In a series of papers, each node is viewed as a cell holding genetic information encoded in DNA sequences, which may evolve through local evolutionary events, that is, point mutations. Each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node are organized as multisets of words (each word appearing in an arbitrarily large number of copies), and all copies are processed in parallel so that all possible events that can take place actually do take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense. But the rewriting operations we have considered can be interpreted as mutations, and the filtering process can be viewed as selection. Recombination is missing, but it has been argued that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. Clearly, the filters associated with each node allow strong control of the computation. Indeed, every node has an input and an output filter; two nodes can exchange data if the data pass the output filter of the sender and the input filter of the receiver. Moreover, if some data are sent out by a node and cannot enter any other node, they are lost. In this work we simplify the ANSP model by moving the filters from the nodes to the edges.
Each edge is viewed as a two-way channel such that the input and output filters coincide. Clearly, the possibility of controlling the computation in such networks seems diminished: for instance, there is no way to lose data during the communication steps. In spite of this, and of the fact that splicing is not a powerful operation (recall that splicing systems generate only regular languages), we prove that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space. Since any word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each of them in an exponential number of copies. However, this does not seem to be a major drawback, since by PCR (polymerase chain reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables. Therefore, the solution is uniform in the sense that the network, except for the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words and the rules, then we assign all possible values to the variables and evaluate the formula. We proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If not, what other problems can be efficiently solved by ANSPFCs?
Moreover, the complexity class NP is exactly the class of all languages decided by ANSPs in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
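The evolution/communication cycle of a NEP can be illustrated with a deliberately tiny, hypothetical sketch (not the thesis's simulator): each node applies one kind of point mutation (here, single-symbol substitution) and words migrate along the edges when they pass the neighbour's input filter. Real NEPs also use insertion and deletion rules and random-context filters; this shows the two-phase dynamics only.

```python
def substitutions(word, old, new):
    """All words obtained by substituting one occurrence of old by new."""
    return {word[:i] + new + word[i + 1:]
            for i, ch in enumerate(word) if ch == old}

def accepts1(w):
    """Input filter of node 1: only fully rewritten words may enter."""
    return "a" not in w

# Node 0 rewrites a -> b; node 1 collects finished words.
node0 = {"aaa"}
node1 = set()

for step in range(5):
    # evolution step in node 0: apply the rule in all possible ways
    evolved = set()
    for w in node0:
        outs = substitutions(w, "a", "b")
        evolved |= outs if outs else {w}
    node0 = evolved
    # communication step: words passing node 1's filter migrate
    migrating = {w for w in node0 if accepts1(w)}
    node0 -= migrating
    node1 |= migrating

print(node1)  # {'bbb'} after three evolution steps
```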
Abstract:
A concept of orientation is relevant for the passage from Jordan structure to associative structure in operator algebras. The research reported in this paper bridges Connes's approach for von Neumann algebras and our own for C*-algebras into a general theory of orientation that is geometric in nature and related to dynamics.
Abstract:
We use Voiculescu’s free probability theory to prove the existence of prime factors, hence answering a longstanding problem in the theory of von Neumann algebras.
Abstract:
We introduce a general class of su(1|1) supersymmetric spin chains with long-range interactions which includes as particular cases the su(1|1) Inozemtsev (elliptic) and Haldane-Shastry chains, as well as the XX model. We show that this class of models can be fermionized with the help of the algebraic properties of the su(1|1) permutation operator and take advantage of this fact to analyze their quantum criticality when a chemical potential term is present in the Hamiltonian. We first study the low-energy excitations and the low-temperature behavior of the free energy, which coincides with that of a (1+1)-dimensional conformal field theory (CFT) with central charge c=1 when the chemical potential lies in the critical interval (0,E(π)), E(p) being the dispersion relation. We also analyze the von Neumann and Rényi ground state entanglement entropies, showing that they exhibit the logarithmic scaling with the size of the block of spins characteristic of a one-boson (1+1)-dimensional CFT. Our results thus show that the models under study are quantum critical when the chemical potential belongs to the critical interval, with central charge c=1. From the analysis of the fermion density at zero temperature, we also conclude that there is a quantum phase transition at both ends of the critical interval. This is further confirmed by the behavior of the fermion density at finite temperature, which is studied analytically (at low temperature), as well as numerically for the su(1|1) elliptic chain.
Abstract:
We introduce a new class of generalized isotropic Lipkin–Meshkov–Glick models with su(m+1) spin and long-range non-constant interactions, whose non-degenerate ground state is a Dicke state of su(m+1) type. We evaluate in closed form the reduced density matrix of a block of L spins when the whole system is in its ground state, and study the corresponding von Neumann and Rényi entanglement entropies in the thermodynamic limit. We show that both of these entropies scale as a log L when L tends to infinity, where the coefficient a equals (m − k)/2 in the ground-state phase with k vanishing magnon densities. In particular, our results show that none of these generalized Lipkin–Meshkov–Glick models is critical, since when L → ∞ their Rényi entropy R_q becomes independent of the parameter q. We have also computed the Tsallis entanglement entropy of the ground state of these generalized su(m+1) Lipkin–Meshkov–Glick models, finding that it can be made extensive by an appropriate choice of its parameter only when m − k ≥ 3. Finally, in the su(3) case we construct in detail the ground-state phase diagram in parameter space, showing that it is determined in a simple way by the weights of the fundamental representation of su(3). This is also true in the su(m+1) case; for instance, we prove that the region in which all the magnon densities are non-vanishing is an (m + 1)-simplex in R^m whose vertices are the weights of the fundamental representation of su(m+1).
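The entropies compared in this abstract can be illustrated generically (a sketch under the assumption that the eigenvalues p_i of the reduced density matrix are known; this is not the paper's closed-form computation). Note that the Rényi entropy reduces to the von Neumann entropy as q → 1.

```python
import math

def von_neumann(p):
    """Von Neumann entropy -sum p_i log p_i."""
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi(p, q):
    """Renyi entropy log(sum p_i^q) / (1 - q), q != 1."""
    return math.log(sum(x ** q for x in p)) / (1 - q)

p = [0.5, 0.25, 0.25]      # hypothetical reduced-density-matrix spectrum
print(von_neumann(p))       # = 1.5 * log 2 ~ 1.0397
print(renyi(p, 2))          # = log(8/3) ~ 0.9808
```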