919 results for real von Neumann measurement
Abstract:
Consider a non-empty set X on which a sigma-algebra F is constructed; an F-measurable transformation T from X to itself is said to be measure-preserving if, for every element of the sigma-algebra, the measure of the preimage of that element equals the measure of the element itself. With this notion one can construct various examples of measure-preserving maps; this thesis presents the Gauss transformation. Transformations of this kind are used in ergodic theory, where it makes sense to consider the discrete-time dynamical system T^j x, where x = T^0 x is an initial datum, and to study how the dynamics depends on the initial condition x. The von Neumann Ergodic Theorem states that, given a Hilbert space H on which an isometry U is defined, one can consider, for every element f of the Hilbert space, the time average of f, which converges to an element of the eigenspace of the isometry associated with the eigenvalue 1. The Birkhoff Theorem asserts instead that, given a sigma-finite space X and a transformation T, not necessarily invertible, one can consider the time average of a summable function f; this always converges to a measurable function f*, and if the measure of X is finite then f* is distributed like f. In particular, if the transformation T is ergodic, the time average and the space average coincide.
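As a numerical aside (our own illustration, not part of the thesis), the Birkhoff theorem can be observed on the Gauss map T(x) = 1/x mod 1, which preserves the Gauss measure dμ = dx/((1+x) ln 2). The sketch below compares the time average of f(x) = x along one orbit with its space average, which for this f is ∫ x dμ = 1/ln 2 − 1.

```python
import math
import random

def gauss_map(x):
    """Gauss map T(x) = frac(1/x); preserves dmu = dx / ((1+x) ln 2)."""
    return (1.0 / x) % 1.0

random.seed(0)
x = random.random()          # a "generic" initial condition
n_steps = 500_000
total = 0.0
for _ in range(n_steps):
    total += x               # accumulate f(x) = x along the orbit
    x = gauss_map(x)
    if x == 0.0:             # guard: a rational orbit terminates at 0; restart it
        x = random.random()

time_avg = total / n_steps
space_avg = 1.0 / math.log(2) - 1.0   # exact value of the integral of x dmu

print(f"time average  = {time_avg:.4f}")
print(f"space average = {space_avg:.4f}")
```

By ergodicity of the Gauss map the two averages agree, up to statistical and floating-point error, for almost every initial condition.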
Abstract:
This thesis illustrates the measure decomposition theorem and how it is applied to measure-preserving transformations. After giving the definitions of σ-algebra and of measure and stating some theorems of measure theory, two different notions of separability are introduced: strict separability and separability, linked by a lemma. The relative density function and its properties are then described and, after defining the concept of direct sum of measure spaces, the measure decomposition theorem is proved, which under certain hypotheses allows a measure space to be expressed as a direct sum of measure spaces. Finally, after explaining what it means for a transformation to be measure-preserving and ergodic, the von Neumann theorem is proved, according to which measure-preserving transformations can be decomposed into ergodic parts.
Abstract:
This thesis presents some applications of elliptic integrals in Hamiltonian mechanics, with the aim of solving integrable systems. Elliptic functions are described, in particular the Weierstrass elliptic function, and the types of elliptic integrals are listed, constructed from the Weierstrass functions. After covering the foundations of Hamiltonian mechanics and the Arnold-Liouville theorem, we study an example taken from Moser's book "Integrable Hamiltonian Systems and Spectral Theory", which considers the integrable systems given by the geodesic flow on an ellipsoid, together with the von Neumann system. In particular, we see that in the case n = 2 we obtain an elliptic integral.
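As a small computational aside (ours, not the thesis'), the complete elliptic integral of the first kind that appears throughout this subject can be evaluated with the arithmetic-geometric mean, via the classical identity K(k) = π / (2 · AGM(1, √(1−k²))):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b (converges quadratically)."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def ellip_k(k):
    """Complete elliptic integral of the first kind, modulus k with |k| < 1."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

print(ellip_k(0.0))   # pi/2 exactly
print(ellip_k(0.5))   # about 1.68575
```

The same AGM iteration underlies most fast numerical evaluations of elliptic integrals.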
Abstract:
"National Socialism": 1. Announcement of a lecture series, November/December 1941, by Herbert Marcuse, A.R.L. Gurland, Franz Neumann, Otto Kirchheimer, Frederick Pollock. a) mimeographed typescript, 1 sheet; b) typescript, 1 sheet; 2. Letters of reply to invitations to the lecture series, from Neilson, William A.; Packelis, Alexander H.; Michael, Jerome; McClung Lee, Alfred; Youtz, R.P.; Ginsburg, Isidor; Ganey, G.; Nunhauer, Arthur. 8 sheets; "Authoritarian doctrines and modern European institutions" (1924): 1. Lecture announcement, typescript, 2 sheets; 2. Announcements of the lectures by Neumann, Franz L.: "Stratification and Dominance in Germany"; "Bureaucracy as a Social and Political Institution", typescript, 2 sheets; 3. Evans, Austin P.: 1 letter (copy) to Frederick Pollock, New York, 26 February 1924; "Eclipse of Reason", five lectures 1943/44: 1. I. Lecture. a) typescript with autograph corrections, 38 sheets; b) typescript, 29 sheets; c) typescript with autograph and handwritten corrections, 31 sheets; d) fragment, typescript with autograph corrections, 2 sheets; e) drafts, typescript with autograph corrections, 6 sheets; 2. II. Lecture. a) typescript with autograph corrections, 27 sheets; b) typescript with handwritten corrections, 37 sheets; 3. III. Lecture. Typescript with autograph corrections, 27 sheets; 4. IV. Lecture. Typescript with autograph corrections, 23 sheets; 5. V. Lecture. a) typescript with autograph corrections, 25 sheets; b) fragments, typescript with autograph and handwritten corrections, 3 sheets.
Abstract:
Three long-term temperature data series measured in Portugal were studied to detect and correct non-climatic homogeneity breaks and are now available for future studies of climate variability. Series of monthly minimum (Tmin) and maximum (Tmax) temperatures measured at the three Portuguese meteorological stations of Lisbon (from 1856 to 2008), Coimbra (from 1865 to 2005) and Porto (from 1888 to 2001) were studied to detect and correct non-climatic homogeneity breaks. These series, together with the monthly series of average temperature (Taver) and temperature range (DTR) derived from them, were tested for homogeneity breaks using, first, metadata, second, visual analysis and, third, four widely used homogeneity tests: the von Neumann ratio test, the Buishand test, the standard normal homogeneity test and the Pettitt test. The homogeneity tests were applied in absolute mode (using the temperature series themselves) and in relative mode (using sea-surface temperature anomaly series obtained from HadISST2 close to the Portuguese coast, or already-corrected temperature series, as reference series). We considered the Tmin, Tmax and DTR series as the most informative for the detection of homogeneity breaks, because Tmin and Tmax can respond differently to changes in the position of a thermometer or to other changes in the instrument's environment; the Taver series was used mainly as a control. The homogeneity tests show strong inhomogeneity of the original data series, which could have both internal climatic and non-climatic origins. Homogeneity breaks identified by the last three homogeneity tests were compared with the available metadata, which record events such as instrument changes, changes in station location and environment, changes in observing procedures, etc. Significant homogeneity breaks (significance 95% or more) that coincide with known dates of instrumental changes have been corrected using standard procedures.
It was also noted that some significant homogeneity breaks, which could not be connected to the known dates of any changes in instrumentation or in station location and environment, could be caused by large volcanic eruptions. The corrected series were then tested again for homogeneity: they were considered free of non-climatic breaks when the tests on most of the monthly series showed no significant (significance 95% or more) homogeneity breaks coinciding with dates of known instrument changes. The corrected series are now available within the ERA-CLIM FP7 project for future studies of climate variability.
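The von Neumann ratio test used above compares the mean squared successive difference with the variance: for a homogeneous i.i.d. series the ratio N = Σ(xᵢ₊₁ − xᵢ)² / Σ(xᵢ − x̄)² has expectation close to 2, while an uncorrected step change inflates the variance and pushes N well below 2. A minimal sketch (our own illustration, not the study's code):

```python
import random
import statistics

def von_neumann_ratio(series):
    """N = sum of squared successive differences / sum of squared deviations.
    E[N] is about 2 for a homogeneous i.i.d. series; breaks push N below 2."""
    mean = statistics.fmean(series)
    num = sum((b - a) ** 2 for a, b in zip(series, series[1:]))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

random.seed(42)
homogeneous = [random.gauss(15.0, 1.0) for _ in range(200)]
# Same noise plus an artificial +3 degree break halfway (e.g. a relocated thermometer)
broken = [x + (3.0 if i >= 100 else 0.0) for i, x in enumerate(homogeneous)]

n_hom = von_neumann_ratio(homogeneous)
n_brk = von_neumann_ratio(broken)
print(f"homogeneous: N = {n_hom:.2f}")   # close to 2
print(f"with break:  N = {n_brk:.2f}")   # well below 2
```

In practice the test statistic is compared against tabulated critical values at the chosen significance level rather than against 2 directly.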
Abstract:
The mathematician Bronowski wrote that John von Neumann was, in his opinion, the most intelligent of all the men and women he had ever known. This opinion is highly significant because Bronowski had dealings with almost all of the important mathematicians and physicists between the 1930s and the 1970s, and he places in second place none other than Enrico Fermi, Nobel laureate and genius of physics.
Abstract:
Theoretical computer science is a fundamental discipline, since most advances in computing rest on solid results from that field. In recent years, owing both to the increase in the power of computers and to the approach of physical limits in the miniaturization of electronic components, interest has revived in formal models of computation that are alternatives to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems. Most are computationally complete and intrinsically parallel. For this reason they are coming to be regarded as new paradigms of computation (natural computing). We therefore have available a range of abstract architectures as powerful as conventional computers and sometimes more efficient: some of them improve the running time, at least, of NP-complete problems, providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, in general a complete formal representation of a NEP involves both syntactic and semantic restrictions, meaning that many apparently (syntactically) correct representations of particular instances of these devices would make no sense because they might fail to satisfy further semantic restrictions. Applying semantic grammatical evolution to NEPs involves choosing a subset of them within which to search for those that solve a particular problem. This work presents a study of a model inspired by cell biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one kind of point mutation (insertion, deletion or substitution of a symbol).
These nodes are associated with a filter defined by some random-context or membership condition. Networks with at most six nodes, with filters defined by membership in regular languages, are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, since similar results have been documented in the literature. If one considers networks with nodes and filters defined by random contexts, which seem closer to biological implementations, then more complex languages can be generated, such as non-context-free languages. Nevertheless, these very simple mechanisms are able to solve complex problems in polynomial time. A linear-time solution has been presented for an NP-complete problem, the 3-colouring problem. As a first significant contribution, a new dynamics for networks of evolutionary processors with non-deterministic and massively parallel behaviour has been proposed [55], so that all the research in the area of processor networks can be carried over to massively parallel networks. For example, massively parallel networks can be modified according to certain rules to move the filters onto the connections. Each connection is seen as a bidirectional channel, so that the input and output filters coincide. Despite this, these networks are computationally complete. Other kinds of rules can also be implemented to extend this computational model. The point mutations associated with each node are replaced by the splicing operation. This new type of processor is called a splicing processor. This computational model, the network of splicing processors (ANSP), is in some respects similar to splicing-based test-tube distributed systems.
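To make the node model concrete, here is a toy sketch (ours, not from the thesis) of a single evolutionary processor: one mutation rule applied at every possible position of every word in the node, iterated to closure, followed by a membership filter given by a regular language:

```python
import re

def substitution_step(words, old, new):
    """Apply the substitution rule old -> new at every position of every word,
    keeping the originals as well (all possible events take place)."""
    out = set(words)
    for w in words:
        for i, ch in enumerate(w):
            if ch == old:
                out.add(w[:i] + new + w[i + 1:])
    return out

def output_filter(words, pattern):
    """Membership filter: only words in the regular language leave the node."""
    rx = re.compile(pattern)
    return {w for w in words if rx.fullmatch(w)}

# Iterate the mutation step to closure (words are short, so this terminates)
words = {"aab", "aba"}
prev = None
while words != prev:
    prev = words
    words = substitution_step(words, "a", "b")

sent_out = output_filter(words, "b+")   # only all-b words pass the filter
print(sorted(sent_out))                 # ['bbb']
```

A real NEP alternates such evolution steps with communication steps over the underlying graph; this fragment shows only the per-node behaviour.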
In addition, a new model has been defined [56], networks of evolutionary processors with filtered connections, in which the processors have only rules and the filters have been moved to the connections. Under certain conditions this model is equivalent to classical networks of evolutionary processors; without those restrictions, the proposed model is a superset of classical NEPs. The main advantage of moving the filters to the connections is the simplicity of the modelling. Another contribution of this work has been the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this thesis. Regarding the term "evolutionary processor" used in this thesis, the computational process described here is not exactly an evolutionary process in the Darwinian sense. But the rewriting operations considered can be interpreted as mutations, and the filtering processes can be viewed as selection processes. Moreover, this work does not cover the possible biological implementation of these networks, despite its great importance. Throughout this thesis, the complexity measure adopted for ANSPs is one we call size (taking size to be the number of nodes of the underlying graph). It has been shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognizing L. Following the concept of universal ANSPs introduced by Manea [65], it has been proved that an ANSP with a fixed graph structure can accept any recursively enumerable language.
An ANSP can be regarded as a problem-solving device, and it has a further property of practical relevance: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depend on the language. This feature can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. This means that the solution of any NP problem is uniform, in the sense that the network, apart from the universal subnetwork, can be seen as a program; to adapt it to the instance of the problem to be solved, one chooses the filters and rules that do not belong to the universal subnetwork. An interesting open problem, from our point of view, is how to choose the optimal size of this network.---ABSTRACT---This thesis deals with recent research in the area of Natural Computing (bio-inspired models), more precisely networks of evolutionary processors, first developed by Victor Mitrana and based on P systems, whose father is Gheorghe Paun. In these models there is a set of processors connected in an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, called evolution rules, that transform the objects inside the processor [55, 53]. These objects can be sent and received over the graph connections provided they satisfy the constraints defined by the input and output filters of the processors. This symbolic model, non-deterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There are a great number of variants, such as hybrid networks, splicing processors, etc., that give the model a computational power equivalent to Turing machines.
The origin of networks of evolutionary processors (NEPs for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as to the Logic Flow paradigm. It consists of several processors, each placed in a node of a virtual complete graph, which are able to handle data associated with the respective node. All the nodes send their data simultaneously, and the receiving nodes likewise handle all the arriving messages simultaneously, according to some strategy. In a series of papers, each node is viewed as a cell with genetic information encoded in DNA sequences, which may evolve through local evolutionary events, that is, point mutations. Each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node are organized as multisets of words (each word appears in an arbitrarily large number of copies), and all the copies are processed in parallel, so that all the events that can take place do actually take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense. But the rewriting operations we have considered may be interpreted as mutations, and the filtering process may be viewed as a selection process. Recombination is missing, but it has been asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. It is clear that the filters associated with each node allow strong control of the computation. Indeed, every node has an input and an output filter; two nodes can exchange data if the data passes the output filter of the sender and the input filter of the receiver. Moreover, if some data is sent out by a node and is not able to enter any node, it is lost. In this work we simplify the ANSP model considered earlier by moving the filters from the nodes to the edges.
Each edge is viewed as a two-way channel, so the input and output filters coincide. Clearly, the possibility of controlling the computation in such networks seems diminished; for instance, there is no possibility of losing data during the communication steps. In spite of this, and of the fact that splicing is not a powerful operation (recall that splicing systems generate only regular languages), we prove here that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space. Since any word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each of them having an exponential number of copies. However, this does not seem to be a major drawback, since by PCR (polymerase chain reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables. Therefore, the solution is uniform in the sense that the network, except for the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words and the rules, then we assign all possible values to the variables and evaluate the formula. We proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If not, what other problems can be efficiently solved by these ANSPFCs?
Moreover, the complexity class NP is exactly the class of all languages decided by ANSP in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
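The splicing operation that replaces point mutations in the ANSP model can be sketched as follows (an illustrative toy, not the construction used in the thesis): a rule (u1, u2; u3, u4) cuts x = x1·u1·u2·x2 and y = y1·u3·u4·y2 and recombines the prefixes and suffixes.

```python
def splice(x, y, rule):
    """All results of applying splicing rule (u1, u2; u3, u4):
    from x = x1 u1 u2 x2 and y = y1 u3 u4 y2
    produce x1 u1 u4 y2 and y1 u3 u2 x2."""
    u1, u2, u3, u4 = rule
    results = set()
    cut_x = u1 + u2
    cut_y = u3 + u4
    for i in range(len(x) - len(cut_x) + 1):
        if x[i:i + len(cut_x)] != cut_x:
            continue
        x1, x2 = x[:i], x[i + len(cut_x):]
        for j in range(len(y) - len(cut_y) + 1):
            if y[j:j + len(cut_y)] != cut_y:
                continue
            y1, y2 = y[:j], y[j + len(cut_y):]
            results.add(x1 + u1 + u4 + y2)
            results.add(y1 + u3 + u2 + x2)
    return results

# Example: cut between "a|b" in x and between "c|d" in y
out = splice("aab", "ccd", ("a", "b", "c", "d"))
print(sorted(out))   # ['aad', 'ccb']
```

In a splicing processor this operation plays the role of the evolution rule, applied to all pairs of words in the node's multiset in one computational step.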
Abstract:
This project consists of an acoustic study of theater 8 of the Kinepolis cinema complex in Ciudad de la Imagen, which has 408 seats. The Kinepolis cinema is one of the largest multiplex complexes in Europe. It has more than 9,200 seats in total, distributed over 25 theaters reached by two large corridors connected by the hall. In 1998, the year it opened, the complex received the Guinness World Record for the largest cinema theater in the world, which has 996 seats. The aim of this project is to characterize a cinema theater acoustically through the measurement of the room's acoustic parameters and through a virtual model of the room. To carry out the project, both a geometric and an acoustic measurement of the room will first be performed using the DIRAC measurement system. The results of these measurements will be used to build and validate a virtual model of the real room with the EASE simulation software. The acoustic measurement will be carried out with the DIRAC measurement system, which provides information on a wide variety of acoustic parameters. This project will not work with all of them, only with the most significant ones; these are described below in the theoretical introduction. The geometric measurement will be used to build a virtual model with the same dimensions as the original room; it will be performed with a laser meter and a tape measure. Once the virtual model is built, it will be validated. This process is carried out by adjusting the reverberation time of the model through the introduction of different acoustic materials on its surfaces so that, by varying the absorption of the room, the average reverberation time of the model matches the one measured in the real room as closely as possible.
The goal of this process is to check that the virtual model behaves acoustically like the real room. The model must be properly validated for the comparisons and conclusions to be reliable. Finally, after the acoustic simulation of the model, the simulated results will be compared with those measured in the room. In this process, some of the parameters related to the reverberation time will be contrasted. In this way it will be verified whether or not the reverberation time is a reliable acoustic parameter for validating a virtual model of a cinema theater. Similar projects have previously been carried out on other Kinepolis theaters of different sizes. The aim of performing the same study in different rooms is to check whether the size of the room influences the validation of virtual models by means of the reverberation time. ABSTRACT. This project consists of the development of an acoustic study of movie theater 8 of the Kinepolis complex in Ciudad de la Imagen, Madrid. This room has 408 seats. Kinepolis is one of the biggest multiplex complexes in Europe. It has 9,200 seats arranged in 25 rooms. Two large corridors give access to all the theaters, and the main hall in the middle of the building connects these corridors. In 1998, when the complex opened, it was awarded the Guinness World Record for the biggest movie theater in the world, which has 996 seats. The goal of this project is to characterize the acoustics of a movie theater through its reverberation time and a virtual model. To reach this goal, we first perform both an acoustic and a geometric measurement of the room using the DIRAC measurement system. The results of these measurements allow us to build and validate a virtual model of the room using the EASE simulation software.
We use the DIRAC system to carry out the acoustic measurement. It yields a wide variety of acoustic parameters; only the most significant ones are used in this study, and they are described in the theoretical introduction. The geometric measurement is essential for building the virtual model, because the model has to match the real room exactly; it is performed with a laser distance meter and a measuring tape. Once the virtual model is finished, it is validated. This validation is carried out by adjusting the reverberation time of the model: we change the wall materials, thereby changing the overall absorption of the room, until the model's reverberation time resembles the real one. This ensures that the acoustic performance of the model is close to that of the real room; the model must be validated successfully if the later comparisons are to be reliable. Finally, after the virtual simulation of the model, we compare the simulated results with those measured in the room. In this process we compare not only the reverberation time but also other parameters related to it. In this way we verify whether or not the reverberation time is an appropriate acoustic parameter for validating a virtual model of a movie theater. Similar acoustic studies have been carried out on other theaters of different sizes. The aim of performing similar studies in different rooms is to determine whether the size of the room affects the validation process.
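The absorption-adjustment step described above can be illustrated with Sabine's classical estimate, RT60 = 0.161·V/A, where V is the room volume in m³ and A the total absorption in m²·sabins. The figures below are invented for illustration, not measurements of the actual theater:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 V / A,
    where A = sum over surfaces of (area x absorption coefficient)."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical room: (area in m2, absorption coefficient) pairs
surfaces = [
    (400.0, 0.30),   # absorbent wall panels
    (250.0, 0.60),   # upholstered seating area
    (300.0, 0.10),   # ceiling
]
rt60 = sabine_rt60(3000.0, surfaces)
print(f"RT60 = {rt60:.2f} s")   # 0.161 * 3000 / 300 = 1.61 s
```

Changing a material's absorption coefficient changes A and therefore RT60, which is exactly the knob turned during model validation (EASE performs a more detailed, frequency-dependent computation).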
Abstract:
A concept of orientation is relevant for the passage from Jordan structure to associative structure in operator algebras. The research reported in this paper bridges the approach of Connes for von Neumann algebras and our own for C*-algebras in a general theory of orientation that is of geometric nature and related to dynamics.
Abstract:
We use Voiculescu’s free probability theory to prove the existence of prime factors, hence answering a longstanding problem in the theory of von Neumann algebras.
Abstract:
We introduce a general class of su(1|1) supersymmetric spin chains with long-range interactions which includes as particular cases the su(1|1) Inozemtsev (elliptic) and Haldane-Shastry chains, as well as the XX model. We show that this class of models can be fermionized with the help of the algebraic properties of the su(1|1) permutation operator and take advantage of this fact to analyze their quantum criticality when a chemical potential term is present in the Hamiltonian. We first study the low-energy excitations and the low-temperature behavior of the free energy, which coincides with that of a (1+1)-dimensional conformal field theory (CFT) with central charge c=1 when the chemical potential lies in the critical interval (0,E(π)), E(p) being the dispersion relation. We also analyze the von Neumann and Rényi ground state entanglement entropies, showing that they exhibit the logarithmic scaling with the size of the block of spins characteristic of a one-boson (1+1)-dimensional CFT. Our results thus show that the models under study are quantum critical when the chemical potential belongs to the critical interval, with central charge c=1. From the analysis of the fermion density at zero temperature, we also conclude that there is a quantum phase transition at both ends of the critical interval. This is further confirmed by the behavior of the fermion density at finite temperature, which is studied analytically (at low temperature), as well as numerically for the su(1|1) elliptic chain.
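The logarithmic scaling of the block entropy mentioned for the XX model can be checked with the standard free-fermion technique: the von Neumann entropy of a block of L sites follows from the eigenvalues of the block correlation matrix C_ij = ⟨c_i†c_j⟩, which at half filling is sin(π(i−j)/2)/(π(i−j)) for i ≠ j and 1/2 on the diagonal, and a c = 1 CFT predicts S(L) ≈ (1/3) ln L + const. This sketch is our own illustration, not code from the paper:

```python
import numpy as np

def block_entropy(L):
    """Von Neumann entropy of an L-site block of the infinite XX chain
    at half filling, from the free-fermion correlation matrix."""
    i = np.arange(L)
    d = i[:, None] - i[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        C = np.sin(np.pi * d / 2) / (np.pi * d)
    np.fill_diagonal(C, 0.5)                      # diagonal: mean occupation 1/2
    nu = np.clip(np.linalg.eigvalsh(C), 1e-12, 1 - 1e-12)
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

s16, s64 = block_entropy(16), block_entropy(64)
central_charge = 3 * (s64 - s16) / np.log(4)      # from S(L) ~ (c/3) ln L
print(f"S(16) = {s16:.3f}, S(64) = {s64:.3f}, c = {central_charge:.2f}")
```

The extracted central charge approaches 1 as the block sizes grow, consistent with the one-boson CFT description quoted in the abstract.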
Abstract:
We introduce a new class of generalized isotropic Lipkin–Meshkov–Glick models with su(m+1) spin and long-range non-constant interactions, whose non-degenerate ground state is a Dicke state of su(m+1) type. We evaluate in closed form the reduced density matrix of a block of L spins when the whole system is in its ground state, and study the corresponding von Neumann and Rényi entanglement entropies in the thermodynamic limit. We show that both of these entropies scale as a log L when L tends to infinity, where the coefficient a is equal to (m − k)/2 in the ground state phase with k vanishing magnon densities. In particular, our results show that none of these generalized Lipkin–Meshkov–Glick models are critical, since as L → ∞ their Rényi entropy R_q becomes independent of the parameter q. We have also computed the Tsallis entanglement entropy of the ground state of these generalized su(m+1) Lipkin–Meshkov–Glick models, finding that it can be made extensive by an appropriate choice of its parameter only when m − k ≥ 3. Finally, in the su(3) case we construct in detail the phase diagram of the ground state in parameter space, showing that it is determined in a simple way by the weights of the fundamental representation of su(3). This is also true in the su(m+1) case; for instance, we prove that the region for which all the magnon densities are non-vanishing is an (m + 1)-simplex in R^m whose vertices are the weights of the fundamental representation of su(m+1).
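For reference, the three entanglement entropies compared in this abstract are all functions of the spectrum {p_i} of the reduced density matrix: S = −Σ p ln p (von Neumann), R_q = ln(Σ p^q)/(1 − q) (Rényi) and T_q = (1 − Σ p^q)/(q − 1) (Tsallis). A quick sketch (ours, with an arbitrary example spectrum) checking that the Rényi entropy reduces to the von Neumann entropy as q → 1:

```python
import math

def von_neumann(p):
    """S = -sum p ln p over the nonzero eigenvalues."""
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi(p, q):
    """R_q = ln(sum p^q) / (1 - q), q != 1."""
    return math.log(sum(x ** q for x in p)) / (1 - q)

def tsallis(p, q):
    """T_q = (1 - sum p^q) / (q - 1), q != 1."""
    return (1 - sum(x ** q for x in p)) / (q - 1)

spectrum = [0.5, 0.25, 0.125, 0.125]   # example reduced-density-matrix spectrum
print(von_neumann(spectrum))
print(renyi(spectrum, 1.000001))        # approaches the von Neumann value
print(renyi(spectrum, 2.0), tsallis(spectrum, 2.0))
```

The criticality diagnostic quoted in the abstract rests on the q-dependence of R_q: for these models it disappears as L → ∞, unlike in a critical (CFT) phase.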
Abstract:
Arguably the deepest fact known about the von Neumann entropy, the strong subadditivity inequality is a potent hammer in the quantum information theorist's toolkit. This short tutorial describes a simple proof of strong subadditivity due to Petz [Rep. on Math. Phys. 23 (1), 57-65 (1986)]. It assumes only knowledge of elementary linear algebra and quantum mechanics.
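Strong subadditivity states that S(ρ_ABC) + S(ρ_B) ≤ S(ρ_AB) + S(ρ_BC). The inequality can be checked numerically on random three-qubit states (a sanity check only, not a proof; Petz's argument establishes it in general):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy -Tr(rho log rho) via the eigenvalue spectrum."""
    w = np.clip(np.linalg.eigvalsh(rho), 1e-15, None)
    return float(-np.sum(w * np.log(w)))

rng = np.random.default_rng(1)
for _ in range(20):
    # Random mixed state on three qubits: rho = G G^dagger, normalized
    g = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
    rho = g @ g.conj().T
    rho /= np.trace(rho).real
    t = rho.reshape(2, 2, 2, 2, 2, 2)            # indices (a, b, c, a', b', c')
    rho_ab = np.einsum("abkcdk->abcd", t).reshape(4, 4)   # trace out C
    rho_bc = np.einsum("kabkcd->abcd", t).reshape(4, 4)   # trace out A
    rho_b = np.einsum("kalkbl->ab", t)                    # trace out A and C
    gap = entropy(rho_ab) + entropy(rho_bc) - entropy(rho) - entropy(rho_b)
    assert gap >= -1e-9, "strong subadditivity violated?!"
print("strong subadditivity holds on 20 random three-qubit states")
```

The quantity `gap` is exactly the conditional mutual information I(A:C|B), whose non-negativity is an equivalent formulation of strong subadditivity.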
Abstract:
In order to quantify quantum entanglement in two-impurity Kondo systems, we calculate the concurrence, negativity, and von Neumann entropy. The entanglement of the two Kondo impurities is shown to be determined by two competing many-body effects, namely the Kondo effect and the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, I. Due to the spin-rotational invariance of the ground state, the concurrence and negativity are uniquely determined by the spin-spin correlation between the impurities. It is found that there exists a critical minimum value of the antiferromagnetic correlation between the impurity spins which is necessary for entanglement of the two impurity spins. The critical value is discussed in relation with the unstable fixed point in the two-impurity Kondo problem. Specifically, at the fixed point there is no entanglement between the impurity spins. Entanglement will only be created [and quantum information processing (QIP) will only be possible] if the RKKY interaction exchange energy, I, is at least several times larger than the Kondo temperature, T_K. Quantitative criteria for QIP are given in terms of the impurity spin-spin correlation.
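The link between spin-spin correlation and entanglement can be illustrated on a rotationally invariant two-qubit (Werner) state ρ = p|ψ⁻⟩⟨ψ⁻| + (1−p)·I/4, for which ⟨S₁·S₂⟩ = −3p/4 and the Wootters concurrence is max(0, (3p−1)/2): entanglement appears only once the antiferromagnetic correlation passes the critical value ⟨S₁·S₂⟩ = −1/4. This is our own illustrative example, not the impurity model of the paper:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    # Eigenvalues of rho * rho_tilde are the squared Wootters lambdas
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def werner(p):
    """Werner state: p |psi-><psi-| + (1 - p) I/4."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

for p in (0.2, 1 / 3, 0.6, 1.0):
    print(f"p = {p:.2f}  <S1.S2> = {-3 * p / 4:+.3f}  C = {concurrence(werner(p)):.3f}")
```

The threshold at p = 1/3 mirrors the qualitative statement of the abstract: a minimum antiferromagnetic correlation is needed before any entanglement (and hence QIP) is possible.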
Abstract:
We investigate boundary critical phenomena from a quantum-information perspective. Bipartite entanglement in the ground state of one-dimensional quantum systems is quantified using the Rényi entropy S_α, which includes the von Neumann entropy (α → 1) and the single-copy entanglement (α → ∞) as special cases. We identify the contribution of the boundaries to the Rényi entropy, and show that there is an entanglement loss along boundary renormalization group (RG) flows. This property, which is intimately related to the Affleck-Ludwig g theorem, is a consequence of majorization relations between the spectra of the reduced density matrix along the boundary RG flows. We also point out that the bulk contribution to the single-copy entanglement is half of that to the von Neumann entropy, whereas the boundary contribution is the same.