980 results for spazi Hilbert, operatori lineari, operatori autoaggiunti
Abstract:
We review the differential-geometric prerequisites needed for a first approach to the theory of geometric quantization, namely basic notions of symplectic geometry, Lie groups and Lie algebras, Lie group actions, principal G-bundles, connections, associated bundles, and almost-complex structures. This leads to a more detailed study of Hermitian line bundles, including a condition for the existence of a prequantum bundle on a symplectic manifold. With these tools in hand, we then begin the study of geometric quantization, step by step. We introduce the theory of prequantization, i.e. the construction of the operators associated with classical observables and the construction of a Hilbert space. Major problems surface when prequantization is applied concretely: the operators are not those expected from first quantization, and the resulting Hilbert space is too large. A first correction, polarization, removes some of these problems but greatly restricts the set of classical observables that can be quantized. This thesis is not a complete survey of geometric quantization, nor is that its purpose; it covers neither the metaplectic correction nor the BKS kernel. It is intended as companion reading for those being introduced to geometric quantization. On the one hand, it introduces concepts of differential geometry taken for granted in (Woodhouse [21]) and (Sniatycki [18]), i.e. principal G-bundles and associated bundles. In addition, it fills in details of several brief proofs given in those two references.
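For readers new to the subject, the two formulas at the centre of the prequantization step summarized above are standard and can be stated as follows (sign and factor conventions vary; this is one common choice, not quoted from the thesis): a symplectic manifold (M, ω) admits a prequantum Hermitian line bundle with connection ∇ exactly when the integrality condition holds, and the operator assigned to a classical observable f is the Kostant-Souriau operator,
\[
\left[\frac{\omega}{2\pi\hbar}\right] \in H^2(M;\mathbb{Z}),
\qquad
\hat f\,\psi \;=\; -\,i\hbar\,\nabla_{X_f}\psi \;+\; f\,\psi ,
\]
where X_f denotes the Hamiltonian vector field of f.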
Abstract:
In this thesis we study the action of the group of affine transformations and homotheties of the time axis on planar quadratic differential systems with a weak focus of order three. These systems are important in the context of Hilbert's sixteenth problem. Their bifurcation diagram was produced using Li's normal form in work of Andronova [2] and of Artès and Llibre [4], without using the projective plane as parameter space or global methods. In [7], Llibre and Schlomiuk used the projective plane as parameter space together with notions of a global geometric nature (affine and topological invariants). This diagram contains 18 phase portraits, and some of these portraits are repeated in distinct parts of the diagram. This leads us to the following question: do there exist distinct systems, corresponding to distinct parameter values, lying on the same orbit under the group action? In this thesis we prove an original result: the group action is non-trivial on Li's normal form (Theorem 3.1) as well as on Bautin's normal form (Theorem 4.1). Using the second result, we construct the topological quotient space of quadratic systems with a weak focus of order three under the action of this group.
Abstract:
This text examines the axiomatic method during the period in the history of mathematics corresponding to the foundational crisis. Its aim is to show that the axiomatic method changed, and also to understand the nature of these changes. To this end, the conceptions of Frege, Hilbert, and Noether are analysed.
Abstract:
This study looks at the effect of changing the ordering of the Fourier system on Szegö's classical observations about the asymptotic distribution of the eigenvalues of finite Toeplitz forms. This is done by re-examining the proofs and Szegö's properties in the new set-up. Since the Fourier system is unconditional [19], any ordering of it forms a basis for the Hilbert space L2[-π, π]. The study first considers the classical Szegö theorem, then formulates a Szegö-type theorem for operators on L2(R+) and checks its validity for certain multiplication operators. Since the trigonometric basis is not available in L2(R+) or in L2(R), the study turns to classes of orderings of the Haar system in L2(R+) and in L2(R) for which a Szegö-type theorem is valid for certain multiplication operators. It is divided into two sections. In the first section an ordering of the Haar system in L2(R+) is given, and it is proved that, with respect to this ordering, a Szegö-type theorem holds for a general class of multiplication operators Tf with multiplier f ∈ L2(R+), subject to some conditions on f. In the second section, more general classes of orderings of the Haar system in L2(R+) and in L2(R) are identified for which the asymptotic distribution of eigenvalues exists for certain classes of multiplication operators.
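The classical result referred to above is not stated in the abstract; in its usual formulation (our notation, not the thesis's), Szegö's first limit theorem reads
\[
\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n} F\!\left(\lambda_j^{(n)}\right)
\;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} F\!\left(f(\theta)\right)\,d\theta ,
\]
where \(\lambda_1^{(n)},\dots,\lambda_n^{(n)}\) are the eigenvalues of the n-th finite Toeplitz form \(T_n(f)=\bigl(\hat f(j-k)\bigr)_{j,k=0}^{n-1}\) built from a bounded real-valued symbol f, and F is any continuous function on an interval containing the essential range of f.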
Abstract:
Department of Mathematics, Cochin University of Science and Technology
Abstract:
This thesis, entitled Spectral theory of bounded self-adjoint operators: a linear algebraic approach, presents results that can be classified as three different approaches to spectral approximation problems. The truncation method and its perturbed versions are part of the classical linear algebraic approach to the subject. The use of block Toeplitz-Laurent operators and matrix-valued symbols is considered as a particular example where linear algebraic techniques are effective in simplifying problems in inverse spectral theory. The abstract approach to spectral approximation problems via preconditioners and Korovkin-type theorems is an attempt to make the computations involved well conditioned. In all these approaches, however, linear algebra is the central object. The objective of this study is to discuss linear algebraic techniques in the spectral theory of bounded self-adjoint operators on a separable Hilbert space. The use of the truncation method in approximating the bounds of the essential spectrum and the discrete spectral values outside these bounds is well known. Spectral gap prediction and related results are proved in the second chapter. The discrete versions of Borg-type theorems, proved in the third chapter, partly overlap with some known results in operator theory. The purely linear algebraic approach is the main novelty of the results proved here.
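As a rough illustration of the truncation method described above (a sketch of ours, not code from the thesis), the following computes the eigenvalues of the finite sections P_n T P_n of the self-adjoint Toeplitz operator with symbol f(θ) = 2 cos θ, whose spectrum is the interval [-2, 2]:

    import numpy as np

    def toeplitz_truncation(coeffs, n):
        """n-th finite section P_n T(f) P_n of a self-adjoint Toeplitz operator.

        coeffs[k] is the k-th Fourier coefficient of the real, even symbol f,
        so the truncation is a real symmetric banded matrix.
        """
        T = coeffs[0] * np.eye(n)
        for k, c in enumerate(coeffs[1:], start=1):
            if k < n:
                T += c * (np.eye(n, k=k) + np.eye(n, k=-k))
        return T

    # Symbol f(theta) = 2*cos(theta): c_0 = 0, c_1 = c_{-1} = 1; spectrum = [-2, 2].
    coeffs = [0.0, 1.0]
    for n in (10, 100, 1000):
        ev = np.linalg.eigvalsh(toeplitz_truncation(coeffs, n))
        print(n, ev.min(), ev.max())   # extreme eigenvalues approach -2 and 2

The extreme eigenvalues of the truncations converge to the endpoints of the essential spectrum, which is the behaviour the truncation method exploits.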
Abstract:
The relativistic multiconfiguration Dirac-Fock (MCDF) method is currently one of the most widely used procedures for computing the electronic structure and properties of free atoms. In this method, the wave functions of selected atomic states are constructed as a linear combination of so-called configuration state functions (CSF), which span a (many-particle) basis in a subspace of the N-electron Hilbert space. The concrete construction of this basis ultimately determines the quality of the wave functions, which are usually obtained by varying the expectation value of the no-pair Dirac-Coulomb Hamiltonian. With MCDF wave functions the dominant relativistic and correlation effects in free atoms can generally be captured and understood quite well. Besides the instantaneous Coulomb repulsion between all pairs of electrons, the relativistic corrections to the electron-electron interaction, i.e. the magnetic and retardation contributions to the interaction among the electrons, the coupling of the electrons to the radiation field, and the influence of an extended nuclear model are also taken into account. Compared with earlier MCDF calculations, the case studies discussed in this work use wave function expansions that are one to two orders of magnitude larger and therefore now allow systematic investigations also for atoms with open d and f shells. Spontaneous emission or absorption of photons by free atoms is most easily described theoretically in terms of transition probabilities. Such data are needed today in many areas of research; besides the traditional fields of fusion research and astrophysics, new research directions (e.g. nanostructure research and X-ray lithography) are increasingly coming into focus. To increase the reliability of our theoretical predictions, this work investigated in detail the relaxation of the bound electron density, which is computationally considerably more demanding. Taking these relaxation effects into account often also leads to markedly better agreement with experimental values, in particular for Δn = 1 transitions as well as for weak and intercombination lines occurring within one principal shell (Δn = 0). Our calculations of wave functions and transition probabilities, improved over the past years, clearly show the progress made in the treatment of complex atoms. At the same time, this new approach can in the future also be carried over to (i) more complicated shell structures, (ii) the investigation of two-electron-one-photon (TEOP) transitions, and (iii) a number of further atomic properties that are known to depend sensitively on the relaxation of the electron density. Examples are Auger decays, atomic photoionization, and radiative and dielectronic recombination processes, which so far have rarely been treated theoretically even in the Dirac-Fock approximation.
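For orientation, the MCDF ansatz referred to above is commonly written as (standard notation, not reproduced from this work)
\[
\psi_\alpha(PJM) \;=\; \sum_{r=1}^{n_c} c_r(\alpha)\,\bigl|\gamma_r\,P\,J\,M\bigr\rangle ,
\]
where the \(|\gamma_r PJM\rangle\) are the configuration state functions with fixed parity P and total angular momentum (J, M), and the mixing coefficients \(c_r(\alpha)\), together with the radial orbitals, are determined by varying the expectation value of the no-pair Dirac-Coulomb Hamiltonian.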
Abstract:
We show that optimizing a quantum gate for an open quantum system requires the time evolution of only three states irrespective of the dimension of Hilbert space. This represents a significant reduction in computational resources compared to the complete basis of Liouville space that is commonly believed necessary for this task. The reduction is based on two observations: the target is not a general dynamical map but a unitary operation; and the time evolution of two properly chosen states is sufficient to distinguish any two unitaries. We illustrate gate optimization employing a reduced set of states for a controlled phasegate with trapped atoms as qubit carriers and an iSWAP gate with superconducting qubits.
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as “qubits” makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence (the loss of quantum properties) due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phasegate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, as the dimension scales quadratically compared to Hilbert space. However, it can be shown that for a unitary target the optimization only requires the propagation of at most three states, instead of a full basis of Liouville space. Both for the example of trapped Rydberg atoms and for superconducting qubits, the successful optimization of quantum gates is demonstrated at a significantly lower numerical cost than was previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
Abstract:
In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and (Olshausen and Field, 1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation, which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
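As a reminder of the two optimization problems being compared (generic textbook forms in our notation; the precise assumptions under which they coincide are those stated in the paper), support vector regression with the ε-insensitive loss minimizes a regularized empirical risk, while Basis Pursuit De-Noising minimizes a residual plus an l1 penalty on the coefficients,
\[
\min_{f\in\mathcal H_K}\; C\sum_{i=1}^{\ell}\bigl|y_i-f(x_i)\bigr|_{\varepsilon} + \tfrac12\,\|f\|_K^2,
\qquad
\min_{c}\; \tfrac12\,\Bigl\|y-\sum_{j}c_j\,\varphi_j\Bigr\|_2^2 + \lambda\,\|c\|_1 .
\]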
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), together with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, rather elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information such as Bayesian updating, the combination of likelihoods, and robust M-estimation functions are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimating functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
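A minimal sketch of the structure in question, in one common formulation (not quoted from the paper): for densities f and g with respect to the reference measure P, the perturbation (the vector addition of the space) and the inner product can be written via the centered log-ratio transform,
\[
(f\oplus g)(x) = \frac{f(x)\,g(x)}{\int f\,g\,dP},
\qquad
\langle f,g\rangle_{A^2(P)} = \int \operatorname{clr}(f)\,\operatorname{clr}(g)\,dP,
\qquad
\operatorname{clr}(f) = \log f - \int \log f\,dP ,
\]
so that Bayesian updating, i.e. multiplying a prior by a likelihood and renormalizing, is exactly the addition of the space.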
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure on the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomial-based basis. To obtain the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. We therefore propose a weighted linear regression approach, where all k-order polynomials are used as predictor variables and the weights are proportional to the reference density. Finally, for the case of second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
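A minimal numerical sketch of the weighted-regression idea for the normal-reference (Hermite) case is given below. It is our own illustration: it assumes the coordinates sought are those of log(f/φ) in a probabilists' Hermite basis, with φ the standard normal density, and the basis normalization may differ from the one used by the authors.

    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from scipy.stats import norm

    def hermite_coordinates(log_ratio, grid, order):
        """Weighted least-squares coordinates of log(f/phi) on a Hermite basis.

        log_ratio : values of log(f(x)/phi(x)) on `grid`
        grid      : evaluation points
        order     : highest polynomial order used
        Weights are taken proportional to the reference (normal) density.
        """
        X = hermevander(grid, order)          # columns He_0, ..., He_order
        w = np.sqrt(norm.pdf(grid))           # sqrt-weights for least squares
        coeffs, *_ = np.linalg.lstsq(w[:, None] * X, w * log_ratio, rcond=None)
        return coeffs

    # Example: f = N(0.5, 1.2^2) expressed against the standard normal reference.
    grid = np.linspace(-6.0, 6.0, 2001)
    log_ratio = norm.logpdf(grid, loc=0.5, scale=1.2) - norm.logpdf(grid)
    print(hermite_coordinates(log_ratio, grid, order=2))

In this example log(f/φ) is exactly a quadratic, so the three coordinates are functions of the mean and variance alone, which illustrates the relationship to the classical moments mentioned above.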
Abstract:
Terrorism is nowadays considered one of the most controversial concepts in the social, academic, and political fields. The term came into use after the French Revolution, but more recently, following the attacks of September 11, 2001, it has gained great prominence and has motivated numerous investigations attempting to understand what terrorism is. Although several systematic reviews already exist, the purpose of this work is to review, group, and consolidate the different theories and concepts formulated by authors who have worked on the concept of "terrorism", in order to understand the implications of its use in discourse and how this affects the internal dynamics of societies with respect to violence, beliefs, and stereotypes, among other elements. To this end, 56 articles published between 1985 and 2013 were reviewed, along with 10 secondary sources, including news items and newspaper articles from 1995-2013, and 10 statistical studies whose results contribute to the understanding of the topic in question. The search was limited to the historical development of terrorism, its different dimensions, and the social concept of the reality of terrorism. The findings show that the word "terrorism" constitutes a concept that serves as a linguistic vehicle which can be used for strategic purposes, mobilizing the public through discourse and political interests, which highlights the need to study the psychological and social implications of its use.
Abstract:
The general objective of the project is to involve parents, in a responsible way, in the pupils' learning process, so as to achieve optimal communication with the tutor, the class teacher, and the guidance counsellor that facilitates the integral education of their sons and daughters. The project begins with general information given to the parents by the teaching staff about the work plan, the project itself, the tutorial plan, and study techniques. A short course for parents is then organized by the members of the Seminario Permanente de Orientación y Tutoría, with considerable attendance. Various instruments were used: an initial questionnaire, the C. Hilbert Wrenn study habits inventory, an attention test, the simple Otis test for pupils with difficulties, and initial tests in subjects such as Language and Mathematics. Results: the level of integration and participation of the pupils in the activities grew steadily as the course progressed. The level of integration and participation of the parents in the school was much more direct, and it can be said that, as a result of this experience, they are more involved in the educational task of their children. The degree of commitment of tutors and teachers to this project was high; their responsibility towards the tutorial plan and the demands placed on them throughout the school year were notable. The project is judged to be positive, and an extension of the experience for one more year is requested.
Abstract:
Abstract taken from the publication