7 results for functionals
at Universitätsbibliothek Kassel, Universität Kassel, Germany
Abstract:
Singularities of elastic and electric fields are investigated at the tip of a crack on the interface of two anisotropic piezo-electric media under various boundary conditions on the crack surfaces. Griffith formulae are obtained for the increments of energy functionals due to crack growth, and the notion of the energy release matrix is introduced. Normalization conditions for bases of singular solutions are proposed to adapt them to the energy, stress, and deformation fracture criteria. Connections between these bases are determined, and additional properties of the deformation basis related to the notion of electric surface enthalpy are established.
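For orientation, a hedged sketch of the kind of expansion usually meant by "Griffith formulae ... energy release matrix" is given below; the symbols h, K and M are illustrative and not taken from the abstract. The increment of the energy functional for a small crack extension is quadratic in the column K of stress and electric intensity factors, with a symmetric matrix M coupling the singular modes.

```latex
% Illustrative Griffith-type increment (not quoted from the work itself):
% h is the length of the straight crack extension,
% K the column of stress/electric intensity factors at the tip,
% M the (symmetric) energy release matrix coupling the singular modes.
\[
  \Pi(h) - \Pi(0) \;=\; -\,h\, K^{\top} M\, K \;+\; o(h), \qquad h \to 0^{+} .
\]
```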
Abstract:
Relativistic density functional theory is widely applied in molecular calculations with heavy atoms, where relativistic and correlation effects must be treated on the same footing. Variational stability of the Dirac Hamiltonian has been an important research topic since the beginning of relativistic molecular calculations, alongside efforts towards accuracy, efficiency, and density functional formulations. One- or two-component approximations and the search for suitable basis sets are the two principal means of obtaining good projection against the negative-energy continuum. In the present work, the minimax two-component spinor linear combination of atomic orbitals (LCAO) is applied to both light and super-heavy one-electron systems. It yields good approximations over the whole energy spectrum, close to the benchmark minimax finite element method (FEM) values and free of spurious and contaminated states, in contrast to the traditional four-component spinor LCAO, where such artifacts appear. Variational stability ensures that the minimax LCAO is bounded from below. New balanced basis sets, kinetic and potential defect balanced (TVDB), which follow the minimax idea, are applied with the Dirac Hamiltonian. Their performance for the same super-heavy one-electron quasi-molecules also shows very good protection against variational collapse, with the minimax LCAO taken as the reference projection. The TVDB method has twice as many basis coefficients as the four-component spinor LCAO, but it is linear and thus avoids the large computational cost of the minimax method. Calculations with both the TVDB method and the traditional LCAO method for dimers of group-11 elements are compared. Larger basis sets than in previous work are constructed, reaching high accuracy within the functionals employed. The difference in total energy between the two methods is much smaller than the basis-set incompleteness error, showing that the traditional four-spinor LCAO retains sufficient projection power from the numerical atomic orbitals and is suitable for research in relativistic quantum chemistry. In scattering calculations carried out for the same comparison, the traditional LCAO method fails to provide a stable spectrum as the basis-set size increases, whereas the TVDB method contains no spurious states even without pre-orthogonalization of the basis sets. Keeping all other conditions fixed, including the accuracy of the matrix elements, shows that variational instability dominates over linear dependence of the basis sets. The success of the TVDB method demonstrates its capability not only in relativistic quantum chemistry but also for scattering problems and under strong external electric and magnetic fields. The good total-energy accuracy with large basis sets and the good projection properties encourage wider studies of different molecules, better functionals, and small effects.
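For context, the principle behind the "minimax LCAO" is usually written as below (a standard textbook form with the two-component split into large and small parts; the notation is not taken from the abstract): the ground-state energy of the Dirac Hamiltonian is obtained by minimising over the large component after maximising over the small component, which is what makes the scheme bounded from below.

```latex
% Standard minimax characterisation of the Dirac ground state (illustrative):
% H_D is the Dirac Hamiltonian, phi_L and phi_S the large and small components.
\[
  E_0 \;=\; \min_{\varphi_L}\,\max_{\varphi_S}\;
  \frac{\langle\psi|H_D|\psi\rangle}{\langle\psi|\psi\rangle},
  \qquad
  \psi = \begin{pmatrix}\varphi_L\\ \varphi_S\end{pmatrix}.
\]
```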
Abstract:
In this thesis, nonoverlapping domain decomposition methods are on the one hand generalized with respect to the problem classes they can treat and on the other hand studied in settings not previously examined. The focus lies on functional-analytic questions of well-posedness, unique solvability, and convergence. The first part treats linear elliptic Dirichlet boundary value problems; in addition to problems with a dominant principal part, problems with a singularly perturbed principal part, such as convection- or reaction-dominated problems, are admitted. The second part deals with (uniformly) monotone, coercive, quasilinear elliptic Dirichlet boundary value problems. In both cases the Lipschitz domain is decomposed into finitely many Lipschitz subdomains; in particular, cross points and subdomains without an exterior boundary are admitted. Transmission problems with freely selectable $L^{\infty}$ parameter functions are then derived, in which the conormal derivatives are interpreted as functionals on suitable function spaces over the interface boundaries ($H_{00}^{1/2}(\Gamma)$). Solving these transmission problems iteratively with an approach due to Deng leads to a substructuring method with Robin-type transmission conditions in which, owing to a judicious update of the Robin data, no evaluation of the conormal derivatives is necessary (in particular, the well-known Robin-Robin method of Lions is contained as a special case). Convergence with respect to a partitioned $H^1$ norm is shown for both problem classes; no regularity beyond $H^1$ is required of the solutions, and the domains need not satisfy additional smoothness assumptions. The final chapter investigates nonmonotone coercive quasilinear problems, where the underlying domain is decomposed into only two Lipschitz subdomains. The associated nonlinear transmission problem is transformed via the Kirchhoff transformation into linear subproblems with nonlinear coupling conditions. An optimization-based solution approach, which minimizes a suitable distance between the back-transformed Dirichlet data of the linear subproblems on the interface boundaries, leads to an optimal control problem. The resulting regularized unconstrained minimization problems are solved with a gradient method under minimal smoothness assumptions on the nonlinearities. Under additional smoothness assumptions on the nonlinearities and further technical assumptions on the solution of the original quasilinear problem, quadratic convergence of Newton's method can moreover be guaranteed.
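As a rough illustration of the Robin-type substructuring idea (the Lions Robin-Robin special case mentioned above), the following minimal Python sketch iterates two Robin subproblems for -u'' = 1 on (0,1) with a single interface point at x = 0.5. Unlike the Deng-type update of the thesis, it approximates the conormal derivatives explicitly, and all parameter values (mesh size, Robin parameter, tolerance) are arbitrary choices.

```python
# Minimal 1-D sketch of the classical Lions Robin-Robin iteration.
# Model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0, interface at x = 0.5.
import numpy as np

n = 50                     # interior unknowns per subdomain
h = 0.5 / n                # mesh width
lam = 1.0                  # Robin parameter lambda > 0
f = 1.0                    # constant right-hand side

def solve_subdomain(g):
    """Solve -u'' = f with u = 0 at the outer end and the Robin condition
    du/dn + lam*u = g at the interface; by the symmetry of the model problem
    both subdomains reduce to this local problem.
    Returns (u at the interface, outward derivative du/dn at the interface)."""
    A = np.zeros((n, n))
    b = np.full(n, f * h**2)
    for i in range(n):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 1:
            A[i, i + 1] = -1.0
    # replace the last row by a one-sided Robin condition at the interface node
    A[n - 1, :] = 0.0
    A[n - 1, n - 1] = 1.0 / h + lam
    A[n - 1, n - 2] = -1.0 / h
    b[n - 1] = g
    u = np.linalg.solve(A, b)
    return u[-1], (u[-1] - u[-2]) / h

g1 = g2 = 0.0                              # initial Robin data on the interface
for k in range(200):
    u1, dn1 = solve_subdomain(g1)          # subdomain (0, 0.5)
    u2, dn2 = solve_subdomain(g2)          # subdomain (0.5, 1), mirrored
    # Lions-type update: each side receives the other side's Robin trace
    g1_new, g2_new = -dn2 + lam * u2, -dn1 + lam * u1
    if abs(g1_new - g1) + abs(g2_new - g2) < 1e-12:
        break
    g1, g2 = g1_new, g2_new

print(f"iterations: {k},  u(0.5) = {u1:.5f}  (exact value 0.125)")
```

At convergence the two Robin conditions together enforce continuity of both the traces and the fluxes across the interface, which is the mechanism the partitioned $H^1$ convergence results of the thesis build on.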
Abstract:
Using the functional approach, we state and prove a characterization theorem for classical orthogonal polynomials on non-uniform lattices (quadratic lattices of a discrete or a q-discrete variable), including the Askey-Wilson polynomials. This theorem proves the equivalence between seven characterization properties, namely the Pearson equation for the linear functional, the second-order divided-difference equation, the orthogonality of the derivatives, the Rodrigues formula, two types of structure relations, and the Riccati equation for the formal Stieltjes function.
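For orientation, in the classical (continuous) case the Pearson equation for the linear functional takes the familiar form below; on a non-uniform lattice $x(s)$ the derivative is replaced by the divided-difference operator, which is the setting of the theorem. The display is a standard textbook statement, not quoted from the paper.

```latex
% Pearson equation for a linear functional u in the continuous case:
% phi of degree at most 2, psi of degree exactly 1, and the distributional
% derivative defined by <D u, p> = -<u, p'> for every polynomial p.
\[
  \mathcal{D}(\phi\,\mathbf{u}) = \psi\,\mathbf{u}
  \quad\Longleftrightarrow\quad
  \langle \mathbf{u},\, \phi\,p' + \psi\,p \rangle = 0
  \quad \text{for all polynomials } p .
\]
```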
Abstract:
Within the framework of density functional theory, orbital functionals such as B3LYP have been developed. These can be evaluated self-consistently with the "optimized effective potential" (OEP) method. Whereas the OEP could previously only be computed exactly in the 1D case, Kümmel and Perdew developed a method in which the OEP problem is solved self-consistently using a differential equation. In this work, a finite-element multigrid method is used to solve the resulting equations and thereby compute energies, densities, and ionization energies for atoms and diatomic molecules. Exact exchange is used as the orbital functional; the program can, however, easily be extended to any other functional. For the Be atom, 8th-order FEM yielded total energies about two orders of magnitude more accurate than the finite-difference code of Makmal et al. For the eigenvalues and properties of the atoms N and Ne, the accuracy of other numerical methods was reached. As expected, the computation time grew linearly with the number of grid points. Despite rather slow SCF convergence, the accuracies obtained for the LiH molecule matched those of the finite-difference approach and, for HF, were 2-3 orders of magnitude better than those of basis-set methods. This shows that benchmark calculations can be carried out in this way. Owing to the fast convergence with respect to the number of grid points and the modest computational effort, the approach should also be extendable to heavier systems.
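The equations referred to are, in the Kümmel-Perdew formulation, commonly written as below; the notation (Kohn-Sham orbitals psi_i, orbital shifts phi_i, orbital-specific potentials u_{x,i}, bars for orbital expectation values) follows the literature and may differ from the thesis.

```latex
% Kuemmel-Perdew OEP scheme (standard literature form, exact-exchange case):
% each orbital shift phi_i solves an inhomogeneous KS-like equation, and the
% exchange potential v_x is updated until the OEP condition below is fulfilled.
\begin{align*}
  \bigl(\hat h_{\mathrm{KS}} - \varepsilon_i\bigr)\,\varphi_i(\mathbf r)
    &= -\bigl[v_x(\mathbf r) - u_{x,i}(\mathbf r)
        - (\bar v_{x,i} - \bar u_{x,i})\bigr]\,\psi_i(\mathbf r), \\
  \sum_{i=1}^{N} \varphi_i(\mathbf r)\,\psi_i^{*}(\mathbf r) + \mathrm{c.c.} &= 0 .
\end{align*}
```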
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as "qubits" makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence – the loss of quantum properties – due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phase gate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing the optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, as the dimension of Liouville space scales quadratically with that of Hilbert space. However, it can be shown that for a unitary target the optimization only requires the propagation of at most three states, instead of a full basis of Liouville space. Both for the example of trapped Rydberg atoms and for superconducting qubits, the successful optimization of quantum gates is demonstrated at a numerical cost significantly lower than was previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
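To make the ensemble idea concrete, here is a deliberately small Python sketch: a gate error is averaged over sampled fluctuations of a detuning parameter, and the control is chosen to minimise that average rather than the error at the nominal parameter value alone. The single-qubit model, the parameter names (omega, delta) and the crude parameter scan are stand-ins for the two-qubit Hamiltonians and the gradient-based optimal control used in the thesis.

```python
# Toy sketch of "ensemble optimization": average a gate error over sampled
# fluctuations of a system parameter so that the optimized control is robust.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U_target = sx                                   # target: X gate

def propagator(omega, delta, T=1.0):
    """U = exp(-i (delta/2 * sz + omega/2 * sx) T) via eigendecomposition."""
    H = 0.5 * delta * sz + 0.5 * omega * sx
    evals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * evals * T)) @ vecs.conj().T

def gate_error(omega, delta):
    """Phase-insensitive gate error 1 - |tr(U_target^dag U)|^2 / d^2."""
    U = propagator(omega, delta)
    return 1.0 - abs(np.trace(U_target.conj().T @ U))**2 / 4.0

# ensemble of detuning fluctuations around the nominal value delta = 0
deltas = np.linspace(-0.3, 0.3, 11)

def ensemble_error(omega):
    return np.mean([gate_error(omega, d) for d in deltas])

# crude scan over the single control parameter (a stand-in for a full OCT run)
omegas = np.linspace(2.5, 3.5, 201)
best_nominal = omegas[np.argmin([gate_error(w, 0.0) for w in omegas])]
best_ensemble = omegas[np.argmin([ensemble_error(w) for w in omegas])]
print("nominal-only optimum:", best_nominal,
      " ensemble error:", ensemble_error(best_nominal))
print("ensemble optimum    :", best_ensemble,
      " ensemble error:", ensemble_error(best_ensemble))
```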
Abstract:
Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal for the reliable and accurate control of complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that determines the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states to a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits on certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices whose basic information carrier is the qubit; it also extends to systems in which the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering such as feedback and optimisation to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow novel optimisation functionals to be deduced that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time needed to reach a given fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control – the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
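The certification idea can be illustrated with a short Python sketch: the action of a (toy) noisy channel is compared with the ideal unitary action on a handful of probe states via state fidelities. The specific probe states, the depolarizing noise model and the Hadamard target below are arbitrary illustrative choices; the thesis's theorems on which minimal state sets actually suffice are not reproduced here.

```python
# Toy illustration of certifying a gate from a reduced set of input states:
# compare the channel output E(rho) with the ideal U rho U^dagger for a few
# probe states and report the state fidelities.
import numpy as np

d = 2
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard target

def channel(rho, p=0.02):
    """Target unitary followed by weak depolarizing noise (toy open-system model)."""
    out = U @ rho @ U.conj().T
    return (1 - p) * out + p * np.eye(d) / d

def state_fidelity(rho, sigma):
    """F(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2 via eigendecompositions."""
    evals, vecs = np.linalg.eigh(rho)
    sqrt_rho = vecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ vecs.conj().T
    inner = sqrt_rho @ sigma @ sqrt_rho
    ev = np.clip(np.linalg.eigvalsh(inner), 0, None)
    return float(np.sum(np.sqrt(ev)))**2

# a small set of probe states: two pure states and one mixed state
ket0 = np.array([[1], [0]], dtype=complex)
ketp = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
probes = [ket0 @ ket0.conj().T,
          ketp @ ketp.conj().T,
          np.diag([0.7, 0.3]).astype(complex)]

for k, rho in enumerate(probes):
    F = state_fidelity(channel(rho), U @ rho @ U.conj().T)
    print(f"probe {k}: fidelity to ideal gate action = {F:.4f}")
```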