956 results for weak approximation
Abstract:
The accurate description of ground and electronic excited states is an important and challenging topic in quantum chemistry. The pairing matrix fluctuation, as a counterpart of the density fluctuation, is applied to this topic. From the pairing matrix fluctuation, the exact electron correlation energy as well as two-electron addition/removal energies can be extracted. Therefore, both ground-state and excited-state energies can be obtained, and they are in principle exact given complete knowledge of the pairing matrix fluctuation. In practice, since the exact pairing matrix fluctuation is unknown, we adopt a simple approximation to it, the particle-particle random phase approximation (pp-RPA), for ground- and excited-state calculations. Algorithms for accelerating the pp-RPA calculation, including spin separation, spin adaptation, and an iterative Davidson method, are developed. For ground-state correlation, the results obtained from the pp-RPA are usually comparable to, and can be more accurate than, those from the traditional particle-hole random phase approximation (ph-RPA). For excited states, the pp-RPA is able to describe double, Rydberg, and charge-transfer excitations, which are challenging for conventional time-dependent density functional theory (TDDFT). Although the pp-RPA intrinsically cannot describe excitations from orbitals below the highest occupied molecular orbital (HOMO), its performance on the single excitations it can capture is comparable to TDDFT. The pp-RPA for excitation calculations is further applied to challenging diradical problems and is used to unveil the nature of the ground and electronic excited states of the higher acenes. The pp-RPA and the corresponding Tamm-Dancoff approximation (pp-TDA) are also applied to conical intersections, an important concept in nonadiabatic dynamics. Their good description of the double-cone feature of conical intersections is in sharp contrast to the failure of TDDFT. All in all, the pairing matrix fluctuation opens up a new channel of thinking for quantum chemistry, and the pp-RPA is a promising method for describing ground and electronic excited states.
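As a point of reference, the working equation behind this approach can be sketched as follows (the standard pp-RPA form, with notation assumed here rather than taken from the abstract): the pairing amplitudes satisfy a generalized eigenvalue problem in the two-electron addition (particle-particle) and removal (hole-hole) channels,

\[
\begin{pmatrix} A & B \\ B^{\dagger} & C \end{pmatrix}
\begin{pmatrix} X \\ Y \end{pmatrix}
= \omega
\begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}
\begin{pmatrix} X \\ Y \end{pmatrix},
\]

where the block A couples particle-particle pairs, C couples hole-hole pairs, and B couples the two channels. The eigenvalues \(\omega\) are two-electron addition and removal energies; in the usual application to excited states, differences between two-electron addition energies computed on an (N-2)-electron reference give the neutral excitation energies of the N-electron system.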
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Using a different approach from that of Popa, we arrive at an alternative definition of the positive approximation property for order-complete Banach lattices. Some results associated with this new approach may be of independent interest. We also prove a Banach lattice analogue of an old characterization, due to Palmer, of the metric approximation property in terms of the continuous bidual of the ideal of approximable operators.
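For orientation, the classical notions that the positive and metric approximation properties refine can be recalled in one line (standard definitions, not taken from this abstract): a Banach space \(X\) has the approximation property if for every compact set \(K \subseteq X\) and every \(\varepsilon > 0\) there is a finite-rank operator \(T : X \to X\) with

\[
\|Tx - x\| < \varepsilon \quad \text{for all } x \in K.
\]

The metric approximation property additionally requires \(\|T\| \le 1\), and, roughly speaking, positive variants for Banach lattices ask that the approximating finite-rank operators can be chosen positive.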
Abstract:
Despite the growing number of sensors in the fields of chemistry and biology, the complexity of the interactions between the different molecules present during detection at the solid-liquid interface remains to be studied in depth. In this context, it is of great interest to cross-compare different detection methods in order to obtain complementary information. The main objective of this study is to design, fabricate and characterize a glass-integrated optical detector based on surface plasmon resonance, ultimately intended to be combined with other detection techniques, including a microcalorimeter. Surface plasmon resonance is a technique recognized for a sensitivity well suited to surface detection; it has the advantage of being label-free and provides real-time monitoring of reaction kinetics. The main advantage of this sensor is that it has been designed for a wide range of analyte refractive indices, from 1.33 to 1.48. These values cover most biological entities together with their attachment layers, including the polymer matrices presented in this work. Since many biological studies require comparing a measurement with a reference or with another measurement, the second objective of the project is to study the potential of the glass-integrated SPR system for multi-analyte detection. The first three chapters focus on the main objective of the project. The design of the device is presented, based on two different models combined with several analytical and numerical calculation tools. The first model, based on the weak-interaction approximation, provides most of the information needed to design the device. The second model, which makes no approximation, validates the first, approximate model and completes and refines the design. The fabrication process of the optical chip on glass is then described, together with the characterization instruments and protocols. A device is obtained with bulk sensitivities between 1000 nm/RIU and 6000 nm/RIU depending on the refractive index of the analyte. The 3D integration of the waveguide, achieved by selectively burying it in the glass, makes the device very compact and therefore well suited to co-integration with a microcalorimeter in particular. The last chapter of the thesis presents a study of several spectral multiplexing techniques suited to an integrated SPR system, exploiting the glass technology in particular. The objective is to provide at least two simultaneous detections. In this context, several solutions are proposed and the corresponding devices are designed, fabricated and tested.
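As context for the quoted figures of merit (standard SPR relations, with the symbols introduced here as assumptions): the sensor operates near the phase-matching condition between the guided mode and the surface plasmon supported by the metal/analyte interface, and the bulk sensitivity is defined from the shift of the resonance wavelength,

\[
n_{\mathrm{eff}}^{\mathrm{guide}}(\lambda_{\mathrm{res}})
= \mathrm{Re}\!\left[\sqrt{\frac{\varepsilon_m(\lambda_{\mathrm{res}})\, n_a^{2}}{\varepsilon_m(\lambda_{\mathrm{res}}) + n_a^{2}}}\,\right],
\qquad
S_b = \frac{\Delta \lambda_{\mathrm{res}}}{\Delta n_a}\ \ [\mathrm{nm/RIU}],
\]

where \(\varepsilon_m\) is the metal permittivity and \(n_a\) the analyte refractive index; the bulk sensitivities of 1000 to 6000 nm/RIU reported above correspond to \(n_a\) between 1.33 and 1.48.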
Abstract:
The main purpose of this work was to study discrete population-dynamics models in which the growth of the population is described by generalized von Bertalanffy functions with an adjustment or correction factor of polynomial type. This correction factor is introduced in order to model the Allee effect. Within the class of generalized von Bertalanffy functions, we identify and characterize subclasses of functions with a strong Allee effect, with a weak Allee effect, and with no Allee effect. This classification is founded on the behavior of the associated per-capita population growth rates. A complete description of the dynamic behavior is given, and we provide necessary conditions for the occurrence of unconditional and essential extinction. The bifurcation structure of the parameter plane is analyzed with respect to the evolution of the Allee limit, with the aim of understanding how the transition from a strong Allee effect to no Allee effect, passing through a weak Allee effect, takes place. For generalized von Bertalanffy functions with strong and weak Allee effects, we identify an Allee effect region, with which we associate the concepts of chaotic semistability curve and Allee bifurcation point. We verify that, under certain sufficient conditions, generalized von Bertalanffy functions have a particular bifurcation structure: big bang bifurcations of the so-called box-within-a-box type. For this family of maps, the Allee bifurcation points and the big bang bifurcation points are characterized by the symmetric of the Allee limit and by a null intrinsic growth rate. The present paper is also a significant contribution to the framework of big bang bifurcation analysis for continuous one-dimensional maps and unveils their relationship with explosion births and with extinction phenomena.
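The distinction between strong and weak Allee effects invoked above can be made precise through the per-capita growth rate. As a minimal illustration (standard definitions; the multiplicative form of the map is an assumption made here for concreteness), write the model as \(x_{n+1} = F(x_n) = f(x_n)\,p(x_n)\), with \(f\) a generalized von Bertalanffy growth function and \(p\) the polynomial correction factor. Then

\[
\text{strong Allee effect:}\quad \frac{F(x)}{x} < 1 \ \text{ for } 0 < x < A,
\qquad
\text{weak Allee effect:}\quad \frac{F(x)}{x} \ \text{reduced at low density but still } \ge 1,
\]

so that under a strong effect populations starting below the Allee limit \(A\) decline to extinction, while under a weak effect low-density growth is depressed without forcing extinction.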
Abstract:
The industrial production of aluminium is an electrolysis process in which two superposed horizontal liquid layers are subjected to a mainly vertical electric current supplied by carbon electrodes. The lower layer consists of molten aluminium and lies on the cathode. The upper layer is the electrolyte and is covered by the anode. The interface between the two layers is often perturbed, leading to oscillations, or waves, similar to the waves on the surface of seas or lakes. The electric currents and the resulting magnetic field give rise to electromagnetic (Lorentz) forces within the fluid, which can amplify these oscillations and have an adverse influence on the process. The vertical-to-horizontal aspect ratio of the electrolytic bath is such that it is advantageous to use the shallow water equations to model the interface motion. These are obtained by depth-averaging the Navier-Stokes equations so that nonlinear and dispersion terms may be taken into account. Although these terms are essential to the prediction of wave dynamics, they are neglected in most of the literature on interface instabilities in aluminium reduction cells, where usually only the linear theory is considered. The unknown variables are the two horizontal components of the fluid velocity, the height of the interface, and the electric potential. In this application, a finite volume solution of the double-layer shallow water equations including the electromagnetic sources has been developed, for incorporation into a generic three-dimensional computational fluid dynamics code that also deals with heat transfer within the cell.
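For orientation, a schematic single-layer form of the depth-averaged (shallow water) equations with an electromagnetic source term is (notation assumed here; the model used in this work is the coupled double-layer version with the electric potential as an additional unknown):

\[
\frac{\partial h}{\partial t} + \nabla\!\cdot(h\,\mathbf{u}) = 0,
\qquad
\frac{\partial (h\,\mathbf{u})}{\partial t} + \nabla\!\cdot(h\,\mathbf{u}\otimes\mathbf{u})
= -\,g\,h\,\nabla \eta + \frac{h}{\rho}\,\mathbf{f}_L,
\qquad
\mathbf{f}_L = \mathbf{J}\times\mathbf{B},
\]

where \(h\) is the layer depth, \(\mathbf{u}\) the depth-averaged horizontal velocity, \(\eta\) the interface elevation, \(\mathbf{J}\) the current density, and \(\mathbf{B}\) the magnetic field; the nonlinear advection term (and, in extended models, dispersion) is precisely what the linearized stability analyses mentioned above neglect.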
Abstract:
We develop a framework for proving approximation limits of polynomial-size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP, as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^(1/2-ε))-approximations for CLIQUE require LPs of size 2^(n^Ω(ε)). This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs. Our main technical ingredient is a quantitative improvement of Razborov's [38] rectangle corruption lemma for the high-error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.
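The central quantity in this framework can be stated in one line (standard definition): for a nonnegative matrix \(M \in \mathbb{R}_{\ge 0}^{m \times n}\),

\[
\operatorname{rank}_{+}(M) \;=\; \min\Bigl\{\, r \;:\; M = \sum_{i=1}^{r} u_i v_i^{\top},\ \ u_i \in \mathbb{R}_{\ge 0}^{m},\ v_i \in \mathbb{R}_{\ge 0}^{n} \Bigr\}.
\]

By Yannakakis's factorization theorem, the size of the smallest LP extended formulation of a polytope equals the nonnegative rank of its slack matrix, which is what connects rectangle-corruption lower bounds to lower bounds on LP size.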
Abstract:
The application of stochastic methods to the problem of particle and energy transport in turbulent plasmas is briefly reviewed. The "classical" Corrsin approximation is shown to be valid only in the limit of weak turbulence. The recently developed method of decorrelation trajectories is applicable over the whole range of turbulence intensities and yields the correct asymptotic behavior in the limit of very strong turbulence (subdiffusion).
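The weak/strong turbulence distinction referred to here is conventionally quantified by the Kubo number (a standard definition, not given in the abstract),

\[
K = \frac{V\,\tau_c}{\lambda_c},
\]

where \(V\) is the amplitude of the turbulent velocity, \(\tau_c\) its correlation time, and \(\lambda_c\) its correlation length; Corrsin-type closures are justified for \(K \ll 1\), while the decorrelation-trajectory method remains applicable for \(K \gtrsim 1\), where trajectory trapping produces the subdiffusive behavior mentioned above.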
Abstract:
The approximation lemma is a simplification of the well-known take lemma, and is used to prove properties of programs that produce lists of values. We show how the approximation lemma, unlike the take lemma, can naturally be generalised from lists to a large class of datatypes, and present a generic approximation lemma that is parametric in the datatype to which it applies. As a useful by-product, we find that generalising the approximation lemma in this way also simplifies its proof.
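For readers who want to see the shape of the lemma, here is a minimal Haskell sketch of the list-level statement (a standard formulation; the generic, datatype-parametric version discussed above generalizes the role of approx):

    -- The n-th approximation of a list: defined on lists only for n > 0,
    -- and deliberately left undefined (bottom) when n == 0.
    approx :: Integer -> [a] -> [a]
    approx n []       | n > 0 = []
    approx n (x : xs) | n > 0 = x : approx (n - 1) xs
    -- approx 0 _ falls through the patterns and is bottom,
    -- which is essential for the lemma to hold.

    -- Approximation lemma (stated as a property, not proved here):
    -- xs == ys  iff  approx n xs == approx n ys for every n,
    -- so equality of list-producing programs can be proved by induction on n.

Roughly, the generic version replaces lists by a datatype given as the fixed point of a functor, with approx truncating the structure at depth n.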
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
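To fix the two quantities that recur throughout this summary (standard definitions, restated here for convenience): an algorithm \(\mathcal{A}\) for a minimization problem is an \(\alpha\)-approximation if

\[
\mathrm{cost}\bigl(\mathcal{A}(I)\bigr) \;\le\; \alpha \cdot \mathrm{OPT}(I) \quad \text{for every instance } I,
\]

and the density of a subgraph \(H\) with edge costs is \(\mathrm{cost}(H)/|V(H)|\); the pruning procedure described above extracts 2-vertex-connected subgraphs of prescribed size whose density remains comparable, in this sense, to that of the input graph.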
Abstract:
We describe a one-step bio-refinery process for shrimp composite by-products. Its originality lies in a simple, rapid (6 h) biotechnological cuticle fragmentation process that recovers all the major compounds (chitins, peptides, and minerals, in particular calcium). The process consists of a controlled exogenous enzymatic proteolysis in a food-grade acidic medium, allowing chitin purification (solid phase) and recovery of peptides and minerals (liquid phase). At a pH of between 3.5 and 4, protease activity is effective and peptides are preserved. Solid-phase demineralization kinetics were followed for phosphoric, hydrochloric, acetic, formic, and citric acids, with pKa values ranging from 2.1 to 4.76. Formic acid met the initial aims of (i) a 99 % demineralization yield and (ii) a 95 % deproteinization yield at a pH close to 3.5 and a molar ratio of 1.5. The proposed one-step process is thus shown to be efficient. To lay out the elements needed for future optimization of the process, two models predicting shell demineralization kinetics were studied, one based on simplified physical considerations and a second, empirical one. The first model did not accurately describe the kinetics for times exceeding 30 minutes, whereas the empirical one performed adequately.
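As a point of reference for the demineralization step (standard acid-carbonate chemistry, not taken from the study itself), the mineral fraction of the cuticle is mainly calcium carbonate, which formic acid dissolves according to

\[
\mathrm{CaCO_3} + 2\,\mathrm{HCOOH} \;\longrightarrow\; \mathrm{Ca(HCOO)_2} + \mathrm{H_2O} + \mathrm{CO_2},
\]

so demineralization consumes acid in proportion to the carbonate content and releases soluble calcium into the liquid phase together with the peptides.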