974 results for "High-order theory"
Abstract:
In this work we present the formulas for the calculation of exact three-center electron sharing indices (3c-ESI) and introduce two new approximate expressions for correlated wave functions. The 3c-ESI uses the third-order density, the diagonal of the third-order reduced density matrix, but the approximations suggested in this work only involve natural orbitals and occupancies. In addition, the first calculations of the 3c-ESI using Valdemoro's, Nakatsuji's and Mazziotti's approximations for the third-order reduced density matrix are also presented for comparison. Our results on a test set of molecules, including 32 3c-ESI values, prove that the new approximation based on the cube root of the natural occupancies performs best, yielding absolute errors below 0.07 and an average absolute error of 0.015. Furthermore, this approximation seems to be rather insensitive to the amount of electron correlation present in the system. This newly developed methodology provides a computationally inexpensive method to calculate the 3c-ESI from correlated wave functions and opens new avenues to approximate high-order reduced density matrices in other contexts, such as the contracted Schrödinger equation and the anti-Hermitian contracted Schrödinger equation.
Abstract:
If you want to know whether a property holds in a specific algebraic structure, you need to test that property on the given structure. This can be done by hand, which can be cumbersome and error-prone. In addition, the time consumed in testing depends on the size of the structure to which the property is applied. We present an implementation of a system for finding counterexamples and testing properties of models of first-order theories. This system is intended to provide a convenient and paperless environment for researchers and students investigating or studying such models, and algebraic structures in particular. To implement a first-order theory in the system, a suitable first-order language and some axioms are required. The components of a language are given by a collection of variables, a set of predicate symbols, and a set of operation symbols. Variables and operation symbols are used to build terms. Terms, predicate symbols, and the usual logical connectives are used to build formulas. A first-order theory then consists of a language together with a set of closed formulas, i.e. formulas without free occurrences of variables. The set of formulas is also called the axioms of the theory. The system uses several different formats to allow the user to specify languages, to define axioms and theories, and to create models. Besides the obvious operations and tests on these structures, we have introduced the notion of a functor between classes of models in order to generate more complex models from given ones automatically. As an example, we will use the system to create several lattice structures starting from a model of the theory of pre-orders.
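The counterexample search this abstract describes can be illustrated with a minimal brute-force sketch over a finite model (all names here are hypothetical illustrations; the system's actual input formats and functor machinery are not shown):

```python
from itertools import product

# Hypothetical sketch: a model is a finite universe plus an interpretation of
# the operation symbols; a closed formula such as
#   forall x, y: op(x, y) = op(y, x)
# is tested by exhaustive enumeration, returning a counterexample if any.
def find_counterexample(universe, prop, arity=2):
    for args in product(universe, repeat=arity):
        if not prop(*args):
            return args          # the property fails on this assignment
    return None                  # the property holds in this model

universe = range(4)
add_mod4 = lambda x, y: (x + y) % 4   # commutative operation
sub_mod4 = lambda x, y: (x - y) % 4   # non-commutative operation

# Commutativity holds for addition mod 4 ...
assert find_counterexample(universe,
                           lambda x, y: add_mod4(x, y) == add_mod4(y, x)) is None
# ... but fails for subtraction mod 4:
cex = find_counterexample(universe,
                          lambda x, y: sub_mod4(x, y) == sub_mod4(y, x))
# cex is a pair (x, y) with x - y != y - x (mod 4)
```

As the abstract notes, the cost of such testing grows with the size of the model: this enumeration visits |universe|^arity assignments per axiom.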
Abstract:
The interaction of short intense laser pulses with atoms/molecules produces a multitude of highly nonlinear processes requiring a non-perturbative treatment. A detailed study of these highly nonlinear processes by numerically solving the time-dependent Schrödinger equation becomes a daunting task when the number of degrees of freedom is large. The coupling between the electronic and nuclear degrees of freedom further aggravates the computational problems. In the present work we show that the time-dependent Hartree (TDH) approximation, which neglects the correlation effects, gives an unreliable description of the system dynamics both in the absence and in the presence of an external field. A theoretical framework is therefore required that treats the electrons and nuclei on an equal footing and fully quantum mechanically. To address this issue we discuss two approaches, namely multicomponent density functional theory (MCDFT) and the multiconfiguration time-dependent Hartree (MCTDH) method, that go beyond the TDH approximation and describe the correlated electron-nuclear dynamics accurately. In the MCDFT framework, where the time-dependent electronic and nuclear densities are the basic variables, we discuss an algorithm to calculate the exact Kohn-Sham (KS) potentials for small model systems. By simulating the photodissociation process in a model hydrogen molecular ion, we show that the exact KS potentials contain all the many-body effects and give an insight into the system dynamics. In the MCTDH approach, the wave function is expanded as a sum of products of single-particle functions (SPFs). The MCTDH method is able to describe the electron-nuclear correlation effects, as the SPFs and the expansion coefficients evolve in time and give an accurate description of the system dynamics.
We show that the MCTDH method is suitable to study a variety of processes such as the fragmentation of molecules, high-order harmonic generation, the two-center interference effect, and the lochfrass effect. We discuss these phenomena in a model hydrogen molecular ion and a model hydrogen molecule. Inclusion of absorbing boundaries in the mean-field approximation and its consequences are discussed using the model hydrogen molecular ion. To this end, two types of calculations are considered: (i) a variational approach with a complex absorbing potential included in the full many-particle Hamiltonian and (ii) an approach in the spirit of time-dependent density functional theory (TDDFT), including complex absorbing potentials in the single-particle equations. It is elucidated that for small grids the TDDFT approach is superior to the variational approach.
Abstract:
This thesis deals with the so-called Basis Set Superposition Error (BSSE) from both a methodological and a practical point of view. The purpose of the present thesis is twofold: (a) to contribute a step ahead in the correct characterization of weakly bound complexes and (b) to shed light on the actual implications of basis set extension effects in ab initio calculations and contribute to the BSSE debate. The existing BSSE-correction procedures are deeply analyzed, compared, validated and, when necessary, improved. A new interpretation of the counterpoise (CP) method is used in order to define counterpoise-corrected descriptions of the molecular complexes. This novel point of view allows for a study of the BSSE effects not only on the interaction energy but also on the potential energy surface and, in general, on any property derived from the molecular energy and its derivatives. A program has been developed for the calculation of CP-corrected geometry optimizations and vibrational frequencies, also using several counterpoise schemes for the case of molecular clusters. The method has also been implemented in the Gaussian98 revA10 package. The Chemical Hamiltonian Approach (CHA) methodology has also been implemented at the RHF and UHF levels of theory for an arbitrary number of interacting systems, using an algorithm based on block-diagonal matrices. Along with the methodological development, the effects of the BSSE on the properties of molecular complexes have been discussed in detail. The CP and CHA methodologies are used for the determination of BSSE-corrected molecular complex properties related to the potential energy surfaces and the molecular wavefunction, respectively. First, the behaviour of both BSSE-correction schemes is systematically compared at different levels of theory and basis sets for a number of hydrogen-bonded complexes.
The Complete Basis Set (CBS) limit of both uncorrected and CP-corrected molecular properties, such as stabilization energies and intermolecular distances, has also been determined, showing the capital importance of the BSSE correction. Several controversial topics of the BSSE correction are addressed as well. The counterpoise method is applied to internal rotational barriers, and the importance of the nuclear relaxation term is pointed out. The viability of the CP method for dealing with charged complexes and the BSSE effects on the double-well PES of blue-shifted hydrogen bonds are also studied in detail. In the case of molecular clusters, the magnitude of the high-order BSSE effects introduced with the hierarchical counterpoise scheme is also determined. The effect of the BSSE on electron density-related properties is also addressed. The first-order electron density obtained with the CHA/F and CHA/DFT methodologies was used to assess, both graphically and numerically, the redistribution of the charge density upon BSSE correction. Several tools, such as the Atoms in Molecules topological analysis, density difference maps, Quantum Molecular Similarity, and Chemical Energy Component Analysis, were used to analyze in depth, for the first time, the BSSE effects on the electron density of several hydrogen-bonded complexes of increasing size. The indirect effect of the BSSE on intermolecular perturbation theory results is also pointed out. It is shown that for a BSSE-free SAPT study of hydrogen fluoride clusters, the use of a counterpoise-corrected PES is essential in order to determine the proper molecular geometry at which to perform the SAPT analysis.
Abstract:
Jean-François Lyotard's 1973 essay ‘Acinema’ is explicitly concerned with the cinematic medium, but has received scant critical attention. Lyotard's acinema conceives of an experimental, excessive form of film-making that uses stillness and movement to shift away from the orderly process of meaning-making within mainstream cinema. What motivates this present paper is a striking link between Lyotard's writing and contemporary Hollywood production; both are concerned with a sense of excess, especially within moments of motion. Using Charlie's Angels (McG, 2000) as a case study – a film that has been critically dismissed as ‘eye candy for the blind’ – my methodology brings together two different discourses, high culture theory and mainstream film-making, to test out and propose the value of Lyotard's ideas for the study of contemporary film. Combining close textual analysis and engagement with key scholarship on film spectacle, I reflexively engage with the process of film analysis and re-direct attention to a neglected essay by a major theorist, in order to stimulate further engagement with his work.
Abstract:
An accident with the third prototype of the Brazilian Satellite Launching Vehicle (SLV-1 V03) in August 2003 at the Alcântara Base, in the State of Maranhão, dramatically exposed accumulated deficiencies affecting the Brazilian space sector. A report on this accident published by the Ministry of Defense recognized the relevance of the organizational dimension for the success of Brazilian space policy. In this case study, the author analyses the sector's organizational structure, the National Space Activities Development System (NSADS), to evaluate its adequacy to the requisites of policy development. The Theory of Structural Contingency (TSC) provided the analytical framework adopted in the research, complemented by two organizational approaches that focus on high-risk systems: Normal Accident Theory (NAT) and High Reliability Theory (HRT). The latter two approaches supported the analysis of the NSADS organizations which are, according to Charles Perrow's definition, directly involved in developing high-risk technological systems, and of their relationship with the System. The case study was supplemented with a brief comparison between the NSADS and the organizational structures of the North American and French civilian space agencies, respectively NASA and CNES, in order to support the organizational modeling of the Brazilian System.
Abstract:
Studies investigating the reasons for cash accumulation and the existence of an active cash-management policy have gained prominence in the international academic literature in recent years. Research on the determinants of cash levels and their implications is still at an early stage. In addition, studies around the world have found high levels of cash accumulated by firms. According to the main lines of empirical investigation, three theoretical currents can explain cash levels in terms of the so-called determinant variables: tradeoff theory, pecking order theory, and agency theory. This study aims to empirically investigate the determinants of cash holdings and to identify whether or not an active cash-management policy exists among Brazilian firms. A sample of 198 companies listed on BOVESPA was examined over the period from 1998 to 2008, totaling 2178 observations. The research used econometric linear regression and panel data models. From the empirical evidence found, we can conclude that managing the level of accumulated cash is an important decision for Brazilian firms. The theories that explain the determinants of cash levels can be applied as complements rather than as rivals.
Abstract:
Paper presented at the Congresso Nacional de Matemática Aplicada à Indústria, November 18-21, 2014, Caldas Novas, Goiás, Brazil
Abstract:
The scheme is based on Ami Harten's ideas (Harten, 1994), the main tools coming from wavelet theory, in the framework of multiresolution analysis for cell averages. But instead of evolving cell averages on the finest uniform level, we propose to evolve just the cell averages on the grid determined by the significant wavelet coefficients. Typically, there are few cells in each time step: big cells in smooth regions, and smaller ones close to irregularities of the solution. For the numerical flux, we use a simple uniform central finite difference scheme, adapted to the size of each cell. If any of the required neighboring cell averages is not present, it is interpolated from coarser scales. But we switch to an ENO scheme in the finest part of the grids. To show the feasibility and efficiency of the method, it is applied to a system arising in polymer flooding of an oil reservoir. In terms of CPU time and memory requirements, it outperforms Harten's multiresolution algorithm.

The proposed method applies to systems of conservation laws in 1D,

$$\partial_t u(x,t) + \partial_x f(u(x,t)) = 0, \qquad u(x,t) \in \mathbb{R}^m. \qquad (1)$$

In the spirit of finite volume methods, we shall consider the explicit scheme

$$v_\mu^{n+1} = v_\mu^n - \frac{\Delta t}{h_\mu}\left(\bar{f}_\mu - \bar{f}_{\mu^-}\right) = [D v^n]_\mu, \qquad (2)$$

where $\mu$ is a point of an irregular grid $\Gamma$, $\mu^-$ is the left neighbor of $\mu$ in $\Gamma$, $v_\mu^n \approx \frac{1}{\mu - \mu^-}\int_{\mu^-}^{\mu} u(x, t_n)\,dx$ are approximate cell averages of the solution, $\bar{f}_\mu = \bar{f}_\mu(v^n)$ are the numerical fluxes, and $D$ is the numerical evolution operator of the scheme.

Depending on the definition of $\bar{f}_\mu$, several schemes of this type have been proposed and successfully applied (LeVeque, 1990); Godunov, Lax-Wendroff, and ENO are some of the popular names. The Godunov scheme resolves shocks well, but its first-order accuracy is poor in smooth regions.
Lax-Wendroff is of second order, but produces dangerous oscillations close to shocks. ENO schemes are good alternatives, with high order and without serious oscillations, but the price is a high computational cost. Ami Harten proposed in (Harten, 1994) a simple strategy to save expensive ENO flux calculations. The basic tools come from multiresolution analysis for cell averages on uniform grids, and the principle is that wavelet coefficients can be used for the characterization of local smoothness. Typically, only a few wavelet coefficients are significant. At the finest level, they indicate discontinuity points, where ENO numerical fluxes are computed exactly. Elsewhere, cheaper fluxes can be safely used, or just interpolated from coarser scales. Different applications of this principle have been explored by several authors; see for example (G-Muller and Muller, 1998). Our scheme also uses Ami Harten's ideas. But instead of evolving the cell averages on the finest uniform level, we propose to evolve the cell averages on sparse grids associated with the significant wavelet coefficients. This means that the total number of cells is small, with big cells in smooth regions and smaller ones close to irregularities. This task requires improved new tools, which are described next.
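On a uniform grid with a linear flux, the explicit finite-volume update described above reduces to the familiar upwind scheme; the following sketch is an illustration under those simplifying assumptions, not the adaptive multiresolution algorithm itself:

```python
import numpy as np

# Sketch under simplifying assumptions (uniform grid, periodic boundary,
# linear advective flux f(u) = a*u with a > 0, upwind numerical flux) of the
# explicit finite-volume update  v_i^{n+1} = v_i^n - (dt/h)*(F_i - F_{i-1}).
# The abstract's scheme instead adapts the grid via wavelet coefficients.
a, h, dt = 1.0, 0.02, 0.01               # CFL number a*dt/h = 0.5
x = np.arange(0.0, 1.0, h)
v = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square pulse
v0 = v.copy()

def step(v):
    F = a * v                            # upwind flux at the right cell edges
    return v - dt / h * (F - np.roll(F, 1))

for _ in range(20):                      # advance to t = 0.2
    v = step(v)
# the update is conservative (total mass unchanged) and, at CFL <= 1,
# monotone: no new extrema are created
```

With CFL = 0.5 each update is the convex combination 0.5*v_i + 0.5*v_{i-1}, which is what makes the scheme monotone but only first-order accurate, motivating the ENO fluxes near discontinuities.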
Abstract:
In this work we have elaborated a spline-based method for the solution of initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We have implemented an efficient algorithm which uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined through a calculation of the least stable mode using a modified power method. Several variants of the method have been compared by simulation. For general linear problems with a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
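Locating the least stable mode, as mentioned above, is an eigenvalue computation built from matrix-vector products. The thesis uses a modified power method; the following is only the textbook power iteration, shown as a generic sketch:

```python
import numpy as np

# Generic power iteration sketch: repeated matrix-vector products converge to
# the dominant eigenpair. (The thesis uses a *modified* power method to find
# the least stable mode; this is just the standard version for illustration.)
def power_method(A, iters=200):
    v = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        lam = np.linalg.norm(w)          # estimate of the eigenvalue magnitude
        v = w / lam                      # renormalize the iterate
    return lam, v

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
lam, v = power_method(A)                 # dominant eigenvalue of A is 2
```

Each iteration is one matrix-vector product, consistent with the abstract's point that the algorithm uses exclusively matrix-vector operations.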
Abstract:
We present a numerical solution for the steady 2D Navier-Stokes equations using a fourth-order compact-type method. The geometry of the problem is a constricted symmetric channel, where the boundary can be varied, via a parameter, from a smooth constriction to one possessing a very sharp but smooth corner, allowing us to analyse the behaviour of the errors when the solution is smooth or near-singular. The set of non-linear equations is solved by the Newton method. Results have been obtained for Reynolds numbers up to 500. Estimates of the errors incurred show that the results are accurate and better than those of the corresponding second-order method. (C) 2002 Elsevier B.V. All rights reserved.
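The Newton step for a discretized nonlinear system F(u) = 0 can be sketched generically; this is a toy 2x2 system for illustration, not the paper's Navier-Stokes discretization:

```python
import numpy as np

# Generic Newton iteration sketch for F(u) = 0 with Jacobian J:
#   solve J(u) du = F(u), then update u <- u - du.
# (Toy system only; the paper applies this idea to the discretized
# steady 2D Navier-Stokes equations.)
def newton(F, J, u, tol=1e-12, maxit=50):
    for _ in range(maxit):
        du = np.linalg.solve(J(u), F(u))
        u = u - du
        if np.linalg.norm(du) < tol:
            break
    return u

# toy system: x^2 + y^2 = 2 and x = y, with root (1, 1)
F = lambda u: np.array([u[0]**2 + u[1]**2 - 2.0, u[0] - u[1]])
J = lambda u: np.array([[2.0 * u[0], 2.0 * u[1]],
                        [1.0, -1.0]])
root = newton(F, J, np.array([2.0, 0.5]))    # converges quadratically to (1, 1)
```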
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We propose an approach to integrate theory, simulations, and experiments in protein-folding kinetics. This is realized by measuring the mean and high-order moments of the first-passage time and its associated distribution. The full kinetics is revealed in the current theoretical framework through these measurements. In experiments, information about the statistical properties of first-passage times can be obtained from the kinetic folding trajectories of single-molecule experiments (for example, fluorescence). Theoretical/simulation and experimental approaches can thus be directly related. We study in particular the temperature-varying kinetics to probe the underlying structure of the folding energy landscape. At high temperatures, exponential kinetics is observed; there are multiple parallel kinetic paths leading to the native state. At intermediate temperatures, nonexponential kinetics appears, revealing the nature of the distribution of local traps on the landscape and, as a result, discrete kinetic paths emerge. At very low temperatures, exponential kinetics is again observed; the dynamics on the underlying landscape is dominated by a single barrier. The ratio between first-passage-time moments is proposed as a good variable to quantitatively probe these kinetic changes. The temperature-dependent kinetics is consistent with the strange kinetics found in folding dynamics experiments. The potential applications of the current results to single-molecule protein folding are discussed.
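The moment-ratio probe described above can be illustrated with simulated first-passage times (a synthetic example, not the paper's model or data): for single-exponential kinetics the ratio <t^2>/<t>^2 equals 2, and deviations from 2 flag non-exponential kinetics.

```python
import numpy as np

# Synthetic illustration: draw first-passage times from an exponential
# distribution (pure single-barrier kinetics). The second-moment ratio
# <t^2>/<t>^2 is then 2; trap-dominated, non-exponential kinetics would
# push this ratio away from 2.
rng = np.random.default_rng(0)
t = rng.exponential(scale=1.0, size=200_000)   # simulated folding FPTs
ratio = np.mean(t**2) / np.mean(t)**2
# ratio is close to 2 for exponential kinetics
```

In a single-molecule experiment the same estimator would be applied to the measured dwell times of individual folding trajectories.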
Abstract:
The photonic modes of Thue-Morse and Fibonacci lattices with generating layers A and B, of positive and negative indices of refraction, are calculated by the transfer-matrix technique. For Thue-Morse lattices, as well as for periodic lattices with an AB unit cell, the constructive interference of reflected waves, corresponding to the zeroth-order gap, takes place when the optical paths in the single layers A and B are commensurate. In contrast, for Fibonacci lattices of high order, the same phenomenon occurs when the ratio of those optical paths is close to the golden ratio. In the long-wavelength limit, analytical expressions defining the edge frequencies of the zeroth-order gap are obtained for both quasi-periodic lattices. Furthermore, analytical expressions that define the gap edges around the zeroth-order gap are shown to correspond to the
Abstract:
The author proposes an approach to string theory in which the zero-order theory is the null string. An explicit form of the propagator for the null string in momentum space is found. Treating the tension as a perturbative parameter, the author shows that the perturbative series is completely summable and obtains the propagator of the bosonic open string with tension T.