964 results for Projection approximation
Abstract:
A hierarchical matrix is an efficient data-sparse representation of a matrix, especially useful for high-dimensional problems. It consists of low-rank subblocks, leading to low memory requirements as well as inexpensive computational costs. In this work, we discuss the use of the hierarchical matrix technique in the numerical solution of a large-scale eigenvalue problem arising from a finite rank discretization of an integral operator. The operator is of convolution type; it is defined through the first exponential-integral function and is therefore weakly singular. We develop analytical expressions for the approximate degenerate kernels and deduce error upper bounds for these approximations. Some computational results illustrating the efficiency and robustness of the approach are presented.
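For illustration, the minimal sketch below (assuming a kernel of the form k(x, y) = E1(|x - y|), which is only a stand-in for the operator described above) shows the low numerical rank that well-separated kernel blocks exhibit, the property hierarchical matrices exploit; the paper itself uses analytical degenerate-kernel expansions rather than an SVD.

```python
import numpy as np
from scipy.special import exp1  # first exponential-integral function E1

# Two well-separated ("admissible") panels: the kernel is smooth on this block.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 200)
K = exp1(np.abs(x[:, None] - y[None, :]))   # kernel block E1(|x - y|)

# A truncated SVD reveals the numerical rank at a relative tolerance.
U, s, Vt = np.linalg.svd(K)
tol = 1e-8
rank = int(np.sum(s > tol * s[0]))
Kr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r approximation of the block

print(f"block {K.shape}, numerical rank {rank}")
print("relative error:", np.linalg.norm(K - Kr) / np.linalg.norm(K))
# Storing the factors costs O(rank * (m + n)) instead of O(m * n) for the dense block.
```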
Abstract:
We report vibrational excitation (v(i) = 0 -> v(f) = 1) cross-sections for positron scattering by H(2) and model calculations for the (v(i) = 0 -> v(f) = 1) excitation of the C-C symmetric stretch mode of C(2)H(2). The Feshbach projection operator formalism was employed to vibrationally resolve the fixed-nuclei phase shifts obtained with the Schwinger multichannel method. The near-threshold behaviors of H(2) and C(2)H(2) differ significantly, in the sense that no low-lying singularity (either virtual or bound state) was found for the former, while an e(+)-acetylene virtual state was found at the equilibrium geometry (this virtual state becomes a bound state upon stretching the molecule). For C(2)H(2), we also performed model calculations comparing excitation cross-sections arising from virtual (-i kappa(0)) and bound (+i kappa(0)) states symmetrically located around the origin of the complex momentum plane (i.e. having the same kappa(0)). The virtual state is seen to couple significantly to vibrations, and similar cross-sections were obtained for shallow bound and virtual states.
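The similarity of the cross-sections for shallow bound and virtual states can be made plausible with the standard zero-range (scattering-length) estimate, in which poles at k = +i kappa(0) and k = -i kappa(0) give the same leading-order s-wave elastic cross-section; this is an illustrative textbook relation, not a result taken from the paper:

```latex
f_0(k) \approx \frac{1}{\mp\kappa_0 - i k}
\qquad\Longrightarrow\qquad
\sigma_0(k) \approx 4\pi\,|f_0(k)|^2 = \frac{4\pi}{k^2 + \kappa_0^2},
```

with the upper (lower) sign corresponding to the bound (virtual) state pole at k = +i kappa(0) (k = -i kappa(0)).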
Abstract:
This paper introduces and studies the notion of CLP projection for Constraint Handling Rules (CHR). The CLP projection consists of a naive translation of CHR programs into Constraint Logic Programs (CLP). We show that the CLP projection provides a safe operational and declarative approximation for CHR programs. We demonstrate, moreover, that a confluent CHR program has a least model, which is precisely equal to the least model of its CLP projection (hence closing a ten-year-old conjecture by Abdennadher et al.). Finally, we illustrate how the notion of CLP projection can be used in practice to apply CLP analyzers to CHR. In particular, we show results from applying AProVE to prove termination, and CiaoPP to infer both complexity upper bounds and types for CHR programs.
Abstract:
Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last mu steps (memory), producing a deterministic partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by the walker with memory mu=2, as well as the joint distribution of transient time and period. This result enables us to explain the abrupt change in the exploratory behavior between the cases mu=1 (memoryless walker, driven by extreme-value statistics) and mu=2 (walker with memory, driven by combinatorial statistics). In the mu=1 case, the mean number of newly visited points in the thermodynamic limit (N >> 1) is just <n> = e = 2.72..., while in the mu=2 case the mean number <n> of visited points grows proportionally to N(1/2). This result also allows us to establish an equivalence between the random link model with mu=2 and the random map (uncorrelated back-and-forth distances) with mu=0, as well as the abrupt change between the probability of a null transient time and the probabilities of subsequent ones.
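A minimal Monte Carlo sketch of the tourist walk on the random link model follows; it is purely illustrative (the abstract's results are analytical), and the distance distribution, system size, and starting site are arbitrary choices.

```python
import numpy as np

def explored_sites(N, mu, rng):
    """Distinct sites visited by one tourist walk (mu >= 1) before it cycles."""
    # Random link model: i.i.d. symmetric distances, no geometric correlations.
    D = rng.random((N, N))
    D = np.minimum(D, D.T)
    np.fill_diagonal(D, np.inf)

    path = [0]                              # arbitrary starting site
    seen_states = set()
    while True:
        state = tuple(path[-mu:])           # the last mu sites determine the next move
        if state in seen_states:            # the deterministic walk has entered its cycle
            return len(set(path))
        seen_states.add(state)
        forbidden = set(path[-mu:])         # sites visited in the last mu steps
        current = path[-1]
        nxt = min((j for j in range(N) if j not in forbidden),
                  key=lambda j: D[current, j])
        path.append(nxt)

rng = np.random.default_rng(0)
for mu in (1, 2):
    n = [explored_sites(400, mu, rng) for _ in range(200)]
    print(f"mu = {mu}: <n> ~ {np.mean(n):.2f}")   # mu=1 stays near e; mu=2 grows with N(1/2)
```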
Abstract:
The local-density approximation (LDA) together with the half occupation (transition state) is remarkably successful in the calculation of atomic ionization potentials. When it comes to extended systems, such as an infinite semiconductor, it has been very difficult to find a way to half-ionize, because the hole tends to be infinitely extended (a Bloch wave). The answer to this problem lies in the LDA formalism itself. One proves that the half occupation is equivalent to introducing the hole self-energy (electrostatic and exchange-correlation) into the Schrödinger equation. The argument then becomes simple: the eigenvalue minus the self-energy has to be minimized because the atom has a minimal energy. One then simply proves that the hole is localized, not infinitely extended, because it must have maximal self-energy. One also arrives at an equation similar to the self-interaction correction equation, but corrected for the removal of just 1/2 electron. Applied to the calculation of band gaps and effective masses, we use the self-energy calculated in atoms and attain a precision similar to that of GW, but with the great advantage that it requires no more computational effort than standard LDA.
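The atomic half-occupation result that this argument builds on is the standard Slater-Janak transition-state relation, quoted here for orientation rather than taken from the paper:

```latex
\frac{\partial E}{\partial n_i} = \varepsilon_i(n_i)
\quad\Longrightarrow\quad
I = E(N-1) - E(N) = -\int_0^1 \varepsilon_i(n_i)\,\mathrm{d}n_i \approx -\,\varepsilon_i\!\left(n_i=\tfrac{1}{2}\right),
```

i.e. the ionization potential is well approximated by minus the eigenvalue evaluated at half occupation of the ionized orbital.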
Abstract:
We study the spin-1/2 Ising model on a Bethe lattice in the mean-field limit, with the interaction constants following one of two deterministic aperiodic sequences, the Fibonacci or the period-doubling one. New algorithms for sequence generation were implemented, which were fundamental in obtaining long sequences and, therefore, precise results. We calculate the exact critical temperature for both sequences, as well as the critical exponents beta, gamma, and delta. For the Fibonacci sequence the exponents are classical, while for the period-doubling one they depend on the ratio between the two exchange constants. The usual relations between critical exponents are satisfied, within error bars, for the period-doubling sequence. We thus show that mean-field-like procedures may lead to nonclassical critical exponents.
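Both aperiodic sequences can be generated by standard letter-substitution (inflation) rules; a minimal sketch follows, in which the letters A and B stand for the two exchange constants (the paper's own generation algorithms are more elaborate).

```python
def substitute(word, rules, generations):
    """Iterate a letter-substitution rule starting from `word`."""
    for _ in range(generations):
        word = "".join(rules[letter] for letter in word)
    return word

fibonacci       = {"A": "AB", "B": "A"}    # Fibonacci inflation rule
period_doubling = {"A": "AB", "B": "AA"}   # period-doubling inflation rule

print(substitute("A", fibonacci, 5))        # ABAABABAABAAB
print(substitute("A", period_doubling, 3))  # ABAAABAB
```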
Abstract:
We show that measurements of finite duration performed on an open two-state system can protect the initial state from a phase-noisy environment, provided the measured observable does not commute with the perturbing interaction. When the measured observable commutes with the environmental interaction, the finite-duration measurement accelerates the rate of decoherence induced by the phase noise. For the description of the measurement of an observable that is incompatible with the interaction between system and environment, we have found an approximate analytical expression, valid at zero temperature and weak coupling with the measuring device. We have tested the validity of the analytical predictions against an exact numerical approach, based on the superoperator-splitting method, which confirms the protection of the initial state of the system. When the coupling between the system and the measuring apparatus increases beyond the range of validity of the analytical approximation, the initial state is still protected by the finite-duration measurement, in accordance with the exact numerical calculations.
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
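For orientation, exterior and interior (barrier) penalty formulations of the local-invertibility constraint det ∇f > 0, where f is the deformation and E the total potential energy, are commonly written in forms such as the following; this is an illustrative sketch under those assumptions, and the paper's exact penalty functionals may differ:

```latex
E_{\mathrm{ext}}(u) \;=\; E(u) \;+\; \frac{1}{2\epsilon}\int_{\Omega}\big[\min\!\big(\det\nabla f,\,0\big)\big]^{2}\,\mathrm{d}A,
\qquad
E_{\mathrm{int}}(u) \;=\; E(u) \;-\; \delta\int_{\Omega}\ln\!\big(\det\nabla f\big)\,\mathrm{d}A,
```

with the constraint enforced as the penalty parameters epsilon and delta tend to zero.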
Abstract:
This paper deals with the calculation of the discrete approximation to the full spectrum of the tangent operator for the stability problem of the symmetric flow past a circular cylinder. It is also concerned with the localization of the Hopf bifurcation in laminar flow past a cylinder, at which the stationary solution loses stability and the flow typically becomes periodic in time. The main problem is to determine the critical Reynolds number at which a pair of eigenvalues crosses the imaginary axis. We thus present a divergence-free method, based on a decoupling of the vector of velocities in the saddle-point system from the vector of pressures, allowing the computation of eigenvalues, from which we can deduce the fundamental frequency of the time-periodic solution. The calculation showed that stability is lost through a symmetry-breaking Hopf bifurcation and that the critical Reynolds number is in agreement with the values reported in previous computations.
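A generic sketch of the eigenvalue step (shift-invert Arnoldi near the imaginary axis) is given below; it is not the paper's divergence-free velocity-pressure decoupling, and the matrices A and B are assumed to come from the reader's own discretization of the linearized problem A x = lambda B x.

```python
import numpy as np
import scipy.sparse.linalg as spla

def rightmost_eigenvalues(A, B, shift=0.8j, k=6):
    """Eigenvalues of A x = lam B x nearest to `shift`, sorted by decreasing real part.

    A : sparse Jacobian of the discretized equations, linearized about the steady flow
    B : sparse (singular) mass matrix of the saddle-point system
    """
    # Cast to complex so the complex shift-invert mode handles a shift near +/- i*omega.
    vals = spla.eigs(A.astype(complex), k=k, M=B.astype(complex),
                     sigma=shift, return_eigenvectors=False)
    return vals[np.argsort(-vals.real)]

# The critical Reynolds number is the one at which max Re(lambda) crosses zero
# (a Hopf bifurcation); Im(lambda) there gives the fundamental frequency of the
# ensuing time-periodic solution.
```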
Abstract:
The ability to control both the minimum size of holes and the minimum size of structural members is an essential requirement in the topology optimization design process for manufacturing. This paper addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size, while a standard direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme combining both contrasting requirements simultaneously is discussed. Two topology optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration, and another in which a two-phase scheme is explored. In the first phase, the compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution is obtained that satisfies either the minimum member size or the minimum hole size. Examples demonstrate the various features of the projection-based techniques presented.
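As an illustration of a direct, mesh-independent projection, the smooth Heaviside-type threshold below is one common choice for imposing a minimum member size on filtered densities; it is a generic sketch, not the specific direct or inverse operators developed in the paper.

```python
import numpy as np

def direct_projection(rho_filtered, beta=8.0, eta=0.5):
    """Smooth Heaviside projection of filtered densities toward 0/1.

    beta controls the sharpness of the threshold and eta its location; the filter
    radius used to produce rho_filtered sets the minimum member size.
    """
    num = np.tanh(beta * eta) + np.tanh(beta * (rho_filtered - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

rho = np.linspace(0.0, 1.0, 6)
print(direct_projection(rho))   # intermediate densities are pushed toward 0 or 1
```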
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
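A minimal sketch of the APA weight update at a single node, on a toy colored-input system-identification problem, is shown below; the step size mu, regularization delta, and projection order K are illustrative choices and the sketch does not reproduce the paper's incremental-network formulation.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-3):
    """One affine projection step.

    w : current weight vector, shape (M,)
    X : matrix of the K most recent regressors, shape (M, K)
    d : corresponding desired responses, shape (K,)
    """
    e = d - X.T @ w                                   # a-priori errors on the K regressors
    return w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)

# Toy run with a colored (AR(1)) input, the regime where APA is expected to
# converge faster than LMS because the update acts on the last K regressors jointly.
rng = np.random.default_rng(1)
M, K, n_iter = 8, 4, 2000
w_true = rng.standard_normal(M)
x = np.zeros(n_iter + M)
for n in range(1, len(x)):                            # AR(1) colored input
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
w = np.zeros(M)
for n in range(M + K, n_iter):
    X = np.column_stack([x[n - k - M + 1:n - k + 1][::-1] for k in range(K)])
    d = X.T @ w_true + 0.01 * rng.standard_normal(K)
    w = apa_update(w, X, d)
print("weight error:", np.linalg.norm(w - w_true))
```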
Abstract:
Although the formulation of the nonlinear theory of H(infinity) control has been well developed, solving the Hamilton-Jacobi-Isaacs equation remains a challenge and is the major bottleneck for practical application of the theory. Several numerical methods have been proposed for its solution. In this paper, results on convergence and stability for a successive Galerkin approximation approach to nonlinear H(infinity) control via output feedback are presented. An example illustrates the application of the algorithm.
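For orientation, the Hamilton-Jacobi-Isaacs equation in its standard state-feedback form, for dynamics dx/dt = f(x) + g(x)u + k(x)w with penalized output z = (h(x), u) and L2-gain bound gamma, reads as follows; this is a textbook form, not quoted from the paper, and the output-feedback setting treated there involves further coupled conditions:

```latex
\nabla V^{\top}\! f(x)
\;+\; \tfrac{1}{4}\,\nabla V^{\top}\!\Big(\tfrac{1}{\gamma^{2}}\,k(x)k(x)^{\top} - g(x)g(x)^{\top}\Big)\nabla V
\;+\; h(x)^{\top} h(x) \;=\; 0,
\qquad V(0)=0,\; V \ge 0.
```

The successive Galerkin approach solves a sequence of problems that are linear in V, with V expanded in a finite set of basis functions.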
Abstract:
Classical mechanics is formulated in complex Hilbert space with the introduction of a commutative product of operators, an antisymmetric bracket and a quasidensity operator that is not positive definite. These are analogues of the star product, the Moyal bracket, and the Wigner function in the phase space formulation of quantum mechanics. Quantum mechanics is then viewed as a limiting form of classical mechanics, as Planck's constant approaches zero, rather than the other way around. The forms of semiquantum approximations to classical mechanics, analogous to semiclassical approximations to quantum mechanics, are indicated.
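The quantum-side structures that these classical analogues parallel are the standard Wigner-Weyl-Moyal ones, quoted here for reference:

```latex
f \star g \;=\; f\,\exp\!\Big(\tfrac{i\hbar}{2}\big(\overleftarrow{\partial}_{x}\overrightarrow{\partial}_{p}
- \overleftarrow{\partial}_{p}\overrightarrow{\partial}_{x}\big)\Big)\,g,
\qquad
\{f,g\}_{M} \;=\; \frac{f\star g - g\star f}{i\hbar} \;=\; \{f,g\}_{\mathrm{PB}} + O(\hbar^{2}),
```

so that as Planck's constant tends to zero the star product reduces to the ordinary commutative product and the Moyal bracket to the Poisson bracket.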
Abstract:
Recent advances in the control of molecular engineering architectures have allowed unprecedented molecular recognition ability in biosensing, with a promising impact on clinical diagnosis and environmental control. The availability of large amounts of data from electrical, optical, or electrochemical measurements requires, however, sophisticated data treatment in order to optimize sensing performance. In this study, we show how an information visualization system based on projections, referred to as Projection Explorer (PEx), can be used to achieve high performance for biosensors made with nanostructured films containing immobilized antigens. As a proof of concept, various visualizations were obtained with impedance spectroscopy data from an array of sensors whose electrical response could be specific toward a given antibody (analyte) owing to molecular recognition processes. In addition to discussing the distinct methods for projection and normalization of the data, we demonstrate that an excellent distinction can be made between real samples that tested positive for Chagas disease and leishmaniasis, which could not be achieved with conventional statistical methods. Such high performance probably arose from the possibility of treating the data over the whole frequency range. Through a systematic analysis, it was inferred that Sammon's mapping with standardization to normalize the data gives the best results, with which blood serum samples containing 10(-7) mg/mL of the antibody could be distinguished. The method inherent in PEx and the procedures for analyzing the impedance data are entirely generic and can be extended to optimize any type of sensor or biosensor.
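A compact sketch of Sammon's mapping with standardization is given below, applied to synthetic stand-in data; the real impedance-spectroscopy features and the PEx tool itself are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def sammon(X, n_components=2, seed=0):
    """Project rows of X to n_components dimensions by minimizing Sammon's stress."""
    D = pdist(X)                      # pairwise distances in the original space
    D = np.where(D == 0, 1e-12, D)
    scale = 1.0 / D.sum()

    def stress(flat):
        Y = flat.reshape(len(X), n_components)
        d = pdist(Y)
        return scale * np.sum((D - d) ** 2 / D)

    rng = np.random.default_rng(seed)
    y0 = rng.standard_normal(len(X) * n_components) * 1e-2
    res = minimize(stress, y0, method="L-BFGS-B")
    return res.x.reshape(len(X), n_components)

# Synthetic example: standardize features (zero mean, unit variance), then project.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (20, 50)), rng.normal(2.0, 1.0, (20, 50))])
X = (X - X.mean(axis=0)) / X.std(axis=0)
Y = sammon(X)
print(Y.shape)                        # (40, 2) points ready for 2-D visualization
```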