9 results for Energy Efficient Algorithms
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been widely investigated for such problems. Such optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and we demonstrate that, using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds only small constants and lower-order terms to the theoretical bound.
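The node-depth idea underlying NDDR can be illustrated with a minimal Python sketch (the function names and the simplified encoding below are ours, not the paper's implementation): a spanning tree is stored as a depth-first ordered array of (node, depth) pairs, in which every subtree occupies a contiguous slice, which is the property that node-depth encodings exploit when moving subtrees within a forest.

```python
# Minimal sketch (our illustration, not the paper's NDDR implementation):
# a spanning tree stored as a node-depth array, i.e. the nodes listed in
# depth-first order together with their depth.  A contiguous slice of this
# array is exactly one subtree, the property node-depth encodings exploit
# when transferring subtrees between trees of a spanning forest.

def node_depth_array(adj, root):
    """Encode the tree `adj` (dict: node -> list of neighbours) rooted at
    `root` as a list of (node, depth) pairs in depth-first order."""
    encoding, stack, seen = [], [(root, 0)], {root}
    while stack:
        node, depth = stack.pop()
        encoding.append((node, depth))
        for nxt in reversed(adj[node]):
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, depth + 1))
    return encoding

def subtree_slice(encoding, i):
    """Return the slice of `encoding` holding the subtree rooted at position i.
    The subtree ends just before the next entry whose depth is <= depth[i]."""
    root_depth = encoding[i][1]
    j = i + 1
    while j < len(encoding) and encoding[j][1] > root_depth:
        j += 1
    return encoding[i:j]

# Example: a small spanning tree with 6 nodes.
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}
nd = node_depth_array(adj, root=0)
print(nd)                    # [(0,0), (1,1), (3,2), (4,2), (2,1), (5,2)]
print(subtree_slice(nd, 1))  # subtree rooted at node 1: [(1,1), (3,2), (4,2)]
```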
Abstract:
Parallel kinematic structures are considered very suitable architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task. In fact, the direct application of traditional methods of robotics for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses this issue: it presents a modular approach to generating the dynamic model and shows how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer aided. The advantages of this approach are discussed in the modelling of a 3-dof parallel asymmetric mechanism.
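The general mechanics of Kane's formulation (generalized speeds, kinematic differential equations, and the resulting Fr + Fr* = 0 equations) can be sketched with SymPy on a toy planar pendulum. This is only an assumed illustrative example, not the 3-dof parallel mechanism analysed in the work, and it relies on the sympy.physics.mechanics API as currently documented.

```python
# A minimal SymPy sketch of Kane's method for a planar pendulum -- our own
# toy example, not the parallel mechanism of the paper.  It only illustrates
# how generalized speeds and kinematic differential equations enter Kane's
# formulation.
from sympy import symbols, simplify
from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame, Point,
                                     Particle, KanesMethod)

q = dynamicsymbols('q')          # generalized coordinate (pendulum angle)
u = dynamicsymbols('u')          # generalized speed
qd = dynamicsymbols('q', 1)
m, l, g = symbols('m l g')

N = ReferenceFrame('N')                   # inertial frame
A = N.orientnew('A', 'Axis', (q, N.z))    # body-fixed frame
A.set_ang_vel(N, u * N.z)

O = Point('O')                            # fixed pivot
O.set_vel(N, 0)
P = O.locatenew('P', -l * A.y)            # bob position
P.v2pt_theory(O, N, A)

bob = Particle('bob', P, m)
kd = [qd - u]                             # kinematic differential equation
loads = [(P, -m * g * N.y)]               # gravity acting on the bob

kane = KanesMethod(N, q_ind=[q], u_ind=[u], kd_eqs=kd)
fr, frstar = kane.kanes_equations([bob], loads)
print(simplify(fr + frstar))              # Kane's equations: Fr + Fr* = 0
```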
Abstract:
Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA, associated with Mycobacterium tuberculosis. This problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that could be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide a biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree induction algorithm tailored to molecular docking data is a promising alternative for the prediction of the free energy of binding of a drug candidate to a flexible receptor.
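For readers unfamiliar with the baseline, the following hedged sketch fits an off-the-shelf decision tree (scikit-learn's CART regressor) to hypothetical docking descriptors in order to predict a free energy of binding. The descriptors and data below are placeholders, and the snippet does not reproduce the paper's contribution, which is the automatic design of the induction algorithm itself.

```python
# Hedged baseline sketch: an off-the-shelf decision tree on synthetic,
# placeholder docking descriptors.  Feature names and data are illustrative
# assumptions, not the study's actual descriptors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n_conformations = 200
# Placeholder descriptors for each docked conformation of a drug candidate.
X = rng.normal(size=(n_conformations, 3))
y = -8.0 + 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.3, size=n_conformations)

tree = DecisionTreeRegressor(max_depth=3, random_state=0)
scores = cross_val_score(tree, X, y, cv=5, scoring='neg_mean_absolute_error')
print('MAE per fold:', -scores)

# A shallow tree keeps the model comprehensible -- the property the paper
# emphasizes for validation by a domain specialist.
tree.fit(X, y)
print(export_text(tree, feature_names=['d1', 'd2', 'd3']))
```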
Abstract:
We report a systematic study of localized surface plasmon resonance effects on the photoluminescence of Er3+-doped tellurite glasses containing silver or gold nanoparticles. The silver and gold nanoparticles are obtained by reduction of Ag ions (Ag+ → Ag0) or Au ions (Au3+ → Au0) during the melting process, followed by the formation of nanoparticles through heat treatment of the glasses. Absorption and photoluminescence spectra reveal particular features of the interaction between the metallic nanoparticles and the Er3+ ions. The observed photoluminescence enhancement is due to dipole coupling of the silver nanoparticles with the 4I13/2 → 4I15/2 Er3+ transition and of the gold nanoparticles with the 2H11/2 → 4I13/2 (805 nm) and 4S3/2 → 4I13/2 (840 nm) Er3+ transitions. Such a process is achieved via an efficient coupling that yields energy transfer from the nanoparticles to the Er3+ ions, which is confirmed by the theoretical spectra calculated through the decay rate.
Abstract:
The solution of structural reliability problems by the first-order reliability method requires optimization algorithms to find the shortest distance between the limit state surface and the origin of the standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on HLRF, but uses a new differentiable merit function with Wolfe conditions to select the step length in a line search. It is shown in the article that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. The performance and robustness of the new algorithms are compared to the classical augmented Lagrangian method, to HLRF and to the improved HLRF (iHLRF) algorithm, in the solution of 25 benchmark problems from the literature. The newly proposed HLRF-based algorithm is shown to be more robust than HLRF or iHLRF, and as efficient as the iHLRF algorithm. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
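For context, the classical HLRF recursion that the article takes as its starting point can be written in a few lines of NumPy; the limit state function below is a toy example of ours, and the sketch omits the merit function and Wolfe-condition line search proposed in the paper.

```python
# Minimal NumPy sketch of the classical HLRF recursion for locating the
# design point u* (the point on g(u) = 0 closest to the origin of standard
# Gaussian space).  This is the textbook iteration, not the improved
# algorithm proposed in the article.
import numpy as np

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        # HLRF update: project onto the linearized limit state surface.
        u_new = (dg @ u - gu) / (dg @ dg) * dg
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)   # design point and reliability index beta

# Toy limit state function in standard Gaussian space (illustrative only).
g = lambda u: 3.0 - u[0] - 0.5 * u[1] ** 2
grad_g = lambda u: np.array([-1.0, -u[1]])

u_star, beta = hlrf(g, grad_g, u0=[0.1, 0.1])
print('design point:', u_star, 'beta:', beta)
```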
Abstract:
In this work, we report a theoretical and experimental investigation of the energy transfer mechanism in two isotypical 2D coordination polymers, ∞[(Tb1-xEux)(DPA)(HDPA)], where H2DPA is pyridine-2,6-dicarboxylic acid and x = 0.05 or 0.50. Emission spectra of ∞[(Tb0.95Eu0.05)(DPA)(HDPA)] (1) and ∞[(Tb0.5Eu0.5)(DPA)(HDPA)] (2) show that the strong quenching of the Tb3+ emission caused by the Eu3+ ion indicates an efficient Tb3+ → Eu3+ energy transfer (ET). The Tb3+ → Eu3+ ET rates, k(ET), and the Eu3+ rise rates, k(r), as a function of temperature for (1) are of the same order of magnitude, indicating that the sensitization of the Eu3+ 5D0 level is largely fed by ET from the 5D4 level of the Tb3+ ion. The η(ET) and R0 values vary in the 67-79% and 7.15-7.93 Å ranges. Hence, Tb3+ can transfer energy efficiently to Eu3+ ions occupying the possible sites at 6.32 and 6.75 Å. For (2), the ET processes occur on average with η(ET) and R0 of 97% and 31 Å, respectively. Consequently, the Tb3+ ion can transfer energy to Eu3+ ions localized in different layers. The theoretical model developed by Malta was implemented to provide further insight into the dominant mechanisms involved in the ET between lanthanide ions. The calculated single Tb3+ → Eu3+ ET rates are three orders of magnitude lower than the experimental ones; this can be explained by the fact that the theoretical model does not consider the role of phonon assistance in the Ln3+ → Ln3+ ET processes. In addition, the Tb3+ → Eu3+ ET processes are predominantly governed by the dipole-dipole (d-d) and dipole-quadrupole (d-q) mechanisms.
Abstract:
It is a well-established fact that statistical properties of energy-level spectra are the most efficient tool to characterize nonintegrable quantum systems. The statistical behavior of different systems, such as complex atoms, atomic nuclei, two-dimensional Hamiltonians, quantum billiards, and noninteracting many bosons, has been studied. The study of statistical properties and spectral fluctuations in interacting many-boson systems has attracted growing interest in this direction. We are especially interested in weakly interacting trapped bosons in the context of Bose-Einstein condensation (BEC), as the energy spectrum shows a transition from a collective nature to a single-particle nature with an increase in the number of levels. However, this has received less attention, as it is believed that the system may exhibit Poisson-like fluctuations due to the existence of an external harmonic trap. Here we compute numerically the energy levels of zero-temperature many-boson systems which interact weakly through the van der Waals potential and are confined in a three-dimensional harmonic potential. We study the nearest-neighbor spacing distribution and the spectral rigidity by unfolding the spectrum. It is found that an increase in the number of energy levels for repulsive BEC induces a transition from a Wigner-like form displaying level repulsion to the Poisson distribution for P(s). It does not follow the Gaussian orthogonal ensemble prediction. For repulsive interaction, the lower levels are correlated and manifest level repulsion. For intermediate levels P(s) shows mixed statistics, which clearly signifies the existence of two energy scales: the external trap and the interatomic interaction, whereas for very high levels the trapping potential dominates, generating a Poisson distribution. Comparisons with mean-field results for the lower levels are also presented. For attractive BEC near the critical point we observe a Shnirelman-like peak near s = 0, which signifies the presence of a large number of quasidegenerate states.
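The spectral-statistics pipeline mentioned above (unfolding followed by the nearest-neighbor spacing distribution) follows a generic recipe that can be sketched as below. The synthetic spectrum is a stand-in for the computed many-boson levels, and the polynomial unfolding is one common choice among several.

```python
# Generic sketch of the spectral-statistics pipeline: unfold a spectrum via a
# smooth (polynomial) fit of the cumulative level density, form
# nearest-neighbour spacings, and compare P(s) with the Poisson and Wigner
# (GOE surmise) forms.  The synthetic spectrum is illustrative only.
import numpy as np

def unfolded_spacings(levels, poly_deg=7):
    """Map levels E_i to xi_i = N_smooth(E_i) and return the spacings."""
    E = np.sort(np.asarray(levels, dtype=float))
    N = np.arange(1, len(E) + 1)            # staircase function N(E)
    coeffs = np.polyfit(E, N, poly_deg)     # smooth part of the staircase
    xi = np.polyval(coeffs, E)              # unfolded levels, mean spacing ~1
    return np.diff(xi)

# Reference distributions.
poisson = lambda s: np.exp(-s)
wigner  = lambda s: (np.pi / 2) * s * np.exp(-np.pi * s ** 2 / 4)

# Synthetic "spectrum": uncorrelated levels should follow Poisson statistics.
rng = np.random.default_rng(1)
levels = np.cumsum(rng.exponential(size=5000))
s = unfolded_spacings(levels)

hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers[:5], hist[:5]):
    print(f's={c:.2f}  P(s)={h:.2f}  Poisson={poisson(c):.2f}  Wigner={wigner(c):.2f}')
```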
Abstract:
In this work we present a methodology that applies the many-body expansion to decrease the computational cost of ab initio molecular dynamics while keeping acceptable accuracy in the results. We implemented this methodology in a program we called ManBo. In the many-body expansion approach, the total energy E of the system is partitioned into contributions of one body, two bodies, three bodies, and so on, up to the contribution of N bodies [1-3]: E = E1 + E2 + E3 + … + EN. The E1 term is the sum of the internal energies of the molecules; the E2 term is the energy due to the interaction between all pairs of molecules; E3 is the energy due to the interaction between all trios of molecules; and so on. In ManBo we chose to truncate the expansion at the contribution of two or three bodies, both for the calculation of the energy and for the calculation of the atomic forces. To partially recover the many-body interactions neglected when the expansion is truncated, we can include an electrostatic embedding in the electronic structure calculations, instead of treating the monomers, pairs and trios as isolated molecules in space. In our simulations we chose water molecules, and used Gaussian 09 as the external program to calculate the atomic forces and the energy of the system, as well as the reference program for analyzing the accuracy of the results obtained with ManBo. The results show that the use of the many-body expansion seems to be an interesting approach for reducing the still prohibitive computational cost of ab initio molecular dynamics. The errors introduced in the atomic forces by this methodology are very small. The inclusion of an electrostatic embedding seems to be a good solution for improving the results with only a small increase in simulation time. As the level of calculation increases, the simulation time of ManBo decreases substantially relative to a conventional BOMD simulation in Gaussian, owing to the better scalability of the presented methodology. References: [1] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 3, 46 (2007). [2] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 4, 1 (2008). [3] R. Rivelino, P. Chaudhuri and S. Canuto, J. Chem. Phys. 118, 10593 (2003).
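The bookkeeping of the truncated many-body expansion can be sketched as follows; `fragment_energy` is a placeholder for the external electronic-structure call (Gaussian 09 in ManBo), so the numbers produced are meaningless and only the accounting of the 1-, 2- and 3-body terms is illustrated.

```python
# Schematic sketch of the truncated many-body expansion E = E1 + E2 + E3.
# `fragment_energy` is a dummy stand-in for an ab initio energy of a
# monomer/dimer/trimer (optionally in an electrostatic embedding).
from itertools import combinations

def fragment_energy(fragment):
    """Placeholder for the external electronic-structure calculation."""
    return -76.0 * len(fragment) - 0.001 * len(fragment) ** 3

def mbe_energy(monomers, order=3):
    """Total energy from the many-body expansion truncated at `order` (2 or 3)."""
    n = len(monomers)
    E1 = {i: fragment_energy([monomers[i]]) for i in range(n)}
    total = sum(E1.values())
    E2 = {}
    if order >= 2:
        for i, j in combinations(range(n), 2):
            E2[(i, j)] = (fragment_energy([monomers[i], monomers[j]])
                          - E1[i] - E1[j])
        total += sum(E2.values())
    if order >= 3:
        for i, j, k in combinations(range(n), 3):
            total += (fragment_energy([monomers[i], monomers[j], monomers[k]])
                      - E2[(i, j)] - E2[(i, k)] - E2[(j, k)]
                      - E1[i] - E1[j] - E1[k])
    return total

# Four "water molecules", represented here only by arbitrary tags.
print(mbe_energy([1, 2, 3, 4], order=2))
print(mbe_energy([1, 2, 3, 4], order=3))
```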
Abstract:
The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used. In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by “suggesting” to the clustering algorithm that subjectively similar, but complex, objects belong in a sparser and larger-diameter cluster than is truly warranted. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower-bound the measure and use a modification of the triangle inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
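A compact sketch of a complexity-invariant distance of the kind described, as it is usually defined in this line of work: a complexity estimate based on the root sum of squared successive differences, and a correction factor that scales the Euclidean distance by the ratio of the two complexities.

```python
# Sketch of a complexity-invariant distance: the Euclidean distance is
# multiplied by a correction factor built from a simple complexity estimate
# of each series (the "length" of the series when stretched out).
import numpy as np

def complexity(ts):
    """Complexity estimate CE(ts) = sqrt(sum_i (ts[i+1] - ts[i])^2)."""
    return np.sqrt(np.sum(np.diff(ts) ** 2))

def cid(q, c):
    """Complexity-invariant distance: ED(q, c) * max(CE)/min(CE)."""
    q, c = np.asarray(q, float), np.asarray(c, float)
    ed = np.linalg.norm(q - c)
    ce_q, ce_c = complexity(q), complexity(c)
    return ed * max(ce_q, ce_c) / min(ce_q, ce_c)

# A smooth series and a complex (jagged) series of the same length.
t = np.linspace(0, 2 * np.pi, 128)
smooth = np.sin(t)
jagged = np.sin(t) + 0.4 * np.sin(25 * t)

print('Euclidean:', np.linalg.norm(smooth - jagged))
print('CID      :', cid(smooth, jagged))   # larger: penalizes the complexity gap
```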