960 results for Energy function
Abstract:
A Maple scheme for quickly parameterizing vibrational potential energy functions is presented. As an example, the parameters of the potential energy function for the vibrational motions of H_2O_2 are obtained, assuming the simplest potential energy function. This paper was originally written as a research paper but was rejected by the referees; it has therefore been edited into an "educational" paper for student use.
Abstract:
The present study explores a "hydrophobic" energy function for folding simulations of the protein lattice model. The contribution of each monomer to the conformational energy is the product of its "hydrophobicity" and the number of contacts it makes, i.e., E(h⃗, c⃗) = −Σ_{i=1}^{N} c_i h_i = −(h⃗ · c⃗), the negative scalar product between two vectors in N-dimensional Cartesian space: h⃗ = (h_1, …, h_N), which represents monomer hydrophobicities and is sequence-dependent; and c⃗ = (c_1, …, c_N), which represents the number of contacts made by each monomer and is conformation-dependent. A simple theoretical analysis shows that restrictions are imposed concomitantly on both sequences and native structures if the stability criterion for protein-like behavior is to be satisfied. Given a conformation with vector c⃗, the best sequence is a vector h⃗ along the direction upon which the projection of c⃗ − c̄⃗ is maximal, where c̄⃗ is the diagonal vector with components equal to c̄, the average number of contacts per monomer in the unfolded state. Best native conformations are suggested to be not maximally compact, as assumed in many studies, but the ones with the largest variance of contacts among their monomers, i.e., with monomers tending to occupy completely buried or completely exposed positions. This inside/outside segregation is reflected in an apolar/polar distribution on the corresponding sequence. Monte Carlo simulations in two dimensions corroborate this general scheme. Sequences targeted to conformations with large contact variances folded cooperatively with the thermodynamics of a two-state transition. Sequences targeted to maximally compact conformations, which have lower contact variance, were found either to have a degenerate ground state or to fold with much lower cooperativity.
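The scalar-product energy and the best-sequence construction described above can be sketched in a few lines (a minimal illustration, not code from the paper; the toy contact vector and the value c̄ = 1 are invented):

```python
import numpy as np

def hydrophobic_energy(h, c):
    """Conformational energy E(h, c) = -sum_i c_i h_i = -(h . c)."""
    return -float(np.dot(h, c))

def best_sequence_direction(c, c_bar):
    """Unit vector h along the direction on which the projection of
    (c - c_bar * 1) is maximal, i.e. the 'best' hydrophobicity pattern
    for the conformation with contact vector c."""
    d = c - c_bar * np.ones_like(c, dtype=float)
    return d / np.linalg.norm(d)

# Toy conformation: contact counts for a 5-monomer chain
c = np.array([3.0, 0.0, 2.0, 0.0, 3.0])
h = best_sequence_direction(c, c_bar=1.0)
print(hydrophobic_energy(h, c))
```

Note how the optimal hydrophobicity pattern segregates: monomers with many contacts (buried) get positive h_i, monomers with few contacts (exposed) get negative h_i.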
Abstract:
Recent improvements of a hierarchical ab initio or de novo approach for predicting both α and β structures of proteins are described. The united-residue energy function used in this procedure includes multibody interactions from a cumulant expansion of the free energy of polypeptide chains, with their relative weights determined by Z-score optimization. The critical initial stage of the hierarchical procedure involves a search of conformational space by the conformational space annealing (CSA) method, followed by optimization of an all-atom model. The procedure was assessed in a recent blind test of protein structure prediction (CASP4). The resulting lowest-energy structures of the target proteins (ranging in size from 70 to 244 residues) agreed with the experimental structures in many respects. The entire experimental structure of a cyclic α-helical protein of 70 residues was predicted to within 4.3 Å α-carbon (Cα) rms deviation (rmsd) whereas, for other α-helical proteins, fragments of roughly 60 residues were predicted to within 6.0 Å Cα rmsd. Whereas β structures can now be predicted with the new procedure, the success rate for α/β- and β-proteins is lower than that for α-proteins at present. For the β portions of α/β structures, the Cα rmsd's are less than 6.0 Å for contiguous fragments of 30–40 residues; for one target, three fragments (of length 10, 23, and 28 residues, respectively) formed a compact part of the tertiary structure with a Cα rmsd less than 6.0 Å. Overall, these results constitute an important step toward the ab initio prediction of protein structure solely from the amino acid sequence.
Abstract:
The one-electron reduced local energy function, t_L, is introduced and has the property ⟨t_L⟩ = ⟨t̂⟩. It is suggested that the accuracy of t_L reflects the local accuracy of an approximate wavefunction. We establish that ⟨t_L²⟩ ≥ ⟨t̂²⟩ and present a bound formula, E_t, which is such that E_t ≤ E_W, where E_W is Weinstein's lower bound formula for the ground state. The nature of the bound is not guaranteed, but for sufficiently accurate wavefunctions it will yield a lower bound. Applications to the X ¹Σ⁺ states of H₂ and He are presented.
Abstract:
A potential energy function has been derived for the ground state surface of C2H2 as a many-body expansion. The 2- and 3-body terms have been obtained by preliminary investigation of the ground state surfaces of CH2( 3B1) and C2H( 2Σ+). A 4-body term has been derived which reproduces the energy, geometry and harmonic force field of C2H2. The potential has a secondary minimum corresponding to the vinylidene structure, and the geometry and energy of this are in close agreement with predictions from ab initio calculations. The saddle point for the HCCH-H2CC rearrangement is predicted to lie 2.530 eV above the acetylene minimum.
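The many-body expansion referred to above has the standard schematic form for a four-atom system (the exact functional forms of each term are those fitted in the paper; only the general structure is shown here):

```latex
V_{\mathrm{C_2H_2}} \;=\; \sum_{AB} V^{(2)}_{AB}(R_{AB})
  \;+\; \sum_{ABC} V^{(3)}_{ABC}(R_{AB},R_{AC},R_{BC})
  \;+\; V^{(4)}(R_1,\dots,R_6),
```

where the sums run over the atom pairs and triples, and the six R's are the internuclear distances of the tetratomic.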
Abstract:
A model potential energy function for the ground state of H2CO has been derived which covers the whole space of the six internal coordinates. This potential reproduces the experimental energy, geometry and quadratic force field of formaldehyde, and dissociates correctly to all possible atom, diatom and triatom fragments. Thus there are good reasons for believing it to be close to the true potential energy surface except in regions where both hydrogen atoms are close to the oxygen. It leads to the prediction that there should be a metastable singlet hydroxycarbene HCOH which has a planar trans structure and an energy of 2.31 eV above that of equilibrium formaldehyde. The reaction path for dissociation into H2 + CO is predicted to pass through a low symmetry transition state with an activation energy of 4.8 eV. Both of these predictions are in good agreement with recently published ab initio calculations.
Abstract:
This paper presents a simple two-dimensional frame formulation to deal with structures undergoing large motions due to dynamic actions, including very thin inflatable structures (balloons). The proposed methodology is based on the minimum potential energy theorem written in terms of nodal positions. Velocity, acceleration and strain are obtained directly from positions, not displacements, characterizing the novelty of the proposed technique. A non-dimensional space is created and the deformation function (change of configuration) is written following two independent mappings, from which the strain energy function is written. The classical Newmark equations are used to integrate time. Damping and non-conservative forces are introduced into the mechanical system by a rheonomic energy function. The final formulation has the advantage of being simple and easy to teach when compared to classical counterparts. The behavior of a benchmark problem (the spin-up maneuver) is solved to validate the formulation for high circumferential speed applications. Other examples are dedicated to inflatable and very thin structures, in order to test the formulation for further analysis of three-dimensional balloons.
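Since the formulation integrates time with the classical Newmark equations, a generic linear Newmark-β step can be sketched as follows (a minimal illustration of the integrator only, not of the positional formulation itself; matrix names and the average-acceleration defaults are generic assumptions):

```python
import numpy as np

def newmark_step(M, C, K, f, u, v, a, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta step for the linear system M*a + C*v + K*u = f.
    Defaults (beta=1/4, gamma=1/2) give the average-acceleration scheme."""
    # Effective stiffness and effective load at t_{n+1}
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    f_eff = (f
             + M @ (u / (beta * dt**2) + v / (beta * dt)
                    + (1.0 / (2 * beta) - 1.0) * a)
             + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2 * beta) - 1.0) * a))
    u_new = np.linalg.solve(K_eff, f_eff)
    # Recover acceleration and velocity from the new positions
    a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
             - (1.0 / (2 * beta) - 1.0) * a)
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new
```

For example, driving an undamped unit oscillator (M = K = 1) from u = 1 at rest reproduces u(t) ≈ cos t to second-order accuracy.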
Abstract:
This paper presents a positional FEM formulation to deal with the geometrically nonlinear dynamics of shells. The main objective is to develop a new FEM methodology based on the minimum potential energy theorem written in terms of nodal positions and generalized unconstrained vectors, not displacements and rotations. These characteristics are the novelty of the present work and avoid the use of large-rotation approximations. A nondimensional auxiliary coordinate system is created, and the change-of-configuration function is written following two independent mappings, from which the strain energy function is derived. This methodology is called positional and, as far as the authors' knowledge goes, is a new procedure to approximate geometrically nonlinear structures. In this paper a proof of the linear and angular momentum conservation property of the Newmark beta algorithm is provided for the total Lagrangian description. The proposed shell element is locking-free for elastic stress-strain relations due to the presence of linear strain variation along the shell thickness. The curved, high-order element, together with an implicit procedure to solve the nonlinear equations, guarantees precision in the calculations. The momentum conservation, the locking-free behavior, and the frame invariance of the adopted mapping are numerically confirmed by examples. Copyright (C) 2009 H. B. Coda and R. R. Paccola.
Abstract:
In this paper, a phenomenologically motivated, magneto-mechanically coupled finite strain elastic framework for simulating the curing process of polymers in the presence of a magnetic load is proposed. This approach is in line with previous works by Hossain and co-workers on a finite strain curing modelling framework for purely mechanical polymer curing (Hossain et al., 2009b). The proposed thermodynamically consistent approach is independent of any particular free energy function that may be used for modelling the fully-cured magneto-sensitive polymer, i.e. any phenomenological or micromechanically-inspired free energy can be inserted into the main modelling framework. For the fabrication of magneto-sensitive polymers, micron-sized ferromagnetic particles are mixed with the liquid matrix material in the uncured stage. The particles align in a preferred direction with the application of a magnetic field during the curing process. The polymer curing process is a complex (visco)elastic process that transforms a fluid into a solid with time. This transformation process is modelled by an appropriate constitutive relation which takes into account the temporal evolution of the material parameters appearing in a particular energy function. For demonstration, a frequently used energy function is chosen in this work, i.e. the classical Mooney-Rivlin free energy enhanced by coupling terms. Several representative numerical examples demonstrate the capability of our approach to correctly capture common features of polymers undergoing curing processes in the presence of a magneto-mechanically coupled load.
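The classical Mooney-Rivlin free energy mentioned above can be sketched as follows (base mechanical part only; the magneto-mechanical coupling terms and material constants are specific to the paper and are not reproduced):

```python
import numpy as np

def mooney_rivlin_energy(F, c1, c2):
    """Classical incompressible Mooney-Rivlin strain energy
    W = c1*(I1 - 3) + c2*(I2 - 3), with right Cauchy-Green tensor C = F^T F.
    Returns 0 for the undeformed state F = I."""
    C = F.T @ F
    I1 = np.trace(C)
    I2 = 0.5 * (I1**2 - np.trace(C @ C))
    return c1 * (I1 - 3.0) + c2 * (I2 - 3.0)

# Uniaxial incompressible stretch: (lambda, 1/sqrt(lambda), 1/sqrt(lambda))
lam = 1.5
F = np.diag([lam, lam**-0.5, lam**-0.5])
print(mooney_rivlin_energy(F, c1=0.3, c2=0.05))
```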
Abstract:
Background: Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to localize the caudate structure suitably. However, the atlas prior information may not represent the structure of interest correctly, so it may be useful to introduce a more flexible technique for accurate segmentation. Method: We present CaudateCut: a new fully-automatic method for segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy-function data and boundary potentials. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results: We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as to a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion: CaudateCut generates segmentation results that are comparable to gold-standard segmentations and are reliable for analyzing differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD patients.
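The kind of energy that the Graph Cut framework minimizes, with per-pixel data terms and intensity-dependent boundary terms, can be sketched as follows (a generic binary-labelling illustration; the specific data, supervised, and edgeness terms of CaudateCut are not reproduced):

```python
import numpy as np

def segmentation_energy(labels, image, data_cost, lam=1.0, sigma=0.1):
    """Graph-Cut-style energy for a binary labelling:
    E = sum_p D_p(l_p) + lam * sum over 4-neighbour pairs with l_p != l_q
        of exp(-(I_p - I_q)^2 / (2 sigma^2)).
    data_cost[l] is the per-pixel cost image for label l in {0, 1};
    boundary terms are cheap across strong intensity edges."""
    unary = float(sum(data_cost[labels[i, j]][i, j]
                      for i, j in np.ndindex(labels.shape)))
    boundary = 0.0
    h, w = labels.shape
    for i, j in np.ndindex(h, w):
        for ni, nj in ((i, j + 1), (i + 1, j)):   # right / down neighbours
            if ni < h and nj < w and labels[i, j] != labels[ni, nj]:
                diff = image[i, j] - image[ni, nj]
                boundary += np.exp(-diff**2 / (2 * sigma**2))
    return unary + lam * boundary
```

A labelling whose boundary follows intensity edges pays almost no boundary cost, while a labelling that ignores the data terms pays in the unary sum; graph-cut solvers find the labelling minimizing this trade-off exactly for such binary energies.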
Abstract:
This paper deals with a phenomenologically motivated magneto-viscoelastic coupled finite strain framework for simulating the curing process of polymers under the application of a coupled magneto-mechanical load. Magneto-sensitive polymers are prepared by mixing micron-sized ferromagnetic particles into uncured polymers. Application of a magnetic field during the curing process causes the particles to align and form chain-like structures, lending an overall anisotropy to the material. Polymer curing is a complex viscoelastic process in which a transformation from fluid to solid occurs in the course of time. During curing, volume shrinkage also occurs due to the packing of polymer chains by chemical reactions. Such reactions impart a continuous change of magneto-mechanical properties that can be modelled by an appropriate constitutive relation in which the temporal evolution of material parameters is considered. To model the shrinkage during curing, a magnetic-induction-dependent approach is proposed, based on a multiplicative decomposition of the deformation gradient into a mechanical part and a magnetic-induction-dependent volume shrinkage part. The proposed model obeys the relevant laws of thermodynamics. Numerical examples, based on a generalised Mooney-Rivlin energy function, are presented to demonstrate the model's capability in the case of a magneto-viscoelastically coupled load.
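The multiplicative split described above can be written schematically as follows (the notation and the isotropic form of the shrinkage part are assumptions for illustration, not taken verbatim from the paper):

```latex
\mathbf{F} = \mathbf{F}_{m}\,\mathbf{F}_{s}(\mathbf{B}), \qquad
\mathbf{F}_{s}(\mathbf{B}) = \left[J_{s}(\mathbf{B})\right]^{1/3}\mathbf{I},
```

where F_m is the mechanical part and J_s(B) is the magnetic-induction-dependent volume ratio of the shrinkage part.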
Abstract:
In this article, the fusion of a stochastic metaheuristic, Simulated Annealing (SA), with classical convergence criteria for Blind Separation of Sources (BSS) is presented. Although the topic of BSS has been amply discussed in the literature by means of various techniques, including ICA, PCA, and neural networks, to date the possibility of using simulated annealing algorithms has not been seriously explored. From experimental results, this paper demonstrates the possible benefits offered by SA in combination with higher-order statistical and mutual information criteria for BSS, such as robustness against local minima and a high degree of flexibility in the energy function.
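A minimal simulated-annealing loop of the kind combined with BSS contrast criteria can be sketched as follows (generic; the energy below is a toy 1-D double-well rather than a BSS contrast, and all parameter values are illustrative):

```python
import math
import random

def simulated_annealing(energy, x0, neighbour, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Minimal SA: accept uphill moves with probability exp(-dE/T) and cool
    the temperature geometrically. In a BSS setting, `energy` would be a
    contrast such as a mutual-information criterion on the separating matrix."""
    rng = random.Random(seed)
    x, e, t = x0, energy(x0), t0
    best_x, best_e = x, e
    for _ in range(steps):
        y = neighbour(x, rng)
        de = energy(y) - e
        if de <= 0 or rng.random() < math.exp(-de / t):
            x, e = y, e + de          # accept (always downhill, sometimes uphill)
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                  # geometric cooling schedule
    return best_x, best_e

# Toy usage: a 1-D double-well; the uphill acceptances let SA hop the barrier.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
xb, eb = simulated_annealing(f, 2.0, lambda x, rng: x + rng.uniform(-0.2, 0.2))
```

The uphill-acceptance rule is what gives SA its robustness against local minima, which is exactly the benefit the abstract reports for BSS contrasts.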
Abstract:
The relationship between the magnetic dipole-dipole potential energy function and its quantum analogue is presented in this work. It is assumed that the reader is familiar with the classical expression for the dipolar interaction and has a basic knowledge of the quantum mechanics of angular momentum. Apart from these two prerequisites, only elementary steps are involved.
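For reference, the classical magnetic dipole-dipole interaction energy that the quantum analogue is compared against is:

```latex
U(\mathbf{r}) = \frac{\mu_0}{4\pi r^{3}}
\left[\mathbf{m}_1\cdot\mathbf{m}_2
      - 3\,(\mathbf{m}_1\cdot\hat{\mathbf{r}})(\mathbf{m}_2\cdot\hat{\mathbf{r}})\right],
```

where m_1 and m_2 are the dipole moments and r̂ is the unit vector along the line joining them.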
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying spectacular success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, in which agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density-estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on the areas of sampling, partition-function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum-likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to the training of any probabilistic model that relies on Markov-chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update.
The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm that efficiently applies the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption has been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Unfortunately, its implementation remains computationally inefficient. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that they can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This yields greater invariance at the representation level and a better classification rate when few labelled data are available. The thesis closes with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
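The Boltzmann-machine quantities that recur throughout the thesis can be illustrated with a minimal binary restricted Boltzmann machine (a generic sketch, not the thesis code; parameter shapes are arbitrary):

```python
import numpy as np

def rbm_energy(v, h, W, b, c):
    """Joint energy of a binary RBM: E(v, h) = -v.W.h - b.v - c.h,
    with p(v, h) = exp(-E(v, h)) / Z, where Z is the partition function
    whose estimation the thesis addresses."""
    return -float(v @ W @ h + b @ v + c @ h)

def free_energy(v, W, b, c):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + (v.W)_j)); p(v) is proportional
    to exp(-F(v)). Obtained by summing exp(-E(v, h)) over all binary h."""
    return -float(b @ v + np.sum(np.logaddexp(0.0, c + v @ W)))
```

The consistency between the two is exact: exp(-F(v)) equals the sum of exp(-E(v, h)) over all 2^(dim h) hidden configurations, which is what makes tracking log-partition changes tractable per visible vector even though Z itself is intractable.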
Abstract:
Introduction: The evaluation of small-intestine submucosa vascular grafts for blood-vessel regeneration has produced variable patency (0-100%), concurrent with variability in fabrication techniques. Methods: We investigated the effects of fabrication on patency and regeneration in a 2×2 factorial experimental design combining: 1) preservation (P) or removal (R) of the stratum compactum layer of the intestine, and 2) dehydrated (D) or hydrated (H) state, across four study groups (PD, RD, PH, RH). The grafts were implanted in the carotid arteries of pigs (ID 4.5 mm, N=4, 7 d). Patency, thrombogenicity, inflammatory reaction, vascularization, fibroblast infiltration, macrophage polarization profile, and biaxial tensile strength were evaluated. Results: All PD grafts remained patent (4/4) but showed scarce vascularization and fibroblast infiltration. The RD group remained patent (4/4), showed extensive vascularization and fibroblast infiltration, and presented the highest number of the macrophage phenotype (M2) associated with regeneration. The RH group showed lower patency (3/4), extensive vascularization and fibroblast infiltration, and a dominant M2 profile. The PH group showed the lowest patency and, despite greater cellular infiltration than PD, exhibited an adverse dominant macrophage phenotype. The elasticity of the R grafts evolved in a manner similar to native carotids (particularly RD), whereas the P grafts retained their initial stiffness. Discussion: We conclude that fabrication parameters drastically affect outcomes, with the RD grafts yielding the best results.