983 results for Vector fields.


Relevance: 20.00%

Abstract:

The present Thesis looks at the problem of protein folding using Monte Carlo and Langevin simulations. Three topics in protein folding have been studied: 1) the effect of confining potential barriers, 2) the effect of a static external field, and 3) the design of amino acid sequences which fold in a short time and which have a stable native state (global minimum). Regarding the first topic, we studied the confinement of a small protein of 16 amino acids known as 1NJ0 (PDB code), which has a beta-sheet structure as its native state. The confinement of proteins occurs frequently in the cell environment. Some molecules called chaperones, present in the cytoplasm, capture unfolded proteins in their interior and prevent the formation of aggregates and misfolded proteins. This mechanism of confinement mediated by chaperones is not yet well understood. In the present work we considered two kinds of potential barriers which try to mimic the confinement induced by a chaperone molecule. The first kind of potential was a purely repulsive barrier whose only effect is to create a cavity where the protein folds up correctly. The second kind of potential was a barrier which includes both attractive and repulsive effects. We performed Wang-Landau simulations to calculate the thermodynamic properties of 1NJ0. From the free energy landscape we found that 1NJ0 has two intermediate states in the bulk (without confinement) which are clearly separated from the native and the unfolded states. For the purely repulsive barrier we found that the intermediate states get closer to each other in the free energy landscape and eventually collapse into a single intermediate state. The unfolded state becomes more compact, compared to that in the bulk, as the size of the barrier decreases. For an attractive barrier, modifications of the states (native, unfolded and intermediate) are observed depending on the degree of attraction between the protein and the walls of the barrier.
The strength of the attraction is measured by the parameter $\epsilon$: a purely repulsive barrier is obtained for $\epsilon=0$ and a purely attractive barrier for $\epsilon=1$. The states change only slightly for attraction strengths up to $\epsilon=0.4$. The disappearance of the intermediate states of 1NJ0 is already observed for $\epsilon=0.6$, and a very attractive barrier ($\epsilon \sim 1.0$) produces a completely denatured state. In the second topic of this Thesis we dealt with the interaction of a protein with an external electric field. We demonstrated by means of computer simulations, specifically by using the Wang-Landau algorithm, that the folded, unfolded, and intermediate states can be modified by means of a field. We found that an external field can induce several modifications in the thermodynamics of these states: for relatively low magnitudes of the field ($<2.06 \times 10^8$ V/m) no major changes in the states are observed. However, for magnitudes higher than $6.19 \times 10^8$ V/m one observes the appearance of a new native state which exhibits a helix-like structure, whereas the original native state is a $\beta$-sheet structure. In the new native state all the dipoles in the backbone are aligned parallel to the field. The design of amino acid sequences constitutes the third topic of the present work. We have tested the Rate of Convergence criterion proposed by D. Gridnev and M. Garcia (unpublished work) and applied it to the study of off-lattice models. The Rate of Convergence criterion is used to decide whether a given sequence will fold up correctly within a relatively short time. Before the present work, the common way to decide whether a sequence was a good or bad folder was to perform the whole dynamics until the sequence reached its native state (if it existed), or to study the curvature of the potential energy surface. Both approaches present difficulties.
In the first approach, performing the complete dynamics for hundreds of sequences is a rather challenging task because of the CPU time needed; in the second, calculating the curvature of the potential energy surface is possible only for very smooth surfaces. The Rate of Convergence criterion seems to avoid both difficulties: one does not need to perform the complete dynamics to identify the good and bad sequences, and the criterion does not depend on the kind of force field used, so it can be applied even to very rugged energy surfaces.
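The Wang-Landau scheme used throughout the first two topics can be illustrated on a toy model. The sketch below is not the thesis' protein model: it estimates the density of states g(E) of N independent two-state spins, where E is the number of 'up' spins, so the exact answer is the binomial coefficient C(N, E) and the estimate can be checked.

```python
import math
import random

def wang_landau(n_spins=10, flatness=0.8, ln_f_min=1e-6, seed=1):
    """Estimate ln g(E) for E = number of 'up' spins; exactly, g(E) = C(n_spins, E)."""
    random.seed(seed)
    spins = [0] * n_spins                  # start with all spins 'down': E = 0
    energy = 0
    ln_g = [0.0] * (n_spins + 1)           # running estimate of ln g(E)
    hist = [0] * (n_spins + 1)             # visit histogram for the flatness test
    ln_f = 1.0                             # modification factor (in log scale)
    while ln_f > ln_f_min:
        for _ in range(10_000):
            i = random.randrange(n_spins)
            new_energy = energy + (1 if spins[i] == 0 else -1)
            # Accept the flip with probability min(1, g(E_old)/g(E_new)).
            if math.log(random.random()) < ln_g[energy] - ln_g[new_energy]:
                spins[i] ^= 1
                energy = new_energy
            ln_g[energy] += ln_f           # penalize the visited level ...
            hist[energy] += 1              # ... and record the visit
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (n_spins + 1)     # histogram flat: refine f, restart histogram
            ln_f /= 2.0
    return ln_g

ln_g = wang_landau()
```

After convergence, the estimated ratio exp(ln_g[5] - ln_g[0]) should approach the exact value C(10, 5) = 252, from which thermodynamic quantities can be computed at any temperature.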

Relevance: 20.00%

Abstract:

Little is known about gaseous carbon (C) and nitrogen (N) emissions from traditional terrace agriculture in irrigated high-mountain agroecosystems of the subtropics. In an effort to fill this knowledge gap, measurements of carbon dioxide (CO_2), methane (CH_4), ammonia (NH_3) and nitrous oxide (N_2O) were taken with a mobile photoacoustic infrared multi-gas monitor on manure-filled PE-fibre storage bags and on flood-irrigated untilled and tilled fields in three mountain oases of the northern Omani Al Jabal al Akhdar mountains. During typical 9-11 day irrigation cycles in March, August and September 2006, soil volumetric moisture contents of fields dominated by fodder wheat, barley, oats and pomegranate ranged from 46% down to 23%. While manure incorporation after application effectively reduced gaseous N losses, prolonged storage of manure in heaps or in PE-fibre bags caused large losses of C and N. Given the large irrigation-related turnover of organic C, sustainable agricultural productivity of oasis agriculture in Oman seems to require the integration of livestock, which allows for several applications of manure per year at individual rates of 20 t dry matter ha^-1.

Relevance: 20.00%

Abstract:

The structural, electronic and magnetic properties of one-dimensional 3d transition-metal (TM) monoatomic chains having linear, zigzag and ladder geometries are investigated in the framework of first-principles density-functional theory. The stability of long-range magnetic order along the nanowires is determined by computing the corresponding frozen-magnon dispersion relations as a function of the 'spin-wave' vector q. First, we show that the ground-state magnetic orders of V, Mn and Fe linear chains at the equilibrium interatomic distances are non-collinear (NC) spin-density waves (SDWs) with characteristic equilibrium wave vectors q that depend on the composition and interatomic distance. The electronic and magnetic properties of these novel spin-spiral structures are discussed from a local perspective by analyzing the spin-polarized electronic densities of states, the local magnetic moments and the spin-density distributions for representative values of q. Second, we investigate the stability of NC spin arrangements in Fe zigzag chains and ladders. We find that the non-collinear SDWs are remarkably stable in the biatomic chains (square ladders), whereas ferromagnetic order (q = 0) dominates in zigzag chains (triangular ladders). The different magnetic structures are interpreted in terms of the corresponding effective exchange interactions J(ij) between the local magnetic moments μ(i) and μ(j) at atoms i and j. The effective couplings are derived by fitting a classical Heisenberg model to the ab initio magnon dispersion relations. In addition, they are analyzed in the framework of general magnetic phase diagrams having arbitrary first, second, and third nearest-neighbor (NN) interactions J(ij). The effect of external electric fields (EFs) on the stability of NC magnetic order has been quantified for representative monoatomic free-standing and deposited chains.
We find that an external EF, applied perpendicular to the chains, favors non-collinear order in V chains, whereas it stabilizes ferromagnetic (FM) order in Fe chains. Moreover, our calculations reveal a change in the magnetic order of V chains deposited on the Cu(110) surface in the presence of external EFs. In this case the NC spiral order, which was unstable in the absence of an EF, becomes the most favorable one when perpendicular fields of the order of 0.1 V/Å are applied. As a final application of the theory we study the magnetic interactions within monoatomic TM chains deposited on graphene sheets. One observes that even weak chain-substrate hybridizations can modify the magnetic order. Mn and Fe chains show incommensurate NC spin configurations. Remarkably, V chains show a transition from a spiral magnetic order in the freestanding geometry to FM order when they are deposited on a graphene sheet. Some TM-terminated zigzag graphene nanoribbons, for example V- and Fe-terminated nanoribbons, also show NC spin configurations. Finally, the magnetic anisotropy energies (MAEs) of TM chains on graphene are investigated. It is shown that Co and Fe chains exhibit significant MAEs and orbital magnetic moments with an in-plane easy magnetization axis. The remarkable changes in the magnetic properties of chains on graphene are correlated with charge transfer from the TMs to NN carbon atoms. The goals and limitations of this study, and the resulting perspectives for future investigations, are discussed.
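The mapping from frozen-magnon dispersions to effective couplings J(ij) can be illustrated with a classical Heisenberg chain. The sketch below uses made-up couplings, not values from this study: for unit local moments the frozen-magnon energy per site is E(q) = -2 Σ_n J_n cos(n q a), and a competing antiferromagnetic second-NN coupling shifts the minimum from q = 0 (FM order) to a finite q (a spin spiral).

```python
import math

def magnon_energy(q, couplings, a=1.0):
    """Frozen-magnon energy per site of a classical Heisenberg chain with
    unit moments: E(q) = -2 * sum_n J_n cos(n*q*a), J_n the n-th NN coupling."""
    return -2.0 * sum(J * math.cos((n + 1) * q * a)
                      for n, J in enumerate(couplings))

def equilibrium_q(couplings, n_grid=2001):
    """Locate the wave vector minimizing E(q) on a grid over [0, pi/a]."""
    qs = [math.pi * k / (n_grid - 1) for k in range(n_grid)]
    return min(qs, key=lambda q: magnon_energy(q, couplings))

# A dominant ferromagnetic J1 gives q = 0 (FM order); adding a competing
# antiferromagnetic J2 = -0.5 moves the minimum to cos(q*a) = -J1/(4*J2).
q_fm = equilibrium_q([1.0, 0.0])
q_spiral = equilibrium_q([1.0, -0.5])
```

With J1 = 1 and J2 = -0.5 the analytic minimum is at cos(qa) = -J1/(4 J2) = 0.5, i.e. q = π/3a, which the grid search reproduces.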

Relevance: 20.00%

Abstract:

The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by $k$-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
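For concreteness, the classical RBF pipeline described above can be sketched in a few lines of numpy. This is an illustrative toy on synthetic blob data, not the study's setup: the centers come from $k$-means as in the classical approach, but for brevity the output weights are fit by regularized least squares rather than error backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two well-separated Gaussian blobs, labels -1/+1.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

def kmeans(X, k, iters=20):
    """Plain Lloyd iterations; centers start at random data points."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_features(X, centers, gamma=0.5):
    """Gaussian RBF activation of each point for each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

centers = kmeans(X, k=8)
Phi = rbf_features(X, centers)
# Output weights by regularized least squares (a stand-in for backprop here).
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(len(centers)), Phi.T @ y)
accuracy = (np.sign(Phi @ w) == y).mean()
```

The SV machine differs precisely in replacing the unsupervised center selection with centers (support vectors) and weights chosen jointly to bound the test error.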

Relevance: 20.00%

Abstract:

The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes consisting of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for the determination of the optimal number of components.
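The interplay of the two decoupled tasks can be mimicked with a toy alternating procedure on piecewise-linear data (an illustrative sketch, not the paper's estimator): each point is assigned to the local linear model that best approximates it (the segmentation step), and each model is then refit to its induced class (the model step).

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-linear data: y = |x| plus a little noise.
x = rng.uniform(-1.0, 1.0, 200)
y = np.abs(x) + rng.normal(0.0, 0.02, 200)
A = np.column_stack([x, np.ones_like(x)])     # design matrix [x, 1]

# Two local linear models, initialized from a crude split at x = 0.
models = [np.linalg.lstsq(A[x < 0], y[x < 0], rcond=None)[0],
          np.linalg.lstsq(A[x >= 0], y[x >= 0], rcond=None)[0]]

for _ in range(10):
    # Segmentation step: assign each point to the model that fits it best.
    residuals = np.stack([np.abs(A @ m - y) for m in models])
    labels = residuals.argmin(axis=0)
    # Model step: refit each local model on its induced class.
    models = [np.linalg.lstsq(A[labels == j], y[labels == j], rcond=None)[0]
              for j in range(2)]

fit = np.where(labels == 0, A @ models[0], A @ models[1])
rmse = float(np.sqrt(np.mean((fit - y) ** 2)))
```

The recovered slopes approach -1 and +1, the two arms of |x|; the normalized discriminant functions of the paper play the role that the hard 0/1 labels play in this crude sketch.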

Relevance: 20.00%

Abstract:

We compare Naive Bayes and Support Vector Machines on the task of multiclass text classification. Using a variety of approaches to combine the underlying binary classifiers, we find that SVMs substantially outperform Naive Bayes. We present full multiclass results on two well-known text data sets, including the lowest error to date on both data sets. We develop a new indicator of binary performance to show that the SVM's lower multiclass error is a result of its improved binary performance. Furthermore, we demonstrate and explore the surprising result that one-vs-all classification performs favorably compared to other approaches even though it has no error-correcting properties.
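A minimal sketch of the one-vs-all scheme discussed above. For brevity the underlying binary classifiers here are regularized least-squares fits to ±1 targets rather than SVMs (an assumption of this sketch); the multiclass decision picks the class whose binary classifier is most confident.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy three-class problem: three Gaussian blobs in the plane.
means = np.array([[0.0, 4.0], [-4.0, -2.0], [4.0, -2.0]])
X = np.vstack([rng.normal(m, 1.0, (50, 2)) for m in means])
y = np.repeat(np.arange(3), 50)
A = np.column_stack([X, np.ones(len(X))])     # features plus a bias column

# One-vs-all: one binary classifier per class, trained on +1/-1 targets.
W = np.column_stack([
    np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]),
                    A.T @ np.where(y == c, 1.0, -1.0))
    for c in range(3)
])

pred = (A @ W).argmax(axis=1)                 # most confident binary score wins
accuracy = (pred == y).mean()
```

Note that the argmax decision has no error-correcting redundancy, which is what makes the strong empirical showing of one-vs-all reported above surprising.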

Relevance: 20.00%

Abstract:

Support Vector Machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed Support Vectors (SVs). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a quadratic programming problem that depends on a regularization parameter. In this paper we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (which are SVs lying on the margin), the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of a feature space of finite dimension m, we also show that there are at most m+1 margin vectors and observe that m+1 SVs are usually sufficient to fully determine the decision surface. For relatively small m this latter result leads to a considerable reduction in the number of SVs.

Relevance: 20.00%

Abstract:

We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines technique (Vapnik, 1982), and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which are about regression, and the different problem of pattern classification.

Relevance: 20.00%

Abstract:

We study the relation between support vector machines (SVMs) for regression (SVMR) and SVMs for classification (SVMC). We show that for a given SVMC solution there exists an SVMR solution which is equivalent for a certain choice of the parameters. In particular, our result is that for $\epsilon$ sufficiently close to one, the optimal hyperplane and threshold for the SVMC problem with regularization parameter $C_c$ are equal to $(1-\epsilon)^{-1}$ times the optimal hyperplane and threshold for SVMR with regularization parameter $C_r = (1-\epsilon)C_c$. A direct consequence of this result is that SVMC can be seen as a special case of SVMR.
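Written out in display form, the stated correspondence is:

```latex
(\mathbf{w}_c,\, b_c) \;=\; (1-\epsilon)^{-1}\,(\mathbf{w}_r,\, b_r),
\qquad
C_r \;=\; (1-\epsilon)\, C_c ,
```

where $(\mathbf{w}_c, b_c)$ denote the optimal hyperplane and threshold of the SVMC problem with parameter $C_c$, and $(\mathbf{w}_r, b_r)$ those of the SVMR problem with parameter $C_r$, for $\epsilon$ sufficiently close to one.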

Relevance: 20.00%

Abstract:

Support Vector Machines Regression (SVMR) is a regression technique which has been recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error), but by a different loss function called Vapnik's $\epsilon$-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise in which the variance and mean of the Gaussian are themselves random variables. The probability distributions of the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function, but to a much broader class of loss functions.
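For reference, Vapnik's $\epsilon$-insensitive loss is easy to state in code. The following minimal sketch contrasts it with the quadratic loss; the tube width $\epsilon = 0.1$ is an arbitrary choice for illustration.

```python
def eps_insensitive_loss(y, f, eps=0.1):
    """Vapnik's eps-insensitive loss: zero inside the tube |y - f| <= eps,
    growing linearly (hence robustly) outside it."""
    return max(0.0, abs(y - f) - eps)

def quadratic_loss(y, f):
    """The usual squared-error contribution of a single point."""
    return (y - f) ** 2

# Inside the tube the eps-insensitive loss vanishes entirely; outside it
# grows linearly, i.e. much more slowly than the quadratic loss for
# large residuals, which is the Huber-like robustness noted above.
```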

Relevance: 20.00%

Abstract:

Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples -- in particular the regression problem of approximating a multivariate function from sparse data. We present both formulations in a unified framework, namely in the context of Vapnik's theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics.

Relevance: 20.00%

Abstract:

Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. We propose a performance criterion for a local descriptor based on the tradeoff between selectivity and invariance. In this paper, we evaluate several local descriptors with respect to selectivity and invariance. The descriptors that we evaluated are Gaussian derivatives up to the third order, gray image patches, and Laplacian-based descriptors with filters at either three scales or a single scale. We compare selectivity and invariance under several affine changes such as rotation, scale, brightness, and viewpoint. Comparisons have been made keeping the dimensionality of the descriptors roughly constant. The overall results indicate good performance by the descriptor based on a set of oriented Gaussian filters. It is interesting that oriented receptive fields similar to the Gaussian derivatives, as well as receptive fields similar to the Laplacian, are found in primate visual cortex.

Relevance: 20.00%

Abstract:

In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined by Chen, Donoho and Saunders (1995) and Olshausen and Field (1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
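The Basis Pursuit De-Noising side of the equivalence minimizes $\|y - Dc\|^2/2 + \lambda\|c\|_1$ over a dictionary $D$. The sketch below solves this objective with iterative soft-thresholding (ISTA), a standard solver chosen here for brevity rather than the quadratic-programming route discussed in the paper, and recovers a sparse code on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random dictionary with (roughly) unit-norm columns and a 3-sparse code.
D = rng.normal(0.0, 1.0, (80, 40)) / np.sqrt(80)
c_true = np.zeros(40)
c_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
y = D @ c_true + rng.normal(0.0, 0.01, 80)

def ista(D, y, lam=0.02, iters=2000):
    """Iterative soft-thresholding for min_c ||y - D c||^2 / 2 + lam * ||c||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        g = c - step * (D.T @ (D @ c - y))   # gradient step on the data term
        c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return c

c_hat = ista(D, y)
support = set(int(i) for i in np.nonzero(np.abs(c_hat) > 0.1)[0])
```

The l1 penalty drives most coefficients exactly to zero, which is the sparsification that the paper relates to the support vectors of the SVM solution.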

Relevance: 20.00%

Abstract:

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and the geometrical insight behind them are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints, in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging: the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems; we also establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained applying SVMs to the problem of detecting frontal human faces in real images.
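For reference, the quadratic program in question is the standard SVM dual with kernel $K$, labels $y_i \in \{\pm 1\}$ and box constant $C$:

```latex
\max_{\alpha \in \mathbb{R}^{\ell}} \;
\sum_{i=1}^{\ell} \alpha_i
\;-\; \frac{1}{2} \sum_{i,j=1}^{\ell} \alpha_i \alpha_j \, y_i y_j \, K(\mathbf{x}_i, \mathbf{x}_j)
\quad \text{subject to} \quad
0 \le \alpha_i \le C, \qquad \sum_{i=1}^{\ell} \alpha_i y_i = 0 .
```

The $\ell \times \ell$ kernel matrix is dense, which is why memory grows quadratically in the number of data points $\ell$, and why a decomposition into sub-problems over small working sets of variables becomes necessary for large training sets.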

Relevance: 20.00%

Abstract:

When training Support Vector Machines (SVMs) over non-separable data sets, one sets the threshold $b$ using any dual cost coefficient that is strictly between the bounds of $0$ and $C$. We show that there exist SVM training problems with dual optimal solutions having all coefficients at bounds, but that all such problems are degenerate in the sense that the "optimal separating hyperplane" is given by ${\bf w} = {\bf 0}$, and the resulting (degenerate) SVM will classify all future points identically (as the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that an SVM training problem can always be made degenerate by the addition of a single data point belonging to a certain unbounded polyhedron, which we characterize in terms of its extreme points and rays.