108 results for variational Bayes, Voronoi tessellations
Abstract:
During the motion of one-dimensional flexible objects such as ropes, chains, etc., the assumption of constant length is realistic. Moreover, their motion appears to naturally minimize some abstract distance measure, wherein a disturbance at one end gradually dies down along the curve defining the object. This paper presents purely kinematic strategies for deriving length-preserving transformations of flexible objects that minimize an appropriate measure of ‘motion’. The strategies involve sequential and overall optimization of the motion derived using variational calculus. Numerical simulations are performed for the motion of a planar curve, and the results show stable converging behavior for single-step infinitesimal and finite perturbations as well as multi-step perturbations. Additionally, our generalized approach provides different intuitive motions for various problem-specific measures of motion, one of which is shown to converge to the conventional tractrix-based solution. Simulation results for arbitrary shapes and excitations are also included.
Abstract:
This work presents a finite element-based strategy for exterior acoustical problems based on an assumed pressure form that favours outgoing waves. The resulting governing equation, weak formulation, and finite element formulation are developed for both coupled and uncoupled problems. The developed elements are very similar to conventional elements in that they are based on the standard Galerkin variational formulation and use standard Lagrange interpolation functions and standard Gaussian quadrature. In addition, and in contrast to wave envelope formulations and their extensions, the developed elements can be used in the immediate vicinity of the radiator/scatterer. The method is similar to the perfectly matched layer (PML) method in the sense that each layer of elements added around the radiator absorbs acoustical waves, so that no boundary condition needs to be applied at the outermost boundary where the domain is truncated. By comparing against strategies such as the PML and wave-envelope methods, we show that the relative accuracy, in both the near-field and far-field results, is considerably higher.
Abstract:
In the document classification community, support vector machines and the naïve Bayes classifier are known for their simplicity yet excellent performance. The feature subsets used by these two approaches usually complement each other; however, little has been done to combine them. The essence of this paper is a linear classifier, very similar to these two. We propose a novel way of combining the two approaches that synthesizes the best of each into a hybrid model. We evaluate the proposed approach on the 20ng dataset and compare it with its counterparts. Our results strongly corroborate the effectiveness of the approach.
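The abstract's observation that naïve Bayes is "a linear classifier, very similar to" a linear SVM can be made concrete: a multinomial naïve Bayes model reduces to a weight vector and a bias applied to feature counts. The sketch below illustrates this reduction only; it is not the paper's hybrid model, and the function names, toy data, and smoothing value are illustrative assumptions.

```python
import math
from collections import Counter

def train_nb(docs, labels, alpha=1.0):
    # Multinomial naive Bayes as a linear classifier:
    # score(x) = b + sum_f w[f] * count(f, x); predict positive iff score > 0.
    pos, neg = Counter(), Counter()
    for doc, y in zip(docs, labels):
        (pos if y == 1 else neg).update(doc)
    vocab = set(pos) | set(neg)
    tp = sum(pos.values()) + alpha * len(vocab)  # smoothed totals
    tn = sum(neg.values()) + alpha * len(vocab)
    # Per-feature log-likelihood ratios are the linear weights.
    w = {f: math.log((pos[f] + alpha) / tp) - math.log((neg[f] + alpha) / tn)
         for f in vocab}
    n_pos = sum(labels)
    b = math.log(n_pos / (len(labels) - n_pos))  # log prior odds
    return w, b

def predict(w, b, doc):
    return 1 if b + sum(w.get(f, 0.0) for f in doc) > 0 else 0
```

Because the decision rule is a dot product plus a bias, the same weight vector could in principle be refined by any linear large-margin learner, which is one natural route to hybrids of the kind the paper studies.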
Abstract:
This paper deals with the evaluation of the component-laminate load-carrying capacity, i.e., with calculating the loads that cause the failure of the individual layers and of the component-laminate as a whole in a four-bar mechanism. The component-laminate load-carrying capacity is evaluated using the Tsai-Wu-Hahn failure criterion for various lay-ups. The reserve factor of each ply in the component-laminate is calculated using the maximum resultant force and the maximum resultant moment occurring at different time steps at the joints of the mechanism. Here, all component bars of the mechanism are made of fiber-reinforced laminates and have thin rectangular cross-sections. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect. They are linked to each other by means of revolute joints. We restrict ourselves to linear materials with small strains within each elastic body (strip-like beam). Each component of the mechanism is modeled as a beam based on geometrically non-linear 3-D elasticity theory. The component problems are thus split into 2-D analyses of reference beam cross-sections and non-linear 1-D analyses along the three beam reference curves. For the thin rectangular cross-sections considered here, the 2-D cross-sectional nonlinearity is also overwhelming. This can be perceived from the fact that such sections constitute a limiting case between thin-walled open and closed sections, thus inviting the non-linear phenomena observed in both. The strong elastic couplings of anisotropic composite laminates complicate the model further. However, a powerful mathematical tool called the Variational Asymptotic Method (VAM) not only enables such a dimensional reduction, but also provides asymptotically correct analytical solutions to the non-linear cross-sectional analysis.
Such closed-form solutions are used here in conjunction with numerical techniques for the rest of the problem to predict more quickly and accurately than would otherwise be possible. Local 3-D stress, strain and displacement fields for representative sections in the component-bars are recovered, based on the stress resultants from the 1-D global beam analysis. A numerical example is presented which illustrates the failure of each component-laminate and the mechanism as a whole.
Abstract:
There are many popular models available for the classification of documents, such as the naïve Bayes classifier, k-nearest neighbors, and support vector machines. In all these cases, the representation is based on the “Bag of words” model. This model doesn't capture the actual semantic meaning of a word in a particular document. Semantics are better captured by the proximity of words and their co-occurrence in the document. We propose a new “Bag of Phrases” model to capture this discriminative power of phrases for text classification. We present a novel algorithm to extract phrases from the corpus using the well-known topic model Latent Dirichlet Allocation (LDA), and to integrate them into the vector space model for classification. Experiments show better performance of classifiers with the new Bag of Phrases model than with related representation models.
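To make the general idea of phrase extraction concrete, here is a minimal stand-in using pointwise mutual information (PMI) collocation scoring over adjacent word pairs. This is deliberately a simpler technique than the paper's LDA-based extraction; the function name `pmi_bigrams` and the `min_count` threshold are illustrative assumptions.

```python
import math
from collections import Counter

def pmi_bigrams(docs, min_count=2):
    # Score candidate phrases (adjacent word pairs) by PMI:
    # PMI(a, b) = log( p(a, b) / (p(a) * p(b)) ).
    uni, bi = Counter(), Counter()
    for doc in docs:
        uni.update(doc)
        bi.update(zip(doc, doc[1:]))
    n_uni, n_bi = sum(uni.values()), sum(bi.values())
    scores = {}
    for (a, b), c in bi.items():
        if c < min_count:
            continue  # drop rare pairs, which inflate PMI
        scores[(a, b)] = math.log(
            (c / n_bi) / ((uni[a] / n_uni) * (uni[b] / n_uni)))
    # Highest-scoring pairs are the strongest phrase candidates.
    return sorted(scores, key=scores.get, reverse=True)
```

A topic-model-based extractor would replace the PMI score with statistics derived from per-topic word distributions, but the surrounding pipeline (score candidates, then feed selected phrases into the vector space model) is the same shape.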
Abstract:
Motivated by experiments on Josephson junction arrays in a magnetic field and ultracold interacting atoms in an optical lattice in the presence of a "synthetic" orbital magnetic field, we study the "fully frustrated" Bose-Hubbard model and quantum XY model with half a flux quantum per lattice plaquette. Using Monte Carlo simulations and the density matrix renormalization group method, we show that these kinetically frustrated boson models admit three phases at integer filling: a weakly interacting chiral superfluid phase with staggered loop currents which spontaneously break time-reversal symmetry, a conventional Mott insulator at strong coupling, and a remarkable "chiral Mott insulator" (CMI) with staggered loop currents sandwiched between them at intermediate correlation. We discuss how the CMI state may be viewed as an exciton condensate or a vortex supersolid, study a Jastrow variational wave function which captures its correlations, present results for the boson momentum distribution across the phase diagram, and consider various experimental implications of our phase diagram. Finally, we consider generalizations to a staggered flux Bose-Hubbard model and a two-dimensional (2D) version of the CMI in weakly coupled ladders.
Abstract:
For one-dimensional flexible objects such as ropes, chains, and hair, the assumption of constant length is realistic for large-scale 3D motion. Moreover, when the motion or disturbance at one end gradually dies down along the curve defining the one-dimensional flexible object, the motion appears "natural". This paper presents a purely geometric and kinematic approach for deriving more natural and length-preserving transformations of planar and spatial curves. Techniques from variational calculus are used to determine analytical conditions, and it is shown that the velocity at any point on the curve must be along the tangent at that point to preserve the length and to yield the feature of diminishing motion. It is shown that for the special case of a straight line, the analytical conditions lead to the classical tractrix curve solution. Since analytical solutions exist for a tractrix curve, the motion of a piecewise linear curve can be solved in closed form and thus can be applied for the resolution of redundancy in hyper-redundant robots. Simulation results for several planar and spatial curves and various input motions of one end are used to illustrate the features of motion damping and eventual alignment with the perturbation vector.
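The defining property of the classical tractrix mentioned above (the tangent segment from any point on the curve to its asymptote has constant length) can be checked numerically. A minimal sketch, using the standard textbook parametrization rather than anything specific to the paper:

```python
import numpy as np

# Tractrix with tangent-segment length a (illustrative value).
a = 1.0
t = np.linspace(0.5, 3.0, 50)   # avoid t = 0, where the tangent degenerates
x = a * (t - np.tanh(t))
y = a / np.cosh(t)

# Tangent direction along the curve.
dxdt = a * np.tanh(t) ** 2
dydt = -a * np.tanh(t) / np.cosh(t)

# Intersection of each tangent line with the asymptote (the x-axis).
x0 = x - y * dxdt / dydt

# Length of the tangent segment from the curve to the asymptote;
# for a tractrix this is the constant a at every point.
seg = np.hypot(x0 - x, y)
```

This constant-length tangent segment is exactly what makes the tractrix useful for length-preserving motion of piecewise linear curves: each link can follow its leading end while keeping its length fixed.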
Abstract:
A molecular dynamics (MD) investigation of LiCl in water, methanol, and ethylene glycol (EG) at 298 K is reported. Several structural and dynamical properties of the ions as well as the solvent, such as self-diffusivity, radial distribution functions, void and neck distributions, velocity autocorrelation functions, and mean residence times of solvent in the first solvation shell, have been computed. The results show that the reciprocal relationship between the self-diffusivity of the ions and the viscosity is valid in almost all solvents, with the exception of water. From an analysis of radial distribution functions and coordination numbers, the nature of hydrogen bonding within the solvent and its influence on the void and neck distribution becomes evident. It is seen that solvent-solvent interactions are important in EG, while solute-solvent interactions dominate in water and methanol. From Voronoi tessellation, it is seen that the voids and necks within methanol are larger than those within water or EG. On the basis of the void and neck distributions obtained from MD simulations and literature experimental data on limiting ion conductivity for various ions of different sizes, we show that there is a relation between the void and neck radius on the one hand and the dependence of conductivity on the ionic radius on the other. It is shown that the presence of large-diameter voids and necks in methanol is responsible for the maximum in the limiting ion conductivity (lambda(0)) of TMA(+), while in water and EG the maximum is seen for Rb+. In the case of monovalent anions, the maximum in lambda(0) as a function of ionic radius is seen for Br- in water and EG but for the larger ClO4- ion in methanol. The relation between the void and neck distribution and the variation in lambda(0) with ionic radius arises via the levitation effect, which is discussed. These studies show the importance of the solvent structure and the associated void structure.
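The reciprocal diffusivity-viscosity relationship referred to above is commonly written in Stokes-Einstein form, D = k_B T / (6 * pi * eta * r). A minimal numerical sketch, using assumed textbook-style values for water at 298 K rather than the paper's simulation data:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # temperature, K
eta_water = 8.9e-4   # viscosity of water near 298 K, Pa*s (approximate)
r_ion = 1.2e-10      # assumed ionic radius, m (illustrative)

# Stokes-Einstein estimate of the self-diffusivity, m^2/s.
D = k_B * T / (6 * math.pi * eta_water * r_ion)
```

The estimate lands in the familiar 1e-9 m^2/s range for small ions in water; deviations from this reciprocal D-eta scaling, as the abstract notes for water, are precisely what motivates the void/neck (levitation-effect) analysis.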
Abstract:
In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components (genetic circuits, biochemical cascades, and ion channels, among others) enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode, with almost 20-60% of the total energy allocated for the brain used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link the information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity-minimisation lemma.
Abstract:
Residue depth accurately measures burial and parameterizes local protein environment. Depth is the distance of any atom/residue to the closest bulk water. We consider the non-bulk waters to occupy cavities, whose volumes are determined using a Voronoi procedure. Our estimation of cavity sizes is statistically superior to estimates made by CASTp and VOIDOO, and on par with McVol over a data set of 40 cavities. Our calculated cavity volumes correlated best with the experimentally determined destabilization of 34 mutants from five proteins. Some of the cavities identified are capable of binding small molecule ligands. In this study, we have enhanced our depth-based predictions of binding sites by including evolutionary information. We have demonstrated that on a database (LigASite) of ~200 proteins, we perform on par with ConCavity and better than MetaPocket 2.0. Our predictions, while less sensitive, are more specific and precise. Finally, we use depth (and other features) to predict pK(a)s of GLU, ASP, LYS and HIS residues. Our results produce an average error of just <1 pH unit over 60 predictions. Our simple empirical method is statistically on par with two and superior to three other methods while inferior to only one. The DEPTH server (http://mspc.bii.a-star.edu.sg/depth/) is an ideal tool for rapid yet accurate structural analyses of protein structures.
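Cavity volumes of the kind described above are commonly derived from a Voronoi tessellation of atomic (or water) positions. A minimal SciPy sketch, computing the volumes of the bounded Voronoi cells of a synthetic point cloud (random points standing in for water positions; this is not the paper's procedure, only the underlying geometric primitive):

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
pts = rng.random((200, 3))   # synthetic positions in the unit cube

vor = Voronoi(pts)
vols = []
for reg_idx in vor.point_region:
    region = vor.regions[reg_idx]
    if len(region) == 0 or -1 in region:
        continue  # unbounded cell at the edge of the point cloud; skip
    # Volume of a bounded Voronoi cell = volume of the convex hull
    # of its vertices.
    vols.append(ConvexHull(vor.vertices[region]).volume)
```

In a cavity analysis, the per-cell volumes of the non-bulk waters would then be aggregated into a cavity-size estimate.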
Abstract:
Scatter/Gather systems are increasingly becoming useful in browsing document corpora. The usability of present-day systems is restricted to monolingual corpora, and their methods for clustering and labeling do not easily extend to the multilingual setting, especially in the absence of dictionaries/machine translation. In this paper, we study the cluster labeling problem for multilingual corpora in the absence of machine translation, but using comparable corpora. Using a variational approach, we show that multilingual topic models can effectively handle the cluster labeling problem, which in turn allows us to design a novel Scatter/Gather system, ShoBha. Experimental results on three datasets, namely the Canadian Hansards corpus, the entire overlapping Wikipedia of English, Hindi and Bengali articles, and a trilingual news corpus containing 41,000 articles, confirm the utility of the proposed system.
Abstract:
The maximum entropy approach to classification is very well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for large-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large datasets, we employ a conditional independence assumption (naive Bayes) and perform feature selection simultaneously, by enforcing 'maximum discrimination' between the estimated class conditional densities. For two-class problems, the proposed method uses the Jeffreys (J) divergence to discriminate the class conditional densities. To extend our method to the multi-class case, we propose a completely new approach by considering a multi-distribution divergence: we replace the Jeffreys divergence by the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. In order to reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)), based on the AM-GM inequality. We show that the resulting divergence is a natural generalization of the Jeffreys divergence to the multiple-distribution case. As for theoretical justification, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. The performance of the proposed algorithms is demonstrated on large-dimensional text and gene expression datasets, and a comparative study shows that our methods scale very well with large-dimensional datasets.
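For reference, the Jeffreys divergence and the standard multi-distribution Jensen-Shannon divergence mentioned above can be written down in a few lines for discrete distributions. This is a minimal sketch with illustrative function names; the paper's JS(GM) variant (geometric rather than arithmetic mean) is not reproduced here.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence KL(p || q) for discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jeffreys(p, q):
    # Jeffreys (J) divergence: the symmetrized KL, J = KL(p||q) + KL(q||p).
    return kl(p, q) + kl(q, p)

def jensen_shannon(dists, weights=None):
    # Multi-distribution JS divergence: average KL of each distribution
    # to the (weighted arithmetic) mixture m.
    k = len(dists)
    w = weights or [1.0 / k] * k
    m = [sum(wi * d[j] for wi, d in zip(w, dists))
         for j in range(len(dists[0]))]
    return sum(wi * kl(d, m) for wi, d in zip(w, dists))
```

With two distributions and equal weights, `jensen_shannon` reduces to the usual binary JS divergence, which is the bridge the paper exploits between the two-class (Jeffreys) and multi-class settings.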
Abstract:
We propose a model to realize a fermionic superfluid state in an optical lattice circumventing the cooling problem. Our proposal exploits the idea of tuning the interaction in a characteristically low-entropy state, a band insulator in an optical bilayer system, to obtain a superfluid. By performing a detailed analysis of the model including fluctuations and augmented by a variational quantum Monte Carlo calculation of the ground state, we show that the superfluid state obtained has a high transition temperature of the order of the hopping energy. Our system is designed to suppress other competing orders such as a charge density wave. We suggest a laboratory realization of this model via an orthogonally shaken optical lattice bilayer.
Abstract:
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naive Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (approximately 85%) and specific (approximately 95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. Proteins 2014; 82:1219-1234. (c) 2013 Wiley Periodicals, Inc.
Abstract:
The formulation of higher-order structural models and their discretization using the finite element method is difficult owing to their complexity, especially in the presence of non-linearities. In this work, a new algorithm for automating the formulation and assembly of hyperelastic higher-order structural finite elements is developed. A hierarchic series of kinematic models is proposed for modeling structures with special geometries, and the algorithm is formulated to automate the study of this class of higher-order structural models. The algorithm developed in this work sidesteps the need for an explicit derivation of the governing equations for the individual kinematic modes. Using a novel procedure involving a nodal degree-of-freedom based automatic assembly algorithm, automatic differentiation and higher dimensional quadrature, the relevant finite element matrices are directly computed from the variational statement of elasticity and the higher-order kinematic model. Another significant feature of the proposed algorithm is that natural boundary conditions are implicitly handled for arbitrary higher-order kinematic models. The validity of the algorithm is illustrated with examples involving linear elasticity and hyperelasticity. (C) 2013 Elsevier Inc. All rights reserved.