980 results for gaussian-basis sets
Abstract:
Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper a set of such descriptors is proposed, whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon and thereby achieve a better approximation capability for discontinuous signals such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise at different SNR values. Results are compared to those obtained using Zernike moments, showing that the proposed descriptors match their performance in image retrieval tasks in noisy environments while demanding much less computational power at every stage of the query chain.
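The noise-corruption step described in this abstract can be sketched as follows. This is a minimal illustration assuming SNR is defined in dB relative to mean signal power; the paper's exact protocol may differ, and the function name and image are placeholders:

```python
import numpy as np

def add_gaussian_noise(image, snr_db, rng=None):
    """Corrupt an image with additive Gaussian noise at a target SNR (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(image.astype(float) ** 2)
    # SNR(dB) = 10 * log10(signal_power / noise_power)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=image.shape)
    return image + noise

# Example: corrupt a synthetic 64x64 image at 10 dB SNR.
img = np.ones((64, 64))
noisy = add_gaussian_noise(img, snr_db=10.0, rng=np.random.default_rng(0))
```

The corrupted image can then be fed to any descriptor pipeline (the proposed moments or Zernike moments) to compare retrieval performance across SNR levels.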
Abstract:
Staphylococcus aureus produces a virulence factor, protein A (SpA), that contains five homologous Ig-binding domains. The interactions of SpA with the Fab region of membrane-anchored Igs can stimulate a large fraction of B cells, contributing to lymphocyte clonal selection. To understand the molecular basis for this activity, we have solved the crystal structure of the complex between domain D of SpA and the Fab fragment of a human IgM antibody to 2.7-Å resolution. In the complex, helices II and III of domain D interact with the variable region of the Fab heavy chain (VH) through framework residues, without the involvement of the hypervariable regions implicated in antigen recognition. The contact residues are highly conserved in human VH3 antibodies but not in other families. The contact residues from domain D also are conserved among all SpA Ig-binding domains, suggesting that each could bind in a similar manner. Features of this interaction parallel those reported for staphylococcal enterotoxins that are superantigens for many T cells. The structural homology between Ig VH regions and the T-cell receptor Vβ regions facilitates their comparison, and both types of interactions involve lymphocyte receptor surfaces remote from the antigen binding site. However, T-cell superantigens reportedly interact through hydrogen bonds with T-cell receptor Vβ backbone atoms in a primary sequence-independent manner, whereas SpA relies on sequence-restricted conformational binding with residue side chains, suggesting that this common bacterial pathogen has adopted distinct molecular recognition strategies to affect large sets of B and T lymphocytes.
Abstract:
The thermodynamic consistency of almost 90 VLE data series, including isothermal and isobaric conditions for systems of both total and partial miscibility in the liquid phase, has been examined by means of the area and point-to-point tests. In addition, the Gibbs energy of mixing function calculated from these experimental data has been inspected, with some rather surprising results: certain data sets that exhibit high dispersion, or that lead to Gibbs energy of mixing curves inconsistent with the total or partial miscibility of the liquid phase, nevertheless pass the tests. Several possible inconsistencies in the tests themselves or in their application are discussed. Related to this is a very interesting and ambitious initiative that arose within the NIST organization: the development of an algorithm to assess the quality of experimental VLE data. The present paper questions the applicability of two of the five tests that are combined in the algorithm. It further shows that the deviation of the experimental VLE data from the correlation obtained by a given model, the basis of some point-to-point tests, should not be used to evaluate the quality of these data.
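The area test mentioned above can be sketched numerically. For consistent isothermal binary VLE data, the Redlich–Kister area test requires the integral of ln(γ1/γ2) over x1 ∈ [0, 1] to vanish. This sketch checks that on synthetic, thermodynamically consistent activity coefficients from a one-parameter Margules model; the tolerance, grid, and parameter A are illustrative, not taken from the paper:

```python
import numpy as np

def _trap(f, x):
    """Trapezoidal integration of samples f over grid x."""
    return 0.5 * np.sum((f[1:] + f[:-1]) * (x[1:] - x[:-1]))

def area_test(x1, gamma1, gamma2, tol=0.02):
    """Redlich-Kister area test: for consistent isothermal binary VLE data,
    the signed area under ln(gamma1/gamma2) vs x1 should vanish.
    Returns (passes, normalized deviation |area| / total_abs_area)."""
    f = np.log(gamma1 / gamma2)
    dev = abs(_trap(f, x1)) / _trap(np.abs(f), x1)
    return dev <= tol, dev

# Synthetic consistent data: one-parameter Margules model,
# ln g1 = A*(1-x1)^2, ln g2 = A*x1^2 (A is an illustrative value).
A = 0.8
x1 = np.linspace(1e-3, 1 - 1e-3, 101)
g1 = np.exp(A * (1 - x1) ** 2)
g2 = np.exp(A * x1 ** 2)
ok, dev = area_test(x1, g1, g2)
```

Here ln(γ1/γ2) = A(1 − 2·x1), whose positive and negative lobes cancel exactly, so the normalized deviation is essentially zero and the test passes.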
Abstract:
We introduce a Gaussian quantum operator representation, using the most general possible multimode Gaussian operator basis. The representation unifies and substantially extends existing phase-space representations of density matrices for Bose systems and also includes generalized squeezed-state and thermal bases. It enables first-principles dynamical or equilibrium calculations in quantum many-body systems, with quantum uncertainties appearing as dynamical objects. Any quadratic Liouville equation for the density operator results in a purely deterministic time evolution. Any cubic or quartic master equation can be treated using stochastic methods.
Abstract:
Aims This paper presents the recommendations, developed from a 3-year consultation process, for a program of research to underpin the development of diagnostic concepts and criteria in the Substance Use Disorders section of the Diagnostic and Statistical Manual of Mental Disorders (DSM) and potentially the relevant section of the next revision of the International Classification of Diseases (ICD). Methods A preliminary list of research topics was developed at the DSM-V Launch Conference in 2004. This led to the presentation of articles on these topics at a specific Substance Use Disorders Conference in February 2005, at the end of which a preliminary list of research questions was developed. This was further refined through an iterative process involving conference participants over the following year. Results Research questions have been placed into four categories: (1) questions that could be addressed immediately through secondary analyses of existing data sets; (2) items likely to require position papers to propose criteria or more focused questions with a view to subsequent analyses of existing data sets; (3) issues that could be proposed for literature reviews, but with a lower probability that these might progress to a data analytic phase; and (4) suggestions or comments that might not require immediate action, but that could be considered by the DSM-V and ICD 11 revision committees as part of their deliberations. Conclusions A broadly based research agenda for the development of diagnostic concepts and criteria for substance use disorders is presented.
Abstract:
The random switching of measurement bases is commonly assumed to be a necessary step of quantum key distribution protocols. In this paper we present a no-switching protocol and show that switching is not required for coherent-state continuous-variable quantum key distribution. Further, this protocol achieves higher information rates and a simpler experimental setup compared to previous protocols that rely on switching. We propose an optimal eavesdropping attack against this protocol, assuming individual Gaussian attacks. Finally, we investigate and compare the no-switching protocol applied to the original Bennett-Brassard 1984 scheme.
Abstract:
We introduce a positive phase-space representation for fermions, using the most general possible multimode Gaussian operator basis. The representation generalizes previous bosonic quantum phase-space methods to Fermi systems. We derive equivalences between quantum and stochastic moments, as well as operator correspondences that map quantum operator evolution onto stochastic processes in phase space. The representation thus enables first-principles quantum dynamical or equilibrium calculations in many-body Fermi systems. Potential applications are to strongly interacting and correlated Fermi gases, including coherent behavior in open systems and nanostructures described by master equations. Examples of an ideal gas and the Hubbard model are given, as well as a generic open system, in order to illustrate these ideas.
Abstract:
We explore the dependence of performance measures, such as the generalization error and generalization consistency, on the structure and the parameterization of the prior on `rules', instanced here by the noisy linear perceptron. Using a statistical mechanics framework, we show how one may assign values to the parameters of a model for a `rule' on the basis of data instancing the rule. Information about the data, such as input distribution, noise distribution and other `rule' characteristics, may be embedded in the form of general Gaussian priors to improve network performance. We examine explicitly two types of general Gaussian priors which are useful in some simple cases. We calculate the optimal values for the parameters of these priors and show their effect in modifying the most probable (MAP) values for the rules.
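In the simplest linear setting, a zero-mean Gaussian prior on the weights makes the MAP estimate coincide with ridge regression. A minimal sketch under that assumption (the prior and noise variances below are illustrative, not the priors analysed in the paper):

```python
import numpy as np

def map_weights(X, y, noise_var, prior_var):
    """MAP estimate for a linear 'rule' y = X w + noise under a zero-mean
    isotropic Gaussian prior w ~ N(0, prior_var * I). Equivalent to ridge
    regression with ridge coefficient noise_var / prior_var."""
    d = X.shape[1]
    ridge = noise_var / prior_var
    return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ y)

# Recover a known rule from noisy examples.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ w_true + rng.normal(scale=0.1, size=200)
w_map = map_weights(X, y, noise_var=0.01, prior_var=10.0)
```

The ratio noise_var / prior_var controls how strongly the prior pulls the estimate toward zero; tuning such prior parameters to the data is the kind of question the statistical mechanics analysis above addresses.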
Abstract:
We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world datasets indicate the efficiency of the approach.
Abstract:
We derive a mean field algorithm for binary classification with Gaussian processes which is based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error which is computed at no extra computational cost. We show that, from the TAP approach, it is possible to derive both a simpler 'naive' mean field theory and support vector machines (SVMs) as limiting cases. For both mean field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show (1) that one may obtain state-of-the-art performance by using the leave-one-out estimator for model selection and (2) that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The latter result is taken as strong support for the internal consistency of the mean field approach.
Abstract:
We study the dynamics of on-line learning in multilayer neural networks where training examples are sampled with repetition and where the number of examples scales with the number of network weights. The analysis is carried out using the dynamical replica method aimed at obtaining a closed set of coupled equations for a set of macroscopic variables from which both training and generalization errors can be calculated. We focus on scenarios whereby training examples are corrupted by additive Gaussian output noise and regularizers are introduced to improve the network performance. The dependence of the dynamics on the noise level, with and without regularizers, is examined, as well as that of the asymptotic values obtained for both training and generalization errors. We also demonstrate the ability of the method to approximate the learning dynamics in structurally unrealizable scenarios. The theoretical results show good agreement with those obtained by computer simulations.
Abstract:
We develop an approach for sparse representations of Gaussian Process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the GP model. By using an appealing parametrisation and projection techniques that use the RKHS norm, recursions for the effective parameters and a sparse Gaussian approximation of the posterior process are obtained. This allows both for a propagation of predictions as well as of Bayesian error measures. The significance and robustness of our approach is demonstrated on a variety of experiments.
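The core idea above — that a small subset of the data can carry the GP prediction — can be illustrated with a much simpler subset-of-data approximation. This is not the authors' KL-projection onto a Basis Vector set; the kernel, lengthscale, and random subset selection below are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def sparse_gp_predict(X, y, Xs, m=20, noise=1e-2, rng=None):
    """GP posterior mean at test points Xs, using only m randomly chosen
    'basis' points instead of the full training set. Illustrates that a
    small subset can specify the prediction; the paper's method selects
    and projects onto the subset via a KL criterion instead."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(X), size=m, replace=False)
    Xb, yb = X[idx], y[idx]
    K = rbf(Xb, Xb) + noise * np.eye(m)   # regularized kernel matrix
    alpha = np.linalg.solve(K, yb)        # effective parameters
    return rbf(Xs, Xb) @ alpha

# 500 noisy samples of sin(x), predicted from only 30 basis points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=500)
Xs = np.linspace(-3, 3, 50)[:, None]
pred = sparse_gp_predict(X, y, Xs, m=30, rng=rng)
```

With exact GP regression the kernel matrix grows with all 500 points; here the cost is governed by the 30-point subset, which is the scaling benefit the sparse representation targets.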
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation is with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector, or BV, set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution.
The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on classical regression and classification tasks and on data-assimilation and simple density-estimation problems.
Abstract:
The concept of entropy rate is well defined in dynamical systems theory but cannot be applied directly to finite real-world data sets. With this in mind, Pincus developed Approximate Entropy (ApEn), which uses ideas from Eckmann and Ruelle to create a regularity measure based on entropy rate that can be used to determine the influence of chaotic behaviour in a real-world signal. However, this measure was found not to be robust, and so an improved formulation known as Sample Entropy (SampEn) was created by Richman and Moorman to address these issues. We have developed a new, related regularity measure which is not based on the theory provided by Eckmann and Ruelle and which proves to be a better-behaved measure of complexity than the previous measures whilst still retaining a low computational cost.
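The SampEn baseline that this abstract builds on can be sketched as follows. This is a straightforward implementation of Richman and Moorman's definition (template-counting conventions vary slightly between implementations); the example signals are illustrative:

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample Entropy: -ln(A/B), where B counts pairs of length-m templates
    within tolerance r (Chebyshev distance, excluding self-matches) and A
    counts the same for length m+1. r is given as a fraction of the
    signal's standard deviation, as is conventional."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count(mm):
        n = len(x) - mm + 1
        templates = np.array([x[i:i + mm] for i in range(n)])
        c = 0
        for i in range(n - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= tol)
        return c

    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A regular signal should score lower than white noise.
rng = np.random.default_rng(0)
t = np.arange(1000)
regular = np.sin(0.1 * t)
noise = rng.normal(size=1000)
```

Because SampEn excludes self-matches, it avoids the bias that made ApEn non-robust; the abstract's new measure aims to keep this good behaviour while departing from the Eckmann–Ruelle framework.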