65 results for Bayes Theorem


Relevance: 10.00%

Abstract:

The ability to predict the mechanical behavior of polymer composites is crucial for their design and manufacture. Extensive studies based on both macro- and micromechanical analyses are used to develop new insights into the behavior of composites. In this respect, finite element modeling has proved to be a particularly powerful tool. In this article, we present a Galerkin scheme in conjunction with the penalty method for elasticity analyses of different types of polymer composites. In this scheme, the application of Green's theorem to the model equation results in the appearance of interfacial flux terms along the boundary between the filler and polymer matrix. It is shown that for some types of composites these terms significantly affect the stress transfer between polymer and fillers. Thus, inclusion of these terms in the working equations of the scheme preserves the accuracy of the model predictions. The model is used to predict the most important bulk property of different types of composites. Composites filled with rigid or soft particles, and composites reinforced with short or continuous fibers are investigated. For each case, the results are compared with the available experimental results and data obtained from other models reported in the literature. Effects of assumptions made in the development of the model and the selection of the prescribed boundary conditions are discussed.
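
For orientation, the interfacial flux terms mentioned above arise in the usual way when Green's theorem (integration by parts) is applied to the equilibrium equation separately on the matrix and filler subdomains. A generic weak-form statement, not the authors' exact working equations, reads

    ∫_{Ω_k} σ(u) : ε(v) dΩ = ∫_{Ω_k} f · v dΩ + ∫_{∂Ω_k} (σ(u) n) · v dΓ,   k ∈ {matrix, filler},

and the boundary integrals taken along the shared filler-matrix interface are the interfacial flux terms in question; whether they cancel on assembly depends on the continuity conditions assumed across that interface.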

Relevance: 10.00%

Abstract:

We introduce the concept of cloning for classes of observables and classify cloning machines for qubit systems according to the number of parameters needed to describe the class under investigation. A no-cloning theorem for observables is derived and the connections between cloning of observables and joint measurements of noncommuting observables are elucidated. Relationships with cloning of states and non-demolition measurements are also analysed.

Relevance: 10.00%

Abstract:

We investigate the violation of local realism in Bell tests involving homodyne measurements performed on multimode continuous-variable states. By binning the measurement outcomes in an appropriate way, we prove that the Mermin-Klyshko inequality can be violated by an amount that grows exponentially with the number of modes. Furthermore, the maximum violation allowed by quantum mechanics can be attained for any number of modes, albeit requiring a quantum state whose generation is hardly practicable. Interestingly, this exponential increase of the violation holds true even for simpler states, such as multipartite GHZ states. The resulting benefit of using more modes is shown to be significant in practical multipartite Bell tests by analyzing the increase of the robustness to noise with the number of modes. In view of the high efficiency achievable with homodyne detection, our results thus open a possible way to feasible loophole-free Bell tests that are robust to experimental imperfections. We provide an explicit example of a three-mode state (a superposition of coherent states) which results in a significantly high violation of the Mermin-Klyshko inequality (around 10%) with homodyne measurements.
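
For reference, the Mermin-Klyshko operator for N parties is usually defined by the recursion (textbook background in one common normalization, not a result of this paper)

    B_1 = A_1,   B_n = (1/2) B_{n-1} ⊗ (A_n + A_n') + (1/2) B_{n-1}' ⊗ (A_n - A_n'),

where A_n and A_n' are dichotomic observables of party n and B' denotes B with primed and unprimed observables interchanged. Local realism requires |⟨B_N⟩| ≤ 1, whereas quantum mechanics allows values up to 2^{(N-1)/2}, which is the exponential growth with the number of parties (here, modes) referred to above.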

Relevance: 10.00%

Abstract:

Out-of-equilibrium statistical mechanics is attracting considerable interest due to the recent advances in the control and manipulations of systems at the quantum level. Recently, an interferometric scheme for the detection of the characteristic function of the work distribution following a time-dependent process has been proposed [L. Mazzola et al., Phys. Rev. Lett. 110 (2013) 230602]. There, it was demonstrated that the work statistics of a quantum system undergoing a process can be reconstructed by effectively mapping the characteristic function of work on the state of an ancillary qubit. Here, we expand that work in two important directions. We first apply the protocol to an interesting specific physical example consisting of a superconducting qubit dispersively coupled to the field of a microwave resonator, thus enlarging the class of situations for which our scheme would be key in the task highlighted above. We then account for the interaction of the system with an additional one (which might embody an environment), and generalize the protocol accordingly.
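
For context, in the standard two-projective-measurement approach (a general definition, not specific to this paper) a process driven by a Hamiltonian H(t) for 0 ≤ t ≤ τ, with evolution operator U_τ and an initial state ρ_0 diagonal in the eigenbasis of H(0), has work characteristic function

    χ(u) = Tr[ U_τ^† e^{iuH(τ)} U_τ e^{-iuH(0)} ρ_0 ],   P(W) = (1/2π) ∫ du e^{-iuW} χ(u),

and it is this χ(u) that the interferometric protocol encodes in the state of the ancillary qubit.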

Relevance: 10.00%

Abstract:

We present an algebro-geometric approach to a theorem on finite domination of chain complexes over a Laurent polynomial ring. The approach uses extension of chain complexes to sheaves on the projective line, which is governed by a K-theoretical obstruction.

Relevance: 10.00%

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales poorly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove how the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
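
A minimal numerical sketch of the kind of speed-up described is given below; it is illustrative only, covering just the noise-variance hyperparameter through an eigendecomposition of the kernel matrix rather than the paper's full set of identities, and all names and data are hypothetical.

    import numpy as np

    def precompute(K, y):
        # One-off O(N^3) eigendecomposition of the kernel matrix K = Q diag(lam) Q^T,
        # plus the O(N^2) projection of the targets.
        lam, Q = np.linalg.eigh(K)
        alpha = Q.T @ y
        return lam, alpha

    def neg_log_marginal_likelihood(sigma2, lam, alpha):
        # O(N) evaluation of the GP negative log marginal likelihood as a
        # function of the noise variance sigma2, using
        #   y^T (K + sigma2 I)^{-1} y = sum_i alpha_i^2 / (lam_i + sigma2)
        #   log|K + sigma2 I|         = sum_i log(lam_i + sigma2)
        d = lam + sigma2
        quad = np.sum(alpha**2 / d)
        logdet = np.sum(np.log(d))
        n = lam.size
        return 0.5 * (quad + logdet + n * np.log(2 * np.pi))

    # Toy usage: after the O(N^3) precompute, each evaluation in a
    # hyperparameter search costs only O(N).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1))
    K = np.exp(-0.5 * (X - X.T) ** 2)          # RBF kernel with fixed length-scale
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    lam, alpha = precompute(K, y)
    for sigma2 in (0.01, 0.1, 1.0):
        print(sigma2, neg_log_marginal_likelihood(sigma2, lam, alpha))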

Relevance: 10.00%

Abstract:

We undertake a detailed study of the sets of multiplicity in a second countable locally compact group G and their operator versions. We establish a symbolic calculus for normal completely bounded maps from the space B(L^2(G)) of bounded linear operators on L^2(G) into the von Neumann algebra VN(G) of G, and use it to show that a closed subset E ⊆ G is a set of multiplicity if and only if the set E* = {(s,t) ∈ G × G : ts^{-1} ∈ E} is a set of operator multiplicity. Analogous results are established for M_1-sets and M_0-sets. We show that the property of being a set of multiplicity is preserved under various operations, including taking direct products, and establish an Inverse Image Theorem for such sets. We characterise the sets of finite width that are also sets of operator multiplicity, and show that every compact operator supported on a set of finite width can be approximated by sums of rank one operators supported on the same set. We show that, if G satisfies a mild approximation condition, pointwise multiplication by a given measurable function ψ : G → ℂ defines a closable multiplier on the reduced C*-algebra C*_r(G) of G if and only if Schur multiplication by the function N(ψ) : G × G → ℂ, given by N(ψ)(s,t) = ψ(ts^{-1}), is a closable operator when viewed as a densely defined linear map on the space of compact operators on L^2(G). Similar results are obtained for multipliers on VN(G).

Relevance: 10.00%

Abstract:

We establish an unbounded version of Stinespring's Theorem and a lifting result for Stinespring representations of completely positive modular maps defined on the space of all compact operators. We apply these results to study positivity for Schur multipliers. We characterise positive local Schur multipliers, and provide a description of positive local Schur multipliers of Toeplitz type. We introduce local operator multipliers as a non-commutative analogue of local Schur multipliers, and characterise them extending both the characterisation of operator multipliers from [16] and that of local Schur multipliers from [27]. We provide a description of the positive local operator multipliers in terms of approximation by elements of canonical positive cones.

Relevance: 10.00%

Abstract:

Classification methods with embedded feature selection capability are very appealing for the analysis of complex processes, since they allow the analysis of root causes even when the number of input variables is high. In this work, we investigate the performance of three classification techniques within a Monte Carlo strategy with the aim of root cause analysis. We consider the naive Bayes classifier and the logistic regression model with two different implementations for controlling model complexity, namely a LASSO-like implementation with L1-norm regularization and a fully Bayesian implementation of the logistic model, the so-called relevance vector machine. Several challenges can arise when estimating such models, mainly linked to the characteristics of the data: a large number of input variables, high correlation among subsets of variables, more variables than available data points, and unbalanced datasets. Using an ecological and a semiconductor manufacturing dataset, we show the advantages and drawbacks of each method, highlighting the superior classification accuracy of the relevance vector machine with respect to the other classifiers. Moreover, we show how the combination of the proposed techniques and the Monte Carlo approach can be used to obtain more robust insights into the problem under analysis when faced with challenging modelling conditions.
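
A rough sketch of such a Monte Carlo comparison is given below, using scikit-learn's naive Bayes and L1-regularized logistic regression; the relevance vector machine is omitted because it is not part of core scikit-learn, and the function name, parameters, and selection criterion are illustrative rather than the authors' actual pipeline (binary classification assumed).

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    def monte_carlo_compare(X, y, n_runs=100, test_size=0.3, seed=0):
        # Repeated random train/test splits; track accuracy of each classifier
        # and, for the L1 model, how often each feature gets a non-zero weight.
        rng = np.random.default_rng(seed)
        accs = {"naive_bayes": [], "l1_logistic": []}
        selected = np.zeros(X.shape[1])
        for _ in range(n_runs):
            Xtr, Xte, ytr, yte = train_test_split(
                X, y, test_size=test_size,
                random_state=int(rng.integers(1_000_000)))
            nb = GaussianNB().fit(Xtr, ytr)
            accs["naive_bayes"].append(accuracy_score(yte, nb.predict(Xte)))
            l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
            l1.fit(Xtr, ytr)
            accs["l1_logistic"].append(accuracy_score(yte, l1.predict(Xte)))
            selected += (np.abs(l1.coef_).ravel() > 1e-8)
        mean_acc = {k: float(np.mean(v)) for k, v in accs.items()}
        return mean_acc, selected / n_runs   # accuracies and selection frequencies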

Relevance: 10.00%

Abstract:

We consider the non-equilibrium dynamics of a simple system consisting of interacting spin-1/2 particles subjected to a collective damping. The model is close to situations that can be engineered in hybrid electro/opto-mechanical settings. Making use of large-deviation theory, we find a Gallavotti-Cohen symmetry in the dynamics of the system as well as evidence for the coexistence of two dynamical phases with different activity levels. We show that additional damping processes smooth out this behavior. Our analytical results are backed up by Monte Carlo simulations that reveal the nature of the trajectories contributing to the different dynamical phases.
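
For orientation, the large-deviation analysis referred to here is typically phrased in terms of the scaled cumulant generating function of a time-integrated trajectory observable K_t (a generic definition, not the paper's specific expressions),

    θ(s) = lim_{t→∞} (1/t) log ⟨ e^{-s K_t} ⟩,

where a Gallavotti-Cohen symmetry is a reflection property of the form θ(s) = θ(c - s) for a model-dependent constant c, and coexistence of dynamical phases with different activity levels shows up as a first-order-like singularity in θ(s).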

Relevance: 10.00%

Abstract:

This work proposes an extended version of the well-known tree-augmented naive Bayes (TAN) classifier, where the structure learning step is performed without requiring features to be connected to the class. Based on a modification of Edmonds’ algorithm, our structure learning procedure explores a superset of the structures that are considered by TAN, yet achieves global optimality of the learning score function in a very efficient way (quadratic in the number of features, the same complexity as learning TANs). A range of experiments shows that we obtain models with better accuracy than TAN and comparable to that of the state-of-the-art averaged one-dependence estimator (AODE) classifier.
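
As a rough illustration of the standard building block involved, the sketch below computes a maximum-weight spanning arborescence over the features with networkx's implementation of Edmonds' algorithm; the paper uses a modified version of the algorithm, and the edge weights here are placeholders rather than its actual learning score.

    import networkx as nx

    # Directed graph over features; weights stand in for the structure-learning
    # score (e.g., a conditional mutual information term); placeholder values.
    scores = {
        ("x1", "x2"): 0.9, ("x2", "x1"): 0.9,
        ("x2", "x3"): 0.4, ("x3", "x2"): 0.4,
        ("x1", "x3"): 0.2, ("x3", "x1"): 0.2,
    }
    G = nx.DiGraph()
    for (u, v), w in scores.items():
        G.add_edge(u, v, weight=w)

    # Edmonds' algorithm: maximum-weight spanning arborescence (a directed tree
    # in which every feature has at most one parent).
    tree = nx.maximum_spanning_arborescence(G, attr="weight")
    print(sorted(tree.edges(data="weight")))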

Relevance: 10.00%

Abstract:

Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences from those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.

Relevance: 10.00%

Abstract:

This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. It is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure), which extends previous complexity results. Furthermore, a Fully Polynomial Time Approximation Scheme for MAP in networks with bounded treewidth and bounded number of states per variable is developed. Approximation schemes were thought to be impossible, but here it is shown otherwise under the assumptions just mentioned, which are adopted in most applications.
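
For reference, the (partial) MAP problem referred to here can be stated as follows (standard formulation): given a Bayesian network over variables partitioned into MAP variables M, evidence variables E observed at value e, and remaining variables Z, compute

    argmax_m Σ_z Π_i P(X_i | Pa(X_i)) evaluated at (M = m, Z = z, E = e),

i.e., the most probable joint configuration of the MAP variables after marginalizing out Z; when Z is empty the problem reduces to the most probable explanation (MPE) problem.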

Relevance: 10.00%

Abstract:

This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. First, it is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure). Such proofs extend previous complexity results for the problem. Inapproximability results are also derived in the case of trees if the number of states per variable is not bounded. Although the problem is shown to be hard and inapproximable even in very simple scenarios, a new exact algorithm is described that is empirically fast in networks of bounded treewidth and bounded number of states per variable. The same algorithm is used as the basis of a Fully Polynomial Time Approximation Scheme for MAP under such assumptions. Approximation schemes were generally thought to be impossible for this problem, but we show otherwise for classes of networks that are important in practice. The algorithms are extensively tested using some well-known networks as well as randomly generated cases to show their effectiveness.

Relevance: 10.00%

Abstract:

In open-shell atoms and ions, processes such as photoionization, combination (Raman) scattering, electron scattering, and recombination are often mediated by many-electron compound resonances. We show that their interference (neglected in the independent-resonance approximation) leads to a coherent contribution, which determines the energy-averaged total cross sections of electron- and photon-induced reactions obtained using the optical theorem. In contrast, the partial cross sections (e.g., electron recombination or photon Raman scattering) are dominated by the stochastic contributions. Thus, the optical theorem provides a link between the stochastic and coherent contributions of the compound resonances. Similar conclusions are valid for reactions via compound states in molecules and nuclei.
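
For reference, the optical theorem invoked here is the standard relation between the total cross section and the imaginary part of the forward elastic scattering amplitude,

    σ_tot = (4π/k) Im f(0),

with k the wave number of the incident particle; this is general background rather than a result of the paper.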