3 results for Variational approximation
at National Center for Biotechnology Information - NCBI
Abstract:
We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
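Not part of the abstract, but as a rough sketch of the kind of procedure it describes: the loop below applies stochastic approximation with simulated missing data to maximum likelihood for a right-censored normal mean. The model, variable names, and step-size schedule are illustrative assumptions; the paper addresses harder models and draws the missing data with MCMC rather than direct sampling.

```python
# Hypothetical sketch (not the paper's code): stochastic-approximation MLE with
# simulated missing data for a right-censored normal mean. At each iteration the
# latent values behind censored observations are drawn from their conditional
# distribution given the current parameter, and the parameter takes a
# Robbins-Monro step along the complete-data score.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated data: N(mu_true, sigma^2) observations, right-censored at c.
mu_true, sigma, c, n = 2.0, 1.0, 2.5, 500
y = rng.normal(mu_true, sigma, n)
censored = y > c
y_obs = np.where(censored, c, y)          # observed value, or the censoring point

mu = 0.0                                  # starting value
for k in range(1, 2001):
    # Simulate the missing data given the current parameter value.
    z = y_obs.copy()
    a = (c - mu) / sigma                  # lower truncation point in standard units
    z[censored] = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                                size=censored.sum(), random_state=rng)
    # Robbins-Monro update along the complete-data score for mu.
    score = np.sum(z - mu) / sigma**2
    gamma = 1.0 / k                       # step sizes: sum diverges, sum of squares converges
    mu += gamma * score / n

# Should settle near the maximum likelihood estimate (close to 2 for this simulation).
print(f"stochastic-approximation estimate of mu: {mu:.3f}")
```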
Abstract:
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science.
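For context only (not from the abstract): a minimal example of an approximation algorithm with the kind of provable guarantee described above is the maximal-matching heuristic for minimum vertex cover, which returns a cover at most twice the optimal size. The graph and function names below are purely illustrative.

```python
# Classic 2-approximation for minimum vertex cover (illustrative, not from the paper).
# Greedily build a maximal matching and take both endpoints of every matched edge.
# Any vertex cover must include at least one endpoint of each matched edge, so the
# returned cover has size at most 2 * OPT.
def vertex_cover_2approx(edges):
    """Return a vertex cover of size at most twice the optimum for the given edge list."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge is still uncovered: add both endpoints.
            cover.add(u)
            cover.add(v)
    return cover

# Example graph: the optimal cover is {1, 3} (size 2); the heuristic returns size <= 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]
print(vertex_cover_2approx(edges))
```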
Abstract:
We investigated how human subjects adapt to forces perturbing the motion of their arms. We found that this kind of learning is based on the capacity of the central nervous system (CNS) to predict and therefore to cancel externally applied perturbing forces. Our experimental results indicate: (i) that the ability of the CNS to compensate for the perturbing forces is restricted to those spatial locations where the perturbations have been experienced by the moving arm. The subjects are also able to compensate for forces experienced at neighboring workspace locations. However, adaptation decays smoothly and quickly with distance from the locations where disturbances had been sensed by the moving limb. (ii) Our experiments also show that the CNS builds an internal model of the external perturbing forces in intrinsic (muscle and/or joint) coordinates.