983 results for exact results


Relevance:

30.00%

Publisher:

Abstract:

This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. First, it is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure). Such proofs extend previous complexity results for the problem. Inapproximability results are also derived in the case of trees if the number of states per variable is not bounded. Although the problem is shown to be hard and inapproximable even in very simple scenarios, a new exact algorithm is described that is empirically fast in networks of bounded treewidth and bounded number of states per variable. The same algorithm is used as the basis of a Fully Polynomial Time Approximation Scheme for MAP under such assumptions. Approximation schemes were generally thought to be impossible for this problem, but we show otherwise for classes of networks that are important in practice. The algorithms are extensively tested using some well-known networks as well as randomly generated cases to show their effectiveness.
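As a concrete illustration of the query described above (not the paper's algorithm), the following minimal sketch computes a partial MAP query by brute force in a toy three-variable chain; the network, its probabilities and the variable names are all hypothetical.

```python
# Toy chain A -> H -> E with binary variables (hypothetical numbers).
p_a = {0: 0.4, 1: 0.6}                       # P(A)
p_h_a = {0: {0: 0.7, 1: 0.3},                # P(H | A)
         1: {0: 0.2, 1: 0.8}}
p_e_h = {0: {0: 0.9, 1: 0.1},                # P(E | H)
         1: {0: 0.3, 1: 0.7}}

def joint(a, h, e):
    return p_a[a] * p_h_a[a][h] * p_e_h[h][e]

# Partial MAP: maximize over the MAP variable A while summing out H,
# given evidence E = 1.  Done here by exhaustive enumeration; the mix of
# maximization and summation is what makes MAP harder than plain inference.
evidence_e = 1
score = {a: sum(joint(a, h, evidence_e) for h in (0, 1)) for a in (0, 1)}
a_map = max(score, key=score.get)
print("MAP state of A:", a_map, "unnormalized score:", score[a_map])
```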

Relevance:

30.00%

Publisher:

Abstract:

Boolean games are a framework for reasoning about the rational behavior of agents whose goals are formalized using propositional formulas. Compared to normal form games, a well-studied and related game framework, Boolean games allow for an intuitive and more compact representation of the agents' goals. So far, Boolean games have been mainly studied in the literature from the Knowledge Representation perspective, and less attention has been paid to the algorithmic issues underlying the computation of solution concepts. Although some suggestions for solving specific classes of Boolean games have been made in the literature, there is currently no work available on their practical performance. In this paper, we propose the first technique to solve general Boolean games that does not require an exponential translation to normal-form games. Our method is based on disjunctive answer set programming and computes solutions (equilibria) of arbitrary Boolean games. It can be applied to a wide variety of solution concepts, and can naturally deal with extensions of Boolean games such as constraints and costs. We present detailed experimental results in which we compare the proposed method against a number of existing methods for solving specific classes of Boolean games, as well as adaptations of methods that were initially designed for normal-form games. We found that the heuristic methods that do not require all payoff matrix entries performed well for smaller Boolean games, while our ASP-based technique is faster when the problem instances have a higher number of agents or action variables.
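For readers unfamiliar with the framework, the sketch below spells out what a pure-strategy equilibrium of a Boolean game is, using brute-force enumeration over a hypothetical two-agent game; it is not the ASP encoding proposed in the paper.

```python
from itertools import product

# Hypothetical two-agent Boolean game: agent 1 controls p, agent 2 controls q.
# Goals are propositional formulas over (p, q), encoded here as predicates.
goal = {
    1: lambda p, q: p == q,          # agent 1 wants coordination
    2: lambda p, q: not (p and q),   # agent 2 wants to avoid (p and q)
}

def satisfied(agent, p, q):
    return goal[agent](p, q)

# A profile (p, q) is a pure Nash equilibrium if no agent whose goal is unmet
# can satisfy it by unilaterally flipping its own controlled variable.
def is_equilibrium(p, q):
    for agent in (1, 2):
        if satisfied(agent, p, q):
            continue
        flipped = (not p, q) if agent == 1 else (p, not q)
        if satisfied(agent, *flipped):
            return False
    return True

equilibria = [(p, q) for p, q in product([False, True], repeat=2)
              if is_equilibrium(p, q)]
print("Pure equilibria (p, q):", equilibria)
```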

Relevance:

30.00%

Publisher:

Abstract:

Exposure to allergens is pivotal in determining sensitization and allergic symptoms in individuals. Pollen grain counts in ambient air have traditionally been assessed to estimate airborne allergen exposure. However, the exact allergen content of ambient air is unknown. We therefore monitored atmospheric concentrations of birch pollen grains and of the matched major birch pollen allergen Bet v 1 simultaneously across Europe within the EU-funded project HIALINE (Health Impacts of Airborne Allergen Information Network). Pollen counts were assessed with Hirst-type pollen traps at 10 l/min at sites in France, the United Kingdom, Germany, Italy and Finland. Allergen concentrations in ambient air were sampled at 800 l/min with a Chemvol high-volume cascade impactor equipped with stages PM > 10 μm, 10 μm > PM > 2.5 μm, and in Germany also 2.5 μm > PM > 0.12 μm. The major birch pollen allergen Bet v 1 was determined with an allergen-specific ELISA. Bet v 1 isoform patterns were analyzed by 2D SDS-PAGE blots and mass spectrometric identification. Basophil activation was tested in an FcεR1-humanized rat basophil cell line passively sensitized with serum from a birch pollen symptomatic patient. Compared to the 10 previous years, 2009 was a representative birch pollen season for all stations. About 90% of the allergen was found in the PM > 10 μm fraction at all stations. The Bet v 1 isoform pattern did not vary substantially, either during ripening of the pollen or between different geographical locations. The average European allergen release from birch pollen was 3.2 pg Bet v 1/pollen and did not vary much between the European countries. However, in all countries a >10-fold difference in daily allergen release per pollen was measured, which could be explained by long-range transport of pollen with a deviating allergen release. Basophil activation by ambient air extracts correlated better with airborne allergen than with pollen concentration. Although Bet v 1 is a mixture of different isoforms, its fingerprint is constant across Europe. Bet v 1 was also exclusively linked to pollen. Pollen from different days varied >10-fold in allergen release. Thus, allergen exposure is only inaccurately monitored by counting birch pollen grains alone. Indeed, a humanized basophil activation test correlated much better with allergen concentrations in ambient air than with pollen counts. Monitoring the allergens themselves together with pollen in ambient air might be an improvement in allergen exposure assessment.

Relevance:

30.00%

Publisher:

Abstract:

Presented at the 23rd International Conference on Real-Time Networks and Systems (RTNS 2015), Main Track, 4-6 November 2015, Lille, France.

Relevance:

30.00%

Publisher:

Abstract:

Our objective is to develop a diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values, ⟨ψ₀|Â|ψ₀⟩, of multiplicative operators, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules. The existing forward-walking pure diffusion Monte Carlo (FW-PDMC) algorithm which attempts this has a serious bias. On the other hand, the DMC algorithm with minimal stochastic reconfiguration provides unbiased estimates of the energies, but the expectation values ⟨ψ₀|Â|ψ⟩ are contaminated by ψ, a user-specified approximate wave function, when Â does not commute with the Hamiltonian. We modified the latter algorithm to obtain the exact expectation values for these operators, while at the same time eliminating the bias. To compare the efficiency of FW-PDMC and the modified DMC algorithms we calculated simple properties of the H atom, such as various functions of coordinates and polarizabilities. Using three non-exact wave functions, one of moderate quality and the others very crude, in each case the results are within statistical error of the exact values.
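For context, the pure and mixed expectation values at stake can be written as follows; these are the standard DMC definitions (with ψ the user-specified trial wave function and ψ₀ the exact ground state), together with the commonly used second-order extrapolation, not the specific estimator developed in the paper.

```latex
% Pure (exact) expectation value of a multiplicative operator \hat{A}:
\langle A \rangle_{\mathrm{pure}}
  = \frac{\langle \psi_0 | \hat{A} | \psi_0 \rangle}{\langle \psi_0 | \psi_0 \rangle},
\qquad
% Mixed estimator delivered by standard DMC, biased when [\hat{A},\hat{H}] \neq 0:
\langle A \rangle_{\mathrm{mixed}}
  = \frac{\langle \psi_0 | \hat{A} | \psi \rangle}{\langle \psi_0 | \psi \rangle}.
% Commonly used second-order extrapolation (assumes \psi close to \psi_0):
\langle A \rangle_{\mathrm{pure}} \approx
  2\,\langle A \rangle_{\mathrm{mixed}} - \langle A \rangle_{\psi},
\qquad
\langle A \rangle_{\psi}
  = \frac{\langle \psi | \hat{A} | \psi \rangle}{\langle \psi | \psi \rangle}.
```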

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
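For reference, the Gaussian benchmark mentioned above is the Gibbons-Ross-Shanken statistic; in the single-factor (market) case it takes the standard form below, quoted here in textbook notation rather than taken from this paper.

```latex
% GRS test with N assets, T periods, market excess return r_{mt}:
%   r_{it} = \alpha_i + \beta_i\, r_{mt} + \varepsilon_{it}, \quad i = 1,\dots,N
W \;=\; \frac{T-N-1}{N}\;
        \frac{\hat{\alpha}^{\prime}\,\hat{\Sigma}^{-1}\,\hat{\alpha}}
             {1+\hat{\mu}_m^{2}/\hat{\sigma}_m^{2}}
  \;\sim\; F(N,\,T-N-1)
% under i.i.d. Gaussian errors; mean-variance efficiency of the market
% portfolio corresponds to the joint hypothesis \alpha_1 = \dots = \alpha_N = 0.
```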

Relevance:

30.00%

Publisher:

Abstract:

The concept of slow vortical dynamics and its role in theoretical understanding is central to geophysical fluid dynamics. It leads, for example, to "potential vorticity thinking" (Hoskins et al. 1985). Mathematically, one imagines an invariant manifold within the phase space of solutions, called the slow manifold (Leith 1980; Lorenz 1980), to which the dynamics are constrained. Whether this slow manifold truly exists has been a major subject of inquiry over the past 20 years. It has become clear that an exact slow manifold is an exceptional case, restricted to steady or perhaps temporally periodic flows (Warn 1997). Thus the concept of a "fuzzy slow manifold" (Warn and Ménard 1986) has been suggested. The idea is that nearly slow dynamics will occur in a stochastic layer about the putative slow manifold. The natural question then is, how thick is this layer? In a recent paper, Ford et al. (2000) argue that Lighthill emission, the spontaneous emission of freely propagating acoustic waves by unsteady vortical flows, is applicable to the problem of balance, with the Mach number Ma replaced by the Froude number F, and that it is a fundamental mechanism for this fuzziness. They consider the rotating shallow-water equations and find emission of inertia-gravity waves at O(F^2). This is rather surprising at first sight, because several studies of balanced dynamics with the rotating shallow-water equations have gone beyond second order in F, and found only an exponentially small unbalanced component (Warn and Ménard 1986; Lorenz and Krishnamurthy 1987; Bokhove and Shepherd 1996; Wirosoetisno and Shepherd 2000). We have no technical objection to the analysis of Ford et al. (2000), but wish to point out that it depends crucially on R ≳ 1, where R is the Rossby number. This condition requires the ratio of the characteristic length scale of the flow L to the Rossby deformation radius L_R to go to zero in the limit F → 0. This is the low Froude number scaling of Charney (1963), which, while originally designed for the Tropics, has been argued to be also relevant to mesoscale dynamics (Riley et al. 1981). If L/L_R is fixed, however, then F → 0 implies R → 0, which is the standard quasigeostrophic scaling of Charney (1948; see, e.g., Pedlosky 1987). In this limit there is reason to expect the fuzziness of the slow manifold to be "exponentially thin," and balance to be much more accurate than is consistent with (algebraic) Lighthill emission.
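The scaling argument in the closing sentences can be made explicit with the standard shallow-water definitions of the two dimensionless numbers (U a velocity scale, g gravity, H the layer depth, f the Coriolis parameter); this is textbook bookkeeping, not notation taken from the comment itself.

```latex
F = \frac{U}{\sqrt{gH}}, \qquad
R = \frac{U}{fL}, \qquad
L_R = \frac{\sqrt{gH}}{f}
\;\;\Longrightarrow\;\;
\frac{L}{L_R} = \frac{fL}{\sqrt{gH}} = \frac{F}{R}.
% Hence F \to 0 with R \gtrsim 1 forces L/L_R \to 0 (Charney 1963 scaling),
% while F \to 0 at fixed L/L_R forces R \to 0 (quasigeostrophic scaling).
```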

Relevance:

30.00%

Publisher:

Abstract:

A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions for the Kautz basis and for generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
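To make the Wiener structure concrete, here is a minimal sketch of a second-order OBF Volterra model built on a Laguerre basis with a single real pole; the pole value, coefficients and input signal are hypothetical, and the pole-gradient computation that is the subject of the paper is not reproduced.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_outputs(u, pole, n_funcs):
    """Outputs of a discrete Laguerre orthonormal filter bank with a real pole
    in (-1, 1): first function sqrt(1-a^2)/(1 - a z^-1); each subsequent one
    multiplies by the all-pass factor (z^-1 - a)/(1 - a z^-1)."""
    a = pole
    outs = []
    x = lfilter([np.sqrt(1.0 - a * a)], [1.0, -a], u)
    outs.append(x)
    for _ in range(1, n_funcs):
        x = lfilter([-a, 1.0], [1.0, -a], x)
        outs.append(x)
    return np.column_stack(outs)                 # shape (N, n_funcs)

def obf_volterra_2nd_order(u, pole, n_funcs, c1, c2):
    """Wiener structure: linear dynamics (orthonormal filters) followed by a
    static polynomial map, here a 2nd-order Volterra expansion of the filter
    outputs.  c1 has shape (n_funcs,), c2 is a symmetric (n_funcs, n_funcs)."""
    X = laguerre_outputs(u, pole, n_funcs)
    linear = X @ c1
    quadratic = np.einsum("ni,ij,nj->n", X, c2, X)
    return linear + quadratic

# Hypothetical usage with arbitrary coefficients (illustration only):
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
c1 = rng.standard_normal(3)
c2 = rng.standard_normal((3, 3))
c2 = 0.5 * (c2 + c2.T)
y = obf_volterra_2nd_order(u, pole=0.6, n_funcs=3, c1=c1, c2=c2)
print(y[:5])
```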

Relevance:

30.00%

Publisher:

Abstract:

We study the influence of ferromagnetic and antiferromagnetic bond defects on the ground-state energy of antiferromagnetic spin chains. In the absence of translational invariance, the energy spectrum of the full Hamiltonian is obtained numerically, by an iterative modification of the power algorithm. In parallel, approximate analytical energies are obtained from a local-bond approximation, proposed here. This approximation results in significant improvement upon the mean-field approximation, at negligible extra computational effort. (C) 2008 Published by Elsevier B.V.
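As background for the numerical method mentioned above, the sketch below builds a small spin-1/2 Heisenberg chain with one defect bond and extracts its ground state with a plain shifted power iteration; the couplings are hypothetical, and this is the textbook power method, not the modified algorithm used in the paper.

```python
import numpy as np

# Spin-1/2 Heisenberg chain with one defect bond (hypothetical couplings):
# H = sum_i J_i (Sx_i Sx_{i+1} + Sy_i Sy_{i+1} + Sz_i Sz_{i+1}), open ends.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
eye = np.eye(2)

def two_site(op_a, op_b, i, n):
    """Embed op_a on site i and op_b on site i+1 of an n-site chain."""
    mats = [eye] * n
    mats[i], mats[i + 1] = op_a, op_b
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 8
J = np.ones(n - 1)
J[3] = -0.5                      # one ferromagnetic defect bond (hypothetical)
H = sum(J[i] * (two_site(sx, sx, i, n) + two_site(sy, sy, i, n)
                + two_site(sz, sz, i, n)) for i in range(n - 1))

# Plain shifted power iteration: iterating (c*I - H) makes the ground state
# the dominant eigenvector when c bounds the spectrum from above.
c = np.max(np.sum(np.abs(H), axis=1))          # crude upper bound on |eigenvalues|
v = np.random.default_rng(1).standard_normal(2 ** n) + 0j
for _ in range(2000):
    v = c * v - H @ v
    v /= np.linalg.norm(v)
e0 = np.real(v.conj() @ H @ v)
print("ground-state energy estimate:", e0)
```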

Relevance:

30.00%

Publisher:

Abstract:

In this work we study, under the Stratonovich definition, the problem of the damped oscillatory massive particle subject to a heterogeneous Poisson noise characterized by a rate of events, λ(t), and a magnitude, Φ, following an exponential distribution. We tackle the problem by performing exact time averages over the noise in a similar way to previous works analysing the problem of the Brownian particle. From this procedure we obtain the long-term equilibrium distributions of position and velocity as well as analytical asymptotic expressions for the injection and dissipation of energy terms. Considerations on the emergence of stochastic resonance in this type of system are also set forth.
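A direct way to visualize the system described above is to simulate it; the sketch below uses a naive Euler scheme for a damped oscillator kicked by a heterogeneous Poisson noise with exponentially distributed impulses. All parameter values are hypothetical, and the paper itself proceeds by exact averaging rather than simulation.

```python
import numpy as np

# Damped massive oscillator kicked by a heterogeneous Poisson noise:
# events arrive with time-dependent rate lambda(t) and each event adds an
# exponentially distributed impulse to the velocity (illustration only).
rng = np.random.default_rng(0)

m, gamma, k = 1.0, 0.5, 2.0                      # mass, damping, spring constant
phi_mean = 0.3                                   # mean impulse magnitude
rate = lambda t: 1.0 + 0.5 * np.sin(0.2 * t)     # heterogeneous event rate

dt, T = 1e-3, 200.0
steps = int(T / dt)
x, v = 0.0, 0.0
xs = np.empty(steps)
for i in range(steps):
    t = i * dt
    # deterministic damped-oscillator part (naive Euler step)
    a = (-gamma * v - k * x) / m
    v += a * dt
    x += v * dt
    # Poisson kicks: probability rate(t)*dt of one event in this step
    if rng.random() < rate(t) * dt:
        v += rng.exponential(phi_mean) / m
    xs[i] = x

print("long-time position variance:", xs[steps // 2:].var())
```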

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a new reformulation of the KKT system associated to a variational inequality as a semismooth equation. The reformulation is derived from the concept of differentiable exact penalties for nonlinear programming. The best theoretical results are presented for nonlinear complementarity problems, where simple, verifiable conditions ensure that the penalty is exact. We close the paper with some preliminary computational tests on the use of a semismooth Newton method to solve the equation derived from the new reformulation. We also compare its performance with the Newton method applied to classical reformulations based on the Fischer-Burmeister function and on the minimum function. The new reformulation combines the best features of the classical ones, being as easy to solve as the reformulation that uses the Fischer-Burmeister function while requiring as few Newton steps as the one based on the minimum function.
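For reference, the two classical complementarity reformulations compared against in the paper can be stated as follows (standard definitions, reproduced here only as background):

```latex
% Classical NCP functions:
\phi_{\mathrm{FB}}(a,b) = a + b - \sqrt{a^{2}+b^{2}}, \qquad
\phi_{\min}(a,b) = \min\{a,b\},
% both satisfy  \phi(a,b)=0 \iff a \ge 0,\ b \ge 0,\ ab = 0,
% so a complementarity problem  x \ge 0,\ F(x) \ge 0,\ x^{\top}F(x)=0
% becomes the semismooth system  \phi(x_i, F_i(x)) = 0,\ i = 1,\dots,n.
```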

Relevance:

30.00%

Publisher:

Abstract:

Let M be a compact, connected non-orientable surface without boundary and of genus g >= 3. We investigate the pure braid groups P_n(M) of M, and in particular the possible splitting of the Fadell-Neuwirth short exact sequence 1 → P_m(M \ {x_1, ..., x_n}) ↪ P_{n+m}(M) →^{p_*} P_n(M) → 1, where m, n >= 1, and p_* is the homomorphism which corresponds geometrically to forgetting the last m strings. This problem is equivalent to that of the existence of a section for the associated fibration p: F_{n+m}(M) → F_n(M) of configuration spaces, defined by p(x_1, ..., x_n, x_{n+1}, ..., x_{n+m}) = (x_1, ..., x_n). We show that p and p_* admit a section if and only if n = 1. Together with previous results, this completes the resolution of the splitting problem for surface pure braid groups. (C) 2009 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

A novel test battery consisting of self-assessments and motor tests (tapping and spiral drawing) for patients with Parkinson's disease (PD) was developed for a hand computer with touch screen in a telemedicine setting. Tests are performed four times per day in the home environment during week-long test periods. Results are processed into scores for different dimensions of the symptom state and an 'overall score' reflecting the global condition of a patient during a test period. The test battery was validated in a separate study recently submitted to Mov Disord. This test battery is currently being used in an open longitudinal trial (DAPHNE, EudraCT No. 2005-002654-21) by sixty-five patients with advanced PD at nine clinics around Sweden. On inclusion, the patients were either receiving treatment with duodenal levodopa/carbidopa infusion (Duodopa®) (n=36), or they were candidates for receiving this treatment (n=29). We now present interim results for the first twelve months. Test periods were performed at three-month intervals. During most of the periods, UPDRS ratings were performed in afternoons at the start of the week. In twenty of the patients, scores were available during individually optimized oral polypharmacy, before receiving infusion, and in at least one test period after having started infusion treatment. Usability and compliance with performing the tests have thus far been good, both for patients and for clinical staff. Correlations between test periods 2 and 3 during infusion treatment (three months apart) are stronger for the overall test score than for total UPDRS, indicating good reliability. The correlation between the overall test score and UPDRS over all test periods is adequate (r=-0.6). In an exact Wilcoxon signed rank test, where the endpoint is the change from the first to the twelve-month test period (n=25), there was no change in test results in any of the test battery dimensions for the patients already receiving infusion when included. However, in the patients entering the study before receiving infusion, there was a significant change (improvement) from baseline to the twelve-month test period in the dimensions 'off', 'dyskinesia' and 'satisfied' and in the 'overall score' (n=15). The mean improvement in overall score after infusion was 29% (p=0.015). We conclude that the test battery is able to measure a functional improvement with infusion that is sustained over at least twelve months.

Relevance:

30.00%

Publisher:

Abstract:

We consider exchange economies with a continuum of agents and differential information about finitely many states of nature. It was proved in Einy, Moreno and Shitovitz (2001) that if we allow for free disposal in the market clearing (feasibility) constraints, then an irreducible economy has a competitive (or Walrasian expectations) equilibrium, and moreover, the set of competitive equilibrium allocations coincides with the private core. However, when feasibility is defined with free disposal, competitive equilibrium allocations may not be incentive compatible and contracts may not be enforceable (see e.g. Glycopantis, Muir and Yannelis (2002)). This is the main motivation for considering equilibrium solutions with exact feasibility. We first prove that the results in Einy et al. (2001) are still valid without free disposal. Then we define an incentive compatibility property motivated by the issue of contract execution, and we prove that every Pareto optimal exact feasible allocation is incentive compatible, implying that contracts of competitive or core allocations are enforceable.

Relevance:

30.00%

Publisher:

Abstract:

In a model of a financial market with an atomless continuum of assets, we give a precise and rigorous meaning to the intuitive idea of a "well-diversified" portfolio and to a notion of "exact arbitrage". We show this notion to be necessary and sufficient for an APT pricing formula to hold, to be strictly weaker than the more conventional notion of "asymptotic arbitrage", and to have novel implications for the continuity of the cost functional as well as for various versions of APT asset pricing. We further justify the idealized measure-theoretic setting in terms of a pricing formula based on "essential" risk, one of the three components of a tri-variate decomposition of an asset's rate of return, and based on a specific index portfolio constructed from endogenously extracted factors and factor loadings. Our choice of factors is also shown to satisfy an optimality property that the first m factors always provide the best approximation. We illustrate how the concepts and results translate to markets with a large but finite number of assets, and relate to previous work.