955 results for Gibbard–Satterthwaite Theorem
First order k-th moment finite element analysis of nonlinear operator equations with stochastic data
Abstract:
We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(\alpha, u) = 0 for random input \alpha(\omega) with almost sure realizations in a neighborhood of a nominal input parameter \alpha_0. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution u(\omega) = S(\alpha(\omega)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(\omega) - S(\alpha_0). We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge with an accuracy-versus-complexity relation that equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
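As a rough sketch of the tensorization step (notation ours, not necessarily the paper's): writing u_0 = S(\alpha_0), \delta\alpha = \alpha(\omega) - \alpha_0 and \delta u = u(\omega) - u_0, linearization of J(\alpha, u) = 0 at (\alpha_0, u_0) gives

\[ D_u J(\alpha_0, u_0)\, \delta u \;\approx\; -\, D_\alpha J(\alpha_0, u_0)\, \delta\alpha, \]

and taking k-fold tensor products and expectations yields a deterministic, multilinear equation for the k-th moment \mathcal{M}^k \delta u := \mathbb{E}\big[ (\delta u)^{\otimes k} \big]:

\[ \big( D_u J(\alpha_0, u_0) \big)^{\otimes k}\, \mathcal{M}^k \delta u \;\approx\; \big( -D_\alpha J(\alpha_0, u_0) \big)^{\otimes k}\, \mathcal{M}^k \delta\alpha. \]

This is the kind of k-point correlation equation, posed on a tensor-product space, that the sparse tensor Galerkin schemes target.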
Abstract:
We obtain sharp estimates for multidimensional generalisations of Vinogradov’s mean value theorem for arbitrary translation-dilation invariant systems, achieving constraints on the number of variables approaching those conjectured to be the best possible. Several applications of our bounds are discussed.
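For orientation, the classical one-dimensional case concerns the number J_{s,k}(X) of integral solutions of the translation-dilation invariant system

\[ \sum_{i=1}^{s} \big( x_i^{\,j} - y_i^{\,j} \big) = 0 \qquad (1 \le j \le k), \qquad 1 \le x_i, y_i \le X, \]

for which the main conjecture (since established) asserts that J_{s,k}(X) \ll X^{\varepsilon}\big( X^{s} + X^{2s - k(k+1)/2} \big) for every \varepsilon > 0. The multidimensional generalisations treated here replace the monomials x^j by a translation-dilation invariant family of forms in several variables.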
Abstract:
In this paper, we obtain quantitative estimates for the asymptotic density of subsets of the integer lattice ℤ² that contain only trivial solutions to an additive equation involving binary forms. In the process we develop an analogue of Vinogradov’s mean value theorem applicable to binary forms.
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, this is done over both land and ocean using night-time (infrared) imagery. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 87% and 48% for ocean and land, respectively, using the Bayesian technique, compared to 74% and 39%, respectively, for the threshold-based techniques associated with the validation dataset.
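For reference, the true skill scores quoted in this and the following abstract can be computed from a standard 2×2 contingency table; a minimal sketch (function and variable names ours):

```python
def true_skill_score(hits, misses, false_alarms, correct_negatives):
    """True skill score (Hanssen-Kuipers discriminant): hit rate minus
    false-alarm rate; ranges from -1 to 1, with 1 for perfect detection."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate
```

The abstracts quote this quantity as a percentage.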
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, the technique is shown to be suitable for daytime applications over land and sea, using visible and near-infrared imagery in addition to thermal infrared. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 89% and 73% for ocean and land, respectively, using the Bayesian technique, compared to 90% and 70%, respectively, for the threshold-based techniques associated with the validation dataset.
Abstract:
We propose and demonstrate a fully probabilistic (Bayesian) approach to the detection of cloudy pixels in thermal infrared (TIR) imagery observed from satellite over oceans. Using this approach, we show how to exploit the prior information and the fast forward modelling capability that are typically available in the operational context to obtain improved cloud detection. The probability of clear sky for each pixel is estimated by applying Bayes' theorem, and we describe how to apply Bayes' theorem to this problem in general terms. Joint probability density functions (PDFs) of the observations in the TIR channels are needed; the PDFs for clear conditions are calculable from forward modelling, and those for cloudy conditions have been obtained empirically. Using analysis fields from numerical weather prediction as prior information, we apply the approach to imagery representative of imagers on polar-orbiting platforms. In comparison with the established cloud-screening scheme, the new technique decreases both the rate of failure to detect cloud contamination and the false-alarm rate by one quarter. The rate of occurrence of cloud-screening-related errors of >1 K in area-averaged sea surface temperatures (SSTs) is reduced by 83%.
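A minimal sketch of the per-pixel application of Bayes' theorem described above, assuming the clear and cloudy likelihoods and the prior are supplied externally (the names below are ours, not the paper's):

```python
def posterior_clear(y, p_y_given_clear, p_y_given_cloudy, prior_clear):
    """Posterior probability of clear sky for one pixel via Bayes' theorem.

    y                -- vector of observed TIR brightness temperatures
    p_y_given_clear  -- clear-sky likelihood (from fast forward modelling)
    p_y_given_cloudy -- cloudy-sky likelihood (obtained empirically)
    prior_clear      -- prior probability of clear sky (e.g. from NWP fields)
    """
    clear = p_y_given_clear(y) * prior_clear
    cloudy = p_y_given_cloudy(y) * (1.0 - prior_clear)
    return clear / (clear + cloudy)
```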
Abstract:
Semi-analytical expressions for the momentum flux associated with orographic internal gravity waves, and closed analytical expressions for its divergence, are derived for inviscid, stationary, hydrostatic, directionally sheared flow over mountains with an elliptical horizontal cross-section. These calculations, obtained using linear theory in conjunction with a third-order WKB approximation, are valid for relatively slowly varying, but otherwise generic, wind profiles, and are given in a form that is straightforward to implement in drag parametrization schemes. When normalized by the surface drag in the absence of shear, a quantity that is calculated routinely in existing drag parametrizations, the momentum flux becomes independent of the detailed shape of the orography. Unlike linear theory in the Ri → ∞ limit, the present calculations account for shear-induced amplification or reduction of the surface drag, and for partial absorption of the wave momentum flux at critical levels. Profiles of the normalized momentum fluxes obtained using this model and a linear numerical model without the WKB approximation are evaluated and compared for two idealized wind profiles with directional shear, for different Richardson numbers (Ri). Agreement is found to be excellent for the first wind profile (where one of the wind components varies linearly) down to Ri = 0.5; for the second wind profile (where the wind turns with height at a constant rate while keeping a constant magnitude), agreement is less satisfactory, but still represents a large improvement relative to the Ri → ∞ limit. These results are complementary, in the Ri > O(1) parameter range, to Broad’s generalization of the Eliassen–Palm theorem to 3D flow. They should contribute to improving drag parametrizations used in global weather and climate prediction models.
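For context, the Richardson number used throughout measures the relative strength of stratification and vertical wind shear (our notation):

\[ \mathrm{Ri} = \frac{N^2}{\left| d\mathbf{U}/dz \right|^2}, \]

where N is the Brunt–Väisälä frequency and \mathbf{U}(z) is the background wind. In WKB treatments of this kind, the small expansion parameter measures the vertical variation of the wind and scales like \mathrm{Ri}^{-1/2}, which is why carrying the expansion to third order extends the theory's validity towards Ri = O(1).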
Abstract:
This article shows how one can formulate the representation problem starting from Bayes’ theorem. The purpose of this article is to raise awareness of the formal solutions, so that approximations can be placed in a proper context. The representation errors appear in the likelihood, and the different possibilities for the representation of reality in model and observations are discussed, including nonlinear representation probability density functions. Specifically, the assumptions needed in the usual procedure to add a representation error covariance to the error covariance of the observations are discussed, and it is shown that, when several sub-grid observations are present, their mean still has a representation error; so-called ‘superobbing’ does not resolve the issue. A connection is made to the off-line or on-line retrieval problem, providing a new, simple proof of the equivalence of assimilating linear retrievals and original observations. Furthermore, it is shown how nonlinear retrievals can be assimilated without loss of information. Finally, we discuss how errors in the observation operator model can be treated consistently in the Bayesian framework, connecting to previous work in this area.
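In the usual additive-Gaussian setting that the article scrutinizes (our notation), Bayes' theorem for state x and observations y reads

\[ p(x \mid y) \;\propto\; p(y \mid x)\, p(x), \]

and if the observation is modelled as y = H(x) + \epsilon_i + \epsilon_r, with independent instrument error \epsilon_i \sim \mathcal{N}(0, R_i) and representation error \epsilon_r \sim \mathcal{N}(0, R_r), the likelihood is Gaussian with covariance R = R_i + R_r; this is the 'usual procedure' of adding a representation error covariance to the observation error covariance, whose underlying assumptions are exactly what the article examines.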
Abstract:
Considerable specification choice confronts count-data adoption investigations, and there is a need to measure formally the evidence in favor of competing formulations. This article presents alternative count-data adoption specifications, hitherto neglected in the agricultural-economics literature, and formally assesses their usefulness to practitioners. Reference to the left side of de Finetti's (1937) famous representation theorem motivates Bayesian unification of agricultural adoption studies and facilitates comparisons with conventional binary-choice specifications. Such comparisons have not previously been considered. The various formulations and the specific techniques are highlighted in an application to crossbred cow adoption in Sri Lanka's small-holder dairy sector.
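For reference, in its binary form de Finetti's theorem states that any exchangeable sequence of 0-1 random variables X_1, X_2, \ldots is a mixture of independent Bernoulli trials:

\[ P(X_1 = x_1, \ldots, X_n = x_n) \;=\; \int_0^1 \theta^{\sum_i x_i} (1 - \theta)^{\,n - \sum_i x_i}\, d\mu(\theta) \]

for some mixing measure \mu on [0, 1]. The left side is a statement about observables alone, and it is this side of the identity that motivates the Bayesian unification described above.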
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS), where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing, is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the criteria of smallest training error and smallest norm of the output weights. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
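The conventional real-valued ELM that, per the abstract, the CELM formulation reduces to six instances of can be sketched as follows (a generic textbook implementation, not the authors' code; names and the regularization choice are ours):

```python
import numpy as np

def elm_train(X, T, n_hidden, C=1.0, seed=0):
    """Train a basic real-valued ELM classifier.
    X: (n_samples, n_features) inputs; T: (n_samples, n_classes) one-hot targets."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, untrained input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Regularized least squares for the output weights: the closed-form
    # solution of the "smallest training error + smallest output-weight norm"
    # trade-off mentioned in the abstract.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predicted class labels for new inputs."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```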
Abstract:
We explicitly construct simple, piecewise minimizing geodesic, arbitrarily fine interpolation of simple and Jordan curves on a Riemannian manifold. In particular, a finite sequence of partition points can be specified in advance to be included in our construction. Then we present two applications of our main results: the generalized Green’s theorem and the uniqueness of signature for planar Jordan curves with finite p-variation for 1 ≤ p < 2.
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results yield immediately the corresponding results for pathwise solutions to stochastic differential equations driven by such Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such Gaussian process.
Abstract:
We study the topology of a set naturally arising from the study of β-expansions. After proving several elementary results for this set we study the case when our base is Pisot. In this case we give necessary and sufficient conditions for this set to be finite. This finiteness property will allow us to generalise a theorem due to Schmidt and will provide the motivation for sufficient conditions under which the growth rate and Hausdorff dimension of the set of β-expansions are equal and explicitly calculable.
Abstract:
Let H ∈ C²(ℝ^{N×n}), H ≥ 0. The PDE system (1) arises as the Euler–Lagrange PDE of vectorial variational problems for the functional E_∞(u, Ω) = ‖H(Du)‖_{L^∞(Ω)}, defined on maps u : Ω ⊆ ℝⁿ → ℝᴺ. System (1) first appeared in the author's recent work. The scalar case, though, has a long history initiated by Aronsson. Herein we study the solutions of (1) with emphasis on the case of n = 2 ≤ N with H the Euclidean norm on ℝ^{N×n}, which we call the “∞-Laplacian”. By establishing a rigidity theorem for rank-one maps of independent interest, we analyse a phenomenon of separation of the solutions into phases with qualitatively different behaviour. As a corollary, we extend to N ≥ 2 the Aronsson–Evans–Yu theorem regarding non-existence of zeros of |Du| and prove a maximum principle. We further characterise all H for which (1) is elliptic, and also study the initial value problem for the ODE system arising for n = 1, but with H(·, u, u′) depending on all the arguments.
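For orientation, in the scalar case N = 1 with H(P) = ½|P|², system (1) reduces to Aronsson's ∞-Laplace equation

\[ \Delta_\infty u \;=\; Du \otimes Du : D^2 u \;=\; \sum_{i,j=1}^{n} u_{x_i} u_{x_j} u_{x_i x_j} \;=\; 0, \]

whose viscosity solutions are the absolutely minimizing Lipschitz extensions studied since Aronsson's work in the 1960s.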
Abstract:
A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes’ theorem it follows that each ensemble member receives a new weight dependent on its ‘distance’ to the observations. Because the weights are strongly varying, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
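A minimal sketch of the analysis step of the importance resampling scheme described above (generic particle-filter code; the names and the likelihood interface are ours):

```python
import numpy as np

def importance_resampling_step(ensemble, obs, likelihood, seed=0):
    """One analysis step: weight each ensemble member by its likelihood
    given the observations (Bayes' theorem), then resample so that
    high-weight members are duplicated and low-weight members dropped.

    ensemble   -- array of shape (n_members, state_dim)
    likelihood -- callable(member, obs) -> non-negative float
    """
    rng = np.random.default_rng(seed)
    weights = np.array([likelihood(m, obs) for m in ensemble])
    weights /= weights.sum()                      # normalized posterior weights
    idx = rng.choice(len(ensemble), size=len(ensemble), p=weights)
    return ensemble[idx]                          # equal-weight resampled ensemble
```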