952 results for Rayleigh-Ritz theorem
Abstract:
This article shows how one can formulate the representation problem starting from Bayes’ theorem. The purpose of this article is to raise awareness of the formal solutions, so that approximations can be placed in a proper context. The representation errors appear in the likelihood, and the different possibilities for the representation of reality in model and observations are discussed, including nonlinear representation probability density functions. Specifically, the assumptions needed in the usual procedure to add a representation error covariance to the error covariance of the observations are discussed, and it is shown that, when several sub-grid observations are present, their mean still has a representation error; so-called ‘superobbing’ does not resolve the issue. Connection is made to the off-line or on-line retrieval problem, providing a new simple proof of the equivalence of assimilating linear retrievals and original observations. Furthermore, it is shown how nonlinear retrievals can be assimilated without loss of information. Finally we discuss how errors in the observation operator model can be treated consistently in the Bayesian framework, connecting to previous work in this area.
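The superobbing point can be illustrated with a small synthetic experiment (all numbers below are illustrative, not taken from the article): averaging M sub-grid observations removes the uncorrelated part of the observation error, but any correlated sub-grid variability survives the averaging, so the superob still carries a representation error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, M = 20000, 8          # grid cells sampled, sub-grid obs per cell
sigma_c = 0.8                   # correlated sub-grid variability (shared)
sigma_e = 0.6                   # uncorrelated sub-grid variability
sigma_inst = 0.3                # instrument noise

grid_mean = rng.normal(0.0, 2.0, n_trials)               # model-resolved state
shared = rng.normal(0.0, sigma_c, n_trials)              # does not average out
subgrid = grid_mean[:, None] + shared[:, None] \
        + rng.normal(0.0, sigma_e, (n_trials, M))
obs = subgrid + rng.normal(0.0, sigma_inst, (n_trials, M))
superob = obs.mean(axis=1)                               # 'superobbing'

# The uncorrelated parts shrink as 1/M, but the correlated sub-grid
# variance sigma_c**2 survives: the superob retains representation error.
err_var = np.var(superob - grid_mean)
print(err_var)  # ~ sigma_c**2 + (sigma_e**2 + sigma_inst**2) / M
```

The residual variance stays near sigma_c**2 no matter how large M becomes, which is the sense in which superobbing does not resolve the issue.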
Abstract:
Considerable specification choice confronts countable adoption investigations and there is a need to measure, formally, the evidence in favor of competing formulations. This article presents alternative countable adoption specifications—hitherto neglected in the agricultural-economics literature—and assesses formally their usefulness to practitioners. Reference to the left side of de Finetti's (1937) famous representation theorem motivates Bayesian unification of agricultural adoption studies and facilitates comparisons with conventional binary-choice specifications. Such comparisons have not previously been considered. The various formulations and the specific techniques are highlighted in an application to crossbred cow adoption in Sri Lanka's small-holder dairy sector.
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM) suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the smallest-training-error and smallest-output-weight-norm criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
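For context, a minimal real-valued ELM (not the complex CELM of the abstract, and with purely illustrative toy data and parameters) shows the closed-form output-weight solution that the KKT conditions of the regularized ELM problem yield: random, untrained hidden weights plus a single ridge-regression solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
T = np.eye(2)[y]                       # one-hot targets

L, C = 40, 1.0                         # hidden nodes, regularization constant
W = rng.normal(0, 1, (2, L))           # random input weights (never trained)
b = rng.normal(0, 1, L)
H = np.tanh(X @ W + b)                 # hidden-layer output matrix

# Output weights: regularized least squares, the closed-form solution of
# min ||H beta - T||^2 + ||beta||^2 / C obtained from the KKT conditions.
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)

pred = (H @ beta).argmax(axis=1)
acc = (pred == y).mean()
print(acc)
```

The single linear solve is the source of the low training cost relative to SVMs that the abstract mentions; the CELM reduces to several such real-valued solves.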
Abstract:
Large waves pose risks to ships, offshore structures, coastal infrastructure and ecosystems. This paper analyses 10 years of in-situ measurements of significant wave height (Hs) and maximum wave height (Hmax) from the ocean weather ship Polarfront in the Norwegian Sea. During the period 2000 to 2009, surface elevation was recorded every 0.59 s during sampling periods of 30 min. The Hmax observations scale linearly with Hs on average. A widely-used empirical Weibull distribution is found to estimate average values of Hmax/Hs and Hmax better than a Rayleigh distribution, but tends to underestimate both for all but the smallest waves. In this paper we propose a modified Rayleigh distribution which compensates for the heterogeneity of the observed dataset: the distribution is fitted to the whole dataset and improves the estimate of the largest waves. Over the 10-year period, the Weibull distribution approximates the observed Hs and Hmax well, and an exponential function can be used to predict the probability distribution function of the ratio Hmax/Hs. However, the Weibull distribution tends to underestimate the occurrence of extremely large values of Hs and Hmax. The persistence of Hs and Hmax in winter is also examined. Wave fields with Hs>12 m and Hmax>16 m do not last longer than 3 h. Low-to-moderate wave heights that persist for more than 12 h dominate the relationship of the wave field with the winter NAO index over 2000–2009. In contrast, the inter-annual variability of wave fields with Hs>5.5 m or Hmax>8.5 m and wave fields persisting over ~2.5 days is not associated with the winter NAO index.
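The Rayleigh prediction the abstract compares against can be reproduced with a short Monte Carlo sketch. The number of waves per record is an assumption (roughly a 30-min record at a ~10 s wave period), not a value from the paper; under Rayleigh statistics for individual heights, the mean of Hmax/Hs follows the classical extreme-value formula.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200                     # waves per record (assumed ~10 s period, 30 min)
n_rec = 5000

# Individual wave heights under Rayleigh statistics:
# P(H > h) = exp(-2 (h/Hs)^2)  =>  H/Hs = sqrt(-ln(U) / 2)
u = rng.uniform(size=(n_rec, N))
ratio = np.sqrt(-np.log(u) / 2).max(axis=1)       # Hmax/Hs per record

gamma = 0.5772                                    # Euler-Mascheroni constant
mode = np.sqrt(np.log(N) / 2)                     # modal value of the maximum
expected = mode + gamma / (4 * mode)              # asymptotic mean of the max
print(ratio.mean(), expected)
```

For N around 200 this gives Hmax/Hs near 1.7; the article's point is that the observed ratios sit above such Rayleigh-based estimates for all but the smallest waves.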
Abstract:
Clouds and associated precipitation are the largest source of uncertainty in current weather and future climate simulations. Observations of the microphysical, dynamical and radiative processes that act at cloud scales are needed to improve our understanding of clouds. The rapid expansion of ground-based super-sites and the availability of continuous profiling and scanning multi-frequency radar observations at 35 and 94 GHz have significantly improved our ability to probe the internal structure of clouds in high temporal-spatial resolution, and to retrieve quantitative cloud and precipitation properties. However, there are still gaps in our ability to probe clouds due to large uncertainties in the retrievals. The present work discusses the potential of G band (frequency between 110 and 300 GHz) Doppler radars in combination with lower frequencies to further improve the retrievals of microphysical properties. Our results show that, thanks to a larger dynamic range in dual-wavelength reflectivity, dual-wavelength attenuation and dual-wavelength Doppler velocity (with respect to a Rayleigh reference), the inclusion of frequencies in the G band can significantly improve current profiling capabilities in three key areas: boundary layer clouds, cirrus and mid-level ice clouds, and precipitating snow.
Abstract:
We explicitly construct simple, piecewise minimizing geodesic, arbitrarily fine interpolation of simple and Jordan curves on a Riemannian manifold. In particular, a finite sequence of partition points can be specified in advance to be included in our construction. Then we present two applications of our main results: the generalized Green’s theorem and the uniqueness of signature for planar Jordan curves with finite p-variation for 1 ≤ p < 2.
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results yield immediately the corresponding results for pathwise solutions to stochastic differential equations driven by such Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such Gaussian process.
Abstract:
We study the topology of a set naturally arising from the study of β-expansions. After proving several elementary results for this set we study the case when our base is Pisot. In this case we give necessary and sufficient conditions for this set to be finite. This finiteness property will allow us to generalise a theorem due to Schmidt and will provide the motivation for sufficient conditions under which the growth rate and Hausdorff dimension of the set of β-expansions are equal and explicitly calculable.
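A β-expansion of the kind studied here can be generated with the standard greedy algorithm; the golden ratio below is one convenient Pisot base, chosen purely for illustration.

```python
def greedy_beta_digits(x, beta, n):
    """First n digits of the greedy beta-expansion of x in [0, 1)."""
    digits = []
    r = x
    for _ in range(n):
        d = int(beta * r)          # largest admissible digit at this step
        digits.append(d)
        r = beta * r - d           # carry the remainder forward
    return digits

beta = (1 + 5 ** 0.5) / 2          # Pisot base: the golden ratio
digits = greedy_beta_digits(0.7, beta, 20)
approx = sum(d * beta ** -(k + 1) for k, d in enumerate(digits))
print(digits[:6], approx)          # digits lie in {0, 1}; approx -> 0.7
```

For non-integer β, a given x typically has many other expansions besides the greedy one; the set of all such expansions is what the abstract's growth-rate and Hausdorff-dimension results concern.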
Abstract:
Let H ∈ C²(ℝ^{N×n}), H ≥ 0. The PDE system (1) arises as the Euler–Lagrange PDE of vectorial variational problems for the functional E∞(u, Ω) = ‖H(Du)‖_{L∞(Ω)} defined on maps u: Ω ⊆ ℝⁿ → ℝᴺ; it first appeared in the author's recent work. The scalar case, though, has a long history initiated by Aronsson. Herein we study the solutions of (1) with emphasis on the case of n = 2 ≤ N with H the Euclidean norm on ℝ^{N×n}, which we call the “∞-Laplacian”. By establishing a rigidity theorem for rank-one maps of independent interest, we analyse a phenomenon of separation of the solutions to phases with qualitatively different behaviour. As a corollary, we extend to N ≥ 2 the Aronsson–Evans–Yu theorem regarding non-existence of zeros of |Du| and prove a maximum principle. We further characterise all H for which (1) is elliptic and also study the initial-value problem for the ODE system arising for n = 1 but with H(·, u, u′) depending on all the arguments.
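For orientation, the scalar case mentioned above (N = 1, H the Euclidean norm) reduces to Aronsson's classical ∞-Laplace equation, a standard fact recalled here for the reader's convenience:

```latex
\Delta_\infty u \;:=\; \sum_{i,j=1}^{n} D_i u \, D_j u \, D^2_{ij} u \;=\; 0
```

The vectorial system (1) studied in the abstract extends this equation to maps u: Ω ⊆ ℝⁿ → ℝᴺ.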
Abstract:
We revisit the issue of sensitivity to initial flow and intrinsic variability in hot-Jupiter atmospheric flow simulations, originally investigated by Cho et al. (2008) and Thrastarson & Cho (2010). When the flow in the lower region (~1 to 20 MPa) is ‘dragged’ to immobility and uniform temperature on a very short timescale, as in Liu & Showman (2013), variability as well as sensitivity effectively ceases completely in three-dimensional (3D) simulations with traditional primitive equations. Such momentum (Rayleigh) and thermal (Newtonian) drags are, however, ad hoc for 3D giant planet simulations. For 3D hot-Jupiter simulations, which typically already employ strong Newtonian drag in the upper region, sensitivity is not quenched if only the Newtonian drag is applied in the lower region, without the strong Rayleigh drag: in general, both sensitivity and variability persist if the two drags are not applied concurrently in the lower region. However, even when the drags are applied concurrently, vertically-propagating planetary waves give rise to significant variability in the ~0.05 to 0.5 MPa region, if the vertical resolution of the lower region is increased (e.g. here with 1000 layers for the entire domain). New observations on the effects of the physical setup and model convergence in ‘deep’ atmosphere simulations are also presented.
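The Rayleigh (momentum) and Newtonian (thermal) drags referred to above are both linear relaxations; a toy integration (all timescales and values are illustrative, not from the paper) shows how a short drag timescale forces a layer toward immobility and a uniform reference temperature.

```python
# Toy forward-Euler integration of linear drag terms (illustrative values).
tau_ray, tau_newt = 0.1, 0.1        # 'very short' drag timescales
dt, n_steps = 0.01, 500
u, T, T_eq = 50.0, 1500.0, 1000.0   # wind (m/s), temperature (K), reference T

for _ in range(n_steps):
    u += -u / tau_ray * dt                # Rayleigh drag: du/dt = -u / tau
    T += -(T - T_eq) / tau_newt * dt      # Newtonian drag: dT/dt = -(T - T_eq)/tau

print(u, T)  # both relaxed: u -> 0, T -> T_eq
```

With the relaxation timescale much shorter than the integration time, the dragged layer loses all memory of its initial state, which is the mechanism by which sensitivity and variability are quenched in the lower region.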
Abstract:
A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes’s theorem it follows that each ensemble member receives a new weight dependent on its ‘‘distance’’ to the observations. Because the weights are strongly varying, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
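The importance-resampling step described above can be sketched in a few lines for a scalar, linear-Gaussian toy problem (chosen so the exact posterior is known; none of the numbers come from the article): weights from Bayes's theorem, then duplication of high-weight members.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Prior (forecast) ensemble and a single observation with error sigma_o.
ensemble = rng.normal(0.0, 1.0, n)
y_obs, sigma_o = 1.2, 0.5

# Bayes: each member's new weight is its likelihood given the observation.
w = np.exp(-0.5 * ((y_obs - ensemble) / sigma_o) ** 2)
w /= w.sum()

# Importance resampling: duplicate high-weight members, drop low-weight ones.
idx = rng.choice(n, size=n, p=w)
posterior = ensemble[idx]

# For this linear-Gaussian toy the exact posterior mean is known in closed form.
exact_mean = y_obs / (1.0 + sigma_o ** 2)
print(posterior.mean(), exact_mean)
```

Unlike a Kalman update, nothing here assumes Gaussianity of the forecast density; the same weight-and-resample step applies unchanged to the strongly nonlinear dynamics the abstract targets.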