876 results for Probabilistic Finite Automata
Abstract:
An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but a previous study suggests that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, a lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has both a better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
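As a minimal illustration of how likelihood-weighted ensemble members can be turned into inundation probabilities and scored against a single observed flood extent, the sketch below applies a Brier-type score to the binary wet/dry pattern; the data, weights and function names are illustrative assumptions, not the LISFLOOD-FP workflow of the study itself.

```python
import numpy as np

def inundation_probability(ensemble_maps, weights):
    """Weighted probability of flooding per cell from an ensemble of
    binary inundation maps (n_members x n_cells) and likelihood weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise weights to sum to 1
    return np.tensordot(w, np.asarray(ensemble_maps, dtype=float), axes=1)

def brier_score(prob_map, observed_binary):
    """Mean squared difference between predicted probabilities and the
    observed wet/dry pattern (0 = perfect)."""
    p = np.asarray(prob_map, dtype=float)
    o = np.asarray(observed_binary, dtype=float)
    return float(np.mean((p - o) ** 2))

# Toy usage with synthetic data: 3 ensemble members, 5 cells
maps = [[1, 1, 0, 0, 0],
        [1, 1, 1, 0, 0],
        [1, 0, 0, 0, 0]]
weights = [0.5, 0.3, 0.2]                # e.g. derived from a performance measure
obs = [1, 1, 0, 0, 0]
p = inundation_probability(maps, weights)
print(p, brier_score(p, obs))
```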
Abstract:
Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be taken care of by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform as well as or better than non-linear models with or without the frequently used wind-to-power transformation.
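A bare-bones sketch of the two ingredients discussed above, transforming observed power to an equivalent wind speed through an assumed manufacturer power curve and fitting a censored linear model by maximum likelihood, might look as follows; the curve values, toy data and Tobit-style likelihood are illustrative assumptions rather than the exact formulation evaluated in the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical manufacturer power curve: wind speed (m/s) -> power (kW)
curve_ws = np.array([3, 5, 7, 9, 11, 13, 25.0])
curve_pw = np.array([0, 150, 600, 1400, 1900, 2000, 2000.0])

def power_to_wind(p):
    """Invert the monotone part of the power curve (power -> wind speed)."""
    return np.interp(p, curve_pw[:6], curve_ws[:6])   # below nominal power only

def tobit_nll(theta, x, y, lower, upper):
    """Negative log-likelihood of a linear model censored at both ends."""
    a, b, log_s = theta
    s = np.exp(log_s)
    mu = a + b * x
    ll = np.where(y <= lower, norm.logcdf((lower - mu) / s),
         np.where(y >= upper, norm.logsf((upper - mu) / s),
                  norm.logpdf((y - mu) / s) - np.log(s)))
    return -ll.sum()

# Toy data: NWP wind-speed forecast x, observed power transformed back to "wind" y
rng = np.random.default_rng(0)
x = rng.uniform(2, 15, 200)
y = np.clip(power_to_wind(np.interp(x + rng.normal(0, 1, 200), curve_ws, curve_pw)),
            curve_ws[0], curve_ws[5])
res = minimize(tobit_nll, x0=[0.0, 1.0, 0.0],
               args=(x, y, curve_ws[0], curve_ws[5]), method="Nelder-Mead")
print(res.x)   # intercept, slope, log sigma of the censored linear model
```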
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill score) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
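For orientation, a regression-based probabilistic hindcast with Gaussian predictive spread and CRPS-based verification can be sketched roughly as below; the predictors, data and skill baseline are synthetic stand-ins, not the operational system described above.

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, obs):
    """Closed-form CRPS for a Gaussian predictive distribution N(mu, sigma^2)."""
    z = (obs - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Toy hindcast: predict a seasonal temperature anomaly from a CO2-equivalent
# trend predictor and an ENSO-like index (both standardised) by linear regression.
rng = np.random.default_rng(1)
n = 53                                   # e.g. one season per year, 1961-2013
X = np.column_stack([np.ones(n), np.linspace(-1, 1, n), rng.normal(size=n)])
y = 0.6 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_sd = np.std(y - X @ beta, ddof=X.shape[1])   # predictive spread
mu_fcst = X @ beta                                 # forecast mean
crps = crps_gaussian(mu_fcst, resid_sd, y).mean()
crps_clim = crps_gaussian(np.full(n, y.mean()), y.std(ddof=1), y).mean()
print("CRPSS vs climatology:", 1 - crps / crps_clim)
```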
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
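A compact sketch of the kind of ensemble-statistics analysis step that avoids adjoint integrations is given below (a standard perturbed-observation update; the dimensions, data and implementation details are illustrative assumptions rather than the exact algorithms introduced in the paper).

```python
import numpy as np

def ensemble_smoother_update(X, HX, d, R):
    """One ensemble analysis step: update the state ensemble X (n_state x n_ens)
    using predicted observations HX (n_obs x n_ens), observations d and
    observation-error covariance R. No adjoint model is required."""
    n_ens = X.shape[1]
    rng = np.random.default_rng(0)
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T  # perturbed obs
    Xp = X - X.mean(axis=1, keepdims=True)
    Yp = HX - HX.mean(axis=1, keepdims=True)
    Pxy = Xp @ Yp.T / (n_ens - 1)
    Pyy = Yp @ Yp.T / (n_ens - 1) + R
    K = Pxy @ np.linalg.solve(Pyy, np.eye(len(d)))      # Kalman-type gain
    return X + K @ (D - HX)

# Toy usage: 4-dimensional state, 50 members, observations of the first two components
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 50))
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
R = 0.1 * np.eye(2)
Xa = ensemble_smoother_update(X, H @ X, np.array([0.5, -0.3]), R)
print(Xa.mean(axis=1))
```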
Abstract:
In this work, we prove a weak Noether-type Theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether’s first theorem (1918). We summarise extensive numerical tests, illustrating the conservation of the discrete Noether law using the p-Laplacian as an example and derive a geometric-based adaptive algorithm where an appropriate Noether quantity is the goal functional.
Abstract:
The horizontal gradient of potential vorticity (PV) across the tropopause typically declines with lead time in global numerical weather forecasts and tends towards a steady value dependent on model resolution. This paper examines how spreading the tropopause PV contrast over a broader frontal zone affects the propagation of Rossby waves. The approach taken is to analyse Rossby waves on a PV front of finite width in a simple single-layer model. The dispersion relation for linear Rossby waves on a PV front of infinitesimal width is well known; here an approximate correction is derived for the case of a finite width front, valid in the limit that the front is narrow compared to the zonal wavelength. Broadening the front causes a decrease in both the jet speed and the ability of waves to propagate upstream. The contributions of these changes to Rossby wave phase speeds cancel at leading order. At second order the decrease in jet speed dominates, meaning phase speeds are slower on broader PV fronts. This asymptotic phase speed result is shown to hold for a wide class of single-layer dynamics with a varying range of PV inversion operators. The phase speed dependence on frontal width is verified by numerical simulations and also shown to be robust at finite wave amplitude, and estimates are made for the error in Rossby wave propagation speeds due to the PV gradient error present in numerical weather forecast models.
Abstract:
Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is double-edged: it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partly to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called “How much are you prepared to pay for a forecast?”. The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydrometeorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants’ willingness-to-pay for a forecast, the results of the game show that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers.
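The game itself is not reproduced here, but the classical cost-loss framing that underlies such willingness-to-pay questions can be sketched as follows; the costs, losses and probabilities are assumed toy values, not data from the workshops.

```python
import numpy as np

def expected_expense(prob_flood, cost_protect, loss_if_flooded, threshold):
    """Expected expense per event when the user protects whenever the forecast
    probability exceeds a chosen threshold (classic cost-loss decision model)."""
    protect = prob_flood >= threshold
    return np.where(protect, cost_protect, prob_flood * loss_if_flooded).mean()

# Toy forecasts: 1000 events with assumed, perfectly reliable flood probabilities
rng = np.random.default_rng(2)
p = rng.uniform(0, 1, 1000)
C, L = 1.0, 10.0                       # protection cost and avoidable loss
# In this idealised setting the optimal decision threshold is the cost/loss ratio C/L
for thr in (0.05, C / L, 0.5):
    print(f"threshold {thr:.2f}: expected expense {expected_expense(p, C, L, thr):.3f}")
```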
Abstract:
Estimates of effective elastic thickness (Te) for the western portion of the South American Plate using, independently, forward flexural modelling and coherence analysis, suggest different thermomechanical properties for the same continental lithosphere. We present a review of these Te estimates and carry out a critical reappraisal using a common methodology, a 3-D finite element method to solve a differential equation for the bending of a thin elastic plate. The finite element flexural model incorporates lateral variations of Te and the Andes topography as the load. Three Te maps for the entire Andes were analysed: Stewart & Watts (1997), Tassara et al. (2007) and Perez-Gussinye et al. (2007). The predicted flexural deformation obtained for each Te map was compared with the depth to the base of the foreland basin sequence. Likewise, the gravity effect of flexurally induced crust-mantle deformation was compared with the observed Bouguer gravity. Te estimates using forward flexural modelling by Stewart & Watts (1997) better predict the geological and gravity data for most of the Andean system, particularly in the Central Andes, where Te ranges from greater than 70 km in the sub-Andes to less than 15 km under the Andes Cordillera. The misfit between the calculated and observed foreland basin subsidence and the gravity anomaly for the Maranon basin in Peru and the Bermejo basin in Argentina, regardless of the assumed Te map, may be due to a dynamic topography component associated with the shallow subduction of the Nazca Plate beneath the Andes at these latitudes.
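For a sense of how strongly Te controls flexure, the classical 1-D line-load solution for a thin elastic plate over a fluid substratum can be evaluated in a few lines; this is a far simpler setting than the 3-D finite element models compared in the study, and all parameter values below are assumptions.

```python
import numpy as np

# Illustrative 1-D analytic flexure of a thin elastic plate under a line load
# (textbook solution); parameter values are assumed, not those of the study.
E = 100e9        # Young's modulus (Pa)
nu = 0.25        # Poisson's ratio
Te = 40e3        # effective elastic thickness (m)
drho = 600.0     # density contrast between mantle and basin fill (kg/m^3)
g = 9.81         # gravity (m/s^2)
V0 = 5e12        # line load (N per metre along strike)

D = E * Te**3 / (12 * (1 - nu**2))          # flexural rigidity
alpha = (4 * D / (drho * g)) ** 0.25        # flexural parameter

x = np.linspace(0, 600e3, 301)              # distance from the load (m)
w = V0 * alpha**3 / (8 * D) * np.exp(-x / alpha) * (np.cos(x / alpha) + np.sin(x / alpha))
print(f"Te = {Te/1e3:.0f} km -> max deflection {w[0]:.0f} m, "
      f"forebulge crest near {x[np.argmin(w)]/1e3:.0f} km")
```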
Abstract:
We provide bounds on the upper box-counting dimension of negatively invariant subsets of Banach spaces, a problem that is easily reduced to covering the image of the unit ball under a linear map by a collection of balls of smaller radius. As an application of the abstract theory we show that the global attractors of a very broad class of parabolic partial differential equations (semilinear equations in Banach spaces) are finite-dimensional.
Abstract:
We show that a holomorphic map germ f : (C^n, 0) -> (C^{2n-1}, 0) is finitely determined if and only if the double point scheme D(f) is a reduced curve. If n >= 3, we have that mu(D^2(f)) = 2 mu(D^2(f)/S_2) + C(f) - 1, where D^2(f) is the lifting of the double point curve in (C^n x C^n, 0), mu(X) denotes the Milnor number of X and C(f) is the number of cross-caps that appear in a stable deformation of f. Moreover, we consider an unfolding F(t, x) = (t, f_t(x)) of f and show that if F is mu-constant, then it is excellent in the sense of Gaffney. Finally, we find a minimal set of invariants whose constancy in the family f_t is equivalent to the Whitney equisingularity of F. We also give an example of an unfolding which is topologically trivial but not Whitney equisingular.
Abstract:
We consider incompressible Stokes flow with an internal interface at which the pressure is discontinuous, as happens for example in problems involving surface tension. We assume that the mesh does not follow the interface, which makes classical interpolation spaces yield suboptimal convergence rates (typically, the interpolation error in the L^2(Omega)-norm is of order h^{1/2}). We propose a modification of the P1-conforming space that accommodates discontinuities at the interface without introducing additional degrees of freedom or modifying the sparsity pattern of the linear system. The unknowns are the pressure values at the vertices of the mesh and the basis functions are computed locally at each element, so that the implementation of the proposed space into existing codes is straightforward. With this modification, numerical tests show that the interpolation order improves to O(h^{3/2}). The new pressure space is implemented for the stable P1+/P1 mini-element discretization, and for the stabilized equal-order P1/P1 discretization. Assessment is carried out for Poiseuille flow with a forcing surface and for a static bubble. In all cases the proposed pressure space leads to improved convergence orders and to more accurate results than the standard P1 space. In addition, two Navier-Stokes simulations with moving interfaces (Rayleigh-Taylor instability and merging bubbles) are reported to show that the proposed space is robust enough to carry out realistic simulations.
Abstract:
We study properties of finitely determined corank 2 quasihomogeneous map germs f: (C^2, 0) -> (C^3, 0). Examples and counterexamples of such map germs are presented.
Abstract:
To plan testing activities, testers face the challenge of determining a strategy, including a test coverage criterion that offers an acceptable compromise between the available resources and test goals. Known theoretical properties of coverage criteria do not always help and, thus, empirical data are needed. The results of an experimental evaluation of several coverage criteria for finite state machines (FSMs) are presented, namely, state and transition coverage; initialisation fault and transition fault coverage. The first two criteria focus on FSM structure, whereas the other two focus on potential faults in FSM implementations. The authors elaborate a comparison approach that includes random generation of FSMs, construction of an adequate test suite and test minimisation for each criterion to ensure that tests are obtained in a uniform way. The last step uses an improved greedy algorithm.
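A generic greedy minimisation over a coverage criterion (not necessarily the improved variant used by the authors) can be sketched as follows; the test names and requirement sets are illustrative only.

```python
def greedy_minimise(test_requirements):
    """Greedy test-suite minimisation: test_requirements maps each test name to
    the set of coverage requirements (e.g. FSM transitions) it satisfies.
    Repeatedly pick the test covering the most still-uncovered requirements."""
    uncovered = set().union(*test_requirements.values())
    selected = []
    while uncovered:
        best = max(test_requirements, key=lambda t: len(test_requirements[t] & uncovered))
        gained = test_requirements[best] & uncovered
        if not gained:          # remaining requirements are unreachable by any test
            break
        selected.append(best)
        uncovered -= gained
    return selected

# Toy usage: four tests covering transitions t1..t5 of some FSM
suite = {
    "test_a": {"t1", "t2", "t3"},
    "test_b": {"t3", "t4"},
    "test_c": {"t4", "t5"},
    "test_d": {"t1", "t5"},
}
print(greedy_minimise(suite))   # e.g. ['test_a', 'test_c']
```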
Abstract:
In testing from a Finite State Machine (FSM), the generation of test suites which guarantee full fault detection, known as complete test suites, has been a long-standing research topic. In this paper, we present conditions that are sufficient for a test suite to be complete. We demonstrate that the existing conditions are special cases of the proposed ones. An algorithm that checks whether a given test suite is complete is given. The experimental results show that the algorithm can be used for relatively large FSMs and test suites.
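Checking the sufficient conditions themselves is more involved, but the underlying notion of fault detection is simple: a test suite detects a fault when some test yields different output sequences on the specification and on a faulty implementation. A toy Mealy-machine sketch, with assumed states and transitions, is given below.

```python
def run(fsm, state, inputs):
    """Run a deterministic Mealy machine given as {(state, input): (next_state, output)}."""
    outputs = []
    for x in inputs:
        state, out = fsm[(state, x)]
        outputs.append(out)
    return outputs

def detects_fault(spec, impl, initial, test_suite):
    """True if at least one test distinguishes the implementation from the spec."""
    return any(run(spec, initial, t) != run(impl, initial, t) for t in test_suite)

# Toy 2-state specification and a mutant with a single output fault
spec = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}
mutant = dict(spec); mutant[("s1", "a")] = ("s0", 0)   # output fault
tests = [("a", "a"), ("b", "b")]
print(detects_fault(spec, mutant, "s0", tests))         # True: test ("a","a") exposes it
```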
Abstract:
The critical behavior of the stochastic susceptible-infected-recovered (SIR) model on a square lattice is obtained by numerical simulations and finite-size scaling. The order parameter, as well as the distribution of the number of recovered individuals, is determined as a function of the infection rate for several values of the system size. The analysis around criticality is obtained by exploring the close relationship between the present model and standard percolation theory. The quantity UP, where U is the ratio between the second moment and the squared first moment of the size distribution and P is the order parameter, is shown to have, for a square system, a universal value 1.0167(1) that is the same for site and bond percolation, confirming further that the SIR model is also in the percolation class.
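A minimal Monte Carlo sketch of a stochastic SIR epidemic on a square lattice, together with the moment ratio U of the outbreak-size distribution, is given below; the asynchronous update rule, parameter values and the definition of the order parameter used in the study may differ, so this is purely illustrative.

```python
import numpy as np

def sir_lattice(L, infection_prob, rng):
    """Asynchronous stochastic SIR on an L x L square lattice: at each step a
    random infected site either tries to infect a random neighbour (with the
    given probability) or recovers. Returns the final number of recovered sites."""
    S, I, R = 0, 1, 2
    lattice = np.full((L, L), S, dtype=np.int8)
    lattice[L // 2, L // 2] = I
    infected = [(L // 2, L // 2)]
    while infected:
        k = rng.integers(len(infected))
        i, j = infected[k]
        if rng.random() < infection_prob:
            di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L          # periodic boundaries
            if lattice[ni, nj] == S:
                lattice[ni, nj] = I
                infected.append((ni, nj))
        else:
            lattice[i, j] = R
            infected[k] = infected[-1]
            infected.pop()
    return int(np.sum(lattice == R))

rng = np.random.default_rng(3)
sizes = np.array([sir_lattice(64, 0.6, rng) for _ in range(200)])
U = np.mean(sizes**2) / np.mean(sizes)**2                # moment ratio of outbreak sizes
print("mean outbreak size:", sizes.mean(), " U:", U)
```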