911 results for Deterministic walkers
Abstract:
This study examines convection-permitting numerical simulations of four cases of terrain-locked quasi-stationary convective bands over the UK. For each case, a 2.2-km grid-length 12-member ensemble and 1.5-km grid-length deterministic forecast are analyzed, each with two different initialization times. Object-based verification is applied to determine whether the simulations capture the structure, location, timing, intensity and duration of the observed precipitation. These verification diagnostics reveal that the forecast skill varies greatly between the four cases. Although the deterministic and ensemble simulations captured some aspects of the precipitation correctly in each case, they never simultaneously captured all of them satisfactorily. In general, the models predicted banded precipitation accumulations at approximately the correct time and location, but the precipitating structures were more cellular and less persistent than the coherent quasi-stationary bands that were observed. Ensemble simulations from the two different initialization times were not significantly different, which suggests a potential benefit of time-lagging subsequent ensembles to increase ensemble size. The predictive skill of the upstream larger-scale flow conditions and the simulated precipitation on the convection-permitting grids were strongly correlated, which suggests that more accurate forecasts from the parent ensemble should improve the performance of the convection-permitting ensemble nested within it.
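As a hedged illustration of the object-based verification idea (not the study's actual diagnostics), the sketch below thresholds a precipitation field, labels contiguous rain objects and reports their area, centroid and mean intensity; the threshold, the toy field and the function name rain_objects are invented for this example.

```python
# Minimal object-based verification sketch: threshold a rain field, label
# contiguous rain objects, and summarise their size, position and mean intensity.
import numpy as np
from scipy import ndimage

def rain_objects(precip, threshold=1.0):
    """Return per-object area, centroid and mean rain rate above a threshold."""
    mask = precip >= threshold
    labels, n = ndimage.label(mask)                          # connected rain features
    objects = []
    for i in range(1, n + 1):
        area = int((labels == i).sum())                      # grid cells in object i
        cy, cx = ndimage.center_of_mass(mask, labels, i)     # centroid (row, col)
        mean_rate = precip[labels == i].mean()
        objects.append({"area": area, "centroid": (cy, cx), "mean": mean_rate})
    return objects

# Toy example: a single elongated "band" of rain in a 50x50 field.
field = np.zeros((50, 50))
field[20:23, 5:45] = 3.0
print(rain_objects(field))
```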
Abstract:
The derivation of time evolution equations for slow collective variables starting from a microscopic model system is demonstrated for the tutorial example of the classical, two-dimensional XY model. Projection operator techniques are used within a nonequilibrium thermodynamics framework, together with molecular simulations, in order to establish the building blocks of the hydrodynamic equations: Poisson brackets that determine the deterministic drift, the driving forces from the macroscopic free energy, and the friction matrix. The approach is rather general and can be applied to derive the equations of slow variables for a broad variety of systems.
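For orientation, a minimal sketch of the kind of coarse-grained evolution equation these building blocks assemble into, written in the standard two-generator (GENERIC) notation; the symbols below are the framework's generic ones, not the paper's specific XY-model variables.

```latex
% Generic form of the slow-variable evolution equation: L is the Poisson
% operator encoding the deterministic (reversible) drift, M the friction matrix,
% E the energy and S the entropy (or, isothermally, the free energy); the
% degeneracy conditions keep the two contributions from mixing.
\frac{\mathrm{d}x}{\mathrm{d}t}
  = L(x)\,\frac{\delta E}{\delta x} + M(x)\,\frac{\delta S}{\delta x},
\qquad
L(x)\,\frac{\delta S}{\delta x} = 0,
\quad
M(x)\,\frac{\delta E}{\delta x} = 0 .
```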
Abstract:
Based on a large dataset from eight Asian economies, we test the impact of post-crisis regulatory reforms on the performance of depository institutions in countries at different levels of financial development. We allow for technological heterogeneity and estimate a set of country-level stochastic cost frontiers followed by a deterministic bootstrapped meta-frontier to evaluate cost efficiency and cost technology. Our results support the view that liberalization policies have a positive impact on bank performance, while the reverse is true for prudential regulation policies. The removal of activity restrictions, bank privatization and foreign bank entry have a positive and significant impact on technological progress and cost efficiency. In contrast, prudential policies, which aim to protect the banking sector from excessive risk-taking, tend to adversely affect banks' cost efficiency but not cost technology.
Abstract:
Following the 1997 crisis, banking sector reforms in Asia have been characterised by an emphasis on prudential regulation, associated with increased financial liberalisation. Using a panel data set of commercial banks from eight major Asian economies over the period 2001-2010, this study explores how the coexistence of liberalisation and prudential regulation affects banks' cost characteristics. Given the heterogeneity of technologies across countries, we use a stochastic frontier approach followed by the estimation of a deterministic meta-frontier to provide 'true' estimates of bank cost efficiency measures. Our results show that the liberalisation of bank interest rates and the increased presence of foreign banks have had a positive and significant impact on technological progress and cost efficiency. On the other hand, we find that prudential regulation might adversely affect bank cost performance. When designing an optimal regulatory framework, policy makers should combine policies that aim to foster financial stability without hindering financial intermediation.
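As a minimal sketch of how such meta-frontier estimates are usually read (generic notation assumed, not necessarily the paper's own), the cost efficiency of bank i against the meta-frontier factorises into its efficiency against its own country frontier and a technology gap ratio:

```latex
% MCE: cost efficiency against the meta-frontier; CE^k: cost efficiency against
% country k's frontier; TGR: technology gap ratio, the ratio of the minimum cost
% on the meta-frontier, C*(w_i, y_i), to that on the country frontier, C^k(w_i, y_i),
% for input prices w_i and outputs y_i.
\mathrm{MCE}_i \;=\; \mathrm{CE}^{k}_i \times \mathrm{TGR}_i,
\qquad
\mathrm{TGR}_i \;=\; \frac{C^{\ast}(w_i, y_i)}{C^{k}(w_i, y_i)} \;\le\; 1 .
```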
Abstract:
The Plant–Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element comes from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant–Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant–Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
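A hedged sketch of one common probabilistic measure for threshold exceedance, the Brier score computed from ensemble members (not necessarily the exact score set used in the assessment); the function brier_score and the synthetic data below are invented for illustration.

```python
import numpy as np

def brier_score(ens, obs, threshold):
    """Brier score of the ensemble-derived probability of exceeding a threshold."""
    prob = (ens >= threshold).mean(axis=0)      # fraction of members exceeding
    outcome = (obs >= threshold).astype(float)  # observed exceedance (0/1)
    return float(np.mean((prob - outcome) ** 2))

# Synthetic 24-member ensemble and observations at 1000 points.
rng = np.random.default_rng(0)
ens = rng.gamma(shape=1.2, scale=2.0, size=(24, 1000))
obs = rng.gamma(shape=1.2, scale=2.0, size=1000)
for thr in (0.5, 1.0, 4.0):
    print(f"threshold {thr}: Brier score {brier_score(ens, obs, thr):.3f}")
```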
Abstract:
Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold: it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational use, owing partly to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called “How much are you prepared to pay for a forecast?”. The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydrometeorology. The aim of the game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants’ willingness-to-pay for a forecast, the results of the game show that the value (or usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers.
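A minimal sketch of the classic cost-loss framework often used to quantify the value of such forecasts, assuming a simple protect/do-nothing decision; the game's actual rules and payoffs may differ, and all numbers below are invented.

```python
import numpy as np

def expected_expense(prob_forecasts, events, C, L):
    """Mean expense when protecting (cost C) whenever the forecast probability
    exceeds C/L; an unprotected flood costs L."""
    act = prob_forecasts > C / L
    return float(np.where(act, C, np.where(events, L, 0.0)).mean())

# Synthetic, perfectly reliable forecasts: events occur with the forecast probability.
rng = np.random.default_rng(1)
p = rng.uniform(size=5000)
events = rng.uniform(size=5000) < p
C, L = 10.0, 100.0
print("use forecast  :", expected_expense(p, events, C, L))
print("always protect:", C)
print("never protect :", float(events.mean() * L))
```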
Abstract:
A particle filter method is presented for the discrete-time filtering problem with nonlinear Itô stochastic ordinary differential equations (SODEs) whose additive noise is assumed to be analytically integrable as a function of the underlying vector Wiener process and time. The Diffusion Kernel Filter is arrived at by a parametrization of small noise-driven state fluctuations within branches of prediction and a local use of this parametrization in the Bootstrap Filter. The method applies for small noise and short prediction steps. With explicit numerical integrators, the operations count in the Diffusion Kernel Filter is shown to be smaller than in the Bootstrap Filter whenever the initial state for the prediction step has sufficiently few moments. The established parametrization is a dual formula for the analysis of sensitivity to Gaussian initial perturbations and of sensitivity to noise perturbations in deterministic models, showing in particular how the stability of a deterministic dynamics is modeled by noise on short times and how the diffusion matrix of an SODE should be modeled (i.e. defined) for a Gaussian-initial deterministic problem to be cast as an SODE problem. From it, a novel definition of prediction may be proposed that coincides with the deterministic path within the branch of prediction whose information entropy at the end of the prediction step is closest to the average information entropy over all branches. Tests are made with the Lorenz-63 equations, showing good results both for the filter and for the definition of prediction.
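For context, a minimal bootstrap (particle) filter on the Lorenz-63 SODE with additive noise, the baseline the Diffusion Kernel Filter is compared against; this is a generic sketch (Euler-Maruyama propagation, multinomial resampling, invented noise levels), not the paper's algorithm or settings.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 drift for an array of states with shape (n_particles, 3)."""
    return np.stack([sigma * (x[:, 1] - x[:, 0]),
                     x[:, 0] * (rho - x[:, 2]) - x[:, 1],
                     x[:, 0] * x[:, 1] - beta * x[:, 2]], axis=1)

def propagate(x, dt, steps, q, rng):
    """Euler-Maruyama integration of the SODE with additive noise level q."""
    for _ in range(steps):
        x = x + lorenz63(x) * dt + q * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

def bootstrap_filter(y_obs, n_part, dt, steps, q, r, rng):
    """Track the state from noisy observations of the first component."""
    x = np.array([0.0, 0.0, 25.0]) + 10.0 * rng.standard_normal((n_part, 3))
    means = []
    for y in y_obs:
        x = propagate(x, dt, steps, q, rng)                 # prediction step
        logw = -0.5 * ((y - x[:, 0]) / r) ** 2              # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ x)                                 # posterior mean
        x = x[rng.choice(n_part, size=n_part, p=w)]         # multinomial resampling
    return np.array(means)

# Twin experiment: generate a noisy truth, observe its first component, filter it back.
rng = np.random.default_rng(2)
truth = propagate(np.array([[1.0, 1.0, 1.0]]), 1e-3, 2000, 0.0, rng)  # spin-up
truth_path, y_obs = [], []
for _ in range(50):
    truth = propagate(truth, 1e-3, 100, 0.05, rng)
    truth_path.append(truth[0].copy())
    y_obs.append(truth[0, 0] + 0.5 * rng.standard_normal())
est = bootstrap_filter(np.array(y_obs), 500, 1e-3, 100, 0.05, 0.5, rng)
print("RMSE:", float(np.sqrt(np.mean((est - np.array(truth_path)) ** 2))))
```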
Abstract:
This paper presents a GIS-based multicriteria flood risk assessment and mapping approach applied to coastal drainage basins where hydrological data are not available. It addresses the risk associated with several types of processes: coastal inundation (storm surge) and river, estuarine and flash floods, in both urban and natural areas, and fords. Based on the causes of these processes, several environmental indicators were selected to build up the risk assessment. Geoindicators include geological-geomorphological properties of Quaternary sedimentary units, water table, drainage basin morphometry, coastal dynamics, beach morphodynamics and microclimatic characteristics. Bioindicators involve coastal plain and low-slope native vegetation categories and two alteration states. Anthropogenic indicators encompass land use category properties such as type, occupation density, urban structure type and degree of occupation consolidation. The selected indicators were stored within an expert Geoenvironmental Information System developed for the State of Sao Paulo Coastal Zone (SIIGAL), whose attributes were mathematically classified through deterministic approaches in order to estimate natural susceptibilities (Sn), human-induced susceptibilities (Sa), the return period of rain events (Ri), potential damages (Dp) and the risk classification (R), according to the equation R=(Sn.Sa.Ri).Dp. Thematic maps were automatically processed within the SIIGAL, in which automata cells ("geoenvironmental management units") aggregating geological-geomorphological and land use/native vegetation categories were the units of classification. The method has been applied to 32 small drainage basins in the Northern Littoral of the State of Sao Paulo (Brazil), proving very useful for coastal zone public policies, civil defense programs and flood management.
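A tiny sketch of applying the stated equation R=(Sn.Sa.Ri).Dp to hypothetical management units; the indicator scores, class breaks and unit names below are invented and are not taken from the SIIGAL system.

```python
def risk(Sn, Sa, Ri, Dp):
    """Risk score combining natural susceptibility (Sn), human-induced
    susceptibility (Sa), the rain return-period factor (Ri) and potential damage (Dp)."""
    return (Sn * Sa * Ri) * Dp

# Hypothetical geoenvironmental management units with normalized indicator scores.
units = {
    "floodplain, dense urban occupation": dict(Sn=0.9, Sa=0.8, Ri=0.7, Dp=0.9),
    "low slope, native forest":           dict(Sn=0.6, Sa=0.2, Ri=0.7, Dp=0.1),
}
for name, ind in units.items():
    r = risk(**ind)
    label = "high" if r > 0.3 else "moderate" if r > 0.05 else "low"
    print(f"{name}: R = {r:.3f} ({label})")
```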
Abstract:
Conventional procedures employed in the modeling of the viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced here constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Rather than comparing numerical results, the purpose of this paper is to highlight some subtle differences between the two strategies and to focus on the properties of the exploited technique that open new possibilities for the field. To illustrate this, the cases examined show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances it produces equivalent results with far fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
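A hedged sketch of the underlying idea: fit a discrete relaxation spectrum (Prony series) G(t) = sum_i g_i exp(-t/tau_i) to measured data with a simple (1+1)-style evolution strategy. The evolutionary algorithm, data and parameterization used in the paper are not reproduced here; everything below is an invented toy.

```python
import numpy as np

def prony(t, g, tau):
    """Relaxation modulus G(t) = sum_i g_i * exp(-t / tau_i)."""
    return np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

def fit_spectrum(t, G_meas, n_modes=3, iters=2000, seed=0):
    """Crude (1+1)-style evolutionary fit; parameters stored as logs to stay positive."""
    rng = np.random.default_rng(seed)
    params, best_err = rng.normal(size=2 * n_modes), np.inf
    for _ in range(iters):
        for trial in (params, params + 0.1 * rng.normal(size=params.size)):
            g, tau = np.exp(trial[:n_modes]), np.exp(trial[n_modes:])
            err = np.mean((prony(t, g, tau) - G_meas) ** 2)
            if err < best_err:
                best_err, params = err, trial
    g, tau = np.exp(params[:n_modes]), np.exp(params[n_modes:])
    return g, tau, best_err

# Synthetic "experiment": two relaxation modes plus measurement noise.
t = np.logspace(-2, 2, 80)
G_meas = prony(t, np.array([2.0, 0.5]), np.array([0.1, 10.0]))
G_meas += 0.01 * np.random.default_rng(1).normal(size=t.size)
g, tau, err = fit_spectrum(t, G_meas)
print("g =", np.round(g, 3), "tau =", np.round(tau, 3), "mse =", err)
```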
Abstract:
In this paper, we consider a classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM. Previous work deals only with the case when the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization provides more options to the test designer: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but still guaranteed, fault coverage can still be generated. The second generalization is that tests can be generated starting from a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. To solve the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present experimental results indicating that the proposed algorithm allows a trade-off to be struck between the length and the fault coverage of test suites.
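An illustrative sketch of the fault-detection step only: a deterministic (Mealy) FSM as a transition table and a check of whether a test suite distinguishes an implementation, possibly with fewer states, from the specification. The completeness conditions and the incremental generation algorithm of the paper are not reproduced; the machines and tests below are invented.

```python
def run(fsm, start, inputs):
    """Return the output sequence produced by an FSM for an input sequence."""
    state, out = start, []
    for a in inputs:
        state, o = fsm[(state, a)]
        out.append(o)
    return out

def detects(spec, impl, tests, s0_spec="s0", s0_impl="t0"):
    """True if some test yields different outputs on spec and implementation."""
    return any(run(spec, s0_spec, t) != run(impl, s0_impl, t) for t in tests)

# Two-state specification over inputs {a, b} ...
spec = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}
# ... and a faulty single-state implementation (fewer states than the spec).
impl = {("t0", "a"): ("t0", 0), ("t0", "b"): ("t0", 1)}

tests = [list("ab"), list("aab")]
print(detects(spec, impl, tests))   # True: the suite distinguishes the faulty machine
```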
Abstract:
We have investigated plasma turbulence at the edge of a tokamak plasma using data from electrostatic potential fluctuations measured in the Brazilian tokamak TCABR. Recurrence quantification analysis has been used to provide diagnostics of the deterministic content of the series. We have focused our analysis on the radial dependence of potential fluctuations and their characterization by recurrence-based diagnostics. Our main result is that the deterministic content of the experimental signals is most pronounced at the external part of the plasma column just before the plasma radius. Since the chaoticity of the signals follows the same trend, we have concluded that the electrostatic plasma turbulence at the tokamak plasma edge can be partially explained by means of a deterministic nonlinear system. (C) 2007 Elsevier B.V. All rights reserved.
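A minimal recurrence-quantification sketch for a scalar series: recurrence rate and a determinism measure (the fraction of recurrence points lying on diagonal lines of length at least 2). Illustrative only; the TCABR analysis works on embedded potential-fluctuation data with richer recurrence diagnostics, and the series below are synthetic.

```python
import numpy as np

def rqa(x, eps):
    """Recurrence rate and determinism (minimum line length 2) for a scalar series."""
    d = np.abs(x[:, None] - x[None, :])              # pairwise distances
    R = (d <= eps).astype(int)
    np.fill_diagonal(R, 0)                           # drop the line of identity
    up = np.zeros_like(R)
    up[1:, 1:] = R[:-1, :-1]                         # diagonal predecessor recurs?
    dn = np.zeros_like(R)
    dn[:-1, :-1] = R[1:, 1:]                         # diagonal successor recurs?
    on_line = (R > 0) & ((up + dn) > 0)              # recurrences on diagonal lines
    n = len(x)
    return R.sum() / (n * (n - 1)), on_line.sum() / max(R.sum(), 1)

rng = np.random.default_rng(3)
t = np.arange(500)
structured = np.sin(0.2 * t) + 0.1 * rng.standard_normal(500)   # deterministic + noise
noise = rng.standard_normal(500)                                # purely stochastic
print("structured:", rqa(structured, 0.2))
print("noise     :", rqa(noise, 0.2))
```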
Abstract:
We show that the Hausdorff dimension of the spectral measure of a class of deterministic, i.e. nonrandom, block-Jacobi matrices may be determined with any degree of precision, improving a result of Zlatos [Andrej Zlatos, Sparse potentials with fractional Hausdorff dimension, J. Funct. Anal. 207 (2004) 216-252]. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Recently, the deterministic tourist walk has emerged as a novel approach for texture analysis. This method employs a traveler that visits image pixels following a deterministic walk rule. The resulting trajectories provide clues about pixel interaction in the image that can be used for image classification and identification tasks. This paper proposes a new walk rule for the tourist based on the contrast direction of a neighborhood. The results yielded by this approach are comparable with those of traditional texture analysis methods in the classification of a set of Brodatz textures and their rotated versions, confirming the potential of the method as a feasible texture analysis methodology. (C) 2010 Elsevier B.V. All rights reserved.
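A sketch of a deterministic tourist walk with memory mu = 1 using the classic minimum-difference step rule (not the contrast-direction rule proposed in the paper): from each start pixel the walker moves to the 8-neighbour most similar in grey level, never stepping back to the pixel it just left, until the walk enters a cycle; the transient length and attractor period are the usual descriptors.

```python
import numpy as np

NEIGH = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def tourist_walk(img, start):
    """Return (transient length, attractor period) of the walk from a start pixel."""
    h, w = img.shape
    prev, cur = None, start
    seen, path = {}, [start]
    while (prev, cur) not in seen:
        seen[(prev, cur)] = len(path) - 1
        best, best_d = None, None
        for dy, dx in NEIGH:                          # pick the most similar neighbour
            ny, nx = cur[0] + dy, cur[1] + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) != prev:
                d = abs(int(img[ny, nx]) - int(img[cur]))
                if best_d is None or d < best_d:
                    best, best_d = (ny, nx), d
        prev, cur = cur, best                         # memory mu = 1: no backtracking
        path.append(cur)
    transient = seen[(prev, cur)]                     # steps before entering the cycle
    attractor = len(path) - 1 - transient             # period of the cycle
    return transient, attractor

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(32, 32))             # toy grey-level "texture"
print(tourist_walk(img, (16, 16)))
```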
Abstract:
The issue of how children learn the meaning of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have brought that issue to a central place in more applied research fields as well, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning. This learning scenario is based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve this, we first reduce the original on-line learning problem to a batch learning problem in which the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are treated as clutter or noise and discarded automatically by a clutter detector model included in our NMF implementation. With these two key ingredients - batch learning and clutter detection - the NMF mechanism was able to infer the correct object-word mapping perfectly. (C) 2009 Elsevier Ltd. All rights reserved.
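To make the cross-situational scenario concrete, here is a toy learner based on plain co-occurrence counting, a far simpler baseline than the NMF mechanism described in the abstract; the vocabulary, scenes and recovery check below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
objects = ["ball", "cup", "dog", "hat", "key"]
true_word = {obj: f"w_{obj}" for obj in objects}          # hidden object-word mapping

counts = {}                                               # (word, object) -> count
for _ in range(200):                                      # 200 ambiguous scenes
    scene = list(rng.choice(objects, size=3, replace=False))
    words = [true_word[o] for o in scene]                 # unordered utterance
    for w in words:
        for o in scene:
            counts[(w, o)] = counts.get((w, o), 0) + 1

# Each word is mapped to the object it co-occurred with most often.
learned = {w: max(objects, key=lambda o: counts.get((w, o), 0))
           for w in set(true_word.values())}
print(all(learned[true_word[o]] == o for o in objects))   # True if mapping recovered
```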
Abstract:
The issue of smoothing in kriging has been addressed either by estimation or by simulation. The solution via estimation calls for postprocessing kriging estimates in order to correct the smoothing effect. Stochastic simulation provides equiprobable images that present no smoothing and reproduce the covariance model. Consequently, these images reproduce both the sample histogram and the sample semivariogram. However, there is still a problem, which is the lack of local accuracy of simulated images. In this paper, a postprocessing algorithm for correcting the smoothing effect of ordinary kriging estimates is compared with sequential Gaussian simulation realizations. Based on samples drawn from exhaustive data sets, the postprocessing algorithm is shown to be superior to any individual simulation realization, yet at the expense of providing a single deterministic estimate of the random function.
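A hedged, purely illustrative sketch of the trade-off discussed (not the paper's postprocessing algorithm): a moving average stands in for the smoothed kriging estimate, and a naive affine rescaling restores the histogram spread while degrading local accuracy slightly.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(2000)
truth = np.sin(0.05 * t) + 0.5 * rng.standard_normal(2000)     # "true" variable
smooth = np.convolve(truth, np.ones(25) / 25, mode="same")     # stand-in for kriging

# Naive smoothing correction: rescale deviations so the sample variance is restored.
scale = truth.std() / smooth.std()
corrected = smooth.mean() + scale * (smooth - smooth.mean())

for name, est in [("smoothed ", smooth), ("corrected", corrected)]:
    print(name, "std ratio:", round(float(est.std() / truth.std()), 2),
          "rmse:", round(float(np.sqrt(np.mean((est - truth) ** 2))), 2))
```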