8 results for A priori model

in CentAUR: Central Archive, University of Reading - UK


Relevance:

70.00%

Publisher:

Abstract:

Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, predictions made by physically based simulations in semiarid areas are subject to great uncertainty, and failures in the predictive behavior of existing models are common. Better descriptions of physical processes at the watershed scale therefore need to be incorporated into hydrological model structures. For example, terrain relief has systematically been considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations, generated by concurrent hydrological processes within a storm event, had a significant effect on the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and the roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressure (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations with the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations.
The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt flood forecasting of a stratiform event with markedly different behavior.
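The GLUE procedure invoked above can be sketched in a few lines: sample parameter sets from an a priori range, score each against observations with a likelihood measure, retain only "behavioural" sets above a threshold, and weight predictions by likelihood. The toy linear-reservoir model, parameter range, and Nash-Sutcliffe likelihood below are illustrative assumptions, not the distributed model actually used in the study.

```python
import random

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def toy_runoff(rain, k):
    """Hypothetical single linear reservoir: outflow = k * storage."""
    q, store = [], 0.0
    for r in rain:
        store += r
        out = k * store
        store -= out
        q.append(out)
    return q

def glue(rain, obs, n=2000, threshold=0.5, seed=1):
    """Monte Carlo GLUE: keep behavioural parameter sets, weight by likelihood."""
    random.seed(seed)
    behavioural = []
    for _ in range(n):
        k = random.uniform(0.05, 0.95)          # flat a priori range for k
        score = nash_sutcliffe(obs, toy_runoff(rain, k))
        if score > threshold:                   # behavioural cut-off
            behavioural.append((k, score))
    total = sum(s for _, s in behavioural)
    k_hat = sum(k * s for k, s in behavioural) / total
    return k_hat, len(behavioural)

rain = [0, 5, 12, 3, 0, 0, 8, 1, 0, 0]
obs = toy_runoff(rain, 0.4)                     # synthetic "truth", k = 0.4
k_hat, n_behavioural = glue(rain, obs)
```

The spread of the behavioural sets, rather than a single best parameter, is what carries the uncertainty estimate; conditioning on nested hydrographs, as in the paper, simply adds more observation series to the likelihood.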

Relevance:

30.00%

Publisher:

Abstract:

A low-resolution coupled ocean-atmosphere general circulation model (OAGCM) is used to study the characteristics of the large-scale ocean circulation and its climatic impacts in a series of global coupled aquaplanet experiments. Three configurations, designed to produce fundamentally different ocean circulation regimes, are considered. The first has no obstruction to zonal flow, the second contains a low barrier that blocks zonal flow in the ocean at all latitudes, creating a single enclosed basin, whilst the third contains a gap in the barrier to allow circumglobal flow at high southern latitudes. Warm greenhouse climates, with a global average surface air temperature of around 27 °C, result in all cases. Equator-to-pole temperature gradients are shallower than in a current climate simulation. Whilst changes in the land configuration cause regional changes in temperature, winds and rainfall, heat transports within the system are little affected. Inhibition of all ocean transport on the aquaplanet leads to a reduction in global mean surface temperature of 8 °C, along with a sharpening of the meridional temperature gradient. This results from a reduction in global atmospheric water vapour content and an increase in tropical albedo, both of which act to reduce global surface temperatures. Fitting a simple radiative model to the atmospheric characteristics of the OAGCM solutions suggests that a simpler atmosphere model, with radiative parameters chosen a priori based on the changing surface configuration, would have produced qualitatively different results. This implies that studies with reduced-complexity atmospheres need to be guided by more complex OAGCM results on a case-by-case basis.
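A minimal illustration of the kind of "simple radiative model" referred to above is a zero-dimensional energy balance, in which equilibrium surface temperature follows from planetary albedo and an effective emissivity. The parameter values here are textbook illustrations, not those fitted to the OAGCM solutions; they simply show why an increase in albedo (or a weakened greenhouse, i.e. higher effective emissivity) cools the surface.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(albedo, emissivity, solar=1361.0):
    """Zero-dimensional energy balance:
    absorbed shortwave = emitted longwave
    S * (1 - albedo) / 4 = emissivity * sigma * T^4
    """
    return ((solar * (1.0 - albedo)) / (4.0 * emissivity * SIGMA)) ** 0.25

# Illustrative values: albedo 0.30 and effective emissivity 0.61
# give roughly the present-day global mean of ~288 K.
t_base = equilibrium_temperature(albedo=0.30, emissivity=0.61)

# Brightening the planet (e.g. increased tropical albedo) cools it:
t_brighter = equilibrium_temperature(albedo=0.35, emissivity=0.61)
```

The abstract's point is that such radiative parameters cannot be chosen a priori from the surface configuration alone; they must be diagnosed from the full OAGCM solution case by case.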

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level.
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
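The conventional Gram-Schmidt regression that the extended algorithm builds on can be sketched as follows: orthogonalise the regressor columns one by one, project the output onto each orthogonal direction, then back-substitute through the resulting upper-triangular system to recover the original parameters. This is plain classical Gram-Schmidt least squares on a tiny hand-made data set, not the paper's extended subspace-decomposition algorithm.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt_regress(cols, y):
    """Least squares y ~ cols via classical Gram-Schmidt orthogonalisation."""
    n = len(cols)
    W = []                                  # orthogonalised columns
    A = [[0.0] * n for _ in range(n)]       # unit upper-triangular factor
    for k, x in enumerate(cols):
        w = list(x)
        for j in range(k):
            a = dot(W[j], x) / dot(W[j], W[j])
            A[j][k] = a
            w = [wi - a * wj for wi, wj in zip(w, W[j])]
        A[k][k] = 1.0
        W.append(w)
    # Project y onto each orthogonal direction
    g = [dot(w, y) / dot(w, w) for w in W]
    # Back-substitute A * theta = g to recover original parameters
    theta = g[:]
    for k in range(n - 1, -1, -1):
        for j in range(k + 1, n):
            theta[k] -= A[k][j] * theta[j]
    return theta

# y = 2 + 3*x, fitted with an intercept column and an x column
cols = [[1.0, 1.0, 1.0, 1.0], [1.0, 2.0, 3.0, 4.0]]
y = [5.0, 8.0, 11.0, 14.0]
theta = gram_schmidt_regress(cols, y)       # -> approximately [2.0, 3.0]
```

In the paper's setting each "column" would instead be a whole rule-induced matrix subspace, and the decomposition proceeds subspace by subspace rather than column by column.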

Relevance:

30.00%

Publisher:

Abstract:

The assimilation of observations with a forecast is often heavily influenced by the description of the error covariances associated with the forecast. When a temperature inversion is present at the top of the boundary layer (BL), a significant part of the forecast error may be described as a vertical positional error (as opposed to the amplitude error normally dealt with in data assimilation). In these cases, failing to account for positional error explicitly is shown to result in an analysis in which the inversion structure is erroneously weakened and degraded. In this article, a new assimilation scheme is proposed to explicitly include the positional error associated with an inversion. This is done through the introduction of an extra control variable to allow position errors in the a priori to be treated simultaneously with the usual amplitude errors. This new scheme, referred to as the ‘floating BL scheme’, is applied to the one-dimensional (vertical) variational assimilation of temperature. The floating BL scheme is tested with a series of idealised experiments and with real data from radiosondes. For each idealised experiment, the floating BL scheme gives an analysis whose inversion structure and position agree with the truth, and outperforms the assimilation which accounts only for forecast amplitude error. When the floating BL scheme is used to assimilate a large sample of radiosonde data, its ability to give an analysis with an inversion height in better agreement with that observed is confirmed. However, it is found that Gaussian statistics are an inappropriate description of the error statistics of the extra control variable. This problem is alleviated by incorporating a non-Gaussian description of the new control variable in the scheme. Anticipated challenges in implementing the scheme operationally are discussed towards the end of the article.
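The idea of augmenting the control vector with a position variable can be illustrated with a toy one-dimensional cost function: alongside the usual amplitude increment, a vertical shift of the inversion is penalised against its own a priori statistics (Gaussian here, although the article finds a non-Gaussian description is eventually needed). The step-inversion profile, the error variances, and the crude grid-search minimisation are all illustrative assumptions, not the article's scheme.

```python
def toy_profile(levels, inv_height, strength):
    """Toy temperature profile (deg C): lapse rate plus a step inversion."""
    return [15.0 - 0.0065 * z + (strength if z > inv_height else 0.0)
            for z in levels]

def cost(shift, amp, bg_h, bg_s, obs, levels,
         sig_shift=500.0, sig_amp=1.0, sig_obs=1.0):
    """1D-Var-style cost: background penalties on the position and
    amplitude increments plus the observation misfit."""
    sim = toy_profile(levels, bg_h + shift, bg_s + amp)
    jb = (shift / sig_shift) ** 2 + (amp / sig_amp) ** 2
    jo = sum((s - o) ** 2 for s, o in zip(sim, obs)) / sig_obs ** 2
    return jb + jo

levels = list(range(0, 3000, 100))
obs = toy_profile(levels, 1200.0, 2.0)        # "truth": inversion at 1200 m
bg_h, bg_s = 1000.0, 2.0                      # background misplaces it by 200 m

# Crude grid search over the two control variables (shift, amplitude)
_, best_shift, best_amp = min(
    (cost(sh, am, bg_h, bg_s, obs, levels), sh, am)
    for sh in range(-400, 401, 50)
    for am in (-1.0, -0.5, 0.0, 0.5, 1.0))
```

Because the misfit can be removed by moving the inversion rather than smearing its amplitude, the minimiser recovers the 200 m displacement with no amplitude increment, which is the behaviour an amplitude-only assimilation cannot reproduce.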

Relevance:

30.00%

Publisher:

Abstract:

We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk market participation in the Ethiopian highlands.
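The data-generating process behind the fixed-costs double-hurdle model can be sketched as follows: a household first passes a participation hurdle, and then supplies only if its desired quantity clears the minimum efficient supply implied by the fixed cost, so observed quantities are censored at a strictly positive level rather than at zero as in a standard Tobit. The linear specifications and parameter values below are illustrative assumptions; the paper's actual estimator is a Gibbs sampler, which is not reproduced here.

```python
import random

def simulate_double_hurdle(n, beta=1.0, gamma=0.5, fixed_cost=3.0, seed=0):
    """Simulate (x, quantity) pairs under a fixed-costs double-hurdle process."""
    random.seed(seed)
    data = []
    for _ in range(n):
        x = random.uniform(0.0, 10.0)
        # Hurdle 1: latent decision to participate in the market
        participate = gamma * x - 2.0 + random.gauss(0.0, 1.0)
        # Latent desired supply
        desired = beta * x - 2.0 + random.gauss(0.0, 1.0)
        # Hurdle 2: supply must clear the minimum efficient scale implied
        # by the fixed cost, so no quantities fall in (0, fixed_cost]
        q = desired if (participate > 0.0 and desired > fixed_cost) else 0.0
        data.append((x, q))
    return data

sample = simulate_double_hurdle(500)
zeros = sum(1 for _, q in sample if q == 0.0)
positives = [q for _, q in sample if q > 0.0]
```

The gap between zero and the smallest positive quantity is exactly the "random censoring" that breaks the standard Tobit likelihood and motivates the Bayesian treatment.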

Relevance:

30.00%

Publisher:

Abstract:

Earthworms are important organisms in soil communities and so are used as model organisms in environmental risk assessments of chemicals. However, current risk assessments of soil invertebrates are based on short-term laboratory studies of limited ecological relevance, supplemented if necessary by site-specific field trials, which are sometimes challenging to apply across the whole agricultural landscape. Here, we investigate whether population responses to environmental stressors and pesticide exposure can be accurately predicted by combining energy budget and agent-based models (ABMs), based on knowledge of how individuals respond to their local circumstances. A simple energy budget model was implemented within each earthworm Eisenia fetida in the ABM, based on a priori parameter estimates. From broadly accepted physiological principles, simple algorithms specify how energy acquisition and expenditure drive life cycle processes. Each individual allocates energy between maintenance, growth and/or reproduction under varying conditions of food density, soil temperature and soil moisture. When simulating published experiments, good model fits were obtained to experimental data on individual growth, reproduction and starvation. Using the energy budget model as a platform, we developed methods to identify which of the physiological parameters in the energy budget model (rates of ingestion, maintenance, growth or reproduction) are primarily affected by pesticide applications, producing four hypotheses about how toxicity acts. We tested these hypotheses by comparing model outputs with published toxicity data on the effects of copper oxychloride and chlorpyrifos on E. fetida. Both growth and reproduction were directly affected in experiments in which sufficient food was provided, whilst maintenance was targeted under food limitation.
Although we only incorporate toxic effects at the individual level, we show how ABMs can readily extrapolate to larger scales by providing good model fits to field population data. The ability of the presented model to fit the available field and laboratory data for E. fetida demonstrates the promise of the agent-based approach in ecology, by showing how biological knowledge can be used to make ecological inferences. Further work is required to extend the approach to populations of more ecologically relevant species studied at the field scale. Such a model could help extrapolate from laboratory to field conditions, from one set of field conditions to another, or from species to species.
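The individual-level energy budget can be sketched with rules of the kind described above: ingestion limited by body surface area, maintenance paid first, and any surplus allocated to growth for juveniles or reproduction for adults, with starvation causing mass loss. All rate constants below are made-up illustrations, not the paper's a priori parameter estimates for Eisenia fetida.

```python
class Worm:
    """Minimal energy-budget individual for an agent-based model (toy)."""

    def __init__(self, mass=0.1):
        self.mass = mass        # body mass, g
        self.cocoons = 0.0      # cumulative energy allocated to reproduction

    def step(self, food, i_max=0.5, maint_rate=0.1,
             growth_eff=0.5, adult_mass=0.5):
        # Ingestion scales with body surface area (mass^(2/3)), capped by food
        intake = min(food, i_max * self.mass ** (2.0 / 3.0))
        # Maintenance is paid before anything else
        surplus = intake - maint_rate * self.mass
        if surplus <= 0.0:
            self.mass = max(0.01, self.mass + surplus)   # starvation: shrink
        elif self.mass < adult_mass:
            self.mass += growth_eff * surplus            # juvenile: grow
        else:
            self.cocoons += surplus                      # adult: reproduce

fed, starved = Worm(), Worm()
for _ in range(200):
    fed.step(food=1.0)       # ad libitum feeding
    starved.step(food=0.0)   # food limitation targets maintenance
```

Pesticide effects of the kind tested in the paper would enter as multipliers on one of the four rates (ingestion, maintenance, growth, reproduction), which is what generates the four competing toxicity hypotheses.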

Relevance:

30.00%

Publisher:

Abstract:

Highly heterogeneous mountain snow distributions strongly affect soil moisture patterns; local ecology; and, ultimately, the timing, magnitude, and chemistry of stream runoff. Capturing these vital heterogeneities in a physically based distributed snow model requires appropriately scaled model structures. This work looks at how model scale—particularly the resolutions at which the forcing processes are represented—affects simulated snow distributions and melt. The research area is in the Reynolds Creek Experimental Watershed in southwestern Idaho. In this region, where there is a negative correlation between snow accumulation and melt rates, overall scale degradation pushed simulated melt to earlier in the season. The processes mainly responsible for snow distribution heterogeneity in this region—wind speed, wind-affected snow accumulations, thermal radiation, and solar radiation—were also independently rescaled to test process-specific spatiotemporal sensitivities. It was found that in order to accurately simulate snowmelt in this catchment, the snow cover needed to be resolved to 100 m. Wind and wind-affected precipitation—the primary influence on snow distribution—required similar resolution. Thermal radiation scaled with the vegetation structure (~100 m), while solar radiation was adequately modeled with 100–250-m resolution. Spatiotemporal sensitivities to model scale were found that allowed for further reductions in computational costs through the winter months with limited losses in accuracy. It was also shown that these modeling-based scale breaks could be associated with physiographic and vegetation structures to aid a priori modeling decisions.
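The scale sensitivity described above arises whenever a forcing is averaged across a nonlinear threshold before, rather than after, the process calculation. A degree-day melt toy (not the physically based distributed model used in the study) makes the point: averaging sub-grid temperatures first can eliminate melt that the fine grid correctly produces.

```python
def degree_day_melt(temps_c, ddf=3.0):
    """Melt (mm/day) occurs only where air temperature exceeds 0 C."""
    return [ddf * max(0.0, t) for t in temps_c]

# Four sub-grid temperatures inside one coarse cell (hypothetical values)
fine = [-4.0, -1.0, 1.0, 4.0]

# Fine grid: compute melt per cell, then aggregate
melt_fine = sum(degree_day_melt(fine)) / len(fine)   # 3.75 mm/day

# Coarse grid: average the forcing first, then compute melt
t_coarse = sum(fine) / len(fine)                     # 0.0 C
melt_coarse = degree_day_melt([t_coarse])[0]         # 0.0 mm/day
```

The same reasoning applies to wind-affected accumulation and radiation in the study: the resolution at which each forcing must be represented depends on how sharply nonlinear the process it drives is.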

Relevance:

30.00%

Publisher:

Abstract:

We give an a priori analysis of a semi-discrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics which involves an energy density depending not only on the strain but also the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (SIAM J Math Anal 46(5):3518–3539, 2014). The estimate we derive is optimal in the L∞(0,T;dG) norm for the strain and the L2(0,T;dG) norm for the velocity, where dG is an appropriate mesh dependent H1-like space.
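Schematically, writing $u_h$, $v_h$ for the semi-discrete dG approximations of the strain $u$ and velocity $v$, an optimal a priori estimate of the kind stated takes the form below; the convergence rate $q$ (which depends on the polynomial degree and the regularity of the exact solution) is left unspecified here, since the precise rate is given in the paper and not reproduced in this abstract.

```latex
\[
  \| u - u_h \|_{L^\infty(0,T;\,\mathrm{dG})}
  \;+\;
  \| v - v_h \|_{L^2(0,T;\,\mathrm{dG})}
  \;\le\; C\, h^{q},
\]
```

where $\mathrm{dG}$ denotes the mesh-dependent $H^1$-like norm and $C$ is independent of the mesh size $h$.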