4 results for calibration of rainfall-runoff models


Relevance:

100.00%

Publisher:

Abstract:

This paper aims to design a robust vaccination strategy capable of eradicating an infectious disease from a population regardless of the potential uncertainty in the parameters defining the disease. For this purpose, a control-theoretic approach based on a sliding-mode control law is used. Initially, the controller is designed assuming knowledge of an upper bound on the uncertainty signal. This assumption is then removed and an adaptive sliding-mode control system is designed instead. The closed-loop properties are proved mathematically in both the nonadaptive and adaptive cases. Furthermore, the sign function that usually appears in sliding-mode control is replaced by a saturation function in order to prevent chattering, and the properties achieved by the closed-loop system under this variation are also stated and proved analytically. The closed-loop system attains the control objective regardless of the parametric uncertainties of the model and the lack of a priori knowledge of the system.
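The key chattering-avoidance idea in the abstract can be illustrated with a minimal sketch: a toy SIR epidemic model with a bounded vaccination input, where the sign of the sliding surface is replaced by a saturation function. The model, sliding surface, and all parameter values below are hypothetical illustrations, not the paper's actual design.

```python
import numpy as np

def sat(x):
    """Saturation function: clips to [-1, 1]; used instead of sign() to avoid chattering."""
    return np.clip(x, -1.0, 1.0)

def simulate(beta=0.4, gamma=0.1, K=0.8, phi=0.05, dt=0.01, T=200.0):
    """Toy SIR model with sliding-mode-style vaccination (illustrative only).

    beta/gamma: transmission/recovery rates; K: control gain;
    phi: boundary-layer width of the saturation; u moves S directly to R.
    """
    S, I, R = 0.9, 0.1, 0.0  # fractions of the population
    for _ in range(int(T / dt)):
        s_surf = I  # hypothetical sliding surface: drive the infectious fraction to zero
        u = np.clip(K * sat(s_surf / phi), 0.0, 1.0)  # bounded, nonnegative vaccination effort
        dS = -beta * S * I - u * S
        dI = beta * S * I - gamma * I
        dR = gamma * I + u * S
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R
```

Inside the boundary layer (|s| < phi) the control is proportional rather than switching, which is what removes the high-frequency chattering of the pure sign law.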

Relevance:

100.00%

Publisher:

Abstract:

The hydrological response of a catchment to rainfall on different timescales is the result of a complex system involving a range of physical processes which may operate simultaneously and have different spatial and temporal influences. This paper presents an analysis of the streamflow response of a small humid-temperate catchment (Aixola, 4.8 km²) in the Basque Country on different timescales and discusses the role of the controlling factors. First, daily time series analysis was used to establish a hypothesis on the general functioning of the catchment through the relationship between precipitation and discharge on annual and multiannual scales (2003-2008). Second, rainfall-runoff relationships and relationships among several hydrological variables, including catchment antecedent conditions, were explored at the event scale (222 events) to check and refine the hypothesis. Finally, the evolution of electrical conductivity (EC) during some of the monitored storm events (28 events) was examined to identify the time origin of the waters. A quick response of the catchment to almost all rainfall events, as well as a considerable regulation capacity, was deduced from the correlation and spectral analyses. These results agree with the runoff data at the event scale; however, the event analysis revealed the non-linearity of the system, as antecedent conditions play a significant role in this catchment. Furthermore, analysis at the event scale made it possible to identify the factors (precipitation, precipitation intensity and initial discharge) controlling the different aspects of the runoff response (runoff coefficient and discharge increase) in this catchment. Finally, the evolution of the EC of the waters enabled the time origin (event or pre-event water) of the quickflow to be established; specifically, the conductivity showed that pre-event waters usually represent a high percentage of the total discharge during runoff peaks. The importance of soil waters in the catchment is being studied in more depth.
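The EC-based separation of event and pre-event water rests on standard two-component mixing. A minimal sketch of that mass balance follows; the function name and the example EC values are hypothetical, not taken from the study.

```python
def pre_event_fraction(ec_stream, ec_pre_event, ec_event):
    """Two-component mixing model: fraction of streamflow from pre-event water.

    Mass balance: Q*C = Qp*Cp + Qe*Ce with Q = Qp + Qe, which gives
    Qp/Q = (C - Ce) / (Cp - Ce).  EC values are in µS/cm (illustrative).
    """
    if ec_pre_event == ec_event:
        raise ValueError("end-member conductivities must differ")
    f = (ec_stream - ec_event) / (ec_pre_event - ec_event)
    return min(max(f, 0.0), 1.0)  # clamp to a physically meaningful fraction
```

For example, a stream sample at 200 µS/cm between end members of 250 (pre-event groundwater/soil water) and 50 (rainfall) µS/cm yields a pre-event fraction of 0.75, the kind of high pre-event percentage the abstract reports at runoff peaks.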

Relevance:

100.00%

Publisher:

Abstract:

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several algorithms have been proposed to learn probability distributions based on decomposable models due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model with a given maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n² log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu’s algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior in dealing with the maximum likelihood problem. Due to their low computational complexity, they are especially well suited to high-dimensional domains.
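Since the fractal tree algorithms are presented as extending Chow and Liu’s algorithm from the k = 2 case, a minimal sketch of that base case may help: the maximum likelihood tree is the maximum spanning tree over pairwise empirical mutual information. The function names are hypothetical; this is the classical algorithm, not the paper's fractal tree procedure.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete samples."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                px, py = np.mean(x == a), np.mean(y == b)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Chow-Liu (k = 2): maximum spanning tree over pairwise mutual information.

    data: (n_samples, n_vars) array of discrete values.
    Returns the tree as a list of edges (i, j), built with Kruskal + union-find.
    """
    n_vars = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    tree = []
    for _, i, j in edges:  # greedily add the highest-MI edge that creates no cycle
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The fractal tree algorithms described above generalize this greedy construction so that, instead of a tree (cliques of size 2), the result is a maximal k-order decomposable graph.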