904 results for calibration of rainfall-runoff models


Relevance: 100.00%

Abstract:

The fatty acid ω-hydroxylation regiospecificity of CYP4 enzymes may result from presentation of the terminal carbon to the oxidizing species via a narrow channel that restricts access to the other carbon atoms. To test this hypothesis, the oxidation of 12-iodo-, 12-bromo-, and 12-chlorododecanoic acids by recombinant CYP4A1 has been examined. Although all three 12-halododecanoic acids bind to CYP4A1 with similar dissociation constants, the 12-chloro and 12-bromo fatty acids are oxidized to 12-hydroxydodecanoic acid and 12-oxododecanoic acid, whereas the 12-iodo analogue is very poorly oxidized. Incubations in H₂¹⁸O show that the 12-hydroxydodecanoic acid oxygen derives from water, whereas that in the aldehyde derives from O₂. The alcohol thus arises from oxidation of the halide to an oxohalonium species that is hydrolyzed by water, whereas the aldehyde arises by a conventional carbon hydroxylation-elimination mechanism. No irreversible inactivation of CYP4A1 is observed during 12-halododecanoic acid oxidation. Control experiments show that CYP2E1, which has an ω-1 regiospecificity, primarily oxidizes 12-halododecanoic acids to the ω-aldehyde rather than the alcohol product. Incubation of CYP4A1 with 12,12-[²H₂]-12-chlorododecanoic acid causes a 2-3-fold increase in halogen versus carbon oxidation. The fact that the order of substrate oxidation (Br > Cl >> I) approximates the inverse of the intrinsic oxidizability of the halogen atoms is consistent with presentation of the halide terminus via a channel that accommodates the chloride and bromide but not iodide atoms, which implies an effective channel diameter greater than 3.90 Å but smaller than 4.30 Å.

Relevance: 100.00%

Abstract:

Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the J-test of over-identifying restrictions [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias.
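The mean-reversion bias the paper documents can be illustrated with a minimal sketch. All parameter values here are hypothetical, and plain least squares on the discretised drift (the exactly identified special case of GMM) stands in for the paper's full experimental design:

```python
import numpy as np

# Discretised drift of a CKLS-type short-rate model:
#   r_{t+1} - r_t = alpha + beta * r_t + eps_t
# with speed of mean reversion kappa = -beta / dt.
rng = np.random.default_rng(0)
dt, kappa_true, mu, sigma = 1 / 12, 0.2, 0.05, 0.02
n = 300  # 25 years of monthly data, typical of the short-rate literature

r = np.empty(n)
r[0] = mu
for t in range(n - 1):
    # Euler simulation of a Vasicek-style path (hypothetical parameters)
    r[t + 1] = r[t] + kappa_true * (mu - r[t]) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal()

# Least-squares fit of the discretised drift; in small samples the implied
# kappa_hat tends to overstate the true speed of mean reversion.
X = np.column_stack([np.ones(n - 1), r[:-1]])
y = np.diff(r)
alpha_hat, beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
kappa_hat = -beta_hat / dt
print(kappa_hat)
```

Running this over many seeds and comparing the mean of `kappa_hat` with `kappa_true` reproduces the qualitative finding: the speed of mean reversion is estimated with a large upward bias, while drift-level parameters fare much better.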

Relevance: 100.00%

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies.
Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration.
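In outline, Tikhonov-regularised inversion augments the least-squares data misfit with a penalty that pulls poorly informed parameters toward modeller-preferred values, so that an over-parameterised problem still has a stable solution. A minimal sketch with synthetic data and an identity regularisation matrix (not the paper's subwatershed implementation, and with a fixed rather than estimated regularisation weight):

```python
import numpy as np

# Minimise ||J p - d||^2 + lam^2 ||R (p - p0)||^2, where p0 holds the
# modeller's preferred parameter values and R encodes the relationships
# "decreed to be realistic" (here simply R = I: stay near p0).
rng = np.random.default_rng(1)
n_obs, n_par = 20, 50          # more parameters than observations: ill-posed
J = rng.standard_normal((n_obs, n_par))
p_true = np.zeros(n_par)
p_true[:5] = 1.0               # only a few parameters actually matter
d = J @ p_true + 0.01 * rng.standard_normal(n_obs)

p0 = np.zeros(n_par)           # preferred values
R = np.eye(n_par)
lam = 1.0                      # regularisation weight (fixed, illustrative)

# Stacked least-squares form of the Tikhonov problem
A = np.vstack([J, lam * R])
b = np.concatenate([d, lam * (R @ p0)])
p_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(J @ p_hat - d))
```

The paper's contribution is, in these terms, to treat the relative weights inside `lam * R` as quantities estimated during the inversion itself rather than fixed a priori.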

Relevance: 100.00%

Abstract:

Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
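The conventional, cost-free elasticity analysis that the paper argues against can be computed directly from the projection matrix: sensitivities come from the left and right dominant eigenvectors, and elasticities rescale them to proportional effects. A sketch with an illustrative two-stage matrix (values hypothetical, not the honeyeater or koala matrices):

```python
import numpy as np

# Two-stage projection matrix A (hypothetical vital rates):
# row 0: fecundities; row 1: juvenile survival, adult survival.
A = np.array([[0.0, 1.5],
              [0.5, 0.9]])

# Dominant eigenvalue (population growth rate) and right eigenvector w
eigvals, W = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]
w = W[:, k].real                 # stable stage distribution (up to scale)

# Left eigenvector v (reproductive values) from the transpose
eigvalsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(eigvalsT.real)].real

# Sensitivities s_ij = v_i w_j / <v, w>; elasticities e_ij = (a_ij/lam) s_ij.
# Elasticities sum to 1, so they give proportional contributions to lam.
S = np.outer(v, w) / (v @ w)
E = A * S / lam
print(lam, E)
```

For this matrix the adult-survival elasticity dominates, which is exactly the kind of recommendation the abstract calls "cost-negligent": it says nothing about what changing each rate would cost.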

Relevance: 100.00%

Abstract:

Absolute calibration relates the measured (arbitrary) intensity to the differential scattering cross section of the sample, which contains all of the quantitative information specific to the material. The importance of absolute calibration in small-angle scattering experiments has long been recognized. This work details the absolute calibration procedure of a small-angle X-ray scattering instrument from Bruker AXS. The absolute calibration presented here was achieved by using a number of different types of primary and secondary standards. The samples were: a glassy carbon specimen, which had been independently calibrated from neutron radiation; a range of pure liquids, which can be used as primary standards as their differential scattering cross section is directly related to their isothermal compressibility; and a suspension of monodisperse silica particles for which the differential scattering cross section is obtained from Porod's law. Good agreement was obtained between the different standard samples, provided that care was taken to obtain significant signal averaging and all sources of background scattering were accounted for. The specimen best suited for routine calibration was the glassy carbon sample, due to its relatively intense scattering and stability over time; however, initial calibration from a primary source is necessary. Pure liquids can be used as primary calibration standards, but the measurements take significantly longer and are, therefore, less suited for frequent use.
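The routine use of a secondary standard such as glassy carbon reduces, in outline, to a single scale factor: the independently known cross section of the standard divided by its measured intensity, applied to background-corrected sample intensities. A sketch with purely hypothetical numbers (not values from this work):

```python
# Absolute intensity calibration via a secondary standard (illustrative only).
# K maps measured counts to the differential scattering cross section
# d(Sigma)/d(Omega) in cm^-1:
#   K = (dSigma/dOmega)_standard / I_measured,standard

i_std_measured = 5.0e4   # counts from the glassy carbon standard (hypothetical)
xsec_std_known = 33.0    # its independently calibrated cross section, cm^-1 (hypothetical)
K = xsec_std_known / i_std_measured

i_sample = 1.2e4         # background-corrected counts from an unknown sample
xsec_sample = K * i_sample
print(xsec_sample)       # absolute cross section of the sample, cm^-1
```

The abstract's caveats map directly onto this arithmetic: `i_std_measured` and `i_sample` must be well averaged and fully background-corrected, and `xsec_std_known` must trace back to a primary standard (e.g. a pure liquid via its isothermal compressibility).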

Relevance: 100.00%

Abstract:

Validation procedures play an important role in establishing the credibility of models, improving their relevance and acceptability. This article reviews the testing of models relevant to environmental and natural resource management, with particular emphasis on models used in multicriteria analysis (MCA). Validation efforts for a model used in an MCA catchment management study in North Queensland, Australia, are presented. Determination of face validity is found to be a useful approach in evaluating this model, and sensitivity analysis is useful in checking the stability of the model.

Relevance: 100.00%

Abstract:

Deformable models are a highly accurate and flexible approach to segmenting structures in medical images. Their primary drawback is sensitivity to initialisation: accurate and robust results often require an initialisation close to the true object in the image, and automatically obtaining a good initialisation is problematic for many structures in the body. The cartilages of the knee are thin elastic tissues that cover the ends of the bones, absorbing shock and allowing smooth movement. The degeneration of these cartilages characterizes the progression of osteoarthritis. The state of the art in cartilage segmentation is 2D semi-automated algorithms, which require significant time and supervision by a clinical expert, so the development of an automatic segmentation algorithm for the cartilages is an important clinical goal. In this paper we present an approach towards this goal that automatically provides a good initialisation for deformable models of the patellar cartilage by exploiting the strong spatial relationship of the cartilage to the underlying bone.
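The bone-to-cartilage spatial prior can be sketched in toy form: given a bone mask (here hard-coded rather than segmented), the thin shell just outside the bone surface is where cartilage is expected, and can seed the deformable model. This is a hypothetical 2D illustration of the general idea, not the paper's algorithm:

```python
import numpy as np

# Toy bone mask on a 2D grid (a real pipeline would segment this from MRI)
bone = np.zeros((40, 40), dtype=bool)
bone[10:30, 10:20] = True

# Dilate the mask by one voxel using axis-aligned shifts (4-connectivity);
# the mask is interior to the grid, so np.roll's wraparound is harmless here.
dilated = bone.copy()
for axis in (0, 1):
    for shift in (-1, 1):
        dilated |= np.roll(bone, shift, axis=axis)

# Shell just outside the bone: candidate seed region for the cartilage model
shell = dilated & ~bone
print(shell.sum())
```

In practice the shell would be grown to the expected cartilage thickness and restricted to the articular surface before initialising the deformable model.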