17 results for Calibration estimators

in University of Queensland eSpace - Australia


Relevance:

20.00%

Publisher:

Abstract:

Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), pilot points can add a great deal of flexibility to the calibration process while also making that process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which respect the stochastic characterization of an area while also satisfying the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
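
As a rough illustration of the pilot-point idea above (not PEST's actual implementation), the sketch below spreads log-hydraulic-conductivity values defined at a handful of pilot points onto model grid cells by simple inverse-distance weighting; PEST-based workflows more commonly use kriging, and every name and number here is hypothetical.

```python
import numpy as np

def interpolate_pilot_points(pilot_xy, pilot_logk, grid_xy, power=2.0):
    """Spread log-K values defined at pilot points onto model grid cells by
    inverse-distance weighting (illustrative only; kriging with a variogram
    is the more usual choice in PEST-based workflows)."""
    dist = np.linalg.norm(grid_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    dist = np.maximum(dist, 1e-12)          # guard against a grid cell sitting on a pilot point
    w = dist ** (-power)
    w /= w.sum(axis=1, keepdims=True)       # normalise the weights for each grid cell
    return w @ pilot_logk                   # weighted average of pilot-point values

# Hypothetical example: three pilot points interpolated onto four grid cells
pilot_xy = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 100.0]])
pilot_logk = np.array([-3.0, -1.5, -2.2])   # log10 hydraulic conductivity at the pilot points
grid_xy = np.array([[25.0, 25.0], [75.0, 25.0], [25.0, 75.0], [75.0, 75.0]])
print(interpolate_pilot_points(pilot_xy, pilot_logk, grid_xy))
```

During calibration it is the pilot-point values themselves that are adjusted, with regularization constraining them toward preferred values or preferred spatial relationships.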

Relevance:

20.00%

Publisher:

Abstract:

The fatty acid omega-hydroxylation regiospecificity of CYP4 enzymes may result from presentation of the terminal carbon to the oxidizing species via a narrow channel that restricts access to the other carbon atoms. To test this hypothesis, the oxidation of 12-iodo-, 12-bromo-, and 12-chlorododecanoic acids by recombinant CYP4A1 has been examined. Although all three 12-halododecanoic acids bind to CYP4A1 with similar dissociation constants, the 12-chloro and 12-bromo fatty acids are oxidized to 12-hydroxydodecanoic acid and 12-oxododecanoic acid, whereas the 12-iodo analogue is very poorly oxidized. Incubations in H₂¹⁸O show that the 12-hydroxydodecanoic acid oxygen derives from water, whereas that in the aldehyde derives from O₂. The alcohol thus arises from oxidation of the halide to an oxohalonium species that is hydrolyzed by water, whereas the aldehyde arises by a conventional carbon hydroxylation-elimination mechanism. No irreversible inactivation of CYP4A1 is observed during 12-halododecanoic acid oxidation. Control experiments show that CYP2E1, which has an omega-1 regiospecificity, primarily oxidizes 12-halododecanoic acids to the omega-aldehyde rather than the alcohol product. Incubation of CYP4A1 with 12,12-[²H₂]-12-chlorododecanoic acid causes a 2- to 3-fold increase in halogen versus carbon oxidation. The fact that the order of substrate oxidation (Br > Cl >> I) approximates the inverse of the intrinsic oxidizability of the halogen atoms is consistent with presentation of the halide terminus via a channel that accommodates the chloride and bromide but not iodide atoms, which implies an effective channel diameter greater than 3.90 Å but smaller than 4.30 Å.

Relevance:

20.00%

Publisher:

Abstract:

Comprehensive published radiocarbon data from selected atmospheric records, tree rings, and recent organic matter were analyzed and grouped into four different zones (three for the Northern Hemisphere and one for the whole Southern Hemisphere). These ¹⁴C data for the summer season of each hemisphere were employed to construct zonal, hemispheric, and global data sets for use in regional and global carbon model calculations, including the calibration and comparison of carbon cycle models. In addition, extended monthly atmospheric ¹⁴C data sets for the four zones were compiled for age calibration purposes. This is the first time such data sets have been constructed to facilitate the dating of recent organic material using the bomb ¹⁴C curves. The distribution of bomb ¹⁴C reflects the major zones of atmospheric circulation.
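
As a minimal sketch of how such a compiled zonal bomb-¹⁴C curve supports the dating of recent organic material (the curve values below are invented placeholders, not the data sets described here), a measured F¹⁴C value is matched against an interpolated curve; because the curve rises and then falls, a single measurement usually yields candidate ages on both limbs.

```python
import numpy as np

# Invented placeholder curve (calendar year, F14C) standing in for a zonal
# bomb-14C data set; it is NOT the compiled data described above.
years = np.array([1955, 1960, 1964, 1970, 1980, 1990, 2000, 2010], dtype=float)
f14c  = np.array([1.00, 1.20, 1.90, 1.55, 1.28, 1.15, 1.08, 1.04])

def candidate_years(sample_f14c, years, f14c, tol=0.01, step=0.05):
    """Return calendar years whose interpolated F14C lies within tol of the
    measured value; matches typically occur on both the rising and falling
    limbs of the bomb curve."""
    fine_years = np.arange(years[0], years[-1], step)
    fine_f14c = np.interp(fine_years, years, f14c)
    return fine_years[np.abs(fine_f14c - sample_f14c) < tol]

print(candidate_years(1.25, years, f14c))   # candidate ages on each limb
```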

Relevance:

20.00%

Publisher:

Abstract:

An experimental study of a planar microwave imaging system with step-frequency synthesized pulse for possible use in medical applications is described. Simple phantoms, consisting of a cylindrical plastic container with air or oil imitating fatty tissues and small highly reflective objects emulating tumors, are scanned with a probe antenna over a planar surface in the X-band. Different calibration schemes are considered for successful detection of these objects. (c) 2006 Wiley Periodicals, Inc.
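
A minimal sketch of the step-frequency synthesized-pulse principle follows, assuming an idealized point reflector and generic sweep parameters rather than the specifics of the system described: the swept-frequency response is windowed and inverse-Fourier-transformed into a synthetic time-domain pulse whose peak position gives the reflector range.

```python
import numpy as np

# Step-frequency sweep across the X-band (parameters illustrative only)
f = np.linspace(8e9, 12e9, 201)                    # 201 frequency steps, 8-12 GHz
c = 3e8                                            # propagation speed (free space assumed)
target_range = 0.15                                # hypothetical reflector 15 cm from the antenna
s = np.exp(-1j * 4 * np.pi * f * target_range / c) # ideal two-way reflected response

window = np.hanning(len(f))                        # taper to suppress range sidelobes
pulse = np.fft.ifft(s * window)                    # synthesized time-domain pulse
delta_f = f[1] - f[0]
t = np.arange(len(f)) / (len(f) * delta_f)         # time axis of the synthesized pulse
estimated_range = c * t[np.argmax(np.abs(pulse))] / 2
print(f"estimated range: {estimated_range:.3f} m") # close to 0.15 m, within one range bin
```

Calibration schemes of the kind considered above would act on the measured frequency response before this transformation, for example by subtracting a background (empty-container) measurement.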

Relevance:

20.00%

Publisher:

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
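
In generic notation (not necessarily that of the paper), the constrained-minimization form of Tikhonov regularization described above can be written as follows; p0 holds preferred parameter values, T encodes preferred relationships, Q weights the measurements, and the right-hand bound is the target measurement objective.

```latex
% Schematic Tikhonov-constrained formulation (illustrative notation):
\[
  \min_{\mathbf{p}} \; \Phi_r(\mathbf{p})
    = \bigl\| \mathbf{T}(\mathbf{p} - \mathbf{p}_0) \bigr\|^2
  \quad \text{subject to} \quad
  \Phi_m(\mathbf{p})
    = \bigl\| \mathbf{Q}^{1/2}\bigl(\mathbf{h}_{\mathrm{obs}} - \mathbf{h}(\mathbf{p})\bigr) \bigr\|^2
    \le \Phi_m^{\ell}
\]
% In practice this is solved by minimizing the composite objective
% Phi_m + mu^2 Phi_r, with the overall weight mu -- and, in the scheme described
% above, the relative weights inside Phi_r -- adjusted during the inversion so
% that Phi_m just meets its target value.
```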

Relevance:

20.00%

Publisher:

Abstract:

The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. This paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. Using the first of these methods, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness or by encountering a local objective function minimum. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This provides a useful means of inquiring into the well-posedness of a parameter estimation problem, and of detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model-run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
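
For reference, a generic form of the GML parameter upgrade is sketched below; the notation is schematic and not taken from the paper.

```latex
% Generic Gauss-Marquardt-Levenberg upgrade at each iteration (schematic):
\[
  \Delta\mathbf{p}
    = \bigl( \mathbf{J}^{\mathsf{T}} \mathbf{Q}\, \mathbf{J} + \lambda \mathbf{I} \bigr)^{-1}
      \mathbf{J}^{\mathsf{T}} \mathbf{Q}\,
      \bigl( \mathbf{h}_{\mathrm{obs}} - \mathbf{h}(\mathbf{p}) \bigr)
\]
% J is the Jacobian of model outputs with respect to parameters, Q a measurement
% weight matrix, and lambda the Marquardt parameter (written here in Levenberg
% form; Marquardt scaling replaces I with the diagonal of J^T Q J). The same
% J^T Q J matrix supplies the parameter sensitivity and covariance information
% mentioned above, via the linearized approximation sigma^2 (J^T Q J)^{-1}.
```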

Relevance:

20.00%

Publisher:

Abstract:

Purpose: This study was conducted to devise a new individual calibration method to enhance MTI accelerometer estimation of free-living level walking speed. Method: Five female and five male middle-aged adults walked 400 m at 3.5, 4.5, and 5.5 km·h⁻¹, and 800 m at 6.5 km·h⁻¹, on an outdoor track, following a continuous protocol. Lap speed was controlled by a global positioning system (GPS) monitor. MTI counts-to-speed calibration equations were derived for each trial, for each subject for four such trials with each of four MTIs, for each subject for the average MTI, and for the pooled data. Standard errors of the estimate (SEE) with and without individual calibration were compared. To assess accuracy of prediction of free-living walking speed, subjects also completed a self-paced, brisk 3-km walk wearing one of the four MTIs, and differences between actual and predicted walking speed with and without individual calibration were examined. Results: Correlations between MTI counts and walking speed were 0.90 without individual calibration, 0.98 with individual calibration for the average MTI, and 0.99 with individual calibration for a specific MTI. The SEE (mean ± SD) was 0.58 ± 0.30 km·h⁻¹ without individual calibration, 0.19 ± 0.09 km·h⁻¹ with individual calibration for the average MTI monitor, and 0.16 ± 0.08 km·h⁻¹ with individual calibration for a specific MTI monitor. The difference between actual and predicted walking speed on the brisk 3-km walk was 0.06 ± 0.25 km·h⁻¹ using individual calibration and 0.28 ± 0.63 km·h⁻¹ without individual calibration (for specific accelerometers). Conclusion: MTI accuracy in predicting walking speed without individual calibration might be sufficient for population-based studies but not for intervention trials. This individual calibration method will substantially increase the precision of walking speed predicted from MTI counts.
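
A minimal sketch of what an individual counts-to-speed calibration involves is given below, assuming a simple linear fit per subject with the standard error of the estimate as the precision measure; the numbers are invented and the study's actual regression model may differ.

```python
import numpy as np

def counts_to_speed_calibration(counts, speed):
    """Fit a counts-to-speed calibration line for one subject and report the
    standard error of the estimate (SEE); illustrative only, the study's
    actual regression model may differ."""
    slope, intercept = np.polyfit(counts, speed, 1)
    predicted = slope * counts + intercept
    see = np.sqrt(np.sum((speed - predicted) ** 2) / (len(speed) - 2))  # n - 2 d.o.f. for a line
    return slope, intercept, see

# One subject's four track trials (invented counts per minute vs. km/h)
counts = np.array([1800.0, 2600.0, 3500.0, 4600.0])
speed = np.array([3.5, 4.5, 5.5, 6.5])
slope, intercept, see = counts_to_speed_calibration(counts, speed)
print(f"speed = {slope:.4f} * counts + {intercept:.2f}  (SEE = {see:.2f} km/h)")
```

Pooled calibration replaces the per-subject fit with a single equation across all subjects, which is the source of the larger SEE reported above.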

Relevance:

20.00%

Publisher:

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
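
The "weighted average" statement above can be made concrete with the resolution matrix of linearized inverse theory; the expression below is a generic form (noise and preferred-value terms omitted), not necessarily the paper's exact notation.

```latex
% Averaging effect of regularized inversion under a linearized model (schematic):
\[
  \hat{\mathbf{p}} \;\approx\; \mathbf{R}\, \mathbf{p}_{\mathrm{true}},
  \qquad
  \mathbf{R}
    = \bigl( \mathbf{J}^{\mathsf{T}} \mathbf{Q}\, \mathbf{J}
             + \mu^{2}\, \mathbf{T}^{\mathsf{T}} \mathbf{T} \bigr)^{-1}
      \mathbf{J}^{\mathsf{T}} \mathbf{Q}\, \mathbf{J}
\]
% Each row of the resolution matrix R holds the averaging weights through which
% the estimate at one point samples the true hydraulic-property field; only where
% R approaches the identity is local detail recoverable, which is why areas poorly
% "seen" by the measurement wells contribute little to the estimated field.
```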

Relevance:

20.00%

Publisher:

Abstract:

Absolute calibration relates the measured (arbitrary) intensity to the differential scattering cross section of the sample, which contains all of the quantitative information specific to the material. The importance of absolute calibration in small-angle scattering experiments has long been recognized. This work details the absolute calibration procedure of a small-angle X-ray scattering instrument from Bruker AXS. The absolute calibration presented here was achieved by using a number of different types of primary and secondary standards. The samples were: a glassy carbon specimen, which had been independently calibrated from neutron radiation; a range of pure liquids, which can be used as primary standards as their differential scattering cross section is directly related to their isothermal compressibility; and a suspension of monodisperse silica particles for which the differential scattering cross section is obtained from Porod's law. Good agreement was obtained between the different standard samples, provided that care was taken to obtain significant signal averaging and all sources of background scattering were accounted for. The specimen best suited for routine calibration was the glassy carbon sample, due to its relatively intense scattering and stability over time; however, initial calibration from a primary source is necessary. Pure liquids can be used as primary calibration standards, but the measurements take significantly longer and are, therefore, less suited for frequent use.
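
The two primary-standard relations referred to above are, in their generic textbook forms (standard small-angle scattering results, not quoted from the paper), the forward scattering of a pure liquid set by its isothermal compressibility and Porod's law for the high-q limit of particles with sharp interfaces.

```latex
% Pure liquid as a primary standard: forward scattering from density fluctuations
\[
  \left. \frac{d\Sigma}{d\Omega} \right|_{q \to 0}
    = r_e^{2}\, \rho_e^{2}\, k_B T\, \chi_T
\]
% r_e is the classical electron radius, rho_e the electron density of the liquid,
% and chi_T its isothermal compressibility (roughly 0.016 cm^-1 for water near
% room temperature).
%
% Porod's law for sharp-interfaced particles at high q:
\[
  \lim_{q \to \infty} I(q)\, q^{4} = 2\pi\, (\Delta\rho)^{2}\, \frac{S}{V}
\]
% Delta rho is the scattering-length-density contrast and S/V the interfacial area
% per unit sample volume; comparing the measured (arbitrary-unit) intensity with
% either right-hand side fixes the factor that converts counts to absolute units.
```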