949 results for PARAMETERS CALIBRATION


Relevance: 30.00%

Abstract:

This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we only explore the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well-established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC’s accepted values produced slightly better fits than literature values do. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven unnarrowed parameters, ABC revealed that three were correlated with other parameters, while the remaining four were found to be not estimable given the data available. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm’s movement and much of the energy budget. We are able to show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs. We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM’s structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
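A minimal sketch of the rejection-ABC step described in this abstract, assuming a generic simulate(params) stand-in for the earthworm IBM and a simple Euclidean distance on the outputs; the parameter names, priors and toy model below are illustrative, not those of the original study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def rejection_abc(simulate, observed, priors, n_sims=10_000, accept_frac=0.01):
    """Plain rejection-ABC: draw parameters from their priors, simulate,
    and keep the draws whose outputs lie closest to the observations."""
    names = list(priors)
    draws = np.column_stack([priors[k].rvs(n_sims, random_state=rng) for k in names])
    distances = np.array([
        np.linalg.norm(simulate(dict(zip(names, row))) - observed) for row in draws
    ])
    keep = np.argsort(distances)[:max(1, int(accept_frac * n_sims))]
    return {k: draws[keep, i] for i, k in enumerate(names)}   # approximate posterior samples

# Illustrative two-parameter toy model standing in for the 14-parameter IBM.
priors = {"growth_rate": stats.uniform(0.0, 1.0), "assimilation": stats.uniform(0.0, 2.0)}
observed = np.array([0.35, 0.70])
toy_model = lambda p: np.array([p["growth_rate"], p["assimilation"]]) + rng.normal(0, 0.01, 2)
posterior = rejection_abc(toy_model, observed, priors)
print({k: np.percentile(v, [2.5, 97.5]) for k, v in posterior.items()})   # credible intervals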

Relevance: 30.00%

Abstract:

We use sunspot group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups R_B above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of R_B are then re-scaled to the full observed RGO group number R_A using a variety of regression techniques. It is found that a very high correlation between R_A and R_B (r_AB > 0.98) does not prevent large errors in the intercalibration (for example, sunspot maximum values can be over 30% too large even for such levels of r_AB). In generating the backbone sunspot number (R_BB), Svalgaard and Schatten (2015, this issue) force regression fits to pass through the scatter plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of quantile-quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method is shown to be different when matching peak and average sunspot group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is entirely caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
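A hedged numerical illustration of the regression issue discussed in this abstract: a fit forced through the origin versus ordinary least squares with an intercept, with a normality test on the residuals standing in for a visual Q-Q plot. The synthetic R_A/R_B data are invented for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic annual means: a lower-acuity count R_B related to R_A with an offset.
R_A = rng.uniform(1, 12, 80)
R_B = np.clip(0.8 * R_A - 0.5 + rng.normal(0, 0.4, 80), 0, None)

# Fit forced through the origin (single scaling factor, as in the backbone method).
k_origin = np.sum(R_A * R_B) / np.sum(R_B ** 2)
resid_origin = R_A - k_origin * R_B

# Ordinary least squares with an intercept.
slope, intercept, r, _, _ = stats.linregress(R_B, R_A)
resid_ols = R_A - (slope * R_B + intercept)

# Shapiro-Wilk used here as a numerical stand-in for inspecting a Q-Q plot of residuals.
for name, resid in [("through origin", resid_origin), ("with intercept", resid_ols)]:
    W, p = stats.shapiro(resid)
    print(f"{name}: residual normality p = {p:.3f}")
print(f"r_AB = {r:.3f}")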

Relevance: 30.00%

Abstract:

Demands are among the most uncertain parameters in a water distribution network model. A good calibration of the model demands leads to better solutions whatever the model is used for. A demand pattern calibration methodology that uses a priori information has been developed for calibrating the behaviour of demand groups. Generally, the behaviours of demands in cities are mixed throughout the network, unlike in smaller villages, where demands are clearly sectorised into residential neighbourhoods, commercial zones and industrial sectors. The ultimate use of demand pattern calibration is leakage detection and isolation. Detecting a leak in a pattern whose nodes are spread all over the network makes isolation infeasible. Moreover, demands in the same zone may be more similar because of the common pressure of the area than because of the type of contract. For this reason, the demand pattern calibration methodology is applied to a real network with synthetic non-geographic demands in order to calibrate geographic demand patterns. The results are compared with those of a previous work in which the calibrated patterns were also non-geographic.

Relevance: 30.00%

Abstract:

The presented work deals with the calibration of a 2D numerical model for the simulation of long-term bed load transport. A settling basin along an alpine stream was used as a case study. The focus is on parameterising the multi-fractional transport model such that a dynamically balanced behaviour regarding erosion and deposition is reached. The 2D hydrodynamic model uses a multi-fraction, multi-layer approach to simulate morphological changes and bed load transport. The mass balancing is performed between three layers: a top mixing layer, an intermediate subsurface layer and a bottom layer. This approach brings computational limitations to calibration. Because of the high computational demands, the choice of calibration strategy is crucial not only for the result, but also for the time required for calibration. Brute-force methods such as Monte Carlo type approaches may require an impractically large number of model runs. All calibration strategies tested here used multiple model runs, each utilising the parameterisation and/or results of the previous run. One concept was to reset the bed to its initial elevations after each run, allowing the re-sorting process to converge to stable conditions. As an alternative, or in combination, the roughness was adapted based on the nodal grading curves resulting from the previous run. Since the adaptations are a spatial process, the whole model domain is subdivided into sections that are homogeneous with respect to hydraulics and morphological behaviour. For faster optimization, the parameters are adapted section-wise. Additionally, a systematic variation was carried out, considering results from previous runs and the interaction between sections. The approach can be considered similar to evolutionary calibration approaches, but it uses analytical links instead of random parameter changes.
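A schematic sketch (not the authors' code) of the section-wise iterative strategy outlined in this abstract: roughness is adapted per homogeneous section from the previous run's net bed change, optionally resetting the bed to its initial elevations between runs. run_model, the adaptation rule and the dummy model below are placeholders.

import copy

def calibrate_sections(run_model, sections, initial_roughness, initial_bed,
                       n_runs=5, relax=0.5, reset_bed=True):
    """Adapt roughness section by section from the previous run, aiming for a
    dynamically balanced behaviour (net bed change close to zero per section)."""
    roughness = dict(initial_roughness)            # e.g. {"upstream": 0.04, "basin": 0.06}
    bed = copy.deepcopy(initial_bed)
    for it in range(n_runs):
        result = run_model(roughness, bed)         # per-section net bed change [m] and new bed
        for s in sections:
            error = result["net_change"][s]
            # Heuristic analytical link: net deposition -> lower roughness, net erosion -> raise it.
            roughness[s] *= (1.0 - relax * error)
        bed = copy.deepcopy(initial_bed) if reset_bed else result["bed"]
        print(f"run {it}: roughness = {roughness}")
    return roughness

def _dummy_model(roughness, bed):
    # Stand-in for the 2D hydrodynamic run: higher roughness -> more deposition.
    return {"net_change": {s: 4.0 * n - 0.2 for s, n in roughness.items()}, "bed": bed}

calibrate_sections(_dummy_model, ["upstream", "basin"],
                   {"upstream": 0.04, "basin": 0.06}, initial_bed={})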

Relevance: 30.00%

Abstract:

Microwave remote sensing has high potential for soil moisture retrieval. However, efficient retrieval depends on optimally choosing the soil moisture retrieval parameters. In this study, an initial evaluation of the SMOS L2 product is first performed, and four approaches to soil moisture retrieval from SMOS brightness temperature are then reported. The tau-omega rationale, based on the radiative transfer equation, is used for the soil moisture retrievals. The single channel algorithm (SCA) using H polarisation is implemented with modifications, which include effective temperatures simulated from ECMWF (downscaled using the WRF-NOAH Land Surface Model (LSM)) and from MODIS. The retrieved soil moisture is then used for soil moisture deficit (SMD) estimation through empirical relationships, with the Probability Distributed Model based SMD as a benchmark. The coefficient of determination during calibration is R² = 0.359 for approach 4 (WRF-NOAH LSM based LST with optimized roughness parameters), followed by approach 2 (optimized roughness parameters and MODIS based LST; R² = 0.293), approach 3 (WRF-NOAH LSM based LST with no optimization; R² = 0.267) and approach 1 (MODIS based LST with no optimization; R² = 0.163). Similarly, approach 4 shows the highest performance during validation, and the other approaches follow a trend similar to that in calibration. All performances are depicted in a Taylor diagram, which indicates that H polarisation using ECMWF based LST gives better performance for SMD estimation than the original SMOS L2 products at the catchment scale.
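For context, a sketch of one common zeroth-order formulation of the tau-omega model referred to in this abstract; the dielectric mixing rule and all parameter values are placeholders rather than the configuration used with SMOS in the study.

import numpy as np

def tau_omega_tb(soil_moisture, t_eff, tau, omega, theta_deg=40.0):
    """Zeroth-order tau-omega model for H-polarised brightness temperature,
    with a crude linear dielectric mixing rule and a smooth-surface Fresnel
    reflectivity standing in for the full dielectric/roughness treatment."""
    theta = np.radians(theta_deg)
    eps = 3.0 + 25.0 * soil_moisture                       # illustrative dielectric constant
    r_h = np.abs((np.cos(theta) - np.sqrt(eps - np.sin(theta) ** 2)) /
                 (np.cos(theta) + np.sqrt(eps - np.sin(theta) ** 2))) ** 2
    gamma = np.exp(-tau / np.cos(theta))                   # vegetation transmissivity
    e_soil = 1.0 - r_h                                     # smooth-soil emissivity, H pol
    return t_eff * (e_soil * gamma
                    + (1.0 - omega) * (1.0 - gamma) * (1.0 + r_h * gamma))

print(tau_omega_tb(soil_moisture=0.25, t_eff=290.0, tau=0.12, omega=0.05))   # ~213 K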

Relevance: 30.00%

Abstract:

The problem of dynamic camera calibration, considering moving objects in close-range environments and using straight lines as references, is addressed. A mathematical model for the correspondence of a straight line in the object and image spaces is discussed. This model is based on the equivalence between the vector normal to the interpretation plane in the image space and the vector normal to the rotated interpretation plane in the object space. To solve the dynamic camera calibration, Kalman filtering is applied; an iterative process based on the recursive property of the Kalman filter is defined, using the sequentially estimated camera orientation parameters to feed back into the feature extraction process in the image. For the dynamic case, e.g. an image sequence of a moving object, a state prediction and a covariance matrix for the next instant are obtained from the available estimates and the system model. Filtered state estimates of good quality can then be computed for each instant of the image sequence from these predictions, using the Kalman filter update and the system model parameters. The proposed approach was tested with simulated and real data. Experiments with real data were carried out in a controlled environment, considering a sequence of images of a moving cube following a linear trajectory over a flat surface.
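A minimal predict/update Kalman filter sketch of the recursion described in this abstract, using a toy constant-velocity model of a single orientation angle; the actual work estimates camera orientation parameters from straight-line correspondences, which is not reproduced here.

import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle: predict the next state and covariance,
    then correct them with the new measurement z."""
    x_pred = F @ x                                  # prediction (fed back to feature extraction in the paper)
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: one orientation angle and its rate, constant-velocity model.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                          # only the angle is measured
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [0.05, 0.11, 0.14, 0.21]:                  # simulated angle measurements [rad]
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print(x)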

Relevance: 30.00%

Abstract:

The aim of this work is to evaluate the influence of point measurements in images with subpixel accuracy and their contribution to the calibration of digital cameras. The effect of subpixel measurements on the 3D coordinates of check points in the object space is also evaluated. For this purpose, an algorithm that allows subpixel accuracy was implemented for the semi-automatic determination of points of interest, based on the Förstner operator. Experiments were carried out with a block of images acquired with the DuncanTech MS3100-CIR multispectral camera. The influence of subpixel measurements on the adjustment by the Least Squares Method (LSM) was evaluated by comparing the estimated standard deviations of the parameters in both situations: manual measurement (pixel accuracy) and subpixel estimation. Additionally, the influence of subpixel measurements on the 3D reconstruction was analyzed. Based on the obtained results, i.e., on the quantified reduction of the standard deviations of the Inner Orientation Parameters (IOP) and of the relative error of the 3D reconstruction, it was shown that measurements with subpixel accuracy are relevant for tasks in photogrammetry in which metric quality is of great importance, such as camera calibration.

Relevance: 30.00%

Abstract:

The novel aspect of this study is the prediction of seven quality parameters of beef samples using time-domain nuclear magnetic resonance (TD-NMR) relaxometry data and multivariate models. Samples from 61 Bonsmara heifers were separated into five groups based on genetics (breeding composition) and feeding system (grain and grass feed). Seven sample parameters were analyzed by reference methods: three sensory parameters (flavor, juiciness and tenderness) and four physicochemical parameters (cooking loss, fat content, moisture content and instrumental tenderness by Warner-Bratzler shear force (WBSF)). Raw beef samples from the same animals were analyzed by TD-NMR relaxometry using the Carr-Purcell-Meiboom-Gill (CPMG) and Continuous Wave-Free Precession (CWFP) sequences. Regression models were constructed with the partial least squares (PLS) chemometric technique, using the CPMG and CWFP data together with the results of the classical analyses. These models allowed the prediction of the aforementioned seven properties. The predictive ability of the method was evaluated using the root mean square error (RMSE) for the calibration (RMSEC) and validation (RMSEP) data sets. The reference and predicted values showed no significant differences at the 95% confidence level.
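A small sketch of the PLS calibration/validation workflow described in this abstract using scikit-learn; the decay curves and reference values are random stand-ins for the CPMG/CWFP data and the measured beef properties.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Stand-in data: 61 samples x 512 relaxation-decay points and one reference property (e.g. WBSF).
X = rng.normal(size=(61, 512))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=61)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

rmsec = np.sqrt(mean_squared_error(y_cal, pls.predict(X_cal).ravel()))   # calibration error
rmsep = np.sqrt(mean_squared_error(y_val, pls.predict(X_val).ravel()))   # prediction error
print(f"RMSEC = {rmsec:.3f}, RMSEP = {rmsep:.3f}")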

Relevance: 30.00%

Abstract:

Current methods for quality control of sugar cane are performed on the extracted juice using several methodologies, often requiring appreciable time and chemicals (some of them toxic), which makes the methods expensive and far from green. The present study proposes the use of X-ray spectrometry together with chemometric methods as an innovative and alternative technique for determining sugar cane quality parameters, specifically sucrose concentration, POL, and fiber content. Measurements on stem, leaf, and juice were performed, and those applied directly to the stem provided the best results. Prediction models for sugar cane stem determinations with a single 60 s irradiation using portable X-ray fluorescence equipment allow estimating % sucrose, % fiber, and POL simultaneously. Average relative deviations of around 8% in the prediction step are acceptable considering that the measurements were made in the field. These results may indicate the best period to cut a particular crop, as well as serve to evaluate the quality of sugar cane for the sugar and alcohol industries.

Relevance: 30.00%

Abstract:

Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one rather than a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instants. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, and the parameters of this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding predictions based on measured air-handling parameters.
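A rough sketch of the kind of dynamic constraint model described in this abstract: a commanded air-handling parameter filtered through a discretised second-order linear system so that the 'achieved' trajectory lags the command. The natural frequency and damping values are illustrative, not those extracted from engine data.

import numpy as np

def second_order_response(commanded, dt, wn, zeta):
    """Discretised second-order system x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u:
    the achieved value tracks the commanded value u with lag and damping."""
    x, v = commanded[0], 0.0
    achieved = []
    for u in commanded:
        a = wn ** 2 * (u - x) - 2.0 * zeta * wn * v    # acceleration from the tracking error
        v += a * dt
        x += v * dt
        achieved.append(x)
    return np.array(achieved)

t = np.arange(0.0, 3.0, 0.01)
commanded = np.where(t > 0.5, 40.0, 10.0)              # e.g. a commanded EGR-valve position step
achieved = second_order_response(commanded, dt=0.01, wn=4.0, zeta=0.7)
print(f"at t = 1.0 s: commanded {commanded[100]:.1f}, achieved {achieved[100]:.1f}")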

Relevance: 30.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by the large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
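A hedged sketch of one simple way to account for a transport delay when aligning a transient emission measurement with an engine command, using cross-correlation to estimate the lag; the paper develops its own processing methods, which are not reproduced here.

import numpy as np

def estimate_lag(reference, delayed, max_lag):
    """Return the lag (in samples) at which `delayed` best matches `reference`,
    found by maximising the cross-correlation over 0..max_lag."""
    ref = reference - reference.mean()
    sig = delayed - delayed.mean()
    corrs = [np.dot(ref[:len(ref) - k], sig[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

rng = np.random.default_rng(3)
fuel_cmd = np.concatenate([np.zeros(200), np.ones(300), np.zeros(200)])   # step in fuelling
true_delay = 25                                                           # e.g. line transport + sensor lag
opacity = np.roll(fuel_cmd, true_delay) + rng.normal(0, 0.05, fuel_cmd.size)

lag = estimate_lag(fuel_cmd, opacity, max_lag=100)
aligned_opacity = opacity[lag:]            # shift the measurement back onto the command time base
print(f"estimated lag: {lag} samples")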

Relevance: 30.00%

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage, relative to the distribution of the starting solution, to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is the need to prevent the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and fuel consumption, is intended to improve rather than replace the manual calibration process.
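A short illustration of the statistical leverage idea mentioned in this abstract: leverage is the diagonal of the hat matrix, and optimizer candidates whose leverage far exceeds that of the training or starting-solution data can be flagged as extrapolation. The design matrix, candidate point and threshold below are arbitrary placeholders.

import numpy as np

def leverage(X_train, X_query):
    """Leverage of query points w.r.t. the training design matrix:
    h(x) = x (X'X)^-1 x', the diagonal of the hat matrix for new points."""
    XtX_inv = np.linalg.pinv(X_train.T @ X_train)
    return np.einsum("ij,jk,ik->i", X_query, XtX_inv, X_query)

rng = np.random.default_rng(7)
X_train = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])   # intercept + 3 inputs
h_train = leverage(X_train, X_train)

# Reject optimizer candidates whose leverage exceeds, say, the 99th percentile of training leverage.
candidate = np.array([[1.0, 4.0, -3.5, 5.0]])                          # far outside the training space
threshold = np.percentile(h_train, 99)
print(leverage(X_train, candidate)[0] > threshold)                      # True -> would extrapolate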

Relevance: 30.00%

Abstract:

The conversion of computed tomography (CT) numbers into material composition and mass density data influences the accuracy of patient dose calculations in Monte Carlo treatment planning (MCTP). The aim of our work was to develop a CT conversion scheme by performing a stoichiometric CT calibration. Fourteen dosimetrically equivalent tissue subsets (bins), ten of which were bone bins, were created. After validation of the proposed CT conversion scheme on phantoms, it was compared with a conventional five-bin scheme containing only one bone bin. This resulted in dose distributions D(14) and D(5) for nine clinical patient cases in a European multi-centre study. The observed local relative differences in dose to medium were mostly smaller than 5%. The dose-volume histograms of both targets and organs at risk were comparable, although within bony structures D(14) was found to be slightly but systematically higher than D(5). Converting dose to medium into dose to water (D(14) to D(14wat) and D(5) to D(5wat)) resulted in larger local differences, with D(5wat) becoming up to 10% higher than D(14wat). In conclusion, multiple bone bins need to be introduced when Monte Carlo (MC) calculations of patient dose distributions are converted to dose to water.
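A toy sketch of the kind of CT-number-to-material conversion this abstract deals with: Hounsfield values mapped to material bins and mass densities through a piecewise lookup. The bin edges and densities are illustrative placeholders, not the fourteen bins of the proposed stoichiometric scheme.

import numpy as np

# Illustrative HU bin edges, material labels and mass densities (g/cm^3);
# a real stoichiometric calibration would derive these from tissue compositions.
HU_EDGES = [-1000, -950, -100, 20, 120, 3000]
MATERIALS = ["air", "lung", "soft tissue", "soft bone", "dense bone"]
DENSITY = [0.001, 0.26, 1.02, 1.15, 1.60]

def ct_to_material(hu):
    """Map a CT number (HU) to a material bin and an interpolated mass density."""
    idx = np.clip(np.searchsorted(HU_EDGES, hu, side="right") - 1, 0, len(MATERIALS) - 1)
    rho = np.interp(hu, HU_EDGES[:-1], DENSITY)      # simple density ramp, illustrative only
    return MATERIALS[idx], float(rho)

for hu in [-1000, -700, 0, 60, 800]:
    print(hu, ct_to_material(hu))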

Relevance: 30.00%

Abstract:

BACKGROUND: In this paper, we present a new method for the calibration of a microscope and its registration using an active optical tracker. METHODS: In practice, both operations are done simultaneously by moving an active optical marker within the field of view of the two devices. The IR LEDs composing the marker are first segmented from the microscope images. Knowing their corresponding three-dimensional (3D) positions in the optical tracker reference system, it is possible to find the transformation matrix between the reference frames of the two devices. Registration and calibration parameters can be extracted directly from that transformation. In addition, since the zoom and focus can be modified by the surgeon during the operation, we propose a spline-based method to update the camera model to the new setup. RESULTS: The proposed technique is currently being used in an augmented reality system for image-guided surgery in the fields of ear, nose and throat (ENT) and craniomaxillofacial surgery. CONCLUSIONS: The results have proved to be accurate, and the technique is a fast, dynamic and reliable way to calibrate and register the two devices in an OR environment.
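A minimal sketch of the spline-based update idea mentioned in this abstract: interpolating a camera parameter (here a focal length) between a few calibrated zoom settings with a cubic spline so the model can follow intra-operative zoom changes. The calibration values are invented placeholders.

import numpy as np
from scipy.interpolate import CubicSpline

# Focal lengths (pixels) calibrated at a few discrete zoom settings (placeholder values).
zoom_settings = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
focal_px = np.array([1200.0, 2350.0, 4700.0, 7000.0, 9400.0])

focal_of_zoom = CubicSpline(zoom_settings, focal_px)

# When the surgeon changes zoom intra-operatively, the camera model is updated from the spline.
current_zoom = 3.2
print(f"interpolated focal length at zoom {current_zoom}: {focal_of_zoom(current_zoom):.0f} px")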