912 results for Calibration methodologies
Abstract:
Producing projections of future crop yields requires careful thought about the appropriate use of atmosphere-ocean global climate model (AOGCM) simulations. Here we describe and demonstrate multiple methods for ‘calibrating’ climate projections using an ensemble of AOGCM simulations in a ‘perfect sibling’ framework. Crucially, this type of analysis assesses the ability of each calibration methodology to produce reliable estimates of future climate, which is not possible using historical observations alone. This type of approach could be more widely adopted for assessing calibration methodologies for crop modelling. The calibration methods assessed include the commonly used ‘delta’ (change factor) and ‘nudging’ (bias correction) approaches. We focus on daily maximum temperature in summer over Europe for this idealised case study, but the methods can be generalised to other variables and other regions. The calibration methods, which are relatively easy to implement given appropriate observations, produce more robust projections of future daily maximum temperatures and heat stress than using raw model output. The choice of calibration method will likely depend on the situation, but change-factor approaches tend to perform best in our examples. Finally, we demonstrate that the uncertainty due to the choice of calibration methodology is a significant contributor to the total uncertainty in future climate projections for impact studies. We conclude that utilising a variety of calibration methods on output from a wide range of AOGCMs is essential to produce climate data that will ensure robust and reliable crop yield projections.
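The ‘delta’ (change factor) and ‘nudging’ (bias correction) approaches named in this abstract can be sketched, in their simplest mean-shift form, as follows; the temperatures and function names are illustrative and not taken from the paper:

```python
from statistics import mean

def delta_calibrate(obs_baseline, model_baseline, model_future):
    """Change factor ('delta'): add the model-projected change in the
    mean to the observed baseline series."""
    change = mean(model_future) - mean(model_baseline)
    return [x + change for x in obs_baseline]

def bias_correct(obs_baseline, model_baseline, model_future):
    """Bias correction ('nudging'): subtract the model's mean baseline
    bias from the future model series."""
    bias = mean(model_baseline) - mean(obs_baseline)
    return [x - bias for x in model_future]

# Illustrative daily maximum temperatures (deg C)
obs = [24.0, 26.5, 23.8, 27.1]       # observations, baseline period
mod_hist = [23.0, 25.0, 24.0, 26.0]  # model, baseline period
mod_fut = [25.0, 27.0, 26.0, 28.0]   # model, future period

delta_series = delta_calibrate(obs, mod_hist, mod_fut)
nudged_series = bias_correct(obs, mod_hist, mod_fut)
```

In practice both approaches are usually applied per month or per quantile rather than to a single overall mean, but the shift-by-a-correction structure is the same.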
Abstract:
The aim of this paper is to present the current development status of a low-cost system for surface reconstruction with structured light. The acquisition system is composed of a single off-the-shelf digital camera and a pattern projector. A pattern codification strategy was developed to allow automatic pattern recognition, and a calibration methodology ensures the determination of the direction vector of each pattern. The experiments indicated that an accuracy of 0.5 mm in depth could be achieved for typical applications.
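The direction vectors recovered by such a calibration ultimately feed a ray-plane triangulation. A minimal sketch, assuming a pinhole camera at the origin and a projected light plane (illustrative geometry, not the paper's actual formulation):

```python
def triangulate_depth(ray_dir, plane_point, plane_normal):
    """Distance along a camera ray (from the origin) to a projected
    light plane: solve (t * ray_dir - plane_point) . plane_normal = 0."""
    denom = sum(r * n for r, n in zip(ray_dir, plane_normal))
    return sum(p * n for p, n in zip(plane_point, plane_normal)) / denom

# Illustrative: light plane at z = 0.5 m, ray along the optical axis
depth = triangulate_depth((0.0, 0.0, 1.0), (0.0, 0.0, 0.5), (0.0, 0.0, 1.0))
```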
Abstract:
The accurate measurement of a vehicle’s velocity is an essential feature in adaptive vehicle-activated sign systems. Since the velocities of the vehicles are acquired from a continuous-wave Doppler radar, data collection becomes challenging. Data accuracy is sensitive to the calibration of the radar on the road, yet clear methodologies for in-field calibration have not been carefully established. The signs are often installed by subjective judgment, which results in measurement errors. This paper develops a calibration method based on mining the collected data and matching individual vehicles travelling between two radars. The data was prepared in two ways: by cleaning and by reconstructing. The results showed that the proposed correction factor derived from the cleaned data corresponded well with the experimental factor obtained on site. In addition, the proposed factor showed superior performance to the one derived from the reconstructed data.
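A correction factor of the kind this paper derives can be sketched as the mean ratio of matched vehicle speeds between the two radars; the numbers are hypothetical and the matching step is deliberately simplified (the paper's actual derivation may differ):

```python
from statistics import mean

def correction_factor(measured, reference):
    """Multiplicative correction factor for a Doppler radar, estimated
    from speeds of the same vehicles matched at a second radar."""
    return mean(ref / mea for mea, ref in zip(measured, reference))

speeds_sign = [48.0, 52.0, 61.0]  # sign's radar, matched vehicles (km/h)
speeds_ref = [50.0, 54.2, 63.5]   # second (reference) radar
factor = correction_factor(speeds_sign, speeds_ref)
corrected = [v * factor for v in speeds_sign]
```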
Abstract:
It has been demonstrated that iodine has an important influence on atmospheric chemistry, especially the formation of new particles and the enrichment of iodine in marine aerosols. The most probable chemical species involved in the production or growth of these particles are iodine oxides, produced photochemically from biogenic halocarbon emissions and/or iodine emission from the sea surface. However, the iodine chemistry from the gaseous to the particulate phase in the coastal atmosphere, and the chemical nature of the condensing iodine species, are still not understood. A Tenax/Carbotrap adsorption sampling technique and a thermo-desorption/cryo-trap/GC-MS system have been further developed and improved for measuring volatile organic iodine species in the gas phase. Several iodo-hydrocarbons, such as CH3I, C2H5I, CH2ICl, CH2IBr and CH2I2, have been measured in samples from a calibration test gas source (standards), in real air samples and in samples from seaweed/macro-algae emission experiments. A denuder sampling technique has been developed to characterise potential precursor compounds of coastal particle formation processes, such as molecular iodine in the gas phase. Starch-, TMAH- (TetraMethylAmmonium Hydroxide) and TBAH- (TetraButylAmmonium Hydroxide) coated denuders were tested for their efficiency in collecting I2 at the inner surface, followed by TMAH extraction and ICP/MS determination with tellurium added as an internal standard. The developed method has proved to be an effective, accurate and suitable process for I2 measurement in the field, with an estimated detection limit of ~0.10 ng·L-1 for a sampling volume of 15 L. An H2O/TMAH-Extraction-ICP/MS method has been developed for the accurate and sensitive determination of iodine species in tropospheric aerosol particles.
The particle samples were collected on cellulose-nitrate filters using conventional filter holders, or on cellulose-nitrate/Tedlar foils using a 5-stage Berner impactor for size-segregated particle analysis. The water-soluble species IO3- and I- were separated by an anion-exchange process after water extraction. Non-water-soluble species, including iodine oxides and organic iodine, were digested and extracted by TMAH. Afterwards the three sample fractions were analysed by ICP/MS. The detection limit for particulate iodine was determined to be 0.10~0.20 ng·m-3 for sampling volumes of 40~100 m3. The developed methods were used in two field measurements, in May 2002 and September 2003, at and around the Mace Head Atmospheric Research Station (MHARS) located on the west coast of Ireland. Elemental iodine, a precursor of iodine chemistry in the coastal atmosphere, was determined in the gas phase at a seaweed hot-spot around the MHARS; I2 concentrations were in the range of 0~1.6 ng·L-1 and showed a positive correlation with the ozone concentration. A seaweed-chamber experiment performed at the field measurement station showed that the I2 emission rate from macro-algae was in the range of 0.019~0.022 ng·min-1·kg-1. During these experiments, nanometre-particle concentrations were obtained from Scanning Mobility Particle Sizer (SMPS) measurements. Particle number concentrations were found to have a linear correlation with elemental iodine in the gas phase of the seaweed chamber, showing that gaseous I2 is one of the important precursors of new particle formation in the coastal atmosphere. Iodine contents in the particle phase were measured in both field campaigns at and around the field measurement station. Total iodine concentrations were found to be in the range of 1.0~21.0 ng·m-3 in the PM2.5 samples. A significant correlation between the total iodine concentrations and the nanometre-particle number concentrations was observed.
The particulate iodine species analysis indicated that iodide contents are usually higher than those of iodate in all samples, with ratios in the range of 2~5:1. It is possible that these water-soluble iodine species are transferred through the sea-air interface into the particle phase. The ratio of water-soluble species (iodate + iodide) to non-water-soluble species (probably iodine oxides and organic iodine compounds) was observed to be in the range of 1:1 to 1:2. Higher concentrations of the non-water-soluble species, as products of photolysis transferred from the gas phase into the particle phase, appear in samples collected while nucleation events occur. This supports the idea that iodine chemistry in the coastal boundary layer is linked with new particle formation events. Furthermore, artificial aerosol particles were formed from gaseous iodine sources (e.g. CH2I2) in a laboratory reaction-chamber experiment, in which the reaction constant of the CH2I2 photolysis was calculated based upon first-order reaction kinetics. The end products of iodine chemistry in the particle phase were identified and quantified.
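The first-order kinetics mentioned for the CH2I2 photolysis imply C(t) = C0·exp(-k·t), so the rate constant can be recovered from two concentration readings; the numbers below are illustrative, not measured values from the study:

```python
import math

def first_order_rate_constant(c0, c_t, t):
    """Rate constant k from first-order decay C(t) = C0 * exp(-k * t)."""
    return math.log(c0 / c_t) / t

# Illustrative: concentration halves in 120 s, so k = ln(2) / 120
k = first_order_rate_constant(10.0, 5.0, 120.0)
```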
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies.
Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
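In its simplest linear form, the Tikhonov scheme discussed above replaces the ill-posed least-squares problem min ||Ax - b||^2 with min ||Ax - b||^2 + lam^2 ||x||^2, solved through the regularized normal equations. A minimal two-parameter sketch with nearly collinear columns (illustrative data, not the paper's watershed models):

```python
def tikhonov_2x2(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 for a two-parameter
    problem via the normal equations (A^T A + lam^2 I) x = A^T b."""
    m = len(A)
    # Entries of A^T A + lam^2 I
    n00 = sum(A[i][0] * A[i][0] for i in range(m)) + lam * lam
    n01 = sum(A[i][0] * A[i][1] for i in range(m))
    n11 = sum(A[i][1] * A[i][1] for i in range(m)) + lam * lam
    # Entries of A^T b
    g0 = sum(A[i][0] * b[i] for i in range(m))
    g1 = sum(A[i][1] * b[i] for i in range(m))
    det = n00 * n11 - n01 * n01
    return ((n11 * g0 - n01 * g1) / det, (n00 * g1 - n01 * g0) / det)

# Nearly collinear columns make the unregularized solution unstable;
# a small lam stabilises it at a realistic, near-symmetric answer
A = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]
b = [2.0, 2.0, 2.0]
x = tikhonov_2x2(A, b, lam=0.1)
```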
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. 
This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
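The second enhancement, starting successive runs at points maximally removed from previous starting points, can be illustrated on a one-dimensional bimodal objective. The crude fixed-step local search below merely stands in for the gradient-based GML optimiser; everything here is an illustrative sketch, not the paper's algorithm:

```python
def local_descent(f, x0, step=0.1, iters=200):
    """Crude fixed-step local search standing in for a gradient-based
    (GML-style) optimiser; it can get stuck in local minima."""
    x = x0
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

def multistart(f, lo, hi, n_starts=5):
    """Start each new local run at the grid point maximally removed
    from all previous starting points, then keep the best result."""
    starts, best = [], None
    for _ in range(n_starts):
        if not starts:
            x0 = (lo + hi) / 2
        else:
            grid = [lo + i * (hi - lo) / 100 for i in range(101)]
            x0 = max(grid, key=lambda g: min(abs(g - s) for s in starts))
        starts.append(x0)
        x = local_descent(f, x0)
        if best is None or f(x) < f(best):
            best = x
    return best

# Bimodal objective: local minimum near x = 2, global minimum near x = -2
f = lambda x: (x * x - 4) ** 2 + x
best = multistart(f, -5.0, 5.0)
```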
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among different approaches for predicting this demand, the four-step demand forecasting process is most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring and operation. As a result, traditional modeling approaches, such as those used in static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions.
With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
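One widely used performance measure for traffic assignment calibration of this kind is the GEH statistic, which compares modelled and observed hourly link flows; it is a standard choice in practice, though not necessarily the exact measure defined in this study, and the counts below are illustrative:

```python
import math

def geh(model_flow, observed_flow):
    """GEH statistic for hourly traffic volumes; values below 5 are
    conventionally taken as a good match for an individual link."""
    return math.sqrt(2 * (model_flow - observed_flow) ** 2
                     / (model_flow + observed_flow))

# Illustrative link flows (veh/h): model output vs. field detectors
pairs = [(1020, 1000), (830, 900), (460, 450)]
scores = [geh(m, c) for m, c in pairs]
share_ok = sum(s < 5.0 for s in scores) / len(scores)
```

A common acceptance rule is that most links (often 85%) should have GEH below 5 before a model is considered calibrated.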
Abstract:
This paper is the second in a pair that Lesh, English, and Fennewald will be presenting at ICME TSG 19 on Problem Solving in Mathematics Education. The first paper describes three shortcomings of past research on mathematical problem solving. The first shortcoming can be seen in the fact that knowledge has not accumulated; in fact it has atrophied significantly during the past decade, as unsuccessful theories continue to be recycled and embellished. One reason for this is that researchers generally have failed to develop the research tools needed to reliably observe, document, and assess the development of the concepts and abilities that they claim to be important. The second shortcoming is that existing theories and research have failed to make it clear how concept development (or the development of basic skills) is related to the development of problem solving abilities, especially when attention shifts beyond the word problems found in school to the kinds of problems found outside school, where the requisite skills and even the questions to be asked might not be known in advance. The third shortcoming has to do with inherent weaknesses in observational studies and teaching experiments, and with the assumption that a single grand theory should be able to describe conceptual systems, instructional systems, and assessment systems that are strongly molded and shaped by the same theoretical perspectives being used to develop them. Therefore, this paper describes theoretical perspectives and methodological tools that are proving effective in combating the preceding kinds of shortcomings. We refer to our theoretical framework as models & modeling perspectives (MMP) on problem solving (Lesh & Doerr, 2003), learning, and teaching. One of the main methodologies of MMP is called multi-tier design studies (MTD).