128 results for PARAMETERS CALIBRATION
in CentAUR: Central Archive, University of Reading - UK
Abstract:
The paper reports an interactive tool for calibrating a camera, suitable for use in outdoor scenes. The motivation for the tool was the need to obtain an approximate calibration for images taken with no explicit calibration data. Such images are frequently presented to research laboratories, especially in surveillance applications, with a request to demonstrate algorithms. The method decomposes the calibration parameters into intuitively simple components, and relies on the operator interactively adjusting the parameter settings to achieve a visually acceptable agreement between a rectilinear calibration model and their own perception of the scene. Using the tool, we have been able to calibrate images of unknown scenes, taken with unknown cameras, in a matter of minutes. The standard of calibration has proved to be sufficient for model-based pose recovery and tracking of vehicles.
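The decomposition into intuitive components can be illustrated with a minimal pinhole projection of a ground-plane point, assuming a camera at a given height tilted down towards the scene. The function and parameter names below are illustrative, not the tool's actual interface:

```python
import math

def project_ground_point(x, y, cam_height=5.0, tilt_deg=15.0, focal=800.0,
                         cx=320.0, cy=240.0):
    """Project a ground-plane point (x, y, 0) into an image using a pinhole
    camera at height cam_height above the ground, tilted down by tilt_deg.

    Intuitive parameters of this kind (height, tilt, focal length) are the
    sort of components an operator could adjust interactively until the
    projected rectilinear model visually matches the scene.
    """
    t = math.radians(tilt_deg)
    # Camera axes: x right, y down (image), z forward; forward is tilted
    # down by t. The point relative to the camera centre is (x, y, -h).
    xc = x
    yc = cam_height * math.cos(t) - y * math.sin(t)
    zc = cam_height * math.sin(t) + y * math.cos(t)
    u = cx + focal * xc / zc
    v = cy + focal * yc / zc
    return u, v
```

With zero tilt, a point 10 m ahead seen from 5 m up projects directly below the image centre, as expected for a camera looking along the horizon.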
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made more straightforward the synoptic measurement of water surface elevations along flood waterlines, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. A result of this was that there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. 
The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
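The two performance measures being compared can be sketched minimally as follows, assuming paired observed/modelled waterline heights and co-registered wet/dry pixel masks (names are illustrative, not the paper's implementation):

```python
import math

def rms_height_difference(observed, modelled):
    """Height-based measure: r.m.s. of height differences at corresponding
    points along the observed and modelled waterlines."""
    assert len(observed) == len(modelled)
    sq = [(o - m) ** 2 for o, m in zip(observed, modelled)]
    return math.sqrt(sum(sq) / len(sq))

def areal_fit(obs_wet, mod_wet):
    """Areal pattern-matching measure: overlap of wet pixels, here the
    intersection of the two flood extents over their union."""
    inter = sum(1 for o, m in zip(obs_wet, mod_wet) if o and m)
    union = sum(1 for o, m in zip(obs_wet, mod_wet) if o or m)
    return inter / union if union else 1.0
```

In a GLUE-style analysis, each model run would be scored with one of these measures, and the more sensitive measure restricts the behavioural parameter range more tightly.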
Abstract:
The conceptual and parameter uncertainty of the semi-distributed INCA-N (Integrated Nutrients in Catchments-Nitrogen) model was studied using the GLUE (Generalized Likelihood Uncertainty Estimation) methodology combined with quantitative experimental knowledge, the concept known as 'soft data'. Cumulative inorganic N leaching, annual plant N uptake and annual mineralization proved to be useful soft data to constrain the parameter space. The INCA-N model was able to simulate the seasonal and inter-annual variations in the stream-water nitrate concentrations, although the lowest concentrations during the growing season were not reproduced. This suggested that there were some retention processes or losses either in peatland/wetland areas or in the river which were not included in the INCA-N model. The results of the study suggested that soft data were a way to reduce parameter equifinality, and that the calibration and testing of distributed hydrological and nutrient leaching models should be based both on runoff and/or nutrient concentration data and the qualitative knowledge of experimentalists. (c) 2006 Elsevier B.V. All rights reserved.
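The GLUE-with-soft-data idea can be illustrated as a filter over Monte-Carlo parameter sets, assuming each soft-data item (e.g. annual plant N uptake within a measured range) is expressed as a pass/fail check. This is a simplified sketch, not the INCA-N implementation:

```python
def glue_behavioural(param_sets, likelihoods, soft_checks, threshold=0.3):
    """Retain parameter sets that are behavioural in the GLUE sense:
    their likelihood measure exceeds the threshold AND they satisfy
    every soft-data constraint. Constraining with soft data in this way
    is one route to reducing parameter equifinality."""
    kept = []
    for params, likelihood in zip(param_sets, likelihoods):
        if likelihood >= threshold and all(check(params) for check in soft_checks):
            kept.append(params)
    return kept
```

Each `soft_checks` entry encodes one piece of experimental knowledge as a predicate on a candidate parameter set; sets that fit the runoff record but violate the soft data are rejected.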
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte-Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined. (C) 2003 Elsevier Science B.V. All rights reserved.
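A minimal sketch of the kind of Monte-Carlo scan used to flag non-identifiable parameters: if the model objective barely responds as one parameter sweeps its prior range (all other parameters held at base values), that parameter cannot be identified from the data. This is illustrative, not the toolbox's API:

```python
import random

def identifiability_scan(model, param_name, low, high, base_params, n=200, seed=0):
    """Sample one parameter uniformly over [low, high] and return the spread
    (max - min) of the model objective. A spread near zero marks the
    parameter as insensitive, and hence non-identifiable, within this
    model structure."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    scores = []
    for _ in range(n):
        params = dict(base_params)
        params[param_name] = rng.uniform(low, high)
        scores.append(model(params))
    return max(scores) - min(scores)
```

A parameter such as the algal activity rates described above would show a near-zero spread; nitrification-like parameters with optima at zero would instead show their best scores clustered at the lower bound.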
Abstract:
Maize silage nutritive quality is routinely determined by near infrared reflectance spectroscopy (NIRS). However, little is known about the impact of sample preparation on the accuracy of the calibration to predict biological traits. A sample population of 48 maize silages representing a wide range of physiological maturities was used in a study to determine the impact of different sample preparation procedures (i.e., drying regimes; the presence or absence of residual moisture; the degree of particle comminution) on resultant NIR prediction statistics. All silages were scanned using a total of 12 combinations of sample pre-treatments. Each sample preparation combination was subjected to three multivariate regression techniques to give a total of 36 predictions per biological trait. Increased sample preparation, relative to scanning the unprocessed whole plant (WP) material, always resulted in a numerical minimisation of model statistics. However, the ability of each of the treatments to significantly minimise the model statistics differed. Particle comminution was the most important factor, oven-drying regime was intermediate, and residual moisture presence was the least important. Models to predict various biological parameters of maize silage will be improved if material is subjected to a high degree of particle comminution (i.e., having been passed through a 1 mm screen) and developed on plant material previously dried at 60 °C. The extra effort in terms of time and cost required to remove sample residual moisture cannot be justified. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
The microbial fermentability, ruminal degradability and digestibility of 48 maize silages were determined using in vitro gas production (GP), in situ degradability and in vitro digestibility procedures. The silages were produced from forage maize harvested throughout the summer of 1998, and represent a wide range of physiological maturities. Large variations among samples were observed for all biological parameters, with the exception of in vitro digestibility and the asymptote of in vitro GP. The potential of near infrared reflectance spectroscopy (NIRS) to predict the biological parameters measured was determined by regression of the biological data against the respective spectral profile. NIRS demonstrated only a moderate ability (R² = 0.60–0.80) to predict in vitro digestibility, modelled kinetics of gas production (excluding the asymptote of gas production) and the modelled ruminally soluble dry matter (DM) fraction. Calibration statistics for remaining biological parameters were unacceptably poor (R² < 0.60). (C) 2004 Elsevier B.V. All rights reserved.
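As a sketch, a common exponential gas-production curve and the R² statistic used to judge calibration quality might look like the following. The exponential-with-lag form is an assumption for illustration; the kinetic model actually fitted in the paper may differ:

```python
import math

def gp_curve(t, asymptote, rate, lag):
    """A widely used gas-production model: GP(t) = A * (1 - exp(-c*(t - lag)))
    for t > lag, zero before the lag. A is the asymptote, c the fractional
    rate. (Illustrative form, not necessarily the paper's model.)"""
    if t <= lag:
        return 0.0
    return asymptote * (1.0 - math.exp(-rate * (t - lag)))

def r_squared(observed, predicted):
    """Coefficient of determination: the statistic by which NIRS calibration
    quality is judged (e.g. moderate for 0.60-0.80, poor below 0.60)."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot
```

In the NIRS workflow, parameters such as the rate and asymptote are first estimated from in vitro measurements, then regressed against the spectral profiles.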
Abstract:
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone. The methods are also limited to optical see-through HMDs. Building on our existing HMD calibration method [1], we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in various positions. The locations of image features on the calibration object are then re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner in both see-through and in non-see-through modes and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors and involves no error-prone human measurements.
Abstract:
A robot mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on using pre-calibrated robots, pre-calibrated cameras, special calibration objects or combinations of these. Our method avoids using large scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
Abstract:
This paper discusses a new method of impedance control that has been successfully implemented on the master robot of a teleoperation system. The method involves calibrating the robot to quantify the effect of adjustable controller parameters on the impedances along its different axes. The empirical equations relating end-effector impedance to the controller's feedback gains are obtained by performing system identification tests along individual axes of the robot. With these equations, online control of end-effector stiffness and damping is possible without having to monitor joint torques or solving complex algorithms. Hard contact conditions and compliant interfaces have been effectively demonstrated on a telemanipulation test-bed using appropriate combinations of stiffness and damping settings obtained by this method.
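The approach can be sketched in two parts: the target impedance law along one axis, and the inversion of empirical gain-to-impedance fits obtained by system identification. The linear fit form below is a hypothetical stand-in for the paper's empirical equations:

```python
def end_effector_force(stiffness, damping, displacement, velocity):
    """Target impedance law along a single axis: F = K*x + B*x_dot.
    With suitable K and B this renders hard contact (high K) or a
    compliant interface (low K, moderate B)."""
    return stiffness * displacement + damping * velocity

def gains_for_impedance(stiffness, damping, k_coef, d_coef):
    """Invert hypothetical linear fits K = k_coef * kp and B = d_coef * kd
    (the kind of relationship obtained by per-axis system identification)
    to choose controller feedback gains online, without monitoring joint
    torques or solving complex algorithms."""
    return stiffness / k_coef, damping / d_coef
```

Given a desired axis impedance, the second function returns the gains directly, which is what makes online control of end-effector stiffness and damping practical.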
Abstract:
It took the solar polar passage of Ulysses in the early 1990s to establish the global structure of the solar wind speed during solar minimum. However, it remains unclear if the solar wind is composed of two distinct populations of solar wind from different sources (e.g., closed loops which open up to produce the slow solar wind) or if the fast and slow solar wind rely on the superradial expansion of the magnetic field to account for the observed solar wind speed variation. We investigate the solar wind in the inner corona using the Wang-Sheeley-Arge (WSA) coronal model incorporating a new empirical magnetic topology–velocity relationship calibrated for use at 0.1 AU. In this study the empirical solar wind speed relationship was determined by using Helios perihelion observations, along with results from Riley et al. (2003) and Schwadron et al. (2005) as constraints. The new relationship was tested by using it to drive the ENLIL 3-D MHD solar wind model and obtain solar wind parameters at Earth (1.0 AU) and Ulysses (1.4 AU). The improvements in speed, its variability, and the occurrence of high-speed enhancements provide confidence that the new velocity relationship better determines the solar wind speed in the outer corona (0.1 AU). An analysis of this improved velocity field within the WSA model suggests the existence of two distinct mechanisms of the solar wind generation, one for fast and one for slow solar wind, implying that a combination of present theories may be necessary to explain solar wind observations.
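An illustrative WSA-style speed relationship is sketched below: speed falls with the flux-tube expansion factor f_s and rises with angular distance theta_b from the nearest coronal-hole boundary. All coefficients here are placeholders for illustration, not the calibrated values from the paper:

```python
import math

def wind_speed(f_s, theta_b, v0=250.0, v1=650.0, alpha=2.0 / 9.0, w=3.0):
    """Illustrative magnetic topology-velocity relationship (WSA-style).

    f_s     : superradial flux-tube expansion factor (dimensionless)
    theta_b : angular distance from the coronal-hole boundary (degrees)

    Slow wind arises from strongly expanding flux tubes (large f_s) near
    coronal-hole boundaries (small theta_b); fast wind from weakly
    expanding tubes deep inside coronal holes. Returns km/s at ~0.1 AU.
    Placeholder coefficients; the calibrated relationship is constrained
    by Helios perihelion observations."""
    expansion_term = v1 / (1.0 + f_s) ** alpha
    boundary_term = 1.0 - math.exp(-((theta_b / w) ** 2))
    return v0 + expansion_term * boundary_term
```

A relationship of this form would then drive a 3-D MHD model such as ENLIL outward from 0.1 AU to obtain solar wind parameters at Earth and Ulysses.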
Abstract:
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
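The reported error measure can be sketched as an RMS reprojection error over corresponding feature points (a generic implementation, assuming matched lists of detected and reprojected pixel coordinates):

```python
def reprojection_rms(image_points, reprojected):
    """RMS pixel distance between detected image features and the model
    points reprojected through the recovered intrinsic and extrinsic
    parameters. A low value indicates a good calibration."""
    assert len(image_points) == len(reprojected)
    total = sum((u - ur) ** 2 + (v - vr) ** 2
                for (u, v), (ur, vr) in zip(image_points, reprojected))
    return (total / len(image_points)) ** 0.5
```

In a full pipeline the reprojected points would come from a standard camera model fitted to the grid correspondences; here the function simply evaluates the final error statistic.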
Abstract:
The Along-Track Scanning Radiometers (ATSRs) provide a long time-series of measurements suitable for the retrieval of cloud properties. This work evaluates the freely-available Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) dataset (version 3) created from the ATSR-2 (1995–2003) and Advanced ATSR (AATSR; 2002 onwards) records. Users are recommended to consider only retrievals flagged as high-quality, where there is a good consistency between the measurements and the retrieved state (corresponding to about 60% of converged retrievals over sea, and more than 80% over land). Cloud properties are found to be generally free of any significant spurious trends relating to satellite zenith angle. Estimates of the random error on retrieved cloud properties are suggested to be generally appropriate for optically-thick clouds, and up to a factor of two too small for optically-thin cases. The correspondence between ATSR-2 and AATSR cloud properties is high, but a relative calibration difference between the sensors of order 5–10% at 660 nm and 870 nm limits the potential of the current version of the dataset for trend analysis. As ATSR-2 is thought to have the better absolute calibration, the discussion focusses on this portion of the record. Cloud-top heights from GRAPE compare well to ground-based data at four sites, particularly for shallow clouds. Clouds forming in boundary-layer inversions are typically around 1 km too high in GRAPE due to poorly-resolved inversions in the modelled temperature profiles used. Global cloud fields are compared to satellite products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) measurements, and a climatology of liquid water content derived from satellite microwave radiometers. In all cases the main reasons for differences are linked to differing sensitivity to, and treatment of, multi-layer cloud systems.
The correlation coefficient between GRAPE and the two MODIS products considered is generally high (greater than 0.7 for most cloud properties), except for liquid and ice cloud effective radius, which also show biases between the datasets. For liquid clouds, part of the difference is linked to choice of wavelengths used in the retrieval. Total cloud cover is slightly lower in GRAPE (0.64) than the CALIOP dataset (0.66). GRAPE underestimates liquid cloud water path relative to microwave radiometers by up to 100 g m⁻² near the Equator and overestimates by around 50 g m⁻² in the storm tracks. Finally, potential future improvements to the algorithm are outlined.
Abstract:
Flash floods pose a significant danger for life and property. Unfortunately, in arid and semiarid environments the runoff generation shows a complex non-linear behavior with a strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Thus better descriptions of physical processes at the watershed scale need to be incorporated into the hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations originated through concurrent hydrological processes within a storm event was significant on the watershed scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned to observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations.
The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt the flood forecasting of a stratiform event with highly different behavior.
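A minimal sketch of a time-varying relief-related parameter, here a channel roughness coefficient that relaxes during the event as flow reworks the channel. The exponential form and coefficients are illustrative assumptions, not the paper's calibrated formulation:

```python
import math

def channel_roughness(t, n0, n_min, decay):
    """Time-varying Manning-style roughness coefficient within a storm
    event: starts at n0 and decays exponentially towards n_min with
    rate `decay` (1/h). Setting decay=0 recovers the traditional static
    (time-constant) parameter, so both formulations fit one structure."""
    return n_min + (n0 - n_min) * math.exp(-decay * t)
```

In a GLUE analysis, n0, n_min and decay would all be sampled, and runs with these dynamic parameters compared against the observed nested hydrographs alongside the time-constant alternative.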
Abstract:
The classic vertical advection-diffusion (VAD) balance is a central concept in studying the ocean heat budget, in particular in simple climate models (SCMs). Here we present a new framework to calibrate the parameters of the VAD equation to the vertical ocean heat balance of two fully-coupled climate models that is traceable to the models’ circulation as well as to vertical mixing and diffusion processes. Based on temperature diagnostics, we derive an effective vertical velocity w∗ and turbulent diffusivity k∗ for each individual physical process. In steady-state, we find that the residual vertical velocity and diffusivity change sign in mid-depth, highlighting the different regional contributions of isopycnal and diapycnal diffusion in balancing the models’ residual advection and vertical mixing. We quantify the impacts of the time-evolution of the effective quantities under a transient 1%CO2 simulation and make the link to the parameters of currently employed SCMs.
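The steady-state form of the classic VAD balance referred to above, for horizontally averaged temperature T(z) with effective vertical velocity w* and turbulent diffusivity k*, can be written as (sign conventions vary between studies):

```latex
w^{*}\,\frac{\partial T}{\partial z} \;=\; k^{*}\,\frac{\partial^{2} T}{\partial z^{2}}
```

so that upwelling of cold deep water balances the downward turbulent diffusion of heat. The framework described above derives w* and k* separately for each physical process from the coupled models' temperature diagnostics, rather than treating them as single tuned constants.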