949 results for PARAMETERS CALIBRATION
Abstract:
An Ensemble Kalman Filter is applied to assimilate observed tracer fields in various combinations in the Bern3D ocean model. Each tracer combination yields a set of optimal transport parameter values that are used in projections with prescribed CO2 stabilization pathways. The assimilation of temperature and salinity fields yields an overly vigorous ventilation of the thermocline and the deep ocean, whereas the inclusion of CFC-11 and radiocarbon improves the representation of physical and biogeochemical tracers and of ventilation time scales. Projected peak uptake rates and cumulative uptake of CO2 by the ocean are around 20% lower for the parameters determined with CFC-11 and radiocarbon as additional targets compared to those determined with salinity and temperature only. Higher surface temperature changes are simulated in the Greenland–Norwegian–Iceland Sea and in the Southern Ocean when CFC-11 is included in the Ensemble Kalman model tuning. These findings highlight the importance of ocean transport calibration for the design of near-term and long-term CO2 emission mitigation strategies and for climate projections.
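For readers unfamiliar with the assimilation machinery, the following is a minimal sketch of a stochastic Ensemble Kalman Filter analysis step with perturbed observations. It illustrates the generic update used for this kind of parameter estimation; it is not taken from the Bern3D setup, and the names (`enkf_analysis`, `obs_operator`, etc.) are placeholders.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_operator, obs_err_std, rng=None):
    """Stochastic EnKF analysis step: update an ensemble of parameter/state
    vectors (n_members x n_dim) towards observations (n_obs,)."""
    rng = np.random.default_rng() if rng is None else rng
    n_members = ensemble.shape[0]

    # Map each ensemble member into observation space.
    predicted = np.array([obs_operator(m) for m in ensemble])    # (n_members, n_obs)

    # Ensemble anomalies (deviations from the ensemble mean).
    A = ensemble - ensemble.mean(axis=0)
    Y = predicted - predicted.mean(axis=0)

    # Sample covariances and Kalman gain K = P_xy (P_yy + R)^-1.
    R = np.diag(np.full(obs.shape[0], obs_err_std**2))
    P_xy = A.T @ Y / (n_members - 1)
    P_yy = Y.T @ Y / (n_members - 1) + R
    K = P_xy @ np.linalg.solve(P_yy, np.eye(obs.shape[0]))

    # Perturbed observations keep the analysis ensemble spread consistent.
    perturbed_obs = obs + rng.normal(0.0, obs_err_std, size=predicted.shape)
    return ensemble + (perturbed_obs - predicted) @ K.T
```

In a tracer-assimilation context the ensemble members would be candidate transport parameter sets and `obs_operator` the model run that maps them to tracer fields; here it is left abstract.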
Abstract:
The reliability of millimeter and sub-millimeter wave radiometer measurements depends on the accuracy of the loads they employ as calibration targets. In the recent past, on-board calibration loads have been developed for a variety of satellite remote sensing instruments. Unfortunately, some of these have suffered from calibration inaccuracies whose root cause was poor thermal performance of the calibration target. Stringent performance parameters of the calibration target, such as low reflectivity, high temperature uniformity, low mass and low power consumption, combined with tight volumetric requirements, remain a challenge for the space instrument developer. In this paper we present a novel multi-layer absorber concept for a calibration load which offers an excellent compromise between very good radiometric performance and temperature uniformity on the one hand, and the mass and volumetric constraints required of space-borne calibration targets on the other.
Abstract:
The European Rosetta mission, on its way to comet 67P/Churyumov-Gerasimenko, will remain for more than a year in the close vicinity (1 km) of the comet. The two ROSINA mass spectrometers on board Rosetta are designed to analyze the neutral and ionized volatile components of the cometary coma. However, the relative velocity between the comet and the spacecraft will be minimal, and the velocity of the outgassing particles is below 1 km/s. This combination leads to very low ion energies in the plasma surrounding the comet, typically below 20 eV. Additionally, the spacecraft may charge up to a few volts in this environment. In order to simulate such a plasma and to calibrate the mass spectrometers, a source of ions with very low energies had to be developed for use in the laboratory together with the different gases expected at the comet. In this paper we present the design of this ion source and discuss the physical parameters of the ion beam, such as sensitivity, energy distribution, and beam shape. Finally, we show the first ion measurements that have been performed together with one of the two mass spectrometers.
Abstract:
The relationship between phytoplankton assemblages and the associated optical properties of the water body is important for the further development of algorithms for large-scale remote sensing of phytoplankton biomass and the identification of phytoplankton functional types (PFTs), which are often representative of different biogeochemical export scenarios. Optical in-situ measurements aid in the identification of phytoplankton groups with differing pigment compositions and are widely used to validate remote sensing data. In this study we present results from an interdisciplinary cruise aboard the RV Polarstern along a north-to-south transect in the eastern Atlantic Ocean in November 2008. Phytoplankton community composition was identified using a broad set of in-situ measurements. Water samples from the surface and from the depth of maximum chlorophyll concentration were analyzed by high performance liquid chromatography (HPLC), flow cytometry, spectrophotometry and microscopy. Simultaneously, the above- and underwater light field was measured by a set of high spectral resolution (hyperspectral) radiometers. An unsupervised cluster algorithm applied to the measured parameters allowed us to define bio-optical provinces, which we compared to ecological provinces proposed elsewhere in the literature. As could be expected, picophytoplankton was responsible for most of the variability of PFTs in the eastern Atlantic Ocean. Our bio-optical clusters agreed well with established provinces and thus can be used to classify areas of similar biogeography. This method has the potential to become an automated approach in which satellite data could be used to identify shifting boundaries of established ecological provinces or to track exceptions from the rule, improving our understanding of the biogeochemical cycles in the ocean.
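The clustering step described above can be illustrated with a very small sketch. The abstract does not name the cluster algorithm, so k-means on standardized station features is used here purely as a stand-in, and the feature matrix below is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per station, columns standing in for
# pigment concentrations, hyperspectral band ratios, cell counts, etc.
features = np.random.default_rng(0).random((60, 8))

# Standardize so variables with different units contribute comparably,
# then group stations into candidate "bio-optical provinces".
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(labels)
```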
Abstract:
The main purpose of robot calibration is the correction of possible errors in the robot parameters. This paper presents a method for the kinematic calibration of a parallel robot that is equipped with one camera in hand. In order to preserve the mechanical configuration of the robot, the camera is utilized to acquire incremental positions of the end effector from a spherical object that is fixed in the world reference frame. The positions of the end effector are related to the incremental positions of the resolvers of the robot's motors, and a kinematic model of the robot is used to find a new set of parameters that minimizes errors in the kinematic equations. Additionally, properties of the spherical object and intrinsic camera parameters are utilized to model the projection of the object in the image and to improve spatial measurements. Finally, the robotic system is designed to carry out tracking tasks, and the calibration of the robot is validated by integrating the errors of the visual controller.
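The parameter-fitting step common to this kind of kinematic calibration can be sketched as a nonlinear least-squares problem. This is a generic illustration rather than the authors' procedure; `forward_kinematics` is a placeholder for the robot-specific model.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_kinematics(params0, joint_readings, measured_positions,
                         forward_kinematics):
    """Fit kinematic parameters so that the model reproduces the
    camera-measured end-effector positions.

    forward_kinematics(params, q) -> predicted 3-D position (placeholder
    for the parallel-robot model); joint_readings holds resolver values."""
    def residuals(params):
        predicted = np.array([forward_kinematics(params, q)
                              for q in joint_readings])
        return (predicted - measured_positions).ravel()

    return least_squares(residuals, params0).x
```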
Abstract:
A generic bio-inspired adaptive architecture for image compression suitable for implementation in embedded systems is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. Also, this prototype implementation may serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details of this pluggable-component approach are also given in the paper.
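As a software-level illustration of the kind of search loop involved (the paper's version is adapted for hardware implementation on the FPGA), the sketch below is a minimal (mu + lambda) Evolution Strategy over a real-valued coefficient vector. The names, the fixed mutation step and the `fitness` callable (standing in for the image-compression quality measure) are assumptions.

```python
import numpy as np

def evolve_coefficients(fitness, n_coeffs=16, mu=4, lam=16, sigma=0.1,
                        generations=100, rng=None):
    """Minimal (mu + lambda) Evolution Strategy: evolve a real-valued
    coefficient vector (e.g. wavelet filter taps) to minimize `fitness`."""
    rng = np.random.default_rng() if rng is None else rng
    parents = rng.normal(0.0, 1.0, size=(mu, n_coeffs))
    for _ in range(generations):
        # Each offspring mutates a randomly chosen parent with Gaussian noise.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + rng.normal(0.0, sigma, size=(lam, n_coeffs))
        # (mu + lambda) selection: keep the best mu of parents and offspring.
        pool = np.vstack([parents, offspring])
        scores = np.array([fitness(ind) for ind in pool])
        parents = pool[np.argsort(scores)[:mu]]
    return parents[0]
```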
Abstract:
This article proposes a method for carrying out the calibration of discontinuity sets in rock masses. We present a novel approach for calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to GA operations of selection, crossover and mutation. Such back-calculated parameters are employed to assess the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs depend significantly on the type of objective function considered; they also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases they can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
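To make the GA machinery concrete, here is a minimal binary-encoded GA sketch with tournament selection, single-point crossover and bit-flip mutation. It follows generic textbook operators rather than the specific implementation in the paper, and `objective` stands in for the discontinuity-network misfit function (lower is better).

```python
import numpy as np

def binary_ga(objective, n_bits=24, pop_size=50, p_cross=0.8, p_mut=0.01,
              generations=200, rng=None):
    """Minimal binary-encoded GA: chromosomes are bit strings that the
    caller decodes into network parameters inside `objective`."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        fit = np.array([objective(ind) for ind in pop])
        # Tournament selection: the better of two random individuals survives.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])
        # Single-point crossover on consecutive parent pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, n_bits)
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # Bit-flip mutation.
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    return pop[np.argmin([objective(ind) for ind in pop])]
```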
Abstract:
This paper presents a novel method for the calibration of a parallel robot, which allows a more accurate configuration than one based on nominal parameters. The main sensor is a single camera installed in the robot hand, which determines the relative position of the robot with respect to a spherical object fixed in the working area of the robot. The positions of the end effector are related to the incremental positions of the resolvers of the robot motors. A kinematic model of the robot is used to find a new set of parameters that minimizes errors in the kinematic equations. Additionally, properties of the spherical object and intrinsic camera parameters are utilized to model the projection of the object in the image and thereby improve spatial measurements. Finally, several working tests, both static and tracking, are executed in order to verify how the behaviour of the robotic system improves when calibrated parameters are used instead of nominal parameters. It should be emphasized that the proposed method requires neither external nor expensive sensors, which makes such robots useful in teaching and research activities.
Abstract:
Flash floods are of major relevance in natural disaster management in the Mediterranean region. In many cases, the damaging effects of flash floods can be mitigated by adequate management of flood control reservoirs. This requires the development of suitable models for optimal operation of reservoirs. A probabilistic methodology for calibrating the parameters of a reservoir flood control model (RFCM) that takes into account the stochastic variability of flood events is presented. This study addresses the crucial problem of operating reservoirs during flood events, considering downstream river damages and dam failure risk as conflicting operation criteria. These two criteria are aggregated into a single objective of total expected damages from both the maximum released flows and stored volumes (overall risk index). For each selected parameter set, the RFCM is run under a wide range of hydrologic loads (determined through Monte Carlo simulation). The optimal parameter set is obtained through the overall risk index (balanced solution) and then compared with other solutions on the Pareto front. The proposed methodology is implemented at three different reservoirs in the southeast of Spain. The results obtained show that the balanced solution offers a good compromise between the two main objectives of reservoir flood control management.
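The Monte Carlo aggregation into an overall risk index can be sketched roughly as below. `rfcm` (returning peak released flow and peak stored volume for a given parameter set and inflow hydrograph), `flood_sampler` and the two damage curves are hypothetical placeholders, not the authors' model.

```python
import numpy as np

def expected_overall_risk(rfcm, param_set, flood_sampler,
                          flow_damage, volume_damage,
                          n_runs=1000, rng=None):
    """Monte Carlo estimate of the overall risk index for one parameter set:
    expected total damages from peak released flow (downstream damage) and
    peak stored volume (dam failure risk)."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_runs):
        hydrograph = flood_sampler(rng)              # synthetic inflow event
        peak_flow, peak_volume = rfcm(param_set, hydrograph)
        total += flow_damage(peak_flow) + volume_damage(peak_volume)
    return total / n_runs
```

Evaluating this index over a grid or population of candidate parameter sets and keeping the minimizer would yield the "balanced solution" compared against the Pareto front in the abstract.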
Abstract:
Validating modern oceanographic theories using models produced through stereo computer vision principles has recently emerged as a research topic. Space-time (4-D) models of the ocean surface may be generated by stacking a series of 3-D reconstructions independently generated for each time instant or, in a more robust manner, by simultaneously processing several snapshots coherently in a true "4-D reconstruction." However, the accuracy of these computer-vision-generated models depends on the estimated camera parameters, which may be corrupted under the influence of natural factors such as wind and vibrations. Therefore, removing the unpredictable errors in the camera parameters is necessary for an accurate reconstruction. In this paper, we propose a novel algorithm that can jointly perform a 4-D reconstruction and correct the camera parameter errors introduced by external factors. The technique is founded upon variational optimization methods to benefit from their numerous advantages: continuity of the estimated surface in space and time, robustness, and accuracy. The performance of the proposed algorithm is tested using synthetic data produced through computer graphics techniques, based on which the errors of the camera parameters arising from natural factors can be simulated.
Abstract:
A mathematical model of the process employed by a sonic anemometer to build up the measured wind vector in a steady flow is presented to illustrate how the geometry of these sensors, as well as the characteristics of aerodynamic disturbance on the acoustic path, can lead to singularities in the transformation function that relates the measured (disturbed) wind vector to the real (corrected) wind vector, impeding the application of correction/calibration functions for some wind conditions. The implicit function theorem allows for the identification of those combinations of real wind conditions and design parameters that lead to undefined correction/calibration functions. In general, orthogonal-path sensors do not show problematic combinations of parameters. However, some geometric sonic sensor designs available in the market, with paths forming smaller angles, could lead to undefined correction functions for some levels of aerodynamic disturbance and for certain wind directions. The parameters studied have a strong influence on the existence and number of singularities in the correction/calibration function. Some conclusions concerning good design practices are included.
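In symbols (notation mine, not the paper's), the condition behind these singularities can be stated schematically via the implicit function theorem:

```latex
% v_m : measured (disturbed) wind vector, v_r : real (corrected) wind vector,
% p   : sensor geometry and aerodynamic-disturbance parameters.
% A correction/calibration function exists locally only where the Jacobian
% of the measurement map with respect to the real wind is nonsingular.
\[
  \mathbf{v}_m = \mathbf{F}(\mathbf{v}_r;\,\mathbf{p}),
  \qquad
  \det\!\left(\frac{\partial \mathbf{F}}{\partial \mathbf{v}_r}\right) \neq 0
  \;\Longrightarrow\;
  \mathbf{v}_r = \mathbf{F}^{-1}(\mathbf{v}_m;\,\mathbf{p})\ \text{exists locally.}
\]
% Combinations (v_r, p) at which the determinant vanishes are the wind
% conditions for which no correction function can be defined.
```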
Abstract:
High-resolution proxy data analyzed on two high-sedimentation shallow-water sedimentary sequences (PO287-26B and PO287-28B) recovered off Lisbon (Portugal) provide the means for comparison with long-term instrumental time series of marine and atmospheric parameters (sea surface temperature (SST), precipitation, total river flow, and upwelling intensity computed from sea level pressure) and the possibility to carry out the calibration necessary for the quantification of past climate conditions. XRF Fe is used as a proxy for river flow, and the upwelling-related diatom genus Chaetoceros is our upwelling proxy. SST is estimated from the coccolithophore-synthesized alkenones and the Uk'37 index. Comparison of the Fe record to the instrumental data reveals its similarity to a running average of the instrumentally measured winter (JFMA) river flow at both sites. The upwelling diatom record concurs with the upwelling indices at both sites; however, high opal dissolution below 20-25 cm prevents its use for quantitative reconstructions. Alkenone-derived SST at site 28B does not show interannual variation; it has a mean value around 16°C and compares quite well with the instrumental winter/spring temperature. At site 26B the mean SST is the same, but a high degree of interannual variability (up to 4°C) appears to be determined by summer upwelling conditions. Stepwise regression analyses of the instrumental and proxy data sets provided regressions that explain 65 to 94% of the variability contained in the original data and reflect spring and summer river flow, as well as summer and winter upwelling indices, substantiating the relevance of seasons to the interpretation of the different proxy signals. The lack of analogs and the small data set available do not allow quantitative reconstructions at this time, but this might become a powerful tool for reconstructing past North Atlantic Oscillation conditions, should we be able to find continuous high-resolution records and overcome the analog problem.
Abstract:
Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
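A minimal sketch of how a pilot-point parameterization turns a handful of estimated values into a full property field is given below. Inverse-distance weighting is used here as a simple stand-in for the kriging interpolation commonly used with PEST-style pilot points, and all names are illustrative.

```python
import numpy as np

def field_from_pilot_points(grid_xy, pilot_xy, pilot_log10_k, power=2.0):
    """Spread hydraulic-property values defined at pilot points onto the
    model grid by inverse-distance weighting.

    grid_xy      : (n_cells, 2) cell-centre coordinates
    pilot_xy     : (n_points, 2) pilot-point coordinates
    pilot_log10_k: (n_points,) estimated log10 hydraulic conductivity"""
    d = np.linalg.norm(grid_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero
    w = 1.0 / d**power
    w /= w.sum(axis=1, keepdims=True)        # normalize weights per cell
    return w @ pilot_log10_k                 # property value at every cell
```

In a calibration run, the pilot-point values are the adjustable parameters: the estimator (e.g. PEST) perturbs them, the interpolated field is passed to the flow model, and residuals against observations drive the next update.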
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration.
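Schematically (symbols mine, not the paper's notation), the constrained-minimization form of Tikhonov regularization referred to above can be written as:

```latex
% p      : parameter vector,  p_0 : preferred parameter values/relationships
% h(p)   : model outputs,     h_obs : field observations, w : observation weights
% L      : regularization operator encoding the preferred condition
\[
  \min_{\mathbf{p}} \; \Phi_r(\mathbf{p})
    = \left\| \mathbf{L}\,(\mathbf{p}-\mathbf{p}_0) \right\|^2
  \quad \text{subject to} \quad
  \Phi_m(\mathbf{p})
    = \left\| \mathbf{w} \circ \bigl(\mathbf{h}(\mathbf{p})-\mathbf{h}_{\mathrm{obs}}\bigr) \right\|^2
    \le \Phi_m^{\lim}
\]
% The departure from the preferred condition is minimized subject to an
% acceptable fit to the calibration dataset; the balance between the two
% terms is what the relative regularization weights control.
```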
Abstract:
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
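The "weighted average" statement above has a compact form in standard linear inverse theory (notation mine, assuming a linearized model, not the paper's own derivation):

```latex
% For a linearized model d = H p (H the sensitivity/Jacobian matrix) and a
% regularized inverse operator G mapping data to estimated parameters,
% the estimated field is the true field filtered through the resolution
% matrix R; each row of R holds the averaging weights discussed above.
\[
  \hat{\mathbf{p}} \;=\; \mathbf{G}\,\mathbf{d}
                  \;=\; \mathbf{R}\,\mathbf{p}_{\mathrm{true}},
  \qquad
  \mathbf{R} = \mathbf{G}\,\mathbf{H}.
\]
% Only for a perfectly resolved problem does R equal the identity; broad
% rows of R correspond to the loss of detail and spatially correlated
% residuals described in the abstract.
```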