811 results for Algorithm Calibration
Abstract:
Accurate calibration of a head-mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time-consuming, depend on error-prone human judgements, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
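A minimal sketch of the final calibration step described above, assuming the marker centroids have already been re-expressed in HMD display-pixel coordinates; it uses OpenCV's standard pinhole calibration routine as the "established camera calibration technique", which may differ from the authors' exact pipeline, and all names are illustrative.

```python
# Hypothetical final step: recover the HMD display's intrinsics/extrinsics from
# 3D marker centroids (one set per position of the tracked calibration object)
# and their 2D locations re-expressed in HMD display (grid) pixel coordinates.
import numpy as np
import cv2

def calibrate_display(object_points_3d, image_points_2d, display_size):
    """object_points_3d: list of (N, 3) arrays, marker centroids in the calibration
    object's own frame (one list entry per object position).
    image_points_2d: list of (N, 2) arrays, the same centroids in display pixels.
    display_size: (width, height) of the HMD display in pixels."""
    obj = [np.asarray(p, np.float32) for p in object_points_3d]
    img = [np.asarray(p, np.float32).reshape(-1, 1, 2) for p in image_points_2d]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj, img, display_size, None, None)
    # K holds focal length and principal point (intrinsics); each rvec/tvec pair is
    # the pose of the calibration object relative to the display (extrinsics).
    return rms, K, dist, rvecs, tvecs
```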
Abstract:
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins.
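For illustration only: a GPC calibration curve of this kind is typically obtained by regressing log10(molecular weight) of the standards against their retention times and inverting the fit for unknown fractions. The sketch below uses entirely hypothetical retention times and molecular weights, not the paper's data.

```python
# Hypothetical GPC calibration: regress log10(MW) of standards on retention time.
import numpy as np

retention_min = np.array([14.2, 15.1, 16.0, 17.5, 18.9])   # made-up retention times
mw_da = np.array([1700.0, 1080.0, 940.0, 580.0, 290.0])    # made-up standard masses

slope, intercept = np.polyfit(retention_min, np.log10(mw_da), 1)  # linear calibration

def predict_mw(retention_time_min):
    """Predict the molecular weight (Da) of an unknown peak from its retention time."""
    return 10 ** (slope * retention_time_min + intercept)

print(round(predict_mw(16.5)))  # hypothetical unknown fraction
```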
Abstract:
The Along-Track Scanning Radiometers (ATSRs) provide a long time-series of measurements suitable for the retrieval of cloud properties. This work evaluates the freely-available Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) dataset (version 3) created from the ATSR-2 (1995–2003) and Advanced ATSR (AATSR; 2002 onwards) records. Users are recommended to consider only retrievals flagged as high-quality, where there is a good consistency between the measurements and the retrieved state (corresponding to about 60% of converged retrievals over sea, and more than 80% over land). Cloud properties are found to be generally free of any significant spurious trends relating to satellite zenith angle. Estimates of the random error on retrieved cloud properties are suggested to be generally appropriate for optically-thick clouds, and up to a factor of two too small for optically-thin cases. The correspondence between ATSR-2 and AATSR cloud properties is high, but a relative calibration difference between the sensors of order 5–10% at 660 nm and 870 nm limits the potential of the current version of the dataset for trend analysis. As ATSR-2 is thought to have the better absolute calibration, the discussion focusses on this portion of the record. Cloud-top heights from GRAPE compare well to ground-based data at four sites, particularly for shallow clouds. Clouds forming in boundary-layer inversions are typically around 1 km too high in GRAPE due to poorly-resolved inversions in the modelled temperature profiles used. Global cloud fields are compared to satellite products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) measurements, and a climatology of liquid water content derived from satellite microwave radiometers. In all cases the main reasons for differences are linked to differing sensitivity to, and treatment of, multi-layer cloud systems. The correlation coefficient between GRAPE and the two MODIS products considered is generally high (greater than 0.7 for most cloud properties), except for liquid and ice cloud effective radius, which also show biases between the datasets. For liquid clouds, part of the difference is linked to choice of wavelengths used in the retrieval. Total cloud cover is slightly lower in GRAPE (0.64) than the CALIOP dataset (0.66). GRAPE underestimates liquid cloud water path relative to microwave radiometers by up to 100 g m⁻² near the Equator and overestimates by around 50 g m⁻² in the storm tracks. Finally, potential future improvements to the algorithm are outlined.
Abstract:
The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations – this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set.
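A generic optimal-estimation sketch, not the ORAC implementation: a Gauss-Newton iteration minimising the usual cost J(x) = (y − F(x))ᵀ Sy⁻¹ (y − F(x)) + (x − xa)ᵀ Sa⁻¹ (x − xa), where the state x would contain, e.g., 550 nm optical depth, effective radius and surface reflectance; the forward model, Jacobian and covariances are assumed to be supplied.

```python
# Generic Gauss-Newton optimal estimation (illustrative, not the ORAC code).
import numpy as np

def gauss_newton_oe(y, forward, jacobian, x_a, S_a, S_y, n_iter=10):
    """y: measured radiances; forward(x): simulated radiances; jacobian(x): dF/dx;
    x_a, S_a: a priori state and covariance; S_y: measurement-noise covariance."""
    Sa_inv, Sy_inv = np.linalg.inv(S_a), np.linalg.inv(S_y)
    x = x_a.copy()
    for _ in range(n_iter):
        K = jacobian(x)                               # forward-model Jacobian
        rhs = K.T @ Sy_inv @ (y - forward(x)) - Sa_inv @ (x - x_a)
        x = x + np.linalg.solve(K.T @ Sy_inv @ K + Sa_inv, rhs)
    K = jacobian(x)
    S_hat = np.linalg.inv(K.T @ Sy_inv @ K + Sa_inv)  # retrieval error covariance
    return x, S_hat
```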
Abstract:
This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data. It effectively widens the active–passive retrieved cross-section (RXS) of cloud properties, thereby enabling computation of radiative fluxes and radiances that can be compared with measured values in an attempt to perform radiative closure experiments that aim to assess the RXS. For this introductory study, A-train data were used to verify the scene-construction algorithm and only 1D radiative transfer calculations were performed. The construction algorithm fills off-RXS recipient pixels by computing sums of squared differences (a cost function F) between their spectral radiances and those of potential donor pixels/columns on the RXS. Of the RXS pixels with F lower than a certain value, the one with the smallest Euclidean distance to the recipient pixel is designated as the donor, and its retrieved cloud properties and other attributes such as 1D radiative heating rates are consigned to the recipient. It is shown that both the RXS itself and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery can be reconstructed extremely well using just visible and thermal infrared channels. Suitable donors usually lie within 10 km of the recipient. RXSs and their associated radiative heating profiles are reconstructed best for extensive planar clouds and less reliably for broken convective clouds. Domain-average 1D broadband radiative fluxes at the top of the atmosphere (TOA) for (21 km)² domains constructed from MODIS, CloudSat and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data agree well with coincident values derived from Clouds and the Earth's Radiant Energy System (CERES) radiances: differences between modelled and measured reflected shortwave fluxes are within ±10 W m⁻² for ∼35% of the several hundred domains constructed for eight orbits. Correspondingly, for outgoing longwave radiation, ∼65% are within ±10 W m⁻².
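A sketch of the donor-selection step as described above; variable names and array layouts are illustrative, and the threshold on F is left as an input.

```python
# For each off-cross-section (recipient) pixel, compute the sum of squared radiance
# differences F against every pixel on the retrieved cross-section (RXS); among RXS
# pixels with F below a threshold, choose the one closest in physical distance.
import numpy as np

def assign_donors(recipient_rad, recipient_xy, rxs_rad, rxs_xy, f_threshold):
    """recipient_rad: (M, C) radiances of off-RXS pixels; recipient_xy: (M, 2) positions.
    rxs_rad: (N, C) radiances of RXS pixels; rxs_xy: (N, 2) positions (e.g. km).
    Returns, for each recipient, the index of its donor RXS pixel (or -1 if none)."""
    donors = np.full(len(recipient_rad), -1, dtype=int)
    for i, (rad, xy) in enumerate(zip(recipient_rad, recipient_xy)):
        F = np.sum((rxs_rad - rad) ** 2, axis=1)          # radiance cost function
        candidates = np.flatnonzero(F < f_threshold)      # spectrally similar pixels
        if candidates.size:
            dist = np.linalg.norm(rxs_xy[candidates] - xy, axis=1)
            donors[i] = candidates[np.argmin(dist)]       # nearest acceptable donor
        # the donor's retrieved cloud properties/heating rates are then copied across
    return donors
```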
Abstract:
Recent research has shown that Lighthill–Ford spontaneous gravity wave generation theory, when applied to numerical model data, can help predict areas of clear-air turbulence. It is hypothesized that this is the case because spontaneously generated atmospheric gravity waves may initiate turbulence by locally modifying the stability and wind shear. As an improvement on the original research, this paper describes the creation of an ‘operational’ algorithm (ULTURB) with three modifications to the original method: (1) extending the altitude range for which the method is effective downward to the top of the boundary layer, (2) adding turbulent kinetic energy production from the environment to the locally produced turbulent kinetic energy production, and (3) transforming turbulent kinetic energy dissipation to eddy dissipation rate, the turbulence metric that is becoming the worldwide ‘standard’. In a comparison of ULTURB with the original method and with the Graphical Turbulence Guidance second version (GTG2) automated procedure for forecasting mid- and upper-level aircraft turbulence, ULTURB performed better for all turbulence intensities. Since ULTURB, unlike GTG2, is founded on a self-consistent dynamical theory, it may offer forecasters better insight into the causes of clear-air turbulence and may ultimately enhance its predictability.
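A small illustrative conversion for modification (3): the eddy dissipation rate (EDR) conventionally reported as the aviation turbulence metric is the cube root of the turbulent kinetic energy dissipation rate ε.

```python
def tke_dissipation_to_edr(epsilon_m2_s3):
    """Eddy dissipation rate: the cube root of the TKE dissipation rate epsilon."""
    return epsilon_m2_s3 ** (1.0 / 3.0)

print(tke_dissipation_to_edr(1e-3))  # 0.1 (m^(2/3) s^-1)
```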
Abstract:
Data assimilation is predominantly used for state estimation: combining observational data with model predictions to produce an updated model state that most accurately approximates the true system state whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. Even with perfect initial data, inaccurate model parameters will lead to the growth of prediction errors. To generate reliable forecasts we need good estimates of both the current system state and the model parameters. This paper presents research into data assimilation methods for morphodynamic model state and parameter estimation. First, we focus on state estimation and describe the implementation of a three-dimensional variational (3D-Var) data assimilation scheme in a simple 2D morphodynamic model of Morecambe Bay, UK. The assimilation of observations of bathymetry derived from SAR satellite imagery and a ship-borne survey is shown to significantly improve the predictive capability of the model over a 2-year run. Here, the model parameters are set by manual calibration; this is laborious and is found to produce different parameter values depending on the type and coverage of the validation dataset. The second part of this paper considers the problem of model parameter estimation in more detail. We explain how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state. This approach removes inefficiencies associated with manual calibration and enables more effective use of observational data. We outline the development of a novel hybrid sequential 3D-Var data assimilation algorithm for joint state-parameter estimation and demonstrate its efficacy using an idealised 1D sediment transport model. The results of this study are extremely positive and suggest that there is great potential for the use of data assimilation-based state-parameter estimation in coastal morphodynamic modelling.
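A minimal state-augmentation sketch, not the paper's hybrid sequential 3D-Var scheme: the state x and the uncertain parameters p are stacked into one augmented vector and a single analysis updates both; the cross-covariances in the augmented background-error covariance B are what allow observations of the state to correct the parameters. All names are illustrative.

```python
# Minimal joint state-parameter analysis via state augmentation (illustrative).
import numpy as np

def augmented_analysis(x_b, p_b, B, y, H, R):
    """x_b: background state; p_b: background parameter estimate;
    B: background-error covariance of the augmented vector [x, p];
    H: observation operator on [x, p] (typically zero in the parameter columns);
    y: observations; R: observation-error covariance."""
    z_b = np.concatenate([x_b, p_b])              # augmented background vector
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain for the augmented system
    z_a = z_b + K @ (y - H @ z_b)                 # analysis updates state AND parameters
    n = len(x_b)
    return z_a[:n], z_a[n:]
```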
Abstract:
Producing projections of future crop yields requires careful thought about the appropriate use of atmosphere-ocean global climate model (AOGCM) simulations. Here we describe and demonstrate multiple methods for ‘calibrating’ climate projections using an ensemble of AOGCM simulations in a ‘perfect sibling’ framework. Crucially, this type of analysis assesses the ability of each calibration methodology to produce reliable estimates of future climate, which is not possible just using historical observations. This type of approach could be more widely adopted for assessing calibration methodologies for crop modelling. The calibration methods assessed include the commonly used ‘delta’ (change factor) and ‘nudging’ (bias correction) approaches. We focus on daily maximum temperature in summer over Europe for this idealised case study, but the methods can be generalised to other variables and other regions. The calibration methods, which are relatively easy to implement given appropriate observations, produce more robust projections of future daily maximum temperatures and heat stress than using raw model output. The choice over which calibration method to use will likely depend on the situation, but change factor approaches tend to perform best in our examples. Finally, we demonstrate that the uncertainty due to the choice of calibration methodology is a significant contributor to the total uncertainty in future climate projections for impact studies. We conclude that utilising a variety of calibration methods on output from a wide range of AOGCMs is essential to produce climate data that will ensure robust and reliable crop yield projections.
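A sketch of the two calibration approaches named above, applied to hypothetical daily maximum temperature series held as numpy arrays; real implementations often work month-by-month or quantile-by-quantile rather than on simple means.

```python
# Illustrative 'delta' (change factor) and 'nudging' (bias correction) calibration.
import numpy as np

def change_factor(obs_hist, model_hist, model_fut):
    """Apply the model-simulated mean change to the observed historical series."""
    return obs_hist + (model_fut.mean() - model_hist.mean())

def bias_correction(obs_hist, model_hist, model_fut):
    """Remove the model's historical mean bias from its future simulation."""
    return model_fut - (model_hist.mean() - obs_hist.mean())
```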
Abstract:
In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters in the elastic net are optimized using a particle swarm optimization (PSO) algorithm at the upper level by minimizing the leave-one-out (LOO) mean square error (LOOMSE). Illustrative examples are included to demonstrate the effectiveness of the new approaches.
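A two-level sketch in the spirit of the abstract, not the authors' ENOFR code: the lower level fits an elastic-net model for a given pair of regularisation parameters (here scikit-learn's alpha/l1_ratio parameterisation stands in for the two elastic-net penalties) and scores it by leave-one-out mean square error; the upper level tunes that pair with a very small particle swarm optimiser.

```python
# Two-level tuning sketch: inner elastic-net fit scored by LOOMSE, outer PSO search.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loomse(params, X, y):
    """Leave-one-out mean square error of an elastic net with the given parameters."""
    alpha = max(1e-6, abs(params[0]))
    l1_ratio = float(np.clip(params[1], 0.0, 1.0))
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000)
    scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")
    return -scores.mean()

def pso(objective, X, y, n_particles=12, n_iter=30, seed=0):
    """Very small PSO over (alpha, l1_ratio); returns the best pair found."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([1e-4, 0.0], [1.0, 1.0], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p, X, y) for p in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p, X, y) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest  # tuned (alpha, l1_ratio)
```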
Abstract:
Advances in hardware and software in the past decade make it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example if the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
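A generic sketch of an adaptive rule-based stream classifier in the spirit of the description above, not the eRules implementation: rules abstain when none fires, are scored on a sliding window, and poorly performing rules are removed; induction of new rules from the buffered window is omitted.

```python
# Generic adaptive rule-set classifier for a data stream (illustrative only).
from collections import deque

class Rule:
    def __init__(self, conditions, label):
        self.conditions = conditions      # dict: feature name -> required value
        self.label = label
        self.hits, self.correct = 0, 0

    def fires(self, x):
        return all(x.get(f) == v for f, v in self.conditions.items())

class AdaptiveRuleClassifier:
    def __init__(self, min_accuracy=0.6, window=500):
        self.rules = []
        self.min_accuracy = min_accuracy
        self.buffer = deque(maxlen=window)  # recent instances for re-inducing rules

    def predict(self, x):
        for r in self.rules:
            if r.fires(x):
                return r.label
        return None                         # abstain rather than force a guess

    def update(self, x, y):
        self.buffer.append((x, y))
        for r in self.rules:
            if r.fires(x):
                r.hits += 1
                r.correct += int(r.label == y)
        # drop rules whose accuracy has decayed, e.g. after a concept drift
        self.rules = [r for r in self.rules
                      if r.hits < 20 or r.correct / r.hits >= self.min_accuracy]
```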
Abstract:
This contribution introduces a new digital predistorter to compensate for the serious distortions caused by memory high power amplifiers (HPAs) which exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both B-spline curve and first-derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inverse can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
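An illustrative sketch of the inversion step described above: given a modelled static amplitude nonlinearity psi of the HPA and its first derivative, the predistorter needs the inverse of psi, which can be obtained per sample by Newton-Raphson. The tanh characteristic below is a hypothetical stand-in for the fitted B-spline curve and its De Boor-evaluated derivative.

```python
import math

def invert_nonlinearity(target, psi, d_psi, x0=0.0, tol=1e-10, max_iter=50):
    """Solve psi(x) = target for x by Newton-Raphson, given psi and its derivative."""
    x = x0
    for _ in range(max_iter):
        f = psi(x) - target
        if abs(f) < tol:
            break
        x -= f / d_psi(x)                     # Newton step using the first derivative
    return x

# toy saturating amplitude characteristic (hypothetical stand-in for the B-spline fit)
psi = lambda r: math.tanh(r)
d_psi = lambda r: 1.0 - math.tanh(r) ** 2
print(invert_nonlinearity(0.5, psi, d_psi, x0=0.5))  # ~0.5493 = atanh(0.5)
```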
Abstract:
Evolutionary meta-algorithms for pulse shaping of broadband femtosecond-duration laser pulses are proposed. The genetic algorithm searching the evolutionary landscape for desired pulse shapes consists of a population of waveforms (genes), each made from two concatenated vectors specifying phases and magnitudes, respectively, over a range of frequencies. Frequency-domain operators such as mutation, two-point crossover, average crossover, polynomial phase mutation, creep and three-point smoothing, as well as a time-domain crossover, are combined to produce fitter offspring at each iteration step. The algorithm applies roulette wheel selection, elitism and linear fitness scaling to the gene population. A differential evolution (DE) operator that provides a source of directed mutation, and new wavelet operators, are proposed. Using properly tuned parameters for DE, the meta-algorithm is used to solve a waveform matching problem. Tuning allows either a greedy directed search near the best known solution or a robust search across the entire parameter space.
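A compact, generic sketch of the gene representation and a few of the operators named above (roulette-wheel selection, two-point crossover, a simple phase mutation); the fitness function, the remaining operators and the DE step are omitted, and all parameters are illustrative.

```python
# Gene = concatenated phase and magnitude vectors over the frequency bins.
import numpy as np

rng = np.random.default_rng(0)
N_FREQ = 64                                     # number of frequency bins (illustrative)

def random_gene():
    phases = rng.uniform(-np.pi, np.pi, N_FREQ)
    magnitudes = rng.uniform(0.0, 1.0, N_FREQ)
    return np.concatenate([phases, magnitudes])  # two concatenated vectors

def roulette_select(population, fitness):
    """Fitness-proportionate selection; assumes non-negative fitness values."""
    p = fitness / fitness.sum()
    idx = rng.choice(len(population), size=len(population), p=p)
    return [population[i] for i in idx]

def two_point_crossover(a, b):
    i, j = sorted(rng.choice(len(a), size=2, replace=False))
    child = a.copy()
    child[i:j] = b[i:j]                          # splice a segment of parent b into a
    return child

def phase_mutation(gene, sigma=0.1):
    child = gene.copy()
    k = rng.integers(N_FREQ)
    child[k] += rng.normal(0.0, sigma)           # perturb one phase component
    return child
```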