117 results for Operator Error
in CentAUR: Central Archive, University of Reading - UK
Abstract:
We investigate the error dynamics for cycled data assimilation systems, where the inverse problem of state determination is solved at times t_k, k = 1, 2, 3, ..., with a first guess given by the state propagated via a dynamical system model from time t_{k-1} to time t_k. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ||e_k|| := ||x_k^(a) − x_k^(t)|| between the estimated state x_k^(a) and the true state x_k^(t) over time. Clearly, an observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ||e_k|| depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H, and the Lipschitz constants K^(1) and K^(2) on the lower and higher modes controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c||R_α||δ for some constant c. Since ||R_α|| → ∞ as α → 0, the constant might be large. Numerical examples of this behaviour in the nonlinear case are provided using the (low-dimensional) Lorenz '63 system.
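The stabilization trade-off described in this abstract can be illustrated with a toy cycled assimilation on the Lorenz '63 model. The sketch below is not the paper's scheme: it uses forward-Euler integration, an identity observation operator H, and a simple damped update x_a = x_b + (1/(1+α))(y − x_b) standing in for the reconstruction operator R_α; all parameter values are hypothetical.

```python
import numpy as np

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 model."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def model(x, steps=25):
    """Propagate the state from t_{k-1} to t_k (0.25 model time units)."""
    for _ in range(steps):
        x = lorenz63(x)
    return x

rng = np.random.default_rng(0)
delta = 0.1   # observation error size
alpha = 0.2   # regularisation parameter (hypothetical value)

x_true = np.array([1.0, 1.0, 1.0])
x_est = x_true + rng.normal(0, 1.0, 3)   # erroneous initial estimate

errors = []
for k in range(200):
    x_true = model(x_true)                # truth evolves
    x_b = model(x_est)                    # first guess: propagate previous analysis
    y = x_true + rng.normal(0, delta, 3)  # noisy observation (H = identity)
    # analysis: first guess pulled toward the data, strength set by alpha
    x_est = x_b + (1.0 / (1.0 + alpha)) * (y - x_b)
    errors.append(np.linalg.norm(x_est - x_true))

print(max(errors[50:]))  # bounded after spin-up: stable cycling
```

With α > 0 the update damps the accumulated background error in each cycle, so the estimation error stays bounded by a constant of order ||R_α||δ rather than growing with the chaotic dynamics.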
Abstract:
In this paper we study the convergence of the L2-projection onto the space of polynomials of degree at most p on a simplex in R^d, d ≥ 2. Optimal error estimates are established in the case of Sobolev regularity and illustrated by several numerical examples. The proof is based on the collapsed coordinate transform and the expansion into various polynomial bases involving Jacobi polynomials and their antiderivatives. The results of the present paper generalize corresponding estimates for cubes in R^d from [P. Houston, C. Schwab, E. Süli, Discontinuous hp-finite element methods for advection-diffusion-reaction problems, SIAM J. Numer. Anal. 39 (2002), no. 6, 2133-2163].
Abstract:
The observation-error covariance matrix used in data assimilation contains contributions from instrument errors, representativity errors and errors introduced by the approximated observation operator. Forward model errors arise when the observation operator does not correctly model the observations or when the observations resolve spatial scales that the model cannot. Previous work to estimate the observation-error covariance matrix for particular observing instruments has shown that it contains significant correlations. In particular, correlations for humidity data are more significant than those for temperature. However, it is not known what proportion of these correlations can be attributed to representativity errors. In this article we apply an existing method for calculating representativity error, previously applied to an idealised system, to NWP data. We calculate horizontal errors of representativity for temperature and humidity using data from the Met Office high-resolution UK variable-resolution model. Our results show that errors of representativity are correlated and are more significant for specific humidity than for temperature. We also find that representativity error varies with height. This suggests that the assimilation may be improved if these errors are explicitly included in the data assimilation scheme. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
Abstract:
With the development of convection-permitting numerical weather prediction, the efficient use of high-resolution observations in data assimilation is becoming increasingly important. The operational assimilation of these observations, such as Doppler radar radial winds, is now common, though to avoid violating the assumption of uncorrelated observation errors the observation density is severely reduced. Improving the quantity of observations used and the impact that they have on the forecast will require the introduction of the full, potentially correlated, error statistics. In this work, observation error statistics are calculated for the Doppler radar radial winds that are assimilated into the Met Office high-resolution UK model, using a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. This is the first in-depth study using the diagnostic to estimate both horizontal and along-beam correlated observation errors. From the new results it is found that the Doppler radar radial wind error standard deviations are similar to those used operationally and increase as the observation height increases. Surprisingly, the estimated observation error correlation length scales are longer than the operational thinning distance. They depend both on the height of the observation and on the distance of the observation from the radar. Further tests show that the long correlations cannot be attributed to the use of superobservations or to the background error covariance matrix used in the assimilation. The large horizontal correlation length scales are, however, in part a result of using a simplified observation operator.
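The residual-based diagnostic described here (commonly known as the Desroziers diagnostic) estimates the observation-error covariance from the sample average R ≈ E[d_a d_b^T], with d_b = y − Hx_b and d_a = y − Hx_a. A minimal synthetic sketch follows, with an identity observation operator and hypothetical covariances rather than the Met Office configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4        # number of observations
N = 200000   # number of assimilation cycles (samples)

# Hypothetical true error statistics (H = identity for simplicity)
L = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
B = 0.5 * np.exp(-L / 2.0)       # correlated background errors
R_true = 0.2 * np.exp(-L / 1.5)  # correlated observation errors

# Consistent (optimal) gain: K = B (B + R)^{-1}
K = B @ np.linalg.inv(B + R_true)

eb = rng.multivariate_normal(np.zeros(m), B, size=N)       # background errors
eo = rng.multivariate_normal(np.zeros(m), R_true, size=N)  # observation errors
d_b = eo - eb                  # observation-minus-background residuals
d_a = d_b - (K @ d_b.T).T      # observation-minus-analysis residuals (H = I)

R_est = d_a.T @ d_b / N        # diagnostic estimate of R
print(np.round(R_est, 2))
```

When the gain used to form the analysis is consistent with the true B and R, the average E[d_a d_b^T] recovers R exactly, including its off-diagonal correlations; inconsistencies in the assimilation bias the estimate, which is one reason results from such diagnostics need careful interpretation.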
Abstract:
Two wavelet-based control variable transform schemes are described and used to model some important features of forecast error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation of it. Their ability to capture the position- and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast error covariance matrix, and with a non-wavelet-based covariance scheme used currently in an operational assimilation scheme. Qualitatively, the wavelet-based schemes show potential for modeling forecast error statistics well without giving preference to either position- or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the smallest number of bands that achieves adequate results is found for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral spaces when the number of bands is changed. By examining implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results that are closer to diagnostics found from the explicit matrix than those from the non-wavelet scheme. Even though the nature of the covariances has the right qualities in spectral space, variances are found to be too low at some wavenumbers and vertical correlation length scales too long at most scales. The wavelet schemes are good at resolving variations in position- and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second of the wavelet-based schemes is often found to be better than the first in some important respects, but, unlike the first, it has no exact inverse transform.
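A control variable transform of the kind described implies a covariance B = T T^T, where T maps uncorrelated control variables to model space. The sketch below builds an orthonormal Haar synthesis matrix as a simple stand-in for the paper's wavelet schemes, with hypothetical band-dependent standard deviations, and forms the implied covariance:

```python
import numpy as np

def haar_synthesis(n):
    """Orthonormal Haar synthesis matrix for length n (n a power of 2).
    Columns are basis functions, coarsest first; x = W @ v maps wavelet
    coefficients v to grid values x."""
    if n == 1:
        return np.array([[1.0]])
    Wh = haar_synthesis(n // 2)
    up = np.kron(np.eye(n // 2), np.array([[1.0], [1.0]])) / np.sqrt(2)
    dn = np.kron(np.eye(n // 2), np.array([[1.0], [-1.0]])) / np.sqrt(2)
    return np.hstack([up @ Wh, dn])  # coarse columns, then finest details

n = 8
W = haar_synthesis(n)
assert np.allclose(W @ W.T, np.eye(n))  # orthonormal basis

# Hypothetical band variances: coarse scales get larger standard deviations
sigma = np.array([2.0, 1.5, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5])
T = W @ np.diag(sigma)   # control variable transform x = T v, v ~ N(0, I)
B_implied = T @ T.T      # implied forecast error covariance
print(np.round(B_implied, 2))
```

Diagnostics of B_implied (variances, correlation length scales) can then be compared against an explicit covariance matrix, which is the kind of comparison the abstract describes.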
Abstract:
We have developed an ensemble Kalman Filter (EnKF) to estimate 8-day regional surface fluxes of CO2 from space-borne CO2 dry-air mole fraction observations (XCO2) and evaluate the approach using a series of synthetic experiments, in preparation for data from the NASA Orbiting Carbon Observatory (OCO). The 32-day duty cycle of OCO alternates every 16 days between nadir and glint measurements of backscattered solar radiation at short-wave infrared wavelengths. The EnKF uses an ensemble of states to represent the error covariances to estimate 8-day CO2 surface fluxes over 144 geographical regions. We use a 12×8-day lag window, recognising that XCO2 measurements include surface flux information from prior time windows. The observation operator that relates surface CO2 fluxes to atmospheric distributions of XCO2 includes: a) the GEOS-Chem transport model that relates surface fluxes to global 3-D distributions of CO2 concentrations, which are sampled at the time and location of OCO measurements that are cloud-free and have aerosol optical depths <0.3; and b) scene-dependent averaging kernels that relate the CO2 profiles to XCO2, accounting for differences between nadir and glint measurements, and the associated scene-dependent observation errors. We show that OCO XCO2 measurements significantly reduce the uncertainties of surface CO2 flux estimates. Glint measurements are generally better at constraining ocean CO2 flux estimates. Nadir XCO2 measurements over the terrestrial tropics are sparse throughout the year because of either clouds or smoke. Glint measurements provide the most effective constraint for estimating tropical terrestrial CO2 fluxes by accurately sampling fresh continental outflow over neighbouring oceans. 
We also present results from sensitivity experiments that investigate how the flux estimates change with 1) biased and unbiased errors, 2) alternative duty cycles, 3) measurement density and correlations, 4) the spatial resolution of the estimated fluxes, and 5) reductions in the length of the lag window and the size of the ensemble. At the revision stage of this manuscript, the OCO instrument failed to reach orbit after its launch on 24 February 2009. The EnKF formulation presented here is also applicable to GOSAT measurements of CO2 and CH4.
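A minimal sketch of the EnKF analysis step underlying such a system follows, with a random linear observation operator standing in for GEOS-Chem transport plus averaging kernels, and dimensions far smaller than the 144 regions and 12×8-day lag window described; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state = 10   # number of regional fluxes (toy size, not 144)
n_obs = 30     # number of XCO2-like observations in the window
n_ens = 50     # ensemble size

# Hypothetical linear observation operator (transport + averaging kernel)
H = rng.normal(0, 1, (n_obs, n_state)) / np.sqrt(n_state)
x_true = rng.normal(0, 1, n_state)
r = 0.05                                   # observation error std dev
y = H @ x_true + rng.normal(0, r, n_obs)   # synthetic observations

# Prior ensemble of flux states represents the error covariance
X = rng.normal(0, 1, (n_state, n_ens))

# Stochastic EnKF analysis step (perturbed observations)
A = X - X.mean(axis=1, keepdims=True)      # state anomalies
HX = H @ X
HA = HX - HX.mean(axis=1, keepdims=True)   # observation-space anomalies
Pf_HT = A @ HA.T / (n_ens - 1)             # sample cov(state, obs space)
S = HA @ HA.T / (n_ens - 1) + r ** 2 * np.eye(n_obs)
K = Pf_HT @ np.linalg.inv(S)               # Kalman gain from the ensemble
Y_pert = y[:, None] + rng.normal(0, r, (n_obs, n_ens))
Xa = X + K @ (Y_pert - HX)                 # analysis ensemble

prior_err = np.linalg.norm(X.mean(axis=1) - x_true)
post_err = np.linalg.norm(Xa.mean(axis=1) - x_true)
print(prior_err, post_err)  # the update pulls the ensemble mean toward truth
```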
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may differ between spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed with the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were poorly correlated with the observations. At the scale of the trend, the predictions and observations shared a common surface.
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation. (C) 2007 Elsevier B.V. All rights reserved.
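The scale-dependent behaviour described, strong agreement at the trend and coarse scales but poor agreement at the fine scale, can be illustrated without the REML machinery by comparing correlations of block means at different aggregation levels; the simulated transect below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024  # fine-grid transect of soil mineral-N (arbitrary units)

x = np.arange(n)
trend = 0.002 * x                                    # coarse spatial trend
coarse = np.repeat(rng.normal(0, 1.0, n // 64), 64)  # coarse-scale random variation
fine = rng.normal(0, 1.0, n)                         # fine-scale independent variation

obs = trend + coarse + fine
pred = trend + coarse   # the model captures the trend and coarse scale only

def corr_at_block(a, b, block):
    """Correlation of block means, i.e. agreement at a coarser spatial scale."""
    am = a.reshape(-1, block).mean(axis=1)
    bm = b.reshape(-1, block).mean(axis=1)
    return np.corrcoef(am, bm)[0, 1]

print(corr_at_block(obs, pred, 1))    # fine scale: weaker agreement
print(corr_at_block(obs, pred, 64))   # coarse scale: stronger agreement
```

Aggregation averages out the fine-scale variation that the predictions miss, so the correlation rises with block size, mirroring the finding that the N model suits field-scale but not finer-scale management.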
Abstract:
Background: Pharmacy aseptic units prepare and supply injectables to minimise risks. The UK National Aseptic Error Reporting Scheme has been collecting data on pharmacy compounding errors, including near-misses, since 2003. Objectives: The cumulative reports from January 2004 to December 2007, inclusive, were analysed. Methods: The variables of product type, error type, staff making and detecting errors, stage at which errors were detected, perceived contributory factors, and potential or actual outcomes were presented by cross-tabulation of the data. Results: A total of 4691 reports were submitted against an estimated 958 532 items made, giving an overall error rate of 0.49%. Most of the errors were detected before reaching patients, with only 24 detected during or after administration. The highest number of reports related to adult cytotoxic preparations (40%), and the most frequently recorded error was a labelling error (34.2%). Errors were mostly detected at the first check in the assembly area (46.6%). Individual staff error contributed most (78.1%) to overall errors, while errors with paediatric parenteral nutrition were attributed to low staffing levels more often than errors with other products. The majority of errors (68.6%) had no potential patient outcomes attached, while paediatric cytotoxic products and paediatric parenteral nutrition appeared to be associated with greater levels of perceived patient harm. Conclusions: The majority of reports related to near-misses, and this study highlights scope for examining current arrangements for checking and releasing products, certainly for paediatric cytotoxic and paediatric parenteral nutrition preparations within aseptic units, but in the context of resource and capacity constraints.