921 results for Error analysis (Mathematics)
Abstract:
For certain observing types, such as those that are remotely sensed, the observation errors are correlated and these correlations are state- and time-dependent. In this work, we develop a method for diagnosing and incorporating spatially correlated and time-dependent observation error in an ensemble data assimilation system. The method combines an ensemble transform Kalman filter with a method that uses statistical averages of background and analysis innovations to provide an estimate of the observation error covariance matrix. To evaluate the performance of the method, we perform identical twin experiments using the Lorenz ’96 and Kuramoto-Sivashinsky models. Using our approach, a good approximation to the true observation error covariance can be recovered in cases where the initial estimate of the error covariance is incorrect. Spatial observation error covariances where the length scale of the true covariance changes slowly in time can also be captured. We find that using the estimated correlated observation error in the assimilation improves the analysis.
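The covariance estimate described above is built from statistical averages of background and analysis innovations. Below is a minimal sketch of one such innovation-based diagnostic (Desroziers-style); the function and variable names are illustrative and this is not the authors' exact implementation.

```python
import numpy as np

def estimate_obs_error_cov(y, Hxb, Hxa):
    """Estimate the observation error covariance R from time averages of
    background innovations d_b = y - H(x_b) and analysis residuals
    d_a = y - H(x_a): R is approximated by the mean of d_a d_b^T.
    y, Hxb, Hxa are arrays of shape (n_times, n_obs)."""
    d_b = y - Hxb                      # background innovations
    d_a = y - Hxa                      # analysis residuals
    R_est = d_a.T @ d_b / d_b.shape[0]
    return 0.5 * (R_est + R_est.T)     # symmetrise against sampling noise
```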
Abstract:
The analysis step of the (ensemble) Kalman filter is optimal when (1) the distribution of the background is Gaussian, (2) state variables and observations are related via a linear operator, and (3) the observational error is additive and Gaussian. When these conditions are strongly violated, a pre-processing step known as Gaussian anamorphosis (GA) can be applied. The objective of this procedure is to obtain state variables and observations that better fulfil the Gaussianity conditions in some sense. In this work we analyse GA from a joint perspective, paying attention to the effects of transformations in the joint state-variable/observation space. First, we study transformations for state variables and observations that are independent of each other. Then, we introduce a targeted joint transformation with the objective of obtaining joint Gaussianity in the transformed space. We focus primarily on the univariate case and comment briefly on the multivariate one. A key point of this paper is that, when (1)-(3) are violated, the analysis step of the EnKF will not recover the exact posterior density regardless of any transformations one may perform. These transformations, however, provide approximations of different quality to the Bayesian solution of the problem. Using an example in which the Bayesian posterior can be computed analytically, we assess the quality of the analysis distributions generated after applying the EnKF analysis step in conjunction with different GA options. The value of the targeted joint transformation is particularly clear when the prior is Gaussian, the marginal density for the observations is close to Gaussian, and the likelihood is a Gaussian mixture.
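For reference, a common univariate Gaussian anamorphosis construction maps each variable through its empirical CDF and then through the standard Gaussian quantile function. The sketch below illustrates that generic construction only; it is not necessarily one of the specific transformations analysed in the paper.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_anamorphosis(x):
    """Map a sample to approximately standard-Gaussian values:
    rank -> empirical CDF value in (0, 1) -> Gaussian quantile."""
    u = rankdata(x) / (len(x) + 1.0)
    return norm.ppf(u)
```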
Abstract:
Wave solutions to a mechanochemical model for cytoskeletal activity are studied and the results applied to the waves of chemical and mechanical activity that sweep over an egg shortly after fertilization. The model takes into account the calcium-controlled presence of actively contractile units in the cytoplasm, and consists of a viscoelastic force equilibrium equation and a conservation equation for calcium. Using piecewise linear caricatures, we obtain analytic solutions for travelling waves on a strip and demonstrate that the full nonlinear system behaves as predicted by the analytic solutions. The equations are solved on a sphere and the numerical results are similar to the analytic solutions. We indicate how the speed of the waves can be used as a diagnostic tool with which the chemical reactivity of the egg surface can be measured.
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), are established scientific and diagnostic tools and their adoption continues to grow. Statistical methods, machine learning and data mining algorithms have been successfully adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of preprocessing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench, which automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionality available in the KNIME workbench.
Abstract:
Recent work has shown that both the amplitude of upper-level Rossby waves and the tropopause sharpness decrease with forecast lead time for several days in some operational weather forecast systems. In this contribution, the evolution of error growth in a case study of this forecast error type is diagnosed through analysis of operational forecasts and hindcast simulations. Potential vorticity (PV) on the 320-K isentropic surface is used to diagnose Rossby waves. The Rossby-wave forecast error in the operational ECMWF high-resolution forecast is shown to be associated with errors in the forecast of a warm conveyor belt (WCB) through trajectory analysis and an error metric for WCB outflows. The WCB forecast error is characterised by an overestimation of WCB amplitude, a location of the WCB outflow regions that is too far to the southeast, and a resulting underestimation of the magnitude of the negative PV anomaly in the outflow. Essentially the same forecast error development also occurred in all members of the ECMWF Ensemble Prediction System and the Met Office MOGREPS-15 ensemble, suggesting that in this case model error made an important contribution to the development of forecast error in addition to initial condition error. Exploiting this robustness of the forecast error, a comparison was performed between the realised flow evolution, proxied by a sequence of short-range simulations, and a contemporaneous forecast. Both the proxy to the realised flow and the contemporaneous forecast were produced with the Met Office Unified Model enhanced with tracers of diabatic processes modifying potential temperature and PV. Clear differences were found in the way potential temperature and PV are modified in the WCB between the proxy and the forecast. These results demonstrate that differences in potential temperature and PV modification in the WCB can be responsible for forecast errors in Rossby waves.
Abstract:
We give an a priori analysis of a semi-discrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (SIAM J Math Anal 46(5):3518–3539, 2014). The estimate we derive is optimal in the L∞(0,T;dG) norm for the strain and the L2(0,T;dG) norm for the velocity, where dG is an appropriate mesh-dependent H1-like space.
Abstract:
This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that is then used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
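The perturbed-observation analysis step described here can be written compactly. The sketch below assumes a linear observation operator H and a known observation error covariance R; variable names and shapes are illustrative rather than taken from the paper.

```python
import numpy as np

def enkf_analysis_perturbed_obs(E, y, H, R, rng=None):
    """One EnKF analysis step with perturbed observations.
    E : (n, N) ensemble of model states (columns are members)
    y : (p,)   observation vector
    H : (p, n) linear observation operator
    R : (p, p) observation error covariance
    Each member is updated against its own perturbed observation so that
    the analysis ensemble retains the correct error statistics."""
    rng = np.random.default_rng() if rng is None else rng
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)                     # perturbations
    HA = H @ A
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (N - 1) * R)   # Kalman gain
    # Perturb the observations with the correct statistics for each member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return E + K @ (Y - H @ E)
```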
Abstract:
Considering the sea ice decline in the Arctic during the last decades, polynyas are of high research interest since these features are core areas of new ice formation. The determination of ice formation requires accurate retrieval of polynya area and thin-ice thickness (TIT) distribution within the polynya. We use an established energy balance model to derive TITs with MODIS ice surface temperatures (Ts) and NCEP/DOE Reanalysis II data in the Laptev Sea for two winter seasons. Improvements of the algorithm mainly concern the implementation of an iterative approach to calculate the atmospheric flux components, taking the atmospheric stratification into account. Furthermore, a sensitivity study is performed to analyze the errors of the ice thickness. The results are the following: 1) 2-m air temperatures (Ta) and Ts have the highest impact on the retrieved ice thickness; 2) an overestimation of Ta yields smaller ice thickness errors than an underestimation of Ta; 3) NCEP Ta often shows a warm bias; and 4) the mean absolute error for ice thicknesses up to 20 cm is ±4.7 cm. Based on these results, we conclude that, despite the shortcomings of the NCEP data (coarse spatial resolution and no polynyas), this data set is appropriate in combination with MODIS Ts for the retrieval of TITs up to 20 cm in the Laptev Sea region. The TIT algorithm can be applied to other polynya regions and to past and future time periods. Our TIT product is a valuable data set for verification of other model and remote sensing ice thickness data.
Abstract:
A new sparse kernel density estimator is introduced, based on the minimum integrated square error criterion combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is first applied to find this covariance matrix using the expectation-maximization algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
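The starting point of the method, a Parzen window estimator with Gaussian kernels sharing one covariance matrix, can be sketched as below; the subsequent local component analysis and Riemannian trust-region steps are not shown, and the names are illustrative.

```python
import numpy as np

def parzen_density(x_query, samples, cov):
    """Parzen window density estimate at x_query using Gaussian kernels
    with a common covariance matrix `cov`; `samples` has shape (N, d)."""
    d = samples.shape[1]
    cov_inv = np.linalg.inv(cov)
    norm_const = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    diffs = x_query[None, :] - samples
    quad = np.einsum('nd,de,ne->n', diffs, cov_inv, diffs)
    return norm_const * np.exp(-0.5 * quad).mean()
```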
Abstract:
Atmosphere-only and ocean-only variational data assimilation (DA) schemes are able to use window lengths that are optimal for the error growth rate, non-linearity and observation density of the respective systems. Typical window lengths are 6-12 hours for the atmosphere and 2-10 days for the ocean. However, in the implementation of coupled DA schemes it has been necessary to match the window length of the ocean to that of the atmosphere, which may sacrifice the accuracy of the ocean analysis in order to provide a more balanced coupled state. This paper investigates how extending the window length in the presence of model error affects both the analysis of the coupled state and the initialized forecast when using coupled DA with differing degrees of coupling. Results are illustrated using an idealized single-column model of the coupled atmosphere-ocean system. It is found that the analysis error from an uncoupled DA scheme can be smaller than that from a coupled analysis at the initial time, due to faster error growth in the coupled system. However, this does not necessarily lead to a more accurate forecast, due to imbalances in the coupled state. Instead, coupled DA is better able to update the initial state to reduce the impact of the model error on the accuracy of the forecast. The effect of model error is potentially most detrimental in the weakly coupled formulation, due to the inconsistency between the coupled model used in the outer loop and the uncoupled models used in the inner loop.
An improved estimate of leaf area index based on the histogram analysis of hemispherical photographs
Abstract:
Leaf area index (LAI) is a key parameter that affects the surface fluxes of energy, mass, and momentum over vegetated land, but observational measurements are scarce, especially in remote areas with complex canopy structure. In this paper we present an indirect method to calculate the LAI based on the analysis of histograms of hemispherical photographs. The optimal threshold value (OTV), the gray level required to separate the background (sky) from the foreground (leaves), was calculated analytically using the entropy crossover method (Sahoo, P.K., Slaaf, D.W., Albert, T.A., 1997. Threshold selection using a minimal histogram entropy difference. Optical Engineering 36(7), 1976-1981). The OTV was used to calculate the LAI using the well-known gap fraction method. This methodology was tested in two different ecosystems, Amazon forest and pasturelands in Brazil. In general, the error between observed and calculated LAI was approximately 6%. The methodology presented is suitable for the calculation of LAI since it is responsive to sky conditions, automatic, easy to implement, faster than commercially available software, and requires less data storage.
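To make the pipeline concrete, the sketch below thresholds a gray-level image at the OTV to obtain a gap fraction and inverts a simple Beer-Lambert-type gap fraction relation for LAI; the extinction coefficient k and the exact inversion used in the paper are assumptions here.

```python
import numpy as np

def lai_from_hemispherical_photo(gray, otv, k=0.5):
    """Estimate LAI from a gray-level hemispherical photograph.
    gray : 2-D array of gray levels
    otv  : optimal threshold value separating sky from foliage
    k    : extinction coefficient (illustrative value)
    Pixels brighter than the threshold are counted as sky (gaps) and LAI
    is obtained from the gap fraction P via P = exp(-k * LAI)."""
    gap_fraction = np.mean(gray > otv)
    return -np.log(gap_fraction) / k
```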
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools, i.e., the classification of the temporal organization of a data set might indicate a relatively less ordered series in relation to another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, random noises). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn does so correctly. In order to validate the tool we performed shuffled and surrogate data analyses. Statistical analysis confirmed the consistency of the method.
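As a rough illustration of the idea of scanning ApEn over many parameter combinations, the sketch below evaluates a standard ApEn implementation on a grid of window lengths and tolerances and averages the result; the paper's exact aggregation into a "volumetric" measure may differ.

```python
import numpy as np

def apen(x, m, r):
    """Approximate entropy of series x with embedding dimension m and
    tolerance r (Chebyshev distance between templates)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = np.mean(dist <= r, axis=1)     # fraction of matching templates
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

def volumetric_apen(x, m_values, r_factors):
    """Average ApEn over a grid of window lengths and tolerances
    (tolerances given as fractions of the series standard deviation)."""
    s = np.std(x)
    grid = [apen(x, m, r * s) for m in m_values for r in r_factors]
    return np.mean(grid)
```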
Abstract:
In this paper, the generalized log-gamma regression model is modified to allow for the possibility that long-term survivors are present in the data. This modification leads to a generalized log-gamma regression model with a cure rate, encompassing, as special cases, the log-exponential, log-Weibull and log-normal regression models with a cure rate typically used to model such data. The models attempt to simultaneously estimate the effects of explanatory variables on the acceleration or deceleration of the timing of a given event and on the surviving fraction, that is, the proportion of the population for which the event never occurs. The normal curvatures of local influence are derived under some usual perturbation schemes, and two martingale-type residuals are proposed to assess departures from the generalized log-gamma error assumption as well as to detect outlying observations. Finally, a data set from the medical area is analyzed.
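For context, the standard mixture formulation of a cure-rate survival model, which is what the "surviving fraction" refers to, can be written as below; the precise parametrisation used in the paper (generalized log-gamma errors with covariate effects) is not reproduced here.

```latex
% Mixture cure-rate survival function: a fraction p of the population never
% experiences the event, while the remaining 1 - p follows a proper survival law S_0.
S_{\mathrm{pop}}(t) = p + (1 - p)\, S_0(t), \qquad 0 < p < 1 .
```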
Abstract:
This work focuses on the theoretical investigation of the block copolymer poly[oxyoctyleneoxy-(2,6-dimethoxy-1,4-phenylene-1,2-ethinylene-phenanthrene-2,4-diyl)], named LaPPS19, recently proposed for optoelectronic applications. We used a variety of methods, from molecular mechanics to quantum semiempirical techniques (AM1, ZINDO/S-CIS). Our results show that, as expected, isolated LaPPS19 chains present relevant electron localization over the phenanthrene group. We found, however, that LaPPS19 can assemble in a pi-stacked form, leading to strong interchain interaction; the stacking induces electronic delocalization between neighboring chains and introduces new states below the phenanthrene-related absorption. These results allowed us to associate the red-shift of the absorption edge seen in the experimental results with spontaneous pi-stack aggregation of the chains. Int J Quantum Chem 110: 885-892, 2010.