967 results for Data matrix
Conditioning of incremental variational data assimilation, with application to the Met Office system
Abstract:
Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved is determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single parameter system are presented and used to predict how the condition number is affected by the observation distribution and accuracy and by the specified lengthscales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
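As an illustration of the quantity being bounded, the sketch below assembles the Hessian of the inner-loop (linearized) cost function, S = B^{-1} + H^T R^{-1} H, for a toy one-dimensional system and reports its condition number as the background error lengthscale, observation density and observation accuracy are varied. The Gaussian correlation model, grid and observation layout are illustrative assumptions, not the Met Office configuration.

```python
import numpy as np

def hessian_condition(n=100, lengthscale=5.0, sigma_b=1.0,
                      sigma_o=0.5, obs_every=4):
    """Condition number of S = B^{-1} + H^T R^{-1} H for a toy 1-D system.

    Assumptions (not the operational setup): Gaussian background error
    correlations with the given lengthscale on a unit grid, direct
    observations of every `obs_every`-th grid point, and uncorrelated
    observation errors with variance sigma_o**2.
    """
    x = np.arange(n)
    dist = np.abs(x[:, None] - x[None, :])
    B = sigma_b**2 * np.exp(-0.5 * (dist / lengthscale)**2)
    B += 1e-8 * np.eye(n)                       # keep B safely invertible

    obs_idx = np.arange(0, n, obs_every)
    H = np.zeros((len(obs_idx), n))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0   # point observations
    R_inv = np.eye(len(obs_idx)) / sigma_o**2

    S = np.linalg.inv(B) + H.T @ R_inv @ H      # Hessian of the linear problem
    return np.linalg.cond(S)

# Vary the background lengthscale (or sigma_o, obs_every) to see how the
# conditioning of the minimization changes in this toy setting.
print(hessian_condition(lengthscale=2.0), hessian_condition(lengthscale=10.0))
```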
Abstract:
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both observations and model estimates, with more weight given to data that can be more trusted. For any DA method an estimate of the initial forecast error covariance matrix is required. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth applied to an idealised 1D convective column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy, and more importantly the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency of observations and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
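For reference, the sketch below writes out a generic serial ensemble square root filter update for a single scalar observation, following the Whitaker and Hamill (2002) form in which the ensemble mean is updated with the usual Kalman gain and the perturbations with a reduced gain. It is a textbook illustration under those assumptions, not the configuration used with the idealised convective column model.

```python
import numpy as np

def ensrf_update_single_obs(ensemble, h, y, r):
    """Serial EnSRF update for one scalar observation.

    ensemble : (n_state, n_members) forecast ensemble
    h        : (n_state,) linear observation operator (one row of H)
    y        : observed value
    r        : observation error variance

    Generic Whitaker & Hamill (2002) form; the column-model experiments
    will differ in observation operator, localisation and inflation.
    """
    n_members = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1)
    X = ensemble - x_mean[:, None]               # forecast perturbations
    hx = h @ ensemble                            # observed ensemble
    hx_mean = hx.mean()
    hX = hx - hx_mean

    pht = X @ hX / (n_members - 1)               # cov(x, Hx)
    hpht = hX @ hX / (n_members - 1)             # var(Hx)
    K = pht / (hpht + r)                         # gain for the ensemble mean
    alpha = 1.0 / (1.0 + np.sqrt(r / (hpht + r)))
    K_pert = alpha * K                           # reduced gain for perturbations

    x_mean_a = x_mean + K * (y - hx_mean)        # analysis mean
    X_a = X - np.outer(K_pert, hX)               # analysis perturbations
    return x_mean_a[:, None] + X_a
```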
Abstract:
This paper describes the implementation of a 3D variational (3D-Var) data assimilation scheme for a morphodynamic model applied to Morecambe Bay, UK. A simple decoupled hydrodynamic and sediment transport model is combined with a data assimilation scheme to investigate the ability of such methods to improve the accuracy of the predicted bathymetry. The inverse forecast error covariance matrix is modelled using a Laplacian approximation which is calibrated for the length scale parameter required. Calibration is also performed for the Soulsby-van Rijn sediment transport equations. The data used for assimilation purposes comprises waterlines derived from SAR imagery covering the entire period of the model run, and swath bathymetry data collected by a ship-borne survey for one date towards the end of the model run. A LiDAR survey of the entire bay carried out in November 2005 is used for validation purposes. The comparison of the predictive ability of the model alone with the model-forecast-assimilation system demonstrates that using data assimilation significantly improves the forecast skill. An investigation of the assimilation of the swath bathymetry as well as the waterlines demonstrates that the overall improvement is initially large, but decreases over time as the bathymetry evolves away from that observed by the survey. The result of combining the calibration runs into a pseudo-ensemble provides a higher skill score than for a single optimized model run. A brief comparison of the Optimal Interpolation assimilation method with the 3D-Var method shows that the two schemes give similar results.
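Schematically, the scheme minimises the standard 3D-Var cost function, with the inverse forecast (background) error covariance represented by a Laplacian-based operator whose lengthscale L is the calibrated parameter mentioned above; the discretised form shown below is indicative, not necessarily the paper's exact operator.

```latex
J(\mathbf{z}) \;=\; \tfrac{1}{2}\,(\mathbf{z}-\mathbf{z}^{b})^{\mathrm{T}}\,\mathbf{B}^{-1}\,(\mathbf{z}-\mathbf{z}^{b})
\;+\; \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{z})\bigr)^{\mathrm{T}}\,\mathbf{R}^{-1}\,\bigl(\mathbf{y}-H(\mathbf{z})\bigr),
\qquad
\mathbf{B}^{-1} \;\approx\; \frac{1}{\sigma_b^{2}}\bigl(\mathbf{I} - L^{2}\,\nabla_h^{2}\bigr),
```

where z is the bathymetry state, z^b the model background, y the waterline or swath observations, H the observation operator, R the observation error covariance, sigma_b^2 the background error variance and nabla_h^2 a discrete horizontal Laplacian.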
Abstract:
We propose a new algorithm for summarizing properties of large-scale time-evolving networks. This type of data, recording connections that come and go over time, is being generated in many modern applications, including telecommunications and on-line human social behavior. The algorithm computes a dynamic measure of how well pairs of nodes can communicate by taking account of routes through the network that respect the arrow of time. We take the conventional approach of downweighting for length (messages become corrupted as they are passed along) and add the novel feature of downweighting for age (messages go out of date). This allows us to generalize widely used Katz-style centrality measures that have proved popular in network science to the case of dynamic networks sampled at non-uniform points in time. We illustrate the new approach on synthetic and real data.
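One plausible rendering of the construction described above, Katz-style resolvents for each network snapshot combined so that only time-respecting walks contribute, with exponential downweighting for age, is sketched below. The parameter names a (length penalty) and b (age penalty) and the exact recursion are illustrative assumptions rather than the paper's definitions.

```python
import numpy as np

def dynamic_communicability(adjacencies, times, a=0.1, b=1.0):
    """Katz-style dynamic communicability with downweighting for walk
    length (a) and for age (b), for snapshots at non-uniform times.

    adjacencies : list of (n, n) adjacency matrices, ordered in time
    times       : list of observation times, same length and order

    Requires a < 1 / spectral_radius(A_k) so each resolvent is a
    convergent Katz sum.  A sketch of the idea, not the exact algorithm.
    """
    n = adjacencies[0].shape[0]
    I = np.eye(n)
    S = np.zeros((n, n))                    # accumulated communicability
    t_prev = times[0]
    for A, t in zip(adjacencies, times):
        decay = np.exp(-b * (t - t_prev))   # older messages go out of date
        R = np.linalg.inv(I - a * A)        # walks in this snapshot, length-penalised
        S = (I + decay * S) @ R - I         # extend only time-respecting walks
        t_prev = t
    broadcast = S.sum(axis=1)               # how well each node sends
    receive = S.sum(axis=0)                 # how well each node receives
    return S, broadcast, receive
```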
Abstract:
Background and Aims Forest trees directly contribute to carbon cycling in forest soils through the turnover of their fine roots. In this study we aimed to calculate root turnover rates of common European forest tree species and to compare them with the most frequently published values. Methods We compiled available European data and applied various turnover rate calculation methods to the resulting database. We used the Decision Matrix and the Maximum-Minimum formula, as suggested in the literature. Results Mean turnover rates obtained by combining sequential coring with the Decision Matrix were 0.86 yr⁻¹ for Fagus sylvatica and 0.88 yr⁻¹ for Picea abies when maximum biomass data were used for the calculation, and 1.11 yr⁻¹ for both species when mean biomass data were used. Using mean biomass rather than maximum resulted in about 30% higher values of root turnover. Using the Decision Matrix to calculate turnover rate doubled the rates when compared with the Maximum-Minimum formula. The Decision Matrix, however, makes use of more input information than the Maximum-Minimum formula. Conclusions We propose that calculations using the Decision Matrix with mean biomass give the most reliable estimates of root turnover rates in European forests and should preferentially be used in models and C reporting.
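For orientation, the calculations take the generic form of an annual fine-root production divided by a reference standing biomass, so choosing the mean rather than the maximum biomass as the denominator necessarily raises the rate, consistent with the roughly 30% difference reported above. The expression below is this generic definition only; the Decision Matrix rules for deriving production from sequential coring data are not reproduced here.

```latex
\tau \;=\; \frac{P_{\text{annual}}}{B_{\text{ref}}},
\qquad B_{\text{ref}} \in \{\,B_{\max},\ \bar{B}\,\},
\qquad \bar{B} \le B_{\max} \;\Rightarrow\; \tau_{\bar{B}} \ge \tau_{B_{\max}}.
```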
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
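In the usual incremental notation the correspondence can be written schematically as follows, with lambda a regularization weight and D a discrete gradient (difference) operator; these are generic forms of the penalties named above, and the paper's exact scalings may differ.

```latex
J_{\mathrm{L2}}(\delta\mathbf{x}) = \|\hat{\mathbf{y}} - \hat{\mathbf{H}}\,\delta\mathbf{x}\|_{\mathbf{R}^{-1}}^{2}
 + \|\delta\mathbf{x}\|_{\mathbf{B}^{-1}}^{2}
 \quad\text{(strong-constraint 4DVar, Tikhonov form)},
\qquad
J_{\mathrm{L1}}(\delta\mathbf{x}) = \|\hat{\mathbf{y}} - \hat{\mathbf{H}}\,\delta\mathbf{x}\|_{\mathbf{R}^{-1}}^{2}
 + \lambda\,\|\mathbf{B}^{-1/2}\,\delta\mathbf{x}\|_{1},
\qquad
J_{\mathrm{TV}}(\delta\mathbf{x}) = \|\hat{\mathbf{y}} - \hat{\mathbf{H}}\,\delta\mathbf{x}\|_{\mathbf{R}^{-1}}^{2}
 + \|\delta\mathbf{x}\|_{\mathbf{B}^{-1}}^{2}
 + \lambda_{\mathrm{TV}}\,\|\mathbf{D}\,\delta\mathbf{x}\|_{1},
```

where delta x is the increment to the background trajectory and the hats denote the observations and the generalized (model-plus-observation) operator stacked over the assimilation window.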
Abstract:
The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
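The mechanistic difference probed by these experiments can be fixed in notation as follows: the EKF propagates the analysis error covariance with the tangent linear model M_k (and its adjoint), whereas the EnKF estimates the forecast covariance from the perturbations X'_k of an ensemble of N nonlinear forecasts; both then form a Kalman-type gain. These are textbook forms, shown only for orientation, with no explicit model error term, consistent with the identical-twin setting.

```latex
\text{EKF:}\quad \mathbf{P}^{f}_{k} = \mathbf{M}_{k}\,\mathbf{P}^{a}_{k-1}\,\mathbf{M}_{k}^{\mathrm{T}},
\qquad
\text{EnKF:}\quad \mathbf{P}^{f}_{k} \approx \tfrac{1}{N-1}\,\mathbf{X}'_{k}\,{\mathbf{X}'_{k}}^{\mathrm{T}},
\qquad
\mathbf{K}_{k} = \mathbf{P}^{f}_{k}\mathbf{H}^{\mathrm{T}}\bigl(\mathbf{H}\,\mathbf{P}^{f}_{k}\,\mathbf{H}^{\mathrm{T}} + \mathbf{R}\bigr)^{-1}.
```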
Abstract:
Remote sensing observations often have correlated errors, but the correlations are typically ignored in data assimilation for numerical weather prediction. The assumption of zero correlations is often used with data thinning methods, resulting in a loss of information. As operational centres move towards higher-resolution forecasting, there is a requirement to retain data providing detail on appropriate scales. Thus an alternative approach to dealing with observation error correlations is needed. In this article, we consider several approaches to approximating observation error correlation matrices: diagonal approximations, eigendecomposition approximations and Markov matrices. These approximations are applied in incremental variational assimilation experiments with a 1-D shallow water model using synthetic observations. Our experiments quantify analysis accuracy in comparison with a reference or ‘truth’ trajectory, as well as with analyses using the ‘true’ observation error covariance matrix. We show that it is often better to include an approximate correlation structure in the observation error covariance matrix than to incorrectly assume error independence. Furthermore, by choosing a suitable matrix approximation, it is feasible and computationally cheap to include error correlation structure in a variational data assimilation algorithm.
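As a concrete illustration of the three families of approximation considered, the sketch below builds a first-order Markov (exponentially decaying) correlation matrix, whose inverse is tridiagonal and hence cheap to apply, together with a diagonal approximation and a truncated eigendecomposition. The decay parameter and the diagonal top-up after truncation are common choices and are assumptions here, not necessarily those of the article.

```python
import numpy as np

def markov_correlation(n, rho):
    """First-order Markov correlation matrix C_ij = rho**|i - j|
    (0 < rho < 1).  Its inverse is tridiagonal, which keeps the cost of
    applying C^{-1} in a variational scheme low."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def diagonal_approx(C):
    """Keep only the variances, i.e. assume uncorrelated observation errors."""
    return np.diag(np.diag(C))

def eigen_approx(C, k):
    """Keep the k leading eigenpairs and restore the discarded variance on
    the diagonal so the approximation stays positive definite (one common
    recipe; details may differ from the article)."""
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    w, V = w[order], V[:, order]
    C_k = (V[:, :k] * w[:k]) @ V[:, :k].T
    C_k += np.diag(np.diag(C) - np.diag(C_k))   # top up the diagonal
    return C_k

C = markov_correlation(50, rho=0.7)
print(np.linalg.norm(C - eigen_approx(C, 10)), np.linalg.norm(C - diagonal_approx(C)))
```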
Abstract:
We systematically compare the performance of ETKF-4DVAR, 4DVAR-BEN and 4DENVAR with respect to two traditional methods (4DVAR and ETKF) and an ensemble transform Kalman smoother (ETKS) on the Lorenz 1963 model. We specifically investigate this performance with increasing nonlinearity, using a quasi-static variational assimilation algorithm as a comparison. Using the analysis root mean square error (RMSE) as a metric, these methods have been compared considering (1) assimilation window length and observation interval size and (2) ensemble size, to investigate the influence of hybrid background error covariance matrices and nonlinearity on the performance of the methods. For short assimilation windows with close to linear dynamics, it has been shown that all hybrid methods show an improvement in RMSE compared to the traditional methods. For long assimilation window lengths in which nonlinear dynamics are substantial, the variational framework can have difficulties finding the global minimum of the cost function, so we explore a quasi-static variational assimilation (QSVA) framework. Of the hybrid methods, it is seen that under certain parameters, hybrid methods which do not use a climatological background error covariance do not need QSVA to perform accurately. Generally, results show that the ETKS and hybrid methods that do not use a climatological background error covariance matrix with QSVA outperform all other methods due to the full flow dependency of the background error covariance matrix, which also allows for the most nonlinearity.
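The testbed and the metric are standard: the Lorenz (1963) equations, integrated here with a fourth-order Runge-Kutta step at the classical parameter values, and the root mean square error of the analysis against the reference ('truth') trajectory. The sketch below covers only this shared scaffolding; the ETKF, 4DVAR, hybrid and QSVA algorithms themselves are not reproduced.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Tendencies of the Lorenz (1963) model at the classical parameters."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step of the Lorenz 63 model."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def analysis_rmse(analyses, truths):
    """Root mean square error of the analyses against the reference run."""
    a, t = np.asarray(analyses), np.asarray(truths)
    return np.sqrt(np.mean((a - t) ** 2))
```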
Abstract:
Cell migration is a highly coordinated process, and any aberration in the regulatory mechanisms can result in pathological conditions such as cancer. The ability of cancer cells to disseminate to distant sites within the body has made the disease difficult to treat. Cancer cells also exhibit plasticity that allows them to interconvert from an elongated, mesenchymal morphology to an amoeboid blebbing form under different physiological conditions. Blebs are spherical membrane protrusions formed by actomyosin-mediated contractility of cortical actin, resulting in increased hydrostatic pressure and subsequent detachment of the membrane from the cortex. Tumour cells use blebbing as an alternative mode of migration by squeezing through pre-existing gaps in the extracellular matrix (ECM), and bleb formation is believed to be mediated by the Rho-ROCK signaling pathway. However, the involvement of transmembrane water and ion channels in cell blebbing has not been examined. In the present study, the role of transmembrane water channels (aquaporins), transmembrane ion transporters and lipid signaling enzymes in the regulation of blebbing was investigated. Using a 3D Matrigel matrix as an in vitro model to mimic the normal extracellular matrix, and a combination of confocal and time-lapse microscopy, it was found that AQP1 knockdown by siRNA ablated blebbing of HT1080 and ACHN cells, and that overexpression of AQP1-GFP not only significantly increased bleb size, with a corresponding decrease in bleb numbers, but also induced bleb formation in non-blebbing cell lines. Importantly, AQP1 overexpression reduced bleb lifespan owing to faster bleb retraction. This AQP1-facilitated bleb retraction requires the activity of the Na+/H+ pump, as inhibition of this ion transporter, which was found localized to intracellular vesicles, blocked bleb retraction in both cell lines. This study also demonstrated that AQP isoforms regulate cell blebbing differentially, as knockdown of AQP5 had no effect on bleb formation. Data from this study also demonstrate that the lipid signaling enzyme PLD2 signals through PA in the LPA-LPAR-Rho-ROCK axis to positively regulate bleb formation in both cell lines. Taken together, this work provides a novel role for AQP1 and the Na+/H+ pump in the regulation of cell blebbing, which could be exploited in the development of new therapies to treat cancer.
Abstract:
Background: Diabetes and periodontitis produce a protein discharge that can be reflected in saliva. This study evaluates the salivary concentrations of interleukin (IL)-6, matrix metalloproteinase (MMP)-8, and osteoprotegerin (OPG) in patients with periodontitis and type 2 diabetes. Methods: Whole saliva samples were obtained from 90 subjects who were divided into four groups: healthy (control; n = 22), untreated periodontitis (UPD; n = 24), diabetes mellitus (DM; n = 20), and UPD + DM (n = 24). Clinical and metabolic data were recorded. Salivary IL-6, MMP-8, and OPG concentrations were determined by a standard enzyme-linked immunosorbent assay. Results: The UPD and UPD + DM groups exhibited higher salivary IL-6 than the control and DM groups (P < 0.01). The salivary MMP-8 concentrations in all diseased groups (UPD, DM, and UPD + DM) were higher than in the control group (P < 0.01). The salivary OPG concentrations in the DM group were higher than in the UPD and control groups (P < 0.05). In the UPD + DM group, salivary IL-6 was correlated with glycated hemoglobin (HbA1c) levels (r = 0.60; P < 0.05). The regression analysis indicated that the number of remaining teeth, clinical attachment level, and IL-6 might have influenced the HbA1c levels in patients with diabetes. Conclusions: Salivary IL-6 concentrations were elevated in patients with periodontitis with or without diabetes. Salivary MMP-8 and OPG concentrations were elevated regardless of periodontal inflammation in patients with diabetes. Therefore, periodontitis and diabetes are conditions that may interfere with protein expression and should be considered when using saliva for diagnoses. J Periodontol 2010;81:384-391.
Abstract:
Introduction: The objective of this study was to investigate the expression of matrix metalloproteinases (MMPs) in apical periodontitis and during the periapical healing phase after root canal treatment. Methods: Apical periodontitis was induced in dog teeth, and root canal treatment was performed in a single visit or by using an additional calcium hydroxide root canal dressing. One hundred eighty days after treatment, the presence of inflammation was examined, and tissues were stained to detect bacteria. Bacterial status was correlated with the degree of tissue organization, and to further investigate molecules involved in this process, tissues were stained for MMP-1, MMP-2, MMP-8, and MMP-9. Data were analyzed by using one-way analysis of variance followed by the Tukey test or Kruskal-Wallis followed by the Dunn test. Results: Teeth with apical periodontitis that had root canal therapy performed in a single visit presented an intense inflammatory cell infiltrate. Periapical tissue was extremely disorganized, and this was correlated with the presence of bacteria. Higher MMP expression was evident, similar to teeth with untreated apical periodontitis. In contrast, teeth with apical periodontitis submitted to root canal treatment with calcium hydroxide presented a lower inflammatory cell infiltrate. This group had moderately organized connective tissue, a lower prevalence of bacteria, and a lower number of MMP-positive cells, similar to healthy teeth submitted to treatment. Conclusions: Teeth treated with a calcium hydroxide root canal dressing exhibited a lower percentage of bacterial contamination, lower MMP expression, and a more organized extracellular matrix, unlike those treated in a single visit. This suggests that calcium hydroxide might be beneficial in tissue repair processes. (J Endod 2010;36:231-237)
Abstract:
Introduction: The inability to distinguish periapical cysts from granulomas before performing root canal treatment leads to uncertainty in treatment outcomes because cysts have lower healing rates. Searching for differential expression of molecules within cysts or granulomas could provide information with regard to the identity of the lesion or suggest mechanistic differences that may form the basis for future therapeutic intervention. Thus, we investigated whether granulomas and cysts exhibit differential expression of extracellular matrix (ECM) molecules. Methods: Human periapical granulomas, periapical cysts, and healthy periodontal ligament tissues were used to investigate the differential expression of ECM molecules by microarray analysis. Because matrix metalloproteinases (MMPs) showed the highest differential expression in the microarray analysis, MMPs were further examined by in situ zymography and immunohistochemistry. Data were analyzed by using one-way analysis of variance followed by the Tukey test. Results: We observed that cysts and granulomas differentially expressed several ECM molecules, especially those from the MMP family. Compared with cysts, granulomas exhibited higher MMP enzymatic activity in areas stained for MMP-9. These areas were composed of polymorphonuclear cells (PMNs), in contrast to cysts. Similarly, MMP-13 was expressed by a greater number of cells in granulomas compared with cysts. Conclusion: Our findings indicate that high enzymatic MMP activity in PMNs, together with MMP-9- and MMP-13-stained cells, could be a molecular signature of granulomas, in contrast to periapical cysts. (J Endod 2009;35:1234-1242)
Abstract:
The matrix-tolerance hypothesis suggests that the species most abundant in the inter-habitat matrix should be less vulnerable to fragmentation of their habitat. This model was tested with leaf-litter frogs in the Atlantic Forest, where the fragmentation process is older and more severe than in the Amazon, where the model was first developed. Frog abundance data from the agricultural matrix, forest fragments and continuous forest localities were used. We found the expected negative correlation between the abundance of frogs in the matrix and their vulnerability to fragmentation; however, results varied with fragment size and species traits. Smaller fragments exhibited a stronger matrix-vulnerability correlation than intermediate fragments, while no significant relation was observed for large fragments. Moreover, some species that avoid the matrix were not sensitive to a decrease in patch size, and the opposite was also true, indicating significant departures from what is expected under the model. Most of the species that use the matrix were forest species with aquatic larval development, but those species do not necessarily respond to fragmentation or fragment size, and they therefore have the greatest effect on the strength of the expected relationship. Therefore, the main relationship predicted by the matrix-tolerance hypothesis was observed in the Atlantic Forest; however, we noted that the prediction of this hypothesis can be substantially affected by the size of the fragments and by species traits. We propose that the matrix-tolerance model should be broadened to become more effective, incorporating other patch characteristics, particularly fragment size, and individual species traits (e.g., reproductive mode and habitat preference).
Abstract:
The statement that pairs of individuals from different populations are often more genetically similar than pairs from the same population is a widespread idea inside and outside the scientific community. Witherspoon et al. ["Genetic similarities within and between human populations," Genetics 176:351-359 (2007)] proposed an index called the dissimilarity fraction (omega) to assess in a quantitative way the validity of this statement for genetic systems. Witherspoon demonstrated that, as the number of loci increases, omega decreases to a point where, when enough sampling is available, the statement is false. In this study, we applied the dissimilarity fraction to Howells's craniometric database to establish whether or not similar results are obtained for cranial morphological traits. Although in genetic studies thousands of loci are available, Howells's database provides no more than 55 metric traits, making the contribution of each variable important. To cope with this limitation, we developed a routine that takes this effect into consideration when calculating omega. Contrary to what was observed for the genetic data, our results show that cranial morphology asymptotically approaches a mean omega of 0.3 and therefore supports the initial statement, that is, that individuals from the same geographic region do not form clear and discrete clusters, further questioning the idea of the existence of discrete biological clusters in the human species. Finally, by assuming that cranial morphology is under an additive polygenetic model, we can say that the population history signal of human craniometric traits presents the same resolution as a neutral genetic system dependent on no more than 20 loci.
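To make the index concrete, the sketch below estimates a dissimilarity-fraction-style omega from a trait matrix: the fraction of comparisons in which a pair of individuals drawn from two different populations is more similar (smaller distance) than a pair drawn from within a single population. The Euclidean distance and the exhaustive enumeration of pairs are simplifying assumptions; Witherspoon et al.'s published estimator, and the routine the authors developed to account for the limited number of traits, may differ in detail.

```python
import numpy as np
from itertools import combinations

def dissimilarity_fraction(X, labels):
    """Estimate omega: the fraction of comparisons in which a
    between-population pair of individuals is more similar than a
    within-population pair.

    X      : (n_individuals, n_traits) matrix of metric traits (or loci)
    labels : population label for each individual

    Naive exhaustive version, adequate for small samples; a sketch of
    the idea only, not the published estimator.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # pairwise squared Euclidean distances between individuals
    diff = X[:, None, :] - X[None, :, :]
    D = np.einsum('ijk,ijk->ij', diff, diff)

    within, between = [], []
    for i, j in combinations(range(len(labels)), 2):
        (within if labels[i] == labels[j] else between).append(D[i, j])
    within = np.asarray(within)
    between = np.asarray(between)
    # fraction of (between, within) comparisons where the between pair is closer
    return np.mean(between[:, None] < within[None, :])
```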