40 results for Probability Density Function
Abstract:
The geometry and connectivity of fractures exert a strong influence on the flow and transport properties of fracture networks. We present a novel approach to stochastically generate three-dimensional discrete networks of connected fractures that are conditioned to hydrological and geophysical data. A hierarchical rejection sampling algorithm is used to draw realizations from the posterior probability density function at different conditioning levels. The method is applied to a well-studied granitic formation using data acquired within two boreholes located 6 m apart. The prior models include 27 fractures whose geometry (position and orientation) is bounded by information derived from single-hole ground-penetrating radar (GPR) data acquired during saline tracer tests and from optical televiewer logs. Eleven cross-hole hydraulic connections between fractures in neighboring boreholes and the order in which the tracer arrives at different fractures are used for conditioning. Furthermore, the networks are conditioned to the observed relative hydraulic importance of the different hydraulic connections by numerically simulating the flow response. Among the conditioning data considered, constraints on the relative flow contributions were the most effective in constraining the variability among the network realizations. Nevertheless, we find that the posterior model space is strongly determined by the imposed prior bounds. Strong prior bounds were derived from GPR measurements and helped to make the approach computationally feasible. We analyze a set of 230 posterior realizations that reproduce all data given their uncertainties, assuming the same uniform transmissivity in all fractures. The posterior models provide valuable statistics on the length scales and density of connected fractures, as well as on their connectivity. In an additional analysis, effective transmissivity estimates of the posterior realizations indicate a strong influence of the discrete fracture network (DFN) structure, in that it induces large variations of equivalent transmissivities between realizations. The transmissivity estimates agree well with previous estimates at the site based on pumping, flowmeter and temperature data.
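To make the sampling scheme concrete, here is a minimal Python sketch of hierarchical rejection sampling under illustrative assumptions: draw_prior_network and the conditioning checks are hypothetical stand-ins for the GPR-bounded fracture generator and the cross-hole connectivity, tracer-order and flow tests described above, ordered from cheapest to most expensive.

```python
import random

def hierarchical_rejection_sample(draw_prior_network, condition_checks, n_samples):
    """Rejection sampling with hierarchical conditioning: a candidate prior
    realization advances to the next (costlier) check only if it passes
    the current one, so expensive flow simulations run on few candidates."""
    accepted = []
    while len(accepted) < n_samples:
        network = draw_prior_network()  # e.g. 27 fractures within prior bounds
        if all(check(network) for check in condition_checks):  # short-circuits
            accepted.append(network)
    return accepted

# Toy stand-ins: a "network" is just a list of numbers here.
draw_prior = lambda: [random.random() for _ in range(27)]
checks = [lambda net: sum(net) > 12.0,   # cheap geometric/connectivity test
          lambda net: max(net) > 0.9]    # costlier "flow response" test
print(len(hierarchical_rejection_sample(draw_prior, checks, 5)), "realizations accepted")
```

Because all() evaluates the checks lazily, the ordering of the checks implements the conditioning hierarchy.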
Abstract:
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic inversion approaches, probabilistic inversion provides the full posterior probability density function of the saturation field and accounts for the uncertainties inherent in the petrophysical parameters relating the resistivity to saturation. In this study, the data are from benchtop ERT experiments conducted during gas injection into a quasi-2D brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. The saturation fields are estimated by Markov chain Monte Carlo inversion of the measured data and compared to independent saturation measurements from light transmission through the chamber. Different model parameterizations are evaluated in terms of the recovered saturation and petrophysical parameter values. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values in structural elements whose shape and location are assumed known or are represented by an arbitrary Gaussian bell structure. Results show that the estimated saturation fields are in overall agreement with the saturations measured by light transmission, but the parameterizations differ strongly in terms of parameter estimates, parameter uncertainties and computational intensity. Discretization in the frequency domain (as in the discrete cosine transform parameterization) provides more accurate models at a lower computational cost compared to spatially discretized (Cartesian) models. A priori knowledge about the expected geologic structures allows for non-discretized model descriptions with markedly reduced degrees of freedom. Constraining the solutions to the known injected gas volume improves the estimates of saturation and of the parameter values of the petrophysical relationship.
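As an illustrative sketch of the discrete cosine transform parameterization (option 2 above), assuming arbitrary grid and coefficient counts: the saturation field is represented by a small set of low-order DCT coefficients, so an MCMC sampler explores tens rather than thousands of unknowns.

```python
import numpy as np
from scipy.fft import idctn

nx, ny = 40, 60   # Cartesian grid would mean 2400 unknowns
kx, ky = 4, 6     # retained low-order DCT coefficients: 24 unknowns

# One candidate model as an MCMC sampler might propose it
rng = np.random.default_rng(0)
coeffs = np.zeros((nx, ny))
coeffs[:kx, :ky] = rng.normal(size=(kx, ky))

# Inverse DCT gives a smooth spatial field; squash it into [0, 1]
# (the logistic map is one arbitrary choice for a saturation-like range)
field = idctn(coeffs, norm='ortho')
saturation = 1.0 / (1.0 + np.exp(-field))
print(saturation.shape, float(saturation.min()), float(saturation.max()))
```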
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is then applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
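A minimal sketch of the kernel-density step described above, with synthetic collocated logs standing in for real borehole data: the joint distribution of log hydraulic and log electrical conductivity is estimated non-parametrically, and hydraulic conductivity at a non-sampled location is drawn conditional on the electrical conductivity there (the sequential simulation path and ERT integration are omitted).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
log_ec = rng.normal(-2.0, 0.5, 200)               # log electrical conductivity
log_k = 1.5 * log_ec + rng.normal(0.0, 0.3, 200)  # correlated log hydraulic conductivity

joint = gaussian_kde(np.vstack([log_ec, log_k]))  # non-parametric joint density

def sample_log_k_given_ec(ec_value, n=1000):
    """Sample log(K) conditional on log(EC): evaluate the joint KDE along a
    grid of K values at the fixed EC and draw from the normalized slice."""
    k_grid = np.linspace(log_k.min() - 1.0, log_k.max() + 1.0, 400)
    density = joint(np.vstack([np.full_like(k_grid, ec_value), k_grid]))
    return rng.choice(k_grid, size=n, p=density / density.sum())

draws = sample_log_k_given_ec(-2.2)
print(draws.mean(), draws.std())
```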
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal, adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of analysing and modelling geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the incorporation of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
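As a minimal illustration of the kind of task described (not the authors' actual models or data), the sketch below trains an SVM classifier in a geo-feature space: geographic coordinates plus DEM-derived features, with a synthetic soil-type label.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
# Geo-feature space: coordinates plus features such as elevation and slope
X = np.column_stack([rng.uniform(0, 10, n),    # easting
                     rng.uniform(0, 10, n),    # northing
                     rng.normal(500, 50, n),   # elevation (from a DEM)
                     rng.uniform(0, 30, n)])   # slope (from a DEM)
y = (X[:, 2] + 5 * X[:, 3] > 560).astype(int)  # synthetic soil-type label

model = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
model.fit(X, y)
print('training accuracy:', model.score(X, y))
```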
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbor algorithm is considered. PNN is a neural network reformulation of the well-known nonparametric principles of probability density modelling using kernel density estimators together with Bayesian optimal or maximum a posteriori decision rules. PNN are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVM were successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
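The PNN described above is essentially a Parzen-window density estimate per class combined with a Bayes (maximum a posteriori) decision rule, which fits in a few lines; this sketch assumes an isotropic Gaussian kernel with a single bandwidth h and toy two-class data.

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, h=0.5, priors=None):
    """Probabilistic neural network as kernel density estimation + MAP rule:
    score each class by prior * Parzen-window density, pick the maximum."""
    classes = np.unique(y_train)
    if priors is None:
        priors = {c: np.mean(y_train == c) for c in classes}
    scores = np.zeros((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[:, j] = priors[c] * np.exp(-d2 / (2.0 * h ** 2)).mean(axis=1)
    return classes[scores.argmax(axis=1)]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(X, y, np.array([[0.2, 0.1], [2.8, 3.1]])))
```

Normalizing the class scores also yields posterior class probabilities, which is what makes PNN useful where quantification of accuracy matters, as noted above.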
Abstract:
Radioactive soil-contamination mapping and risk assessment are vital issues for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by an estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations, in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure indeed yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, i.e. experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
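The GRNN proposed here for automatic mapping is, at its core, Nadaraya-Watson kernel regression with a single smoothing parameter sigma (tuned by cross-validation in practice); a minimal sketch on synthetic mapping data:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """General regression neural network: each prediction is the
    Gaussian-kernel-weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy automatic-mapping task: scattered samples of a smooth 2D field
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (100, 2))
y = np.sin(4 * X[:, 0]) + np.cos(4 * X[:, 1])
print(grnn_predict(X, y, np.array([[0.5, 0.5], [0.1, 0.9]])))
```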
Abstract:
The H(+)-gated acid-sensing ion channels (ASICs) are expressed in dorsal root ganglion (DRG) neurones. Studies with ASIC knockout mice indicated either a pro-nociceptive or a modulatory role of ASICs in pain sensation. We have investigated in freshly isolated rat DRG neurones whether neurones with different ASIC current properties exist, which may explain distinct cellular roles, and we have investigated ASIC regulation in an experimental model of neuropathic pain. Small-diameter DRG neurones expressed three different ASIC current types which were all preferentially expressed in putative nociceptors. Type 1 currents were mediated by ASIC1a homomultimers and characterized by steep pH dependence of current activation in the pH range 6.8-6.0. Type 3 currents were activated in a similar pH range as type 1, while type 2 currents were activated at pH < 6. When activated by acidification to pH 6.8 or 6.5, the probability of inducing action potentials correlated with the ASIC current density. Nerve injury induced differential regulation of ASIC subunit expression and selective changes in ASIC function in DRG neurones, suggesting a complex reorganization of ASICs during the development of neuropathic pain. In summary, we describe a basis for distinct cellular functions of different ASIC types in small-diameter DRG neurones.
Abstract:
The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation at the level of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium $\bar{D}_{w,Q,V_1}^{\mathrm{low}}$, averaged over a scoring volume $V_1$ for a geometry where $V_1$ is filled with the low-density medium, to the absorbed dose to water $\bar{D}_{w,Q,V_2}^{\mathrm{low}}$, averaged over a volume $V_2$ for a geometry where $V_2$ is filled with water. In the Monte Carlo simulations, $\bar{D}_{w,Q,V_2}^{\mathrm{low}}$ is obtained by replacing the volume of the ionization chamber by an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allows us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
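Numerically, the correction factor is just a ratio of two Monte Carlo dose tallies; the sketch below shows the bookkeeping under placeholder numbers (real values would come from the two simulation geometries described above).

```python
import numpy as np

def correction_factor(dose_v1_low, dose_v2_water):
    """Ratio of the mean dose to water scored in V1 (volume filled with the
    low-density medium) to the mean dose scored in V2 (same volume
    replaced by water), each averaged over its scoring volume."""
    return np.mean(dose_v1_low) / np.mean(dose_v2_water)

# Placeholder per-voxel tallies from the two hypothetical MC geometries
d_v1 = np.array([1.32, 1.35, 1.30])   # chamber volume = low-density medium
d_v2 = np.array([0.97, 0.99, 0.98])   # chamber volume replaced by water
print('correction factor:', correction_factor(d_v1, d_v2))
```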
Abstract:
In a retrospective cohort study from a large clinical registry for the province of Manitoba, Canada, we found that lumbar spine texture analysis using the trabecular bone score (TBS) is a risk factor for MOF and a risk factor for death. INTRODUCTION: FRAX® estimates the 10-year probability of major osteoporotic fracture (MOF) using clinical risk factors and femoral neck bone mineral density (BMD). Trabecular bone score (TBS), derived from the texture of the spine dual X-ray absorptiometry (DXA) image, is related to bone microarchitecture and fracture risk independently of BMD. Our objective was to determine whether TBS provides information on MOF probability beyond that provided by the FRAX variables. METHODS: We included 33,352 women aged 40-100 years (mean 63 years) with baseline DXA measurements of lumbar spine TBS and femoral neck BMD. The association between TBS, the FRAX variables, and the risk of MOF or death was examined using an extension of the Poisson regression model. RESULTS: During a mean follow-up of 4.7 years, 1,754 women died and 1,872 sustained one or more MOF. For each standard deviation reduction in TBS, there was a 36% increase in MOF risk (HR 1.36, 95% CI 1.30-1.42, p < 0.001) and a 32% increase in death (HR 1.32, 95% CI 1.26-1.39, p < 0.001). When adjusted for significant clinical risk factors and femoral neck BMD, lumbar spine TBS remained a significant predictor of MOF (HR 1.18, 95% CI 1.12-1.23) and death (HR 1.20, 95% CI 1.14-1.26). Models for estimating MOF probability, accounting for competing mortality, showed that low TBS (10th percentile) increased risk 1.5-1.6-fold compared with high TBS (90th percentile) across a broad range of ages and femoral neck T-scores. CONCLUSIONS: Lumbar spine TBS is able to predict incident MOF independently of FRAX clinical risk factors and femoral neck BMD, even after accounting for the increased death hazard.
Abstract:
Exogenous oxidized cholesterol disturbs both lipid metabolism and immune functions; it may therefore perturb the modulation of these systems with ageing. The effects of dietary protein type on oxidized cholesterol-induced modulations of age-related changes in lipid metabolism and immune function were examined in young (4-week-old) and adult (8-month-old) male Sprague-Dawley rats, with casein, soybean protein or milk whey protein isolate (WPI) as the dietary protein source. The rats were given one of the three proteins in a diet containing a 0.2% oxidized cholesterol mixture. Soybean protein, compared with the other two proteins, significantly lowered both the serum thiobarbituric acid reactive substances value and cholesterol, whereas it elevated the ratio of high density lipoprotein-cholesterol/cholesterol in young rats, but not in adults. Moreover, soybean protein, but not casein or WPI, suppressed the elevation of the Delta6 desaturation indices of phospholipids in both liver and spleen, particularly in young rats. On the other hand, WPI, compared to the other two proteins, inhibited splenic leukotriene B4 production, irrespective of age. Soybean protein reduced the ratio of CD4(+)/CD8(+) T-cells in splenic lymphocytes. Accordingly, the serum levels of immunoglobulin (Ig)A, IgE and IgG were lowered in rats given soybean protein in both age groups, except for IgA in adults, whereas these changes were not observed in rats given the other proteins. Thus, the various perturbations of lipid metabolism and immune function caused by oxidized cholesterol were modified depending on the type of dietary protein. The moderating effect of soybean protein on changes in lipid metabolism appears most pronounced in young rats, whose homeostatic capacity is immature. These effects may be exerted through both the promotion of oxidized cholesterol excretion in the feces and changes in hormonal release, while WPI may suppress the disturbance of immune function by oxidized cholesterol at both ages; this alleviation may be associated with the large amount of lactoglobulin in WPI. These results thus show that oxidized cholesterol-induced perturbations of age-related changes in lipid metabolism and immune function can be moderated by both the selection and the combination of dietary proteins.
Abstract:
Background: The prevalence of low bone mineral density (T-score <-1 SD) in postmenopausal women with a fragility fracture may vary from 70% to less than 50%. In one study (Siris ES. Arch Intern Med. 2004;164:1108-12), the prevalence of osteoporosis was very low, at 6.4%. The corresponding values in men are rarely reported. Methods: In a nationwide Swiss survey, all consecutive patients aged 50+ presenting to the emergency ward with one or more fractures were recruited by 8 participating hospitals (university hospitals: Basel, Bern, and Lausanne; cantonal hospitals: Fribourg, Luzern, and St Gallen; regional hospitals: Estavayer and Riaz) between 2004 and 2006. The diagnostic workup was collected for descriptive analysis. Results: 3667 consecutive patients with a fragility fracture, 2797 women (73.8 ± 11.6 years) and 870 men (70.0 ± 12.1 years), were included. DXA measurement was performed in 1152 (44%) patients. The mean of the lowest T-score values was -2.34 SD in women and -2.16 SD in men. In the 908 women, the prevalence of osteoporosis and osteopenia according to fracture type was: sacrum (100%, 0%), rib (100%, 0%), thoracic vertebral (78%, 22%), femur trochanter (67%, 26%), pelvis (66%, 32%), lumbar vertebral (63%, 28%), femoral neck (53%, 34%), femur shaft (50%, 50%), proximal humerus (50%, 34%), distal forearm (41%, 45%), proximal tibia (41%, 31%), lateral malleolus (28%, 46%), medial malleolus (13%, 47%). The corresponding percentages in the 244 men were: distal forearm (70%, 19%), rib (63%, 11%), pelvis (60%, 20%), medial malleolus (60%, 32%), femur trochanter (48%, 31%), thoracic vertebral (47%, 53%), lumbar vertebral (43%, 36%), proximal humerus (40%, 43%), femoral neck (28%, 55%), proximal tibia (26%, 36%), lateral malleolus (18%, 56%). Conclusion: The probability of underlying osteoporosis or osteopenia in men and women aged 50+ who experienced a fragility fracture exceeded 75% for fractures of the sacrum, pelvis, spine, femur, proximal humerus and distal forearm. Medial and lateral malleolar fractures had the lowest predictive value in women, but not in men.
Abstract:
A newly identified cytokine, osteoprotegerin (OPG), appears to be involved in the regulation of bone remodeling. In vitro studies suggest that OPG, a soluble member of the TNF receptor family of proteins, inhibits osteoclastogenesis by interrupting the intercellular signaling between osteoblastic stromal cells and osteoclast progenitors. As patients with chronic renal failure (CRF) often have renal osteodystrophy (ROD), we investigated the role of OPG in ROD and whether there was any relationship between serum OPG, intact parathyroid hormone (iPTH), vitamin D, and trabecular bone. Serum OPG combined with iPTH might be a useful tool in the noninvasive diagnosis of ROD, at least in cases in which the range of PTH values compromises reliable diagnosis. Thirty-six patients on maintenance hemodiafiltration (HDF) and a control group of 36 age- and sex-matched healthy subjects with no known metabolic bone disease were studied. The following assays were performed on serum: iPTH, osteocalcin (BGP), bone alkaline phosphatase (B-ALP), 25(OH)-cholecalciferol, calcium, phosphate, OPG, IGF-1, estradiol, and free testosterone. Serum Ca++, P, B-ALP, BGP, IGF-1, iPTH, and OPG levels were significantly higher in HDF patients than in controls, while DXA measurements and quantitative ultrasound (QUS) parameters were significantly lower. On grouping patients according to their mean OPG levels, we observed significantly lower serum IGF-1 and vitamin D3 concentrations and lower lumbar spine and hip bone mineral density in the high-OPG groups. No correlation was found between OPG and bone turnover markers, whereas a negative correlation was found between serum OPG and IGF-1 levels (r=-0.64, p=0.032). Serum iPTH concentrations were positively correlated with B-ALP (r=0.69, p=0.038) and BGP (r=0.92, p<0.001). These findings suggest that an increase in OPG levels may be a compensatory response to elevated bone loss. The low bone mineral density (BMD) levels found in the high-OPG group might have been due to the significant decrease in serum IGF-1 and vitamin D3 observed. In conclusion, the findings of the present study demonstrate that increased OPG in hemodiafiltration patients is only partly due to decreased renal clearance. As it may partly reflect a compensatory response to increased bone loss, this parameter might be helpful in identifying patients with a marked reduction in trabecular BMD.
Abstract:
The slow vacuolar (SV) channel, a Ca2+-regulated vacuolar cation conductance channel, is encoded in Arabidopsis thaliana by the single-copy gene AtTPC1. Although loss-of-function tpc1 mutants were reported to exhibit a stomatal phenotype, knowledge about the underlying guard cell-specific features of SV/TPC1 channels is still lacking. Here we demonstrate that TPC1 transcript levels and SV current density were much more pronounced in guard cells than in mesophyll cells. Furthermore, the SV channel in motor cells exhibited a higher cytosolic Ca2+ sensitivity than in mesophyll cells. These distinct features of the guard cell SV channel therefore probably account for the published stomatal phenotype of tpc1-2.