999 results for Joint conditional distributions


Relevance:

20.00%

Publisher:

Abstract:

The World Health Organization fracture risk assessment tool, FRAX(®), is an advance in clinical care that can assist in clinical decision-making. However, with increasing clinical utilization, numerous questions have arisen regarding how to best estimate fracture risk in an individual patient. Recognizing the need to assist clinicians in optimal use of FRAX(®), the International Osteoporosis Foundation (IOF) in conjunction with the International Society for Clinical Densitometry (ISCD) assembled an international panel of experts that ultimately developed joint Official Positions of the ISCD and IOF advising clinicians regarding FRAX(®) usage. As part of the process, the charge of the FRAX(®) Clinical Task Force was to review and synthesize data surrounding a number of recognized clinical risk factors including rheumatoid arthritis, smoking, alcohol, prior fracture, falls, bone turnover markers and glucocorticoid use. This synthesis was presented to the expert panel and constitutes the data on which the subsequent Official Positions are predicated. A summary of the Clinical Task Force composition and charge is presented here.

Relevance:

20.00%

Publisher:

Abstract:

We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations that require quantifying the grain sizes in a rock's microstructure. The primary grain size data are either 1D (i.e., line intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different data treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode or skewness of a crystal size distribution. The resulting average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain-size-sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors make it possible to render different datasets compatible with each other, even though the primary calculations were obtained in different ways. To report an average grain size, we propose using the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements in the case of unimodal grain size distributions. The shape of the crystal size distribution (CSD) is important for studies of nucleation and growth of minerals. The shape of the CSD of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. The comparison of directly measured 3D data, stereological data and directly presented 2D data shows the limited quality of the smallest grain sizes and the overestimation of small grain sizes by stereological tools, depending on the type of CSD.
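As an illustration of the proposed averaging conventions, the following is a minimal sketch (our own, not from the paper) of an area-weighted mean computed from 2D grain areas and a volume-weighted mean computed from 3D grain volumes; the equivalent-circle/equivalent-sphere diameter definitions and the synthetic lognormal populations are assumptions made for the example.

```python
import numpy as np

def area_weighted_mean_diameter(areas):
    """Area-weighted mean of equivalent-circle diameters from 2D grain section areas."""
    areas = np.asarray(areas, dtype=float)
    d = 2.0 * np.sqrt(areas / np.pi)           # equivalent-circle diameter of each grain
    return np.sum(areas * d) / np.sum(areas)   # weight each grain by its sectional area

def volume_weighted_mean_diameter(volumes):
    """Volume-weighted mean of equivalent-sphere diameters from 3D grain volumes."""
    volumes = np.asarray(volumes, dtype=float)
    d = (6.0 * volumes / np.pi) ** (1.0 / 3.0)   # equivalent-sphere diameter of each grain
    return np.sum(volumes * d) / np.sum(volumes)

# Synthetic, roughly unimodal grain populations (arbitrary units), for illustration only
rng = np.random.default_rng(0)
areas = rng.lognormal(mean=3.0, sigma=0.5, size=500)      # hypothetical 2D section areas
volumes = rng.lognormal(mean=4.5, sigma=0.7, size=500)    # hypothetical 3D grain volumes
print(area_weighted_mean_diameter(areas), volume_weighted_mean_diameter(volumes))
```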

Relevance:

20.00%

Publisher:

Abstract:

The speed and width of front solutions to reaction-dispersal models are analyzed both analytically and numerically. We perform our analysis for Laplace and Gaussian distribution kernels, both for delayed and nondelayed models. The results are discussed in terms of the characteristic parameters of the models.
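The abstract does not state the governing equations, but a standard route to front speeds in nondelayed reaction-dispersal models is the linear-spreading (pulled-front) condition, which minimizes c(λ) = [r + D(M(λ) − 1)]/λ over the decay rate λ, with M the moment generating function of the dispersal kernel. The sketch below evaluates this for Gaussian and Laplace kernels under that assumed model; it is an illustration, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed model (not necessarily the paper's): u_t = D*(J*u - u) + r*u*(1 - u),
# whose pulled-front speed is c* = min over lambda > 0 of [D*(M(lambda) - 1) + r] / lambda,
# with M the moment generating function of the dispersal kernel J.

def front_speed(mgf, r, D=1.0, lam_max=10.0):
    """Numerically minimize the linear-spreading speed c(lambda) over a bounded interval."""
    c = lambda lam: (D * (mgf(lam) - 1.0) + r) / lam
    return minimize_scalar(c, bounds=(1e-6, lam_max), method="bounded").fun

r, sigma, b = 0.5, 1.0, 1.0
mgf_gauss = lambda lam: np.exp(0.5 * (sigma * lam) ** 2)   # Gaussian kernel, standard deviation sigma
mgf_laplace = lambda lam: 1.0 / (1.0 - (b * lam) ** 2)     # Laplace kernel, defined for |lam| < 1/b

print("Gaussian kernel front speed:", front_speed(mgf_gauss, r))
print("Laplace kernel front speed:", front_speed(mgf_laplace, r, lam_max=0.999 / b))
```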

Relevance:

20.00%

Publisher:

Abstract:

Background: Infection after total or partial hip arthroplasty (HA) leads to significant long-term morbidity and high healthcare costs. We evaluated reasons for treatment failure of different surgical modalities in a 12-year prosthetic hip joint infection cohort study.
Method: All patients hospitalized at our institution with an infected HA were included either retrospectively (1999-2007) or prospectively (2008-2010). HA infection was defined as growth of the same microorganism in ≥2 tissue or synovial fluid cultures, visible purulence, a sinus tract, or acute inflammation on tissue histopathology. Outcome analysis was performed at outpatient visits, followed by contacting patients, their relatives and/or treating physicians afterwards.
Results: During the study period, 117 patients with infected HA were identified. We excluded 2 patients due to missing data. The average age was 69 years (range, 33-102 years); 42% were female. HA was mainly performed for osteoarthritis (n=84), followed by trauma (n=22), necrosis (n=4), dysplasia (n=2), rheumatoid arthritis (n=1), osteosarcoma (n=1) and tuberculosis (n=1). 28 infections occurred early (≤3 months), 25 delayed (3-24 months) and 63 late (≥24 months after surgery). Infected HA were treated with (i) two-stage exchange in 59 patients (51%, cure rate: 93%), (ii) one-stage exchange in 5 (4.3%, cure rate: 100%), (iii) debridement with change of mobile parts in 18 (17%, cure rate: 83%), (iv) debridement without change of mobile parts in 17 (14%, cure rate: 53%), (v) Girdlestone in 13 (11%, cure rate: 100%), and (vi) two-stage exchange followed by removal in 3 (2.6%). Patients were followed for an average of 3.9 years (range, 0.1 to 9 years); 7 patients died of causes unrelated to the infected HA. 15 patients (13%) needed additional operations, 1 for mechanical reasons (dislocation of a spacer) and 14 for persistent infection: 11 were treated with debridement and retention (8 without and 3 with change of mobile parts) and 3 with two-stage exchange. The average number of surgeries was 2.2 (range, 1 to 5). The infection was ultimately eradicated in all patients, but the functional outcome remained unsatisfactory in 20% (persistent pain or impaired mobility due to a spacer or Girdlestone situation).
Conclusions: Non-adherence to the current treatment concept leads to treatment failure and subsequent operations. Precise analysis of each treatment failure can be used to improve the treatment algorithm, leading to better results.

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the analysis of cases in which the inclusion or exclusion of a particular suspect, as a possible contributor to a DNA mixture, depends on the value of a variable (the number of contributors) that cannot be determined with certainty. It offers alternative ways to deal with such cases, including sensitivity analysis and object-oriented Bayesian networks, that separate uncertainty about the inclusion of the suspect from uncertainty about other variables. The paper presents a case study in which the value of DNA evidence varies radically depending on the number of contributors to a DNA mixture: if there are two contributors, the suspect is excluded; if there are three or more, the suspect is included; but the number of contributors cannot be determined with certainty. It shows how an object-oriented Bayesian network can accommodate and integrate varying perspectives on the unknown variable and how it can reduce the potential for bias by directing attention to relevant considerations and distinguishing different sources of uncertainty. It also discusses the challenge of presenting such evidence to lay audiences.
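The following is a simplified numerical sketch of the sensitivity-analysis idea (not the paper's object-oriented Bayesian network, and all probabilities are invented for illustration): the number of contributors is marginalized out of the numerator and denominator of the likelihood ratio, and the result is then re-examined as the prior probability of "two contributors" varies.

```python
# All probabilities below are hypothetical; they are not taken from the case study.
# P(E | Hp, n): probability of the observed mixture if the suspect is a contributor, given n contributors.
# P(E | Hd, n): probability of the observed mixture if the suspect is not a contributor, given n contributors.
p_e_hp = {2: 0.0, 3: 3.0e-6, 4: 1.0e-6}    # suspect excluded if there are only two contributors
p_e_hd = {2: 2.0e-8, 3: 2.0e-8, 4: 2.5e-8}
prior_n = {2: 0.3, 3: 0.5, 4: 0.2}          # prior over the unknown number of contributors

def marginal_lr(prior):
    """Likelihood ratio with the number of contributors marginalized out of both hypotheses."""
    num = sum(prior[n] * p_e_hp[n] for n in prior)
    den = sum(prior[n] * p_e_hd[n] for n in prior)
    return num / den

print(f"Marginal LR: {marginal_lr(prior_n):.1f}")

# Sensitivity analysis: how the weight of evidence shifts as P(two contributors) varies
for p2 in (0.1, 0.3, 0.5, 0.7, 0.9):
    scale = (1.0 - p2) / (prior_n[3] + prior_n[4])
    prior = {2: p2, 3: prior_n[3] * scale, 4: prior_n[4] * scale}
    print(f"P(n=2) = {p2:.1f} -> marginal LR = {marginal_lr(prior):.1f}")
```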

Relevance:

20.00%

Publisher:

Abstract:

Audit report on the Iowa Water Pollution Control Works Financing Program (Clean Water Program) and the Iowa Drinking Water Facilities Financing Program (Drinking Water Program), joint programs of the Iowa Finance Authority and the Iowa Department of Natural Resources, for the year ended June 30, 2009.

Relevance:

20.00%

Publisher:

Abstract:

On 11 and 12 August, the working seminar "Library and Information Science Education in Europe: Issues in joint curriculum development and Bologna perspectives" was held in Copenhagen, Denmark. The seminar, coordinated by the Royal School of Library and Information Science of Denmark in collaboration with the European Association for Library and Information Education and Research (EUCLID), was organized within the framework of a European project funded by the Socrates programme. The Facultat de Biblioteconomia i Documentació of the Universitat de Barcelona, which sat on the EUCLID Board between 2001 and 2005, took part as a project partner. The aim of the seminar was to bring together some fifty European experts in library and information science (all of them lecturers at European university schools and faculties) to discuss issues related to the curricula of these programmes from the perspective of the Bologna process. The seminar consisted of two lectures and working sessions of twelve expert groups, which examined twelve broad topics (agreed beforehand by the organizers of the event) related to the curricula of those programmes.

Relevance:

20.00%

Publisher:

Abstract:

Prediction of species' distributions is central to diverse applications in ecology, evolution and conservation science. There is increasing electronic access to vast sets of occurrence records in museums and herbaria, yet little effective guidance on how best to use this information in the context of numerous approaches for modelling distributions. To meet this need, we compared 16 modelling methods over 226 species from 6 regions of the world, creating the most comprehensive set of model comparisons to date. We used presence-only data to fit models, and independent presence-absence data to evaluate the predictions. Along with well-established modelling methods such as generalised additive models, GARP and BIOCLIM, we explored methods that either have been developed recently or have rarely been applied to modelling species' distributions. These include machine-learning methods and community models, both of which have features that may make them particularly well suited to noisy or sparse information, as is typical of species' occurrence data. Presence-only data were effective for modelling species' distributions for many species and regions. The novel methods consistently outperformed more established methods. The results of our analysis are promising for the use of data from museums and herbaria, especially as methods suited to the noise inherent in such data improve.
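A minimal sketch of the evaluation setup described above (fit on presence-only records plus background points, evaluate on independent presence-absence data): the random-forest model, the scikit-learn API and the synthetic environmental layers are our own assumptions, not one of the 16 methods compared in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical environment: two predictor layers over 10,000 candidate sites
env = rng.normal(size=(10_000, 2))
true_suitability = 1.0 / (1.0 + np.exp(-(2.0 * env[:, 0] - 1.5 * env[:, 1])))

# Presence-only training data: occurrence records plus random background points
presence_idx = rng.choice(10_000, size=300, p=true_suitability / true_suitability.sum())
background_idx = rng.choice(10_000, size=1_000)
X_train = np.vstack([env[presence_idx], env[background_idx]])
y_train = np.concatenate([np.ones(len(presence_idx)), np.zeros(len(background_idx))])

# Independent presence-absence data used only for evaluation
eval_idx = rng.choice(10_000, size=500)
y_eval = rng.random(500) < true_suitability[eval_idx]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_eval, model.predict_proba(env[eval_idx])[:, 1])
print(f"AUC on independent presence-absence data: {auc:.2f}")
```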

Relevance:

20.00%

Publisher:

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to depths of several hundred kilometers. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, implying posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
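As a rough illustration of the model-reduction idea described above (not the thesis implementation), the sketch below parameterizes a plume anomaly by a small set of Legendre coefficients plus a location, so that a sampler explores a handful of shape parameters rather than a full pixel grid; the basis truncation, half-width and mapping to a conductivity perturbation are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def plume_from_coefficients(coeffs_x, coeffs_z, center, grid_x, grid_z, half_width=10.0):
    """Build a 2D plume anomaly from truncated Legendre expansions along x and z.

    coeffs_x, coeffs_z : low-order Legendre coefficients describing the plume shape
    center             : (x0, z0) plume location, also treated as a sampled parameter
    half_width         : assumed plume half-extent (model units)
    """
    # Map grid coordinates into the [-1, 1] support of the Legendre polynomials,
    # centered on the current plume location estimate.
    xi = np.clip((grid_x - center[0]) / half_width, -1.0, 1.0)
    zi = np.clip((grid_z - center[1]) / half_width, -1.0, 1.0)
    anomaly = np.outer(legendre.legval(zi, coeffs_z), legendre.legval(xi, coeffs_x))
    return np.clip(anomaly, 0.0, None)   # keep only positive conductivity perturbations here

# A handful of coefficients replaces thousands of pixel unknowns in the time-lapse inversion
grid_x = np.linspace(-25.0, 25.0, 101)
grid_z = np.linspace(0.0, 50.0, 101)
anomaly = plume_from_coefficients(coeffs_x=[0.6, 0.0, -0.3],
                                  coeffs_z=[0.5, 0.2, -0.2],
                                  center=(5.0, 20.0),
                                  grid_x=grid_x, grid_z=grid_z)
print(anomaly.shape, float(anomaly.max()))
```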

Relevance:

20.00%

Publisher:

Abstract:

Background: Brain-Derived Neurotrophic Factor (BDNF) is the main candidate for neuroprotective therapy in Huntington's disease (HD), but its conditional administration is one of its most challenging problems. Results: Here we used transgenic mice that over-express BDNF under the control of the Glial Fibrillary Acidic Protein (GFAP) promoter (pGFAP-BDNF mice) to test whether up-regulation and release of BDNF, dependent on astrogliosis, could be protective in HD. Thus, we cross-mated pGFAP-BDNF mice with R6/2 mice to generate a double-mutant mouse carrying mutant huntingtin protein and a conditional over-expression of BDNF, only under pathological conditions. In these R6/2:pGFAP-BDNF animals, the decrease in striatal BDNF levels induced by mutant huntingtin was prevented in comparison to R6/2 animals at 12 weeks of age. The recovery of neurotrophin levels in R6/2:pGFAP-BDNF mice correlated with an improvement in several motor coordination tasks and with a significant delay in anxiety and clasping alterations. Therefore, we next examined a possible improvement in cortico-striatal connectivity in R6/2:pGFAP-BDNF mice. Interestingly, we found that the over-expression of BDNF prevented the decrease of cortico-striatal presynaptic (VGLUT1) and postsynaptic (PSD-95) markers in the R6/2:pGFAP-BDNF striatum. Electrophysiological studies also showed that basal synaptic transmission and synaptic fatigue both improved in R6/2:pGFAP-BDNF mice. Conclusions: These results indicate that the conditional administration of BDNF under the GFAP promoter could become a therapeutic strategy for HD due to its positive effects on synaptic plasticity.