933 results for Bayesian hierarchical linear model


Relevance:

100.00%

Publisher:

Abstract:

A new methodology is being devised for ensemble ocean forecasting using distributions of the surface wind field derived from a Bayesian Hierarchical Model (BHM). The ocean members are forced with samples from the posterior distribution of the wind during the assimilation of satellite and in-situ ocean data. The initial-condition perturbations are then consistent with the best available knowledge of the ocean state at the beginning of the forecast and amplify the ocean response to uncertainty in the forcing only. The ECMWF Ensemble Prediction System (EPS) surface winds are also used to generate a reference ocean ensemble to evaluate the performance of the BHM method, which proves effective in concentrating the forecast uncertainty at the ocean mesoscale. An eight-month experiment of weekly BHM ensemble forecasts was performed in the framework of the operational Mediterranean Forecasting System. The statistical properties of the ensemble are compared with model errors throughout the seasonal cycle, demonstrating a strong relationship between forecast uncertainties due to atmospheric forcing and the seasonal cycle.
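As a rough sketch of the forcing strategy (not the operational BHM, whose posterior comes from data assimilation of real satellite and in-situ data), each ensemble member can be driven by one draw from a spatially correlated posterior of the wind field. The grid, posterior mean, and covariance below are invented for illustration:

```python
import numpy as np

# Hypothetical wind posterior over a 1-D strip of 50 ocean grid points
rng = np.random.default_rng(0)
n_grid = 50
posterior_mean = 8.0 + 2.0 * np.sin(np.linspace(0.0, np.pi, n_grid))  # m/s
x = np.arange(n_grid)
# Assumed squared-exponential spatial covariance (length scale: 10 grid points)
cov = 1.5 * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * 10.0 ** 2))

# One wind-field draw per ocean ensemble member
n_members = 20
wind_samples = rng.multivariate_normal(posterior_mean, cov, size=n_members)

# Toy ocean response: wind stress proportional to wind speed squared, so the
# spread in the forcing reflects only the posterior wind uncertainty
stress = 1.2e-3 * wind_samples ** 2
ensemble_spread = stress.std(axis=0).mean()
```

Each row of `wind_samples` would force one ocean integration; the spread of the resulting forecasts then isolates the effect of wind uncertainty.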

Relevance:

100.00%

Publisher:

Abstract:

This study presents an analysis of rural landscape changes, focusing on the driving forces acting on the rural built environment through a statistical spatial model implemented with GIS techniques. The study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment, so a theoretical modelling approach for this purpose is needed. Advances in technology and modern methods in building construction and agriculture have gradually changed the rural built environment. In addition, urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of this study is to propose a methodology for developing a spatial model that identifies the driving forces behind building allocation; indeed, one of the most concerning dynamics today is the irrational sprawl of buildings across the landscape. The proposed methodology consists of several conceptual steps covering the different aspects of spatial-model development: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm given the statistical theory and methods used, and the calibration and evaluation of the model.
A different combination of factors in the various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents unsuitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of these driving conditions, since they express the action of the driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS spatial-analysis tools allow the concepts of presence and absence to be associated with point features, generating a point process. Presence or absence of buildings at given site locations expresses the interaction of these driving factors. Presences are points at the locations of real, existing buildings; absences are locations where buildings do not exist, generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory-variable analysis and for identifying the key driving variables behind the site-selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology: the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural, and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated on spatial data for the periurban and rural parts of the study area over the 1975-2005 period by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values between 0 and 1 for building occurrence across the rural and periurban parts of the study area. The response variable thus captures the changes in the rural built environment that occurred in this time interval and is related to the selected explanatory variables through a generalized linear model with logistic regression. Comparing the probability map obtained from the model with the actual rural building distribution in 2005 allows the interpretive capability of the model to be evaluated. The proposed model can also be applied to interpret trends in other study areas and over different time intervals, depending on data availability. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. Predicting future scenarios requires assuming that the driving forces do not change and that their levels of influence within the model remain close to those assessed for the calibration interval.
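The presence/absence step described above can be sketched as a logistic GLM fitted by Newton-Raphson (iteratively reweighted least squares). The covariates, coefficients, and data below are hypothetical, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
dist_road = rng.uniform(0, 5, n)   # km to nearest road (hypothetical driver)
slope = rng.uniform(0, 30, n)      # terrain slope in degrees (hypothetical)
X = np.column_stack([np.ones(n), dist_road, slope])
true_beta = np.array([1.5, -0.8, -0.05])  # buildings favour accessible, flat land
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)             # presence (1) / absence (0) of a building

# Fit the logistic GLM by Newton-Raphson (IRLS)
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))

# Fitted probability surface: values in (0, 1), the analogue of the grid map
prob_map = 1 / (1 + np.exp(-X @ beta))
```

With real data, `X` would hold the geomorphologic and infrastructural predictors sampled at grid cells, and `prob_map` reshaped onto the grid would be the probability map compared with the 2005 building distribution.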

Relevance:

100.00%

Publisher:

Abstract:

Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments of efficient algorithms to implement these procedures enhance the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and remain valid even when the function of the confounders in the model is not correctly specified. The new proposals are illustrated with two examples, and their validity for practical sample sizes is demonstrated via a simulation study.

Relevance:

100.00%

Publisher:

Abstract:

The Receiver Operating Characteristic (ROC) curve is a prominent tool for characterizing the accuracy of a continuous diagnostic test. To account for factors that might influence the test accuracy, various ROC regression methods have been proposed. However, as in any regression analysis, when the assumed models do not fit the data well, these methods may yield invalid and misleading results. To date, practical model-checking techniques suitable for validating existing ROC regression models have not been available. In this paper, we develop cumulative-residual-based procedures to graphically and numerically assess the goodness of fit of some commonly used ROC regression models, and show how specific components of these models can be examined within this framework. We derive asymptotic null distributions for the residual process and discuss resampling procedures to approximate these distributions in practice. We illustrate our methods with a dataset from the Cystic Fibrosis registry.
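A minimal sketch of the cumulative-residual idea, applied here to an ordinary logistic model rather than an ROC regression model (data and model are invented, and the resampling step that calibrates the supremum statistic is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=n)                       # covariate
p = 1 / (1 + np.exp(-(0.5 + z)))
y = rng.binomial(1, p)                       # binary response

# Fit the (correctly specified) logistic model by Newton-Raphson
X = np.column_stack([np.ones(n), z])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))

# Cumulative residual process over the ordered covariate: under a good fit
# it fluctuates around zero; systematic drift signals lack of fit
resid = y - 1 / (1 + np.exp(-X @ beta))
order = np.argsort(z)
W_process = np.cumsum(resid[order]) / np.sqrt(n)
sup_stat = np.abs(W_process).max()
```

In the paper's framework, `sup_stat` would be compared against resampled realizations of the process to obtain a p-value; here it is only computed.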

Relevance:

100.00%

Publisher:

Abstract:

Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error-reporting systems that collect information on the causes and contributing factors of medical errors, regardless of the resulting harm, may be useful for developing effective harm-prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient; only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to those of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given that harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log odds is used as a measure of the evidence supporting the use of data from near misses, and their causes and contributing factors, to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
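The quantity at the heart of this analysis, the correlation between cause-selection log odds given harm and given no harm, can be sketched with plain empirical (non-Bayesian, no pooling) estimates on hypothetical counts:

```python
import numpy as np

# Hypothetical counts of how often each of 5 causes was cited in harm
# reports vs. near-miss (no-harm) reports
harm_counts = np.array([120, 45, 200, 30, 85])
nearmiss_counts = np.array([1100, 400, 1900, 260, 800])

def log_odds(counts):
    """Log odds of selecting each cause among all cited causes."""
    p = counts / counts.sum()
    return np.log(p / (1 - p))

lo_harm = log_odds(harm_counts)
lo_near = log_odds(nearmiss_counts)

# High correlation: causes of harmful errors resemble causes of near misses
corr = np.corrcoef(lo_harm, lo_near)[0, 1]

# Per-cause log-odds ratio of harm vs. no harm flags causes over- or
# under-represented among harmful errors
log_odds_ratio = lo_harm - lo_near
```

The paper instead places a hierarchical prior on these quantities and examines the posterior of the correlation, which also propagates the sampling uncertainty the point estimates above ignore.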

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: Obstructive sleep apnea (OSA) can have adverse effects on cognitive functioning, mood, and cardiovascular functioning. OSA brings with it disturbances in sleep architecture, oxygenation, sympathetic nervous system function, and inflammatory processes. It is not clear which of these mechanisms is linked to the decrease in cognitive functioning. This study examined the effect of inflammatory parameters on cognitive dysfunction. MATERIALS AND METHODS: Thirty-nine patients with untreated sleep apnea were evaluated by polysomnography and completed a battery of neuropsychological tests. After the first night of evaluation in the sleep laboratory, blood samples were taken for analysis of interleukin 6, tumor necrosis factor-alpha (TNF-alpha), and soluble TNF receptor 1 (sTNF-R1). RESULTS: sTNF-R1 correlated significantly with cognitive dysfunction. In a hierarchical linear regression analysis, measures of obstructive sleep apnea severity explained 5.5% of the variance in cognitive dysfunction (n.s.). After including sTNF-R1, the percentage of variance explained by the full model increased more than threefold to 19.6% (F = 2.84, df = 3, 36, p = 0.05). Only sTNF-R1 had a significant individual relationship with cognitive dysfunction (beta = 0.376, t = 2.48, p = 0.02). CONCLUSIONS: sTNF-R1, as a marker of chronic inflammation, may be associated with diminished neuropsychological functioning in patients with OSA.
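The hierarchical (stepwise) regression logic, comparing variance explained before and after adding sTNF-R1 to the severity measures, can be sketched as follows. All variables and effect sizes are simulated, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 39                                  # same sample size as the study
ahi = rng.normal(30, 10, n)             # OSA severity index (hypothetical scale)
min_spo2 = rng.normal(82, 6, n)         # minimum oxygen saturation (hypothetical)
stnf_r1 = rng.normal(1.2, 0.3, n)       # inflammatory marker (hypothetical units)
# Simulated cognition score driven by the marker only
cog = 0.4 * stnf_r1 + rng.normal(0, 0.2, n)

def r2(X, y):
    """In-sample R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

r2_base = r2(np.column_stack([ahi, min_spo2]), cog)           # step 1: severity
r2_full = r2(np.column_stack([ahi, min_spo2, stnf_r1]), cog)  # step 2: + sTNF-R1
delta_r2 = r2_full - r2_base            # incremental variance explained
```

The study's F = 2.84 corresponds to a significance test of such a model; `delta_r2` is the quantity behind the reported jump from 5.5% to 19.6%.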

Relevance:

100.00%

Publisher:

Abstract:

Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of patients surviving more than 5 years after diagnosis. Until recently, brain tumor prognosis was based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiles of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. To evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (U133 Affymetrix gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e., short- vs. long-term survival) and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Because of the large heterogeneity observed within the prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death to the gene expression profile directly. We propose a Bayesian ensemble model for survival prediction that is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the Cox proportional hazards (CPH), Weibull, and accelerated failure time (AFT) survival models. We overcome the lack of conjugacy with a latent-variable formulation for modeling the covariate effects, which decreases the computation time for model fitting.
Our proposed models also provide a model-free way to select important prognostic markers based on controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished data set of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares the results obtained by the Random Forest and Bayesian ensemble methods from biological/clinical perspectives and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
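The "sum-of-trees" structure can be illustrated with a greedy boosted-stump sketch on simulated expression data. This is emphatically not the authors' Bayesian ensemble (no priors, no MCMC, no censoring), just the additive tree idea on uncensored log survival times with invented genes:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_genes = 200, 50
X = rng.normal(size=(n, n_genes))       # hypothetical expression matrix
# Hypothetical log survival time driven by two genes plus noise
log_t = 3.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, n)

def fit_stump(x, r):
    """Best single-split regression tree (stump) on one covariate."""
    best = (np.inf, 0.0, r.mean(), r.mean())
    for s in np.quantile(x, [0.25, 0.5, 0.75]):
        left, right = r[x <= s], r[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    return best[1:]

# Greedy sum-of-trees: each round adds a shrunken stump on the gene most
# correlated with the current residual
pred = np.zeros(n)
used_genes = []
for _ in range(100):
    r = log_t - pred
    scores = [abs(np.corrcoef(X[:, j], r)[0, 1]) for j in range(n_genes)]
    j = int(np.argmax(scores))
    used_genes.append(j)
    s, lmean, rmean = fit_stump(X[:, j], r)
    pred += 0.2 * np.where(X[:, j] <= s, lmean, rmean)

resid_var = np.var(log_t - pred)        # training residual variance shrinks
```

The genes that enter `used_genes` repeatedly play the role of selected prognostic markers; the Bayesian version replaces the greedy selection with posterior sampling over trees and handles censoring via the latent-variable formulation.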

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND Pathology studies have shown delayed arterial healing in culprit lesions of patients with acute coronary syndrome (ACS) compared with stable coronary artery disease (CAD) after placement of drug-eluting stents (DES). It is unknown whether similar differences exist in vivo during long-term follow-up. Using optical coherence tomography (OCT), we assessed differences in arterial healing between patients with ACS and stable CAD five years after DES implantation. METHODS AND RESULTS A total of 88 patients, comprising 53 ACS lesions with 7864 struts and 35 stable lesions with 5298 struts, were suitable for final OCT analysis five years after DES implantation. The analytical approach was based on a hierarchical Bayesian random-effects model. OCT endpoints were strut coverage, malapposition, protrusion, evaginations, and cluster formation. Uncovered (1.7% vs. 0.7%, adjusted p=0.041) or protruding struts (0.50% vs. 0.13%, adjusted p=0.038) were more frequent in ACS than in stable CAD lesions. A similar trend was observed for malapposed struts (1.33% vs. 0.45%, adj. p=0.072). Clusters of uncovered or malapposed/protruding struts were present in 34.0% of ACS and 14.1% of stable patients (adj. p=0.041). Coronary evaginations were more frequent in patients with ST-elevation myocardial infarction than in stable CAD patients (0.16 vs. 0.13 per cross-section, p=0.027). CONCLUSION Uncovered, malapposed, and protruding stent struts as well as clusters of delayed healing may be more frequent in culprit lesions of ACS than of stable CAD patients late after DES implantation. Our observational findings suggest a differential healing response in vivo attributable to the lesion characteristics of patients with ACS compared with stable CAD.
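Because struts are clustered within lesions, per-lesion rates need partial pooling, which is what the hierarchical random-effects model provides. A moment-based empirical-Bayes sketch of that pooling, on invented counts, looks like this:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical per-lesion totals and uncovered-strut counts for 30 lesions
n_struts = rng.integers(80, 250, size=30)
true_rate = rng.beta(2, 100, size=30)     # lesion-to-lesion rate variation
uncovered = rng.binomial(n_struts, true_rate)

raw = uncovered / n_struts                # naive per-lesion rates

# Method-of-moments beta-binomial shrinkage toward the pooled rate:
# between-lesion variance = total variance minus average binomial variance
pooled = uncovered.sum() / n_struts.sum()
between_var = max(raw.var() - (pooled * (1 - pooled) / n_struts).mean(), 1e-8)
prior_strength = pooled * (1 - pooled) / between_var   # "pseudo-struts" in prior
shrunk = (uncovered + prior_strength * pooled) / (n_struts + prior_strength)
```

Small lesions are pulled most strongly toward the pooled rate, exactly the stabilizing behaviour the fully Bayesian model achieves with posterior sampling rather than moment matching.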

Relevance:

100.00%

Publisher:

Abstract:

In the literature, contrasting effects of plant species richness on the soil water balance are reported. Our objective was to assess the effects of plant species and functional richness and functional identity on soil water contents and water fluxes in the experimental grassland of the Jena Experiment. The Jena Experiment comprises 86 plots on which plant species richness (0, 1, 2, 4, 8, 16, and 60) and functional group composition (zero to four functional groups: legumes, grasses, tall herbs, and small herbs) were manipulated in a factorial design. We recorded meteorological data and soil water contents of the 0.0-0.3 and 0.3-0.7 m soil layers and calculated actual evapotranspiration (ETa), downward flux (DF), and capillary rise with a soil water balance model for the period 2003-2007. Missing water contents were estimated with a Bayesian hierarchical model. Species richness decreased water contents in subsoil during wet soil conditions. Presence of tall herbs increased soil water contents in topsoil during dry conditions and decreased soil water contents in subsoil during wet conditions. Presence of grasses generally decreased water contents in topsoil, particularly during dry phases; increased ETa and decreased DF from topsoil; and decreased ETa from subsoil. Presence of legumes, in contrast, decreased ETa and increased DF from topsoil and increased ETa from subsoil. Species richness probably resulted in complementary water use. Specific functional groups likely affected the water balance via specific root traits (e.g. shallow dense roots of grasses and deep taproots of tall herbs) or specific shading intensity caused by functional group effects on vegetation cover. Copyright © 2013 John Wiley & Sons, Ltd.
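The water-balance bookkeeping (ETa and DF computed from layered soil stores) can be sketched with a minimal two-layer bucket model. Capacities, inputs, and rules below are invented and far simpler than the study's model, which also includes capillary rise:

```python
# Two-layer bucket water balance: daily update of topsoil and subsoil stores
# from precipitation, with drainage (DF) once the topsoil exceeds capacity
# and evapotranspiration (ETa) limited by available topsoil water.
def step(topsoil, subsoil, precip, pet, top_cap=90.0, sub_cap=120.0):
    """One daily time step; all storages and fluxes in mm."""
    topsoil += precip
    df = max(topsoil - top_cap, 0.0)   # downward flux past field capacity
    topsoil -= df
    subsoil = min(subsoil + df, sub_cap)
    eta = min(pet, topsoil)            # ETa capped by potential ET and storage
    topsoil -= eta
    return topsoil, subsoil, eta, df

top, sub = 60.0, 100.0                 # initial storages (mm, hypothetical)
for precip, pet in [(12.0, 3.0), (0.0, 4.5), (25.0, 2.0)]:
    top, sub, eta, df = step(top, sub, precip, pet)
```

Summing `eta` and `df` over the season gives the cumulative fluxes that the study relates to species richness and functional group presence.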

Relevance:

100.00%

Publisher:

Abstract:

Temperate C3-grasslands are of high agricultural and ecological importance in Central Europe. Plant growth, and consequently grassland yield, depends strongly on water supply during the growing season, which is projected to change in the future. We therefore investigated the effect of summer drought on the water uptake of an intensively managed lowland and an extensively managed sub-alpine grassland in Switzerland. Summer drought was simulated using transparent shelters. Standing above- and belowground biomass was sampled during three growing seasons. Soil and plant xylem waters were analyzed for oxygen (and hydrogen) stable isotope ratios, and the depths of plant water uptake were estimated by two different approaches: (1) a linear interpolation method and (2) a Bayesian-calibrated mixing model. Relative to the control, aboveground biomass was reduced under drought conditions. In contrast to our expectations, lowland grassland plants subjected to summer drought were more likely (43-68 %) to rely on water in the topsoil (0-10 cm), whereas control plants relied less on the topsoil (4-37 %) and shifted to deeper soil layers (20-35 cm) during the drought period (29-48 %). Sub-alpine grassland plants did not differ significantly in uptake depth between drought and control plots during the drought period. Both approaches yielded similar results and showed that the drought treatment in the two grasslands did not induce a shift to deeper uptake depths, but rather continued or even shifted water uptake to shallower soil depths. These findings illustrate the importance of shallow soil depths for plant performance under drought conditions.
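Approach (1), the linear interpolation between two isotopic end members, reduces to a two-source mass balance; the delta-18O values below are hypothetical, not the study's measurements:

```python
# Two-source mixing: the xylem signature is a weighted average of the
# shallow and deep soil-water signatures,
#   delta_xylem = f * delta_shallow + (1 - f) * delta_deep,
# so the shallow-uptake fraction f follows by rearranging.
def shallow_fraction(delta_xylem, delta_shallow, delta_deep):
    return (delta_xylem - delta_deep) / (delta_shallow - delta_deep)

# Hypothetical permil values: evaporatively enriched topsoil, depleted subsoil
f = shallow_fraction(delta_xylem=-7.0, delta_shallow=-4.0, delta_deep=-11.0)
print(round(f, 3))   # → 0.571
```

The Bayesian mixing model of approach (2) generalizes this to more than two sources and yields full posterior distributions for the source fractions instead of a single point estimate.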

Relevance:

100.00%

Publisher:

Abstract:

This study assesses the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM), and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with the precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is the drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to improve correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause an especially short decorrelation distance from the proxy location.
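The variance underestimation reported for CCA is the generic behaviour of regression-based reconstruction from noisy proxies, which even a univariate pseudoproxy sketch shows (all series below are simulated, not from the study):

```python
import numpy as np

rng = np.random.default_rng(6)
n_years = 500
target = rng.normal(0, 1.0, n_years)                 # "true" precipitation series
pseudoproxy = target + rng.normal(0, 1.0, n_years)   # proxy: 50% noise variance

# Calibrate a proxy -> target regression on the first half of the record,
# then reconstruct the withheld second half
cal, val = slice(0, 250), slice(250, 500)
b = np.cov(pseudoproxy[cal], target[cal])[0, 1] / pseudoproxy[cal].var()
a = target[cal].mean() - b * pseudoproxy[cal].mean()
recon = a + b * pseudoproxy[val]

# Regression damps amplitude: reconstructed variance < target variance
variance_ratio = recon.var() / target[val].var()
```

With 50% proxy noise the expected ratio is about 0.5; this damping is what BHM avoids by modeling the noise explicitly, at the price of the prescribed correlation structure noted above.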

Relevance:

100.00%

Publisher:

Abstract:

The spatial context is critical when assessing present-day climate anomalies, attributing them to potential forcings and making statements regarding their frequency and severity in a long-term perspective. Recent international initiatives have expanded the number of high-quality proxy-records and developed new statistical reconstruction methods. These advances allow more rigorous regional past temperature reconstructions and, in turn, the possibility of evaluating climate models on policy-relevant, spatio-temporal scales. Here we provide a new proxy-based, annually-resolved, spatial reconstruction of the European summer (June–August) temperature fields back to 755 CE based on Bayesian hierarchical modelling (BHM), together with estimates of the European mean temperature variation since 138 BCE based on BHM and composite-plus-scaling (CPS). Our reconstructions compare well with independent instrumental and proxy-based temperature estimates, but suggest a larger amplitude in summer temperature variability than previously reported. Both CPS and BHM reconstructions indicate that the mean 20th century European summer temperature was not significantly different from some earlier centuries, including the 1st, 2nd, 8th and 10th centuries CE. The 1st century (in BHM also the 10th century) may even have been slightly warmer than the 20th century, but the difference is not statistically significant. Comparing each 50 yr period with the 1951–2000 period reveals a similar pattern. Recent summers, however, have been unusually warm in the context of the last two millennia and there are no 30 yr periods in either reconstruction that exceed the mean average European summer temperature of the last 3 decades (1986–2015 CE). A comparison with an ensemble of climate model simulations suggests that the reconstructed European summer temperature variability over the period 850–2000 CE reflects changes in both internal variability and external forcing on multi-decadal time-scales. 
For pan-European temperatures we find slightly better agreement between the reconstruction and the model simulations with high-end estimates for total solar irradiance. Temperature differences between the medieval period, the recent period and the Little Ice Age are larger in the reconstructions than in the simulations. This may indicate inflated variability in the reconstructions, a lack of sensitivity of the simulated European climate (and of the processes it represents) to changes in external forcing, and/or an underestimation of internal variability on centennial and longer time scales.
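The CPS step used for the mean temperature series can be sketched directly: standardize the proxies, average them into a composite, then rescale the composite to the instrumental mean and standard deviation over a calibration window (all data below are simulated):

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, n_proxies = 300, 12
temp = np.cumsum(rng.normal(0, 0.1, n_years))        # hypothetical target series
# Proxies: scaled versions of the target plus independent noise
proxies = (temp[None, :] * rng.uniform(0.5, 2.0, (n_proxies, 1))
           + rng.normal(0, 0.5, (n_proxies, n_years)))

# Composite-plus-scaling
z = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
composite = z.mean(axis=0)                           # unitless composite
cal = slice(200, 300)                                # "instrumental" period
cps = ((composite - composite[cal].mean()) / composite[cal].std()
       * temp[cal].std() + temp[cal].mean())
```

By construction `cps` matches the instrumental mean and variance over the calibration window; BHM instead infers the proxy-temperature relationship and its uncertainty within one probabilistic model.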

Relevance:

100.00%

Publisher:

Abstract:

The present study investigated how ease of imagery influences source-monitoring accuracy. Two experiments examined how ease of imagery influences the probability of source confusions between perceived and imagined completions of natural symmetric shapes. The stimuli consisted of binary pictures of natural objects, namely symmetric pictures of birds, butterflies, insects, and leaves. The ease of imagery (indicating the similarity of the sources) and the discriminability (indicating the similarity of the items) of each stimulus were estimated in a pretest and included as predictors of memory performance for these stimuli. Confusion of the sources was found to become more likely when the imagery process was relatively easy. However, when the different processes of source monitoring (item memory, source memory, and guessing biases) are disentangled, both experiments support the assumption that the decreased source memory for easily imagined stimuli is due to decision processes and misinformation at retrieval rather than to encoding processes and memory retention. The data were modeled with a Bayesian hierarchical implementation of the one-high-threshold source-monitoring model.
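The multinomial processing-tree structure behind such a threshold model can be written out explicitly. The parameter names and values below are illustrative, not the paper's fitted parameterization: D is item detection, d source memory, a source guessing after detection, b "old" guessing, and g source guessing without detection:

```python
# Response-category probabilities for an item from source A under a
# one-high-threshold source-monitoring tree (illustrative parameterization)
def p_responses(D, d, a, b, g):
    p_a = D * d + D * (1 - d) * a + (1 - D) * b * g   # respond "source A"
    p_b = D * (1 - d) * (1 - a) + (1 - D) * b * (1 - g)  # respond "source B"
    p_new = (1 - D) * (1 - b)                          # respond "new"
    return p_a, p_b, p_new

probs = p_responses(D=0.7, d=0.5, a=0.5, b=0.4, g=0.5)  # probabilities sum to 1
```

The hierarchical Bayesian fit places participant-level distributions on these parameters, which is how the study separates source memory proper from guessing biases.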

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Despite long-standing calls to disseminate evidence-based treatments for generalized anxiety disorder (GAD), modest progress has been made in the study of how such treatments should be implemented. The primary objective of this study was to test three competing strategies for implementing a cognitive behavioral treatment (CBT) for outpatients with GAD (i.e., a comparison of one compensation model vs. two capitalization models). METHODS: For our three-arm, single-blinded, randomized controlled trial (implementation of CBT for GAD [IMPLEMENT]), we recruited adults with GAD using advertisements in high-circulation newspapers to participate in a 14-session cognitive behavioral treatment (Mastery of your Anxiety and Worry, MAW packet). We randomly assigned eligible patients using a full randomization procedure (1:1:1) to three different implementation conditions: adherence priming (compensation model), which had a systematized focus on patients' individual GAD symptoms and how to compensate for these symptoms within the MAW packet; and resource priming and supportive resource priming (capitalization model), which had systematized focuses on patients' strengths and abilities and how these strengths can be capitalized on within the same packet. In the intention-to-treat population, an outcome composite of primary and secondary symptom-related self-report questionnaires was analyzed with a hierarchical linear growth model from intake to the 6-month follow-up assessment. This trial is registered at ClinicalTrials.gov (identifier: NCT02039193) and is closed to new participants. FINDINGS: Between June 2012 and November 2014, 411 participants were screened and 57 eligible participants were recruited and randomly assigned to the three conditions. Forty-nine patients (86%) provided outcome data at post-assessment (14% dropout rate). All three conditions showed a highly significant reduction of symptoms over time.
However, compared with the adherence priming condition, both resource priming conditions showed faster symptom reduction. Observer ratings of a sub-sample of recorded videos (n = 100) showed that therapists in the resource priming conditions conducted more strength-oriented interventions than in the adherence priming condition. No patients died or attempted suicide. INTERPRETATION: To our knowledge, this is the first trial that focuses on capitalization and compensation models during the implementation of one prescriptive treatment packet for GAD. We have shown that GAD-related symptoms were reduced significantly faster in the resource priming conditions, although the limitations of our study include a well-educated population. If replicated, our results suggest that therapists who implement a mental health treatment for GAD might profit from a systematized focus on capitalization models. FUNDING: Swiss National Science Foundation grant (SNSF-Nr. PZ00P1_136937/1) awarded to CF. KEYWORDS: Cognitive behavioral therapy; Evidence-based treatment; Implementation strategies; Randomized controlled trial
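The growth-model comparison, a faster (more negative) symptom slope under resource priming, can be sketched with per-patient OLS slopes on simulated trajectories; a full hierarchical growth model would estimate these jointly, and every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(8)
weeks = np.arange(0, 15)                  # intake plus 14 sessions (hypothetical)

def simulate(slope_mean, n_pat):
    """Simulate symptom trajectories with patient-specific slopes."""
    slopes = rng.normal(slope_mean, 0.2, n_pat)
    return np.array([50 + s * weeks + rng.normal(0, 2, len(weeks)) for s in slopes])

resource = simulate(-1.5, 20)             # assumed faster symptom reduction
adherence = simulate(-1.0, 20)

def fitted_slopes(data):
    """Per-patient OLS slope of symptoms on time."""
    return np.array([np.polyfit(weeks, y, 1)[0] for y in data])

# Negative difference = steeper decline in the resource priming condition
diff = fitted_slopes(resource).mean() - fitted_slopes(adherence).mean()
```

In the trial's analysis the condition effect on the slope is a fixed effect in the hierarchical linear growth model rather than this two-stage summary.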

Relevance:

100.00%

Publisher:

Abstract:

Although evidence suggests that the benefits of psychodynamic treatments are sustained over time, it is presently unclear whether these sustained benefits are superior to those of non-psychodynamic treatments. Additionally, the extant literature comparing the sustained benefits of psychodynamic treatments to those of alternative treatments is limited and marked by methodological shortcomings. The purpose of the current study was to conduct a rigorous test of the growth of the benefits of psychodynamic treatments relative to alternative treatments across distinct domains of change (i.e., all outcome measures, targeted outcome measures, non-targeted outcome measures, and personality outcome measures). To do so, the study employed strict inclusion criteria to identify randomized clinical trials that directly compared at least one bona fide psychodynamic treatment with one bona fide non-psychodynamic treatment. Hierarchical linear modeling (Raudenbush, Bryk, Cheong, Congdon, & du Toit, 2011) was used to model longitudinally the impact of psychodynamic treatments compared to non-psychodynamic treatments at post-treatment and to compare the growth (i.e., slope) of effects beyond treatment completion. Findings from the present meta-analysis indicated that psychodynamic and non-psychodynamic treatments were equally efficacious at post-treatment and at follow-up for combined outcomes (k=20), targeted outcomes (k=19), non-targeted outcomes (k=17), and personality outcomes (k=6). Clinical implications, directions for future research, and limitations are discussed.