935 results for Random Walk Models
Abstract:
Recent evidence suggests that transition risks from initial clinical high risk (CHR) status to psychosis are decreasing. The role played by remission in this context is mostly unknown. The present study addresses this issue by means of a meta-analysis including eight relevant studies published up to January 2012 that reported remission rates from an initial CHR status. The primary effect size measure was the longitudinal proportion of remissions compared to non-remissions in subjects with a baseline CHR state. Random effects models were employed to address the high heterogeneity across the studies included. To assess the robustness of the results, we performed sensitivity analyses by sequentially removing each study and rerunning the analysis. Of 773 subjects who met initial CHR criteria, 73% did not convert to psychosis over a 2-year follow-up. Of these, about 46% fully remitted from the baseline attenuated psychotic symptoms, as evaluated on the psychometric measures usually employed by prodromal services. The corresponding clinical remission was estimated to be as high as 35% of the baseline CHR sample. The CHR state is associated with a significant proportion of remitting subjects, which can be accounted for by the effective treatments received, a lead-time bias, a dilution effect, or a comorbid effect of other psychiatric diagnoses.
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).1 Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher level (group) as well as lower level (individual) variation. Designing a study in which the unit of observation is nested within higher level groupings requires the determination of sample sizes at each level. This study investigates the effect of the design and of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution. Results of the study suggest that second-order PQL (penalized quasi-likelihood) estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL (marginal quasi-likelihood). The MQL estimates of both fixed and random parameters are generally satisfactory when the level 2 and level 3 variation is less than 0.10. However, as the higher level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher level error variance is large, the estimates may be significantly biased. In this case, bias correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level 1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.

1. Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data". Biometrics, 49, 989–996.
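The 3-level Poisson data-generating process described above can be sketched as a small simulation. All sample sizes, variance components, and regression coefficients below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design: 10 level-3 groups, 8 level-2 clusters per group,
# 20 level-1 (repeated) observations per cluster.
n3, n2, n1 = 10, 8, 20
sigma3, sigma2 = 0.3, 0.2   # higher-level random-effect SDs (assumed)
beta0, beta1 = 1.0, 0.5     # fixed intercept and slope (assumed)

# Random effects: one per level-3 group, one per (group, cluster) pair.
u3 = rng.normal(0, sigma3, size=n3)            # level-3 deviations
u2 = rng.normal(0, sigma2, size=(n3, n2))      # level-2 deviations

x = rng.uniform(0, 1, size=(n3, n2, n1))       # level-1 covariate
log_mu = beta0 + beta1 * x + u3[:, None, None] + u2[:, :, None]
y = rng.poisson(np.exp(log_mu))                # Poisson outcomes

print(y.shape)       # (10, 8, 20)
print(y.min() >= 0)  # True: counts are nonnegative
```

Data generated this way can then be fed to a PQL or MQL fitting routine to replicate the kind of bias comparison the abstract reports.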
Abstract:
Background Atrioventricular (AV) conduction disturbances requiring permanent pacemaker (PPM) implantation may complicate transcatheter aortic valve replacement (TAVR). Available evidence on predictors of PPM is sparse and derived from small studies. Objectives The objective of this study was to provide summary effect estimates for clinically useful predictors of PPM implantation after TAVR. Methods We performed a systematic search for studies that reported the incidence of PPM implantation after TAVR and that provided raw data for the predictors of interest. Data on study, patient, and procedural characteristics were abstracted. Crude risk ratios (RRs) and 95% confidence intervals for each predictor were calculated by use of random effects models. Stratified analyses by type of implanted valve were performed. Results We obtained data from 41 studies that included 11,210 TAVR patients, of whom 17% required PPM implantation after intervention. The rate of PPM ranged from 2% to 51% in individual studies (with a median of 28% for the Medtronic CoreValve Revalving System [MCRS] and 6% for the Edwards SAPIEN valve [ESV]). The summary estimates indicated increased risk of PPM after TAVR for men (RR: 1.23; p < 0.01); for patients with first-degree AV block (RR: 1.52; p < 0.01), left anterior hemiblock (RR: 1.62; p < 0.01), or right bundle branch block (RR: 2.89; p < 0.01) at baseline; and for patients with intraprocedural AV block (RR: 3.49; p < 0.01). These variables remained significant predictors when only patients treated with the MCRS bioprosthesis were considered. The data for ESV were limited. Unadjusted estimates indicated a 2.5-fold higher risk for PPM implantation for patients who received the MCRS than for those who received the ESV. Conclusions Male sex, baseline conduction disturbances, and intraprocedural AV block emerged as predictors of PPM implantation after TAVR. 
This study provides useful tools to identify high-risk patients and to guide clinical decision making before and after intervention.
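The crude risk-ratio pooling described in the Methods can be illustrated with a minimal DerSimonian-Laird random-effects sketch. The per-study counts below are made up for illustration; the abstract does not report study-level data:

```python
import numpy as np

def pooled_rr_dl(events_a, n_a, events_b, n_b):
    """DerSimonian-Laird random-effects pooling of crude risk ratios.

    events_a/n_a: events and totals in the exposed arm per study;
    events_b/n_b: same for the reference arm.  Returns (RR, lo, hi)."""
    ea, na = np.asarray(events_a, float), np.asarray(n_a, float)
    eb, nb = np.asarray(events_b, float), np.asarray(n_b, float)

    log_rr = np.log((ea / na) / (eb / nb))
    var = 1/ea - 1/na + 1/eb - 1/nb     # delta-method variance of log RR

    w = 1 / var                         # fixed-effect weights
    q = np.sum(w * (log_rr - np.average(log_rr, weights=w))**2)
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (var + tau2)             # random-effects weights
    est = np.average(log_rr, weights=w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return np.exp(est), np.exp(est - 1.96*se), np.exp(est + 1.96*se)

# Illustrative (invented) data: 3 studies, PPM events in men vs. women.
rr, lo, hi = pooled_rr_dl([30, 45, 12], [200, 300, 90],
                          [20, 35, 10], [210, 310, 100])
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

The same routine applied per predictor (sex, baseline conduction disturbance, intraprocedural AV block) reproduces the kind of summary RRs the abstract reports.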
Abstract:
BACKGROUND Data on the association between subclinical thyroid dysfunction and fractures are conflicting. PURPOSE To assess the risk for hip and nonspine fractures associated with subclinical thyroid dysfunction in prospective cohorts. DATA SOURCES Search of MEDLINE and EMBASE (1946 to 16 March 2014) and reference lists of retrieved articles, without language restriction. STUDY SELECTION Two physicians screened and identified prospective cohorts that measured thyroid function and followed participants to assess fracture outcomes. DATA EXTRACTION One reviewer extracted data using a standardized protocol, and another verified the data. Both reviewers independently assessed the methodological quality of the studies. DATA SYNTHESIS The 7 population-based cohorts of heterogeneous quality included 50,245 participants with 1966 hip and 3281 nonspine fractures. In random-effects models that included the 5 higher-quality studies, the pooled adjusted hazard ratios (HRs) for participants with subclinical hyperthyroidism versus euthyroidism were 1.38 (95% CI, 0.92 to 2.07) for hip fractures and 1.20 (CI, 0.83 to 1.72) for nonspine fractures, without statistical heterogeneity (P = 0.82 and 0.52, respectively; I² = 0%). Pooled estimates for the 7 cohorts were 1.26 (CI, 0.96 to 1.65) for hip fractures and 1.16 (CI, 0.95 to 1.42) for nonspine fractures. When thyroxine recipients were excluded, the HRs for participants with subclinical hyperthyroidism were 2.16 (CI, 0.87 to 5.37) for hip fractures and 1.43 (CI, 0.73 to 2.78) for nonspine fractures. For participants with subclinical hypothyroidism, HRs from the higher-quality studies were 1.12 (CI, 0.83 to 1.51) for hip fractures and 1.04 (CI, 0.76 to 1.42) for nonspine fractures (P for heterogeneity = 0.69 and 0.88, respectively; I² = 0%). LIMITATIONS Selective reporting cannot be excluded. Adjustment for potential common confounders varied and was not adequately done across all studies.
CONCLUSION Subclinical hyperthyroidism might be associated with an increased risk for hip and nonspine fractures, but additional large, high-quality studies are needed. PRIMARY FUNDING SOURCE Swiss National Science Foundation.
Abstract:
The central assumption in the literature on collaborative networks and policy networks is that political outcomes are affected by a variety of state and nonstate actors. Some of these actors are more powerful than others and can therefore have a considerable effect on decision making. In this article, we seek to provide a structural and institutional explanation for these power differentials in policy networks and support the explanation with empirical evidence. We use a dyadic measure of influence reputation as a proxy for power, and posit that influence reputation over the political outcome is related to vertical integration into the political system by means of formal decision-making authority, and to horizontal integration by means of being well embedded in the policy network. Hence, we argue that actors are perceived as influential because of two complementary factors: (a) their institutional roles and (b) their structural positions in the policy network. Based on temporal and cross-sectional exponential random graph models, we compare five cases concerning climate, telecommunications, flood prevention, and toxic chemicals politics in Switzerland and Germany. The five networks cover national and local networks at different stages of the policy cycle. The results confirm that institutional and structural drivers have a crucial impact on how an actor is perceived in decision making and implementation and, therefore, on its ability to significantly shape outputs and service delivery.
Abstract:
Stroke is one of the most common conditions requiring rehabilitation, and its motor impairments are a major cause of permanent disability. Hemiparesis is observed in 80% of patients after acute stroke. Neuroimaging studies have shown that real and imagined movements produce similar brain activation, supplying evidence that both rely on the same process. Within this context, the combination of mental practice (MP) with physical and occupational therapy appears to be a natural complement based on neurorehabilitation concepts. Our study seeks to investigate whether MP is an effective adjunct therapy for stroke rehabilitation of the upper limbs. Searches of PubMed (Medline), ISI Web of Knowledge (Institute for Scientific Information) and SciELO (Scientific Electronic Library Online) were completed on 20 February 2015. Data were collected on the following variables: sample size, type of supervision, configuration of mental practice, setting of the physical practice (intensity, number of sets and repetitions, duration of contractions, rest interval between sets, weekly and total duration), measures of sensorimotor deficits used in the main studies, and significant results. Random effects models that take into account the variance within and between studies were used. Seven articles were selected. There was no statistically significant difference between the two groups (MP vs. control), with a pooled effect size of −0.6 (95% CI: −1.27 to 0.04) for upper limb motor restoration after stroke. The present meta-analysis concluded that MP is not effective as an adjunct therapeutic strategy for upper limb motor restoration after stroke.
Abstract:
INTRODUCTION AND OBJECTIVES There is continued debate about the routine use of aspiration thrombectomy in patients with ST-segment elevation myocardial infarction. Our aim was to evaluate clinical and procedural outcomes of aspiration thrombectomy-assisted primary percutaneous coronary intervention compared with conventional primary percutaneous coronary intervention in patients with ST-segment elevation myocardial infarction. METHODS We performed a meta-analysis of 26 randomized controlled trials with a total of 11 943 patients. Clinical outcomes were extracted up to maximum follow-up and random effect models were used to assess differences in outcomes. RESULTS We observed no difference in the risk of all-cause death (pooled risk ratio = 0.88; 95% confidence interval, 0.74-1.04; P = .124), reinfarction (pooled risk ratio = 0.85; 95% confidence interval, 0.67-1.08; P = .176), target vessel revascularization (pooled risk ratio = 0.86; 95% confidence interval, 0.73-1.00; P = .052), or definite stent thrombosis (pooled risk ratio = 0.76; 95% confidence interval, 0.49-1.16; P = .202) between the 2 groups at a mean weighted follow-up time of 10.4 months. There were significant reductions in failure to reach Thrombolysis In Myocardial Infarction 3 flow (pooled risk ratio = 0.70; 95% confidence interval, 0.60-0.81; P < .001) or myocardial blush grade 3 (pooled risk ratio = 0.76; 95% confidence interval, 0.65-0.89; P = .001), incomplete ST-segment resolution (pooled risk ratio = 0.72; 95% confidence interval, 0.62-0.84; P < .001), and evidence of distal embolization (pooled risk ratio = 0.61; 95% confidence interval, 0.46-0.81; P = .001) with aspiration thrombectomy but estimates were heterogeneous between trials. 
CONCLUSIONS Among unselected patients with ST-segment elevation myocardial infarction, aspiration thrombectomy-assisted primary percutaneous coronary intervention does not improve clinical outcomes, despite improved epicardial and myocardial parameters of reperfusion. Full English text available from: www.revespcardiol.org/en.
Abstract:
We introduce a multistable subordinator, which generalizes the stable subordinator to the case of time-varying stability index. This enables us to define a multifractional Poisson process. We study properties of these processes and establish the convergence of a continuous-time random walk to the multifractional Poisson process.
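As a rough illustration of the convergence claim, one can simulate a continuous-time random walk whose heavy-tailed waiting times have a time-varying tail index and track its counting process. The Pareto waiting times and the particular index function β(t) below are illustrative stand-ins, not the paper's actual construction (which uses the multistable subordinator):

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_of_t(t):
    # Hypothetical time-varying tail index in (0, 1): the
    # "multifractional" ingredient.  Here it stays within (0.35, 0.65).
    return 0.5 + 0.15 * np.sin(t / 10.0)

def ctrw_counting_process(t_max):
    """Arrival times of a CTRW with Pareto waiting times whose tail
    index depends on the current time -- a crude stand-in for the
    heavy-tailed waiting times underlying a fractional Poisson process."""
    t, arrivals = 0.0, []
    while t < t_max:
        b = beta_of_t(t)
        # Pareto(b) waiting time via inverse transform; heavy tail for b < 1.
        w = rng.uniform() ** (-1.0 / b) - 1.0
        t += w
        if t < t_max:
            arrivals.append(t)
    return np.array(arrivals)

arr = ctrw_counting_process(1000.0)
counts = np.searchsorted(arr, np.linspace(0, 1000, 11))  # N(t) on a grid
print(np.all(np.diff(counts) >= 0))  # True: a counting process is nondecreasing
```

The jump counts N(t) on the grid behave like a (multi)fractional Poisson count path in the scaling limit the paper establishes.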
Abstract:
Energy shocks like the Fukushima accident can have important political consequences. This article examines their impact on collaboration patterns between collective actors in policy processes. It argues that external shocks create both behavioral uncertainty, meaning that actors do not know about other actors' preferences, and policy uncertainty on the choice and consequences of policy instruments. The context of uncertainty interacts with classical drivers of actor collaboration in policy processes. The analysis is based on a dataset comprising interview and survey data on political actors in two subsequent policy processes in Switzerland and Exponential Random Graph Models for network data. Results first show that under uncertainty, collaboration of actors in policy processes is less based on similar preferences than in stable contexts, but trust and knowledge of other actors are more important. Second, under uncertainty, scientific actors are not preferred collaboration partners.
Abstract:
The joint modeling of longitudinal and survival data is a new approach in many applications such as HIV, cancer vaccine trials and quality of life studies. There have been recent developments of the methodologies with respect to each of the components of the joint model as well as the statistical processes that link them together. Among these, second-order polynomial random effects models and linear mixed effects models are the most commonly used for the longitudinal trajectory function. In this study, we first relax the parametric constraints of polynomial random effects models by using Dirichlet process priors, and consider three longitudinal markers, rather than only one, in a single joint model. Second, we use a linear mixed effects model for the longitudinal process in a joint model analyzing the three markers. These methods were applied to the Primary Biliary Cirrhosis sequential data, collected from a clinical trial of primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984 at the Mayo Clinic. The effects of three longitudinal markers, (1) total serum bilirubin, (2) serum albumin and (3) serum glutamic-oxaloacetic transaminase (SGOT), on patients' survival were investigated. The proportion of treatment effect was also studied using the proposed joint modeling approaches. Based on the results, we conclude that the proposed modeling approaches yield a better fit to the data and give less biased parameter estimates for these trajectory functions than previous methods. Model fit is also improved after considering three longitudinal markers instead of only one. The results for the proportion of treatment effect from these joint models support the same conclusion as the final model of Fleming and Harrington (1991): bilirubin and albumin together have a stronger impact in predicting patients' survival and serve as surrogate endpoints for treatment.
Abstract:
Background. The gap between actual and ideal rates of routine cancer screening in the U.S., particularly for colorectal cancer screening (CRCS) (1;2), is responsible for an unnecessary burden of morbidity and mortality, particularly for disadvantaged groups. Knowledge about the effects of individual and area influences is being advanced by a growing body of research that has examined the association of area socioeconomic status (SES) and cancer screening after controlling for individual SES. The findings from this emerging and heterogeneous research in the cancer screening literature have been mixed. Moreover, multilevel studies in this area have not yet adequately explored the possibility of differential associations by population subgroup, despite some evidence suggesting gender-specific effects. Objectives and methods. This dissertation reports on a systematic review of studies on the association of area SES and cancer screening and a multilevel study of the association between area SES and CRCS. The specific aims of the systematic review are to: (1) describe the study designs, constructs, methods, and measures; (2) describe the association of area SES and cancer screening; and (3) identify neglected areas of research. The empiric study linked a pooled sample of respondents aged ≥50 years without a personal history of colorectal cancer from the 2003 and 2005 California Health Interview Surveys with a comprehensive set of census-tract level area SES measures from the 2000 U.S. Census. Two-level random intercept models were used to test 2 hypotheses: (1) area SES will be associated with adherence to two modalities of CRCS after controlling for individual SES; and (2) gender will moderate the relationship between area socioeconomic status and adherence to both modalities of CRCS. Results. The systematic review identified 19 eligible studies that demonstrated variability in study designs, methods, constructs, and measures.
The majority of tested associations were either not statistically significant or significant and in the positive direction, indicating that as area SES increased, the odds of CRCS increased. The multilevel study demonstrated that while multiple aspects of area SES were associated with CRCS after controlling for individual SES, associations differed by screening modality and, in the case of endoscopy, also by gender. Conclusions. Conceptual and methodologic heterogeneity and weaknesses in the literature to date limit definitive conclusions about the underlying relationships between area SES and cancer screening. The multilevel study provided partial support for both hypotheses. Future research should continue to explore the role of gender as a moderating influence, with the aim of identifying the mechanisms linking area SES and cancer prevention behaviors.
Abstract:
The río Mendoza forms the northern oasis, the most important in the province. Urban growth has advanced over originally agricultural areas, surrounding the network of canals and drains, which also receives urban stormwater runoff produced by convective storms. Anthropogenic activity uses the resource for drinking, sanitation, irrigation, recreation, etc., and discharges its surpluses into the network, polluting it. To assess the water quality of this basin, 15 sampling sites were strategically selected: 3 along the river starting from the Cipolletti diversion dam (R_I to R_III), 5 in the canal network (C_I to C_V) and 7 located in the drainage collectors (D_I to D_VII). The following physico-chemical and microbiological analyses were carried out in the river and the canal network: electrical conductivity, temperature, pH, anions and cations (calculation of the sodium adsorption ratio, SAR), dissolved oxygen (DO), settleable solids, chemical oxygen demand (COD), mesophilic aerobic bacteria (MAB), total and fecal coliforms, and heavy metals. In the drainage network only the first four were performed. The analysis results were entered into a database and subjected to descriptive and inferential statistical analysis. The latter consisted of applying various tests in search of possible differences between the sampling sites, for each response variable, at α = 0.05. Fixed effects and random effects analyses of variance were performed, and the assumptions of homoscedasticity and normality of the errors were tested. Where the assumptions were violated, the Kruskal-Wallis test was used. The following sampling sites were compared with one another: the river sites, and R_I versus the canals and drains. It was concluded that there is a significant increase in salinity and sodicity at R_II, and that the quality changes occurring between R_II and R_III could be due to the contribution of other waters.
Regarding the comparison of the parameters between the head of the system (R_I) and the canal network, the contributions of urban runoff located to the west of the Cacique Guaymallén canal, together with the discharges from Campo Espejo (detected at C_II), significantly increase the salinity (+55%) and sodicity (+95%) of the water relative to point R_I, although the sodicity value remains low. Increases in salinity (+80%), COD (+1159%) and MAB (+2873%), with a corresponding decrease in DO (−58%), were also found at point C_V (Auxiliar Tulumaya canal) relative to point R_I, caused by urban contributions (Gran Mendoza) together with the pollutant load of the Pescara canal. Heavy metals do not show large differences between sampling sites.
Abstract:
This paper empirically analyzes the market efficiency of microfinance investment funds. For the empirical analysis, we use an index of microfinance investment funds and apply two kinds of variance ratio tests to examine whether or not this index follows a random walk. We use the entire sample period from December 2003 to June 2010 as well as two sub-samples that divide the entire period into before and after January 2007. The empirical evidence demonstrates that the index does not follow a random walk, suggesting that the market for microfinance investment funds is not efficient. This result is not affected by changes in either the empirical techniques or the sample periods.
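A minimal Lo-MacKinlay-style variance ratio statistic, one common form of the variance ratio tests the abstract mentions (though not necessarily the exact variants the paper uses), can be sketched as:

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio VR(q) and its homoskedastic z-statistic.

    Under a random walk, the variance of q-period returns is q times the
    variance of 1-period returns, so VR(q) should be close to 1."""
    r = np.asarray(returns, float)
    n = r.size
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n
    # Overlapping q-period returns: sums of q consecutive 1-period returns.
    rq = np.convolve(r, np.ones(q), mode="valid")
    varq = np.sum((rq - q * mu) ** 2) / (n * q)
    vr = varq / var1
    # Asymptotic std. dev. of VR(q) under i.i.d. returns: sqrt(2(2q-1)(q-1)/(3qn)).
    z = (vr - 1.0) * np.sqrt(n) / np.sqrt(2.0 * (2*q - 1) * (q - 1) / (3.0 * q))
    return vr, z

# Sanity check on simulated i.i.d. (random-walk) returns.
rng = np.random.default_rng(1)
vr, z = variance_ratio(rng.normal(0, 0.01, size=10_000), q=2)
print(abs(vr - 1) < 0.1)  # True for a true random walk of this length
```

Applied to the log-returns of a microfinance fund index, a z-statistic far from zero would reject the random-walk hypothesis, as the paper reports.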
Abstract:
All meta-analyses should include a heterogeneity analysis. Even so, it is not easy to decide whether a set of studies is homogeneous or heterogeneous because of the low statistical power of the statistics used (usually the Q test). Objective: Determine a set of rules enabling software engineering (SE) researchers to find out, based on the characteristics of the experiments to be aggregated, whether or not it is feasible to accurately detect heterogeneity. Method: Evaluate the statistical power of heterogeneity detection methods using a Monte Carlo simulation process. Results: The Q test is not powerful when the meta-analysis contains up to a total of about 200 experimental subjects and the effect size difference is less than 1. Conclusions: The Q test cannot be used as a decision-making criterion for meta-analysis in small sample settings like SE. Random effects models should be used instead of fixed effects models. Caution should be exercised when applying Q test-mediated decomposition into subgroups.
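The Monte Carlo evaluation of the Q test's power can be sketched as follows. The simulation design, standardized mean differences with sampling variance roughly 4/n and a between-study SD tau, is an assumption for illustration, not the paper's exact setup:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)

def q_test_power(k, n_per_arm, tau, delta=0.5, reps=2000, alpha=0.05):
    """Monte Carlo power of Cochran's Q test for heterogeneity.

    k studies each estimate a standardized mean difference delta with
    sampling variance ~ 4/n_per_arm; tau is the between-study SD
    (tau = 0 means the studies are truly homogeneous)."""
    var = 4.0 / n_per_arm
    rejections = 0
    for _ in range(reps):
        theta = rng.normal(delta, tau, size=k)   # true study effects
        d = rng.normal(theta, np.sqrt(var))      # observed effect sizes
        w = np.full(k, 1.0 / var)                # inverse-variance weights
        q = np.sum(w * (d - np.average(d, weights=w)) ** 2)
        if chi2.sf(q, k - 1) < alpha:            # Q ~ chi2(k-1) under H0
            rejections += 1
    return rejections / reps

# Small-sample SE-like setting: 5 experiments, 20 subjects per arm.
print(q_test_power(k=5, n_per_arm=20, tau=0.0))  # type I error, near alpha
print(q_test_power(k=5, n_per_arm=20, tau=0.5))  # power against heterogeneity
```

Sweeping k, n_per_arm, and tau over a grid reproduces the kind of power surface from which decision rules like those in the abstract can be read off.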
Abstract:
Belief propagation (BP) is a technique for distributed inference in wireless networks and is often used even when the underlying graphical model contains cycles. In this paper, we propose a uniformly reweighted BP scheme that reduces the impact of cycles by weighting messages by a constant "edge appearance probability" ρ ≤ 1. We apply this algorithm to distributed binary hypothesis testing problems (e.g., distributed detection) in wireless networks with Markov random field models. We demonstrate that in the considered setting the proposed method outperforms standard BP, while maintaining similar complexity. We then show that the optimal ρ can be approximated as a simple function of the average node degree, and can hence be computed in a distributed fashion through a consensus algorithm.
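A toy version of uniformly reweighted BP on a small binary Markov random field might look like the following sketch. The graph and potentials are invented, and the update is the tree-reweighted message-passing form with all edge appearance probabilities set to a single ρ (ρ = 1 recovers standard loopy BP); it is not the paper's exact algorithm:

```python
import numpy as np

def urw_bp(psi_node, psi_edge, edges, rho=0.8, iters=100):
    """Uniformly reweighted BP on a pairwise binary MRF (a sketch).

    psi_node: (n, 2) positive node potentials.  psi_edge: a symmetric
    (2, 2) potential assumed shared by all edges.  rho is the uniform
    "edge appearance probability"; rho = 1 gives standard loopy BP."""
    n = psi_node.shape[0]
    nbrs = {u: set() for u in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    msgs = {(u, v): np.ones(2) / 2 for u in range(n) for v in nbrs[u]}
    pe = psi_edge ** (1.0 / rho)    # pairwise potential reweighted by 1/rho

    for _ in range(iters):
        new = {}
        for (u, v) in msgs:
            # prod_{w in N(u)} m_{w->u}^rho / m_{v->u}: equivalent to the
            # TRW update with every edge appearance probability equal to rho.
            prod = psi_node[u].copy()
            for w in nbrs[u]:
                prod = prod * msgs[(w, u)] ** rho
            prod = prod / msgs[(v, u)]
            msg = pe.T @ prod       # marginalize over x_u
            new[(u, v)] = msg / msg.sum()
        msgs = new

    beliefs = np.empty((n, 2))
    for u in range(n):
        b = psi_node[u].copy()
        for w in nbrs[u]:
            b = b * msgs[(w, u)] ** rho
        beliefs[u] = b / b.sum()
    return beliefs

# Toy example: a 4-node cycle with attractive couplings and one biased node.
psi_node = np.array([[0.7, 0.3], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])
psi_edge = np.array([[1.2, 0.8], [0.8, 1.2]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
beliefs = urw_bp(psi_node, psi_edge, edges, rho=0.8)
print(np.allclose(beliefs.sum(axis=1), 1.0))  # True: beliefs normalize
```

In a distributed-detection setting, each node's potential would encode its local likelihood ratio and the beliefs would approximate the posterior marginals used for the hypothesis test.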