942 results for Regression To The Mean
Abstract:
In this article, we illustrate experimentally an important consequence of the stochastic component in choice behaviour which has not been acknowledged so far: namely, its potential to produce 'regression to the mean' (RTM) effects. We employ a novel approach to individual choice under risk, based on repeated multiple-lottery choices (i.e. choices among many lotteries), to show how the high degree of stochastic variability present in individual decisions can crucially distort certain results through RTM effects. We demonstrate the point in the context of a social comparison experiment.
Abstract:
Background Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean. Methods We give some examples of the phenomenon, and discuss methods to overcome it at the design and analysis stages of a study. Results The effect of RTM in a sample becomes more noticeable with increasing measurement error and when follow-up measurements are only examined on a sub-sample selected using a baseline value. Conclusions RTM is a ubiquitous phenomenon in repeated data and should always be considered as a possible cause of an observed change. Its effect can be alleviated through better study design and use of suitable statistical methods.
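To make the selection effect described above concrete, here is a minimal simulation sketch (hypothetical values; Python with NumPy assumed) in which nothing truly changes between baseline and follow-up, yet a sub-sample selected on a high baseline value appears to improve:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True (stable) values and two noisy measurements of the same quantity
true_value = rng.normal(loc=100, scale=10, size=n)
baseline = true_value + rng.normal(scale=15, size=n)   # large measurement error
follow_up = true_value + rng.normal(scale=15, size=n)  # nothing has actually changed

# Select only the sub-sample with unusually high baseline readings
selected = baseline > 120

print("mean baseline  (selected):", baseline[selected].mean())
print("mean follow-up (selected):", follow_up[selected].mean())
# The follow-up mean sits much closer to the overall mean of 100:
# an apparent 'improvement' produced purely by regression to the mean.
```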
Abstract:
Red light cameras (RLCs) have been used in a number of US cities to yield a demonstrable reduction in red light violations; however, evaluating their impact on safety (crashes) has been relatively more difficult. Accurately estimating the safety impacts of RLCs is challenging for several reasons. First, many safety-related factors are uncontrolled and/or confounded during the periods of observation. Second, "spillover" effects caused by drivers reacting to non-RLC-equipped intersections and approaches can make the selection of comparison sites difficult. Third, sites selected for RLC installation may not be selected randomly, and as a result may suffer from regression-to-the-mean bias. Finally, crash severity and resulting costs need to be considered in order to fully understand the safety impacts of RLCs. Recognizing these challenges, a study was conducted to estimate the safety impacts of RLCs on traffic crashes at signalized intersections in the cities of Phoenix and Scottsdale, Arizona. Twenty-four RLC-equipped intersections in the two cities were examined in detail and conclusions drawn. Four different evaluation methodologies were employed to cope with the technical challenges described in this paper and to assess the sensitivity of results to analytical assumptions. The evaluation results indicated that both Phoenix and Scottsdale are operating cost-effective installations of RLCs; however, the variability in RLC effectiveness within jurisdictions is larger in Phoenix. Consistent with findings in other regions, angle and left-turn crashes are reduced in general, while rear-end crashes tend to increase as a result of RLCs.
Abstract:
In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time invariant safety. Since the time invariant safety assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempts have been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking on how accurately the modified EB method estimates safety in the presence of the time variant safety and regression-to-the-mean (RTM) effects. This study derives the closed form marginal distribution, and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method by simultaneously accounting for the RTM and time variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of the time variant safety and the RTM effects.
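For context on the EB framework discussed above, here is a minimal sketch of the standard (unmodified) empirical Bayes combination commonly used in traffic safety; the counts and dispersion parameter are illustrative, and the abstract's modified EB method and negative multinomial derivation are not reproduced here:

```python
def eb_estimate(mu_spf: float, observed: float, k: float) -> float:
    """Standard empirical Bayes crash estimate.

    mu_spf   : expected crashes from the safety performance function (NB model)
    observed : observed crash count at the site
    k        : NB dispersion parameter, assuming Var = mu + mu**2 / k
               (larger k -> more weight on the SPF prediction)
    """
    w = 1.0 / (1.0 + mu_spf / k)          # shrinkage weight toward the SPF
    return w * mu_spf + (1.0 - w) * observed

# A site with 9 observed crashes but an SPF prediction of 4 is shrunk toward 4,
# which is how the EB approach tempers regression-to-the-mean bias in site ranking.
print(eb_estimate(mu_spf=4.0, observed=9.0, k=2.0))
```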
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The relationship between degree of diastolic blood pressure (DBP) reduction and mortality was examined among hypertensives, ages 30-69, in the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center community-based trial, which followed 10,940 hypertensive participants for five years. One-year survival was required for inclusion in this investigation since the one-year annual visit was the first occasion where change in blood pressure could be measured on all participants. During the subsequent four years of follow-up on 10,052 participants, 568 deaths occurred. For levels of change in DBP and for categories of variables related to mortality, the crude mortality rate was calculated. Time-dependent life tables were also calculated so as to utilize available blood pressure data over time. In addition, the Cox life table regression model, extended to take into account both time-constant and time-dependent covariates, was used to examine the relationship between change in blood pressure over time and mortality. The results of the time-dependent life table and time-dependent Cox life table regression analyses supported the existence of a quadratic function which modeled the relationship between DBP reduction and mortality, even after adjusting for other risk factors. The minimum mortality hazard ratio, based on a particular model, occurred at a DBP reduction of 22.6 mm Hg (standard error = 10.6) in the whole population and 8.5 mm Hg (standard error = 4.6) in the baseline DBP stratum 90-104. After this reduction, there was a small increase in the risk of death. There was no evidence of the quadratic function after fitting the same model using systolic blood pressure. Methodologic issues involved in studying a particular degree of blood pressure reduction were considered. The confidence interval around the change corresponding to the minimum hazard ratio was wide and the obtained blood pressure level should not be interpreted as a goal for treatment. Blood pressure reduction was attributed not only to pharmacologic therapy, but also to regression to the mean and to other unknown factors unrelated to treatment. Therefore, the surprising results of this study do not provide direct implications for treatment, but strongly suggest replication in other populations.
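The location of the minimum-hazard reduction follows directly from a fitted quadratic: if the log hazard is modelled as beta1*d + beta2*d^2 in the DBP change d, the minimum lies at d = -beta1 / (2*beta2). A tiny sketch with purely illustrative coefficients (not the study's estimates):

```python
def minimum_risk_reduction(beta1: float, beta2: float) -> float:
    """Vertex of a quadratic log-hazard beta1*d + beta2*d**2 (requires beta2 > 0)."""
    return -beta1 / (2.0 * beta2)

# Illustrative coefficients only, chosen to land near the reported 22.6 mm Hg:
print(minimum_risk_reduction(beta1=-0.045, beta2=0.001))  # -> 22.5
```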
Abstract:
The size frequency distributions of diffuse, primitive and classic β-amyloid (Aβ) deposits were studied in single sections of cortical tissue from patients with Alzheimer's disease (AD) and Down's syndrome (DS) and compared with those predicted by the log-normal model. In a sample of brain regions, these size distributions were compared with those obtained by serial reconstruction through the tissue and the data used to adjust the size distributions obtained in single sections. The adjusted size distributions of the diffuse, primitive and classic deposits deviated significantly from a log-normal model in AD and DS, the greatest deviations from the model being observed in AD. More Aβ deposits were observed close to the mean and fewer in the larger size classes than predicted by the model. Hence, the growth of Aβ deposits in AD and DS does not strictly follow the log-normal model, deposits growing to within a more restricted size range than predicted. However, Aβ deposits grow to a larger size in DS compared with AD which may reflect differences in the mechanism of Aβ formation.
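A brief sketch of how an observed size-frequency distribution can be compared with a fitted log-normal model (hypothetical sizes; SciPy assumed; this is not the adjustment procedure used in the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical deposit sizes: more values near the mean, fewer very large ones
sizes = rng.lognormal(mean=3.0, sigma=0.3, size=500)

# Fit a log-normal model and test the fit with a Kolmogorov-Smirnov statistic
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
ks = stats.kstest(sizes, "lognorm", args=(shape, loc, scale))
print("KS statistic:", ks.statistic, "p-value:", ks.pvalue)
# A significant deviation would indicate growth restricted to a narrower
# size range than the log-normal model predicts.
```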
Abstract:
What have we learnt from the 2006-2012 crisis, including events such as the subprime crisis, the bankruptcy of Lehman Brothers or the European sovereign debt crisis, among others? It is usually assumed that, for firms with a quoted CDS, the CDS is the key factor in establishing the credit risk premium for a new financial asset. Thus, the CDS is a key element for any investor taking relative value opportunities across a firm's capital structure. In the first chapter we study the most relevant aspects of the microstructure of the CDS market in terms of pricing, to have a clear idea of how this market works. We consider such an analysis a necessary step in establishing a solid base for the empirical studies carried out in the remaining chapters. In its document "Basel III: A global regulatory framework for more resilient banks and banking systems", Basel sets the requirement of a capital charge for credit valuation adjustment (CVA) risk in the trading book and the methodology for computing that capital requirement. This regulatory requirement has added extra pressure for in-depth knowledge of the CDS market, and this motivates the analysis performed in this thesis. The problem arises in estimating the credit risk premium for counterparties without a directly quoted CDS in the market. How can we estimate the credit spread for an issuer without a CDS? In addition, given the highly volatile period in the credit market in the last few years and, in particular, after the default of Lehman Brothers on 15 September 2008, we observe the presence of large outliers in the distribution of credit spreads across the different combinations of rating, industry and region. After an exhaustive analysis of the results from the different models studied, we have reached the following conclusions. Hierarchical regression models clearly fit the data much better than non-hierarchical regressions. Furthermore, we generally prefer the median model (50%-quantile regression) to the mean model (standard OLS regression) due to its robustness when assigning a price to a new credit asset without a spread, minimizing the "inversion problem". Finally, an additional fundamental reason to prefer the median model is the typically right-skewed distribution of CDS spreads...
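A minimal sketch contrasting the mean model (OLS) with the median model (50%-quantile regression) on right-skewed, simulated spread data (statsmodels assumed; all variable names and coefficients are hypothetical, not the thesis's models):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical CDS spreads with right-skewed noise, as described for credit data
x = rng.normal(size=n)                                           # stand-in rating/sector factor
spread = 100 + 25 * x + rng.lognormal(mean=3, sigma=1, size=n)   # right-skewed errors
df = pd.DataFrame({"spread": spread, "x": x})

ols_fit    = smf.ols("spread ~ x", df).fit()            # mean model
median_fit = smf.quantreg("spread ~ x", df).fit(q=0.5)  # 50%-quantile (median) model

# With right-skewed spreads, the OLS intercept is pulled upward by outliers,
# while the median-regression intercept stays closer to the typical spread.
print("OLS intercept:   ", ols_fit.params["Intercept"])
print("Median intercept:", median_fit.params["Intercept"])
```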
Abstract:
Understanding the expected safety performance of rural signalized intersections is critical for (a) identifying high-risk sites where the observed safety performance is substantially worse than the expected safety performance, (b) understanding influential factors associated with crashes, and (c) predicting the future performance of sites and helping plan safety-enhancing activities. These three critical activities are routinely conducted for safety management and planning purposes in jurisdictions throughout the United States and around the world. This paper aims to develop baseline expected safety performance functions of rural signalized intersections in South Korea, which to date have not yet been established or reported in the literature. Data are examined from numerous locations within South Korea for both three-legged and four-legged configurations. The safety effects of a host of operational and geometric variables on the safety performance of these sites are also examined. In addition, supplementary tables and graphs are developed for comparing the baseline safety performance of sites with various geometric and operational features. These graphs identify how various factors are associated with safety. The expected safety prediction tables offer advantages over regression prediction equations by allowing the safety manager to isolate specific features of the intersections and examine their impact on expected safety. The examination of the expected safety performance tables through illustrated examples highlights the need to correct for regression-to-the-mean effects, emphasizes the negative impacts of multicollinearity, shows why multivariate models do not translate well to accident modification factors, and illuminates the need to examine road safety carefully and methodically. Caveats are provided on the use of the safety performance prediction graphs developed in this paper.
Abstract:
The need to address substance use among people with psychosis has been well established. However, treatment studies targeting substance use in this population have reported mixed results. Substance users with psychosis in no or minimal treatment control groups achieve similar reductions in substance use compared to those in more active substance use treatment, suggesting a role for natural recovery from substance use. This meta-analysis aims to quantify the amount of natural recovery from substance use within control groups of treatment studies containing samples of psychotic substance users, with a particular focus on changes in cannabis use. A systematic search was conducted to identify substance use treatment studies. Meta-analyses were performed to quantify reductions in the frequency of substance use in the past 30 days. Significant but modest reductions (mean reduction of 0.3-0.4 SD across the time points) in the frequency of substance use were found at 6- to 24-month follow-up. The current study is the first to quantify changes in substance use in samples enrolled in no treatment or minimal treatment control conditions. These findings highlight the potential role of natural recovery from substance use among individuals with psychosis, although they do not rule out effects of regression to the mean. Additionally, the results provide a baseline from which to estimate likely changes or needed effect sizes in intervention studies. Future research is required to identify the processes underpinning these changes, in order to identify strategies that may better support self-management of substance use in people with psychosis.
Abstract:
Pós-graduação em Saúde Coletiva (Graduate Program in Collective Health) - FMB
Abstract:
Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, 80% randomly selected measurements were used for model development and the remaining 20% for an independent model validation. A multivariable log-linear regression model was fitted and relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (below the 50th, 50th-90th, and above the 90th percentile) and compared with measured categories using a weighted Kappa statistic. The most relevant predictors for indoor radon levels were tectonic units and year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken and housing type (P-values <0.001 for all). Mean predicted radon values (geometric mean) were 66 Bq/m³ (interquartile range 40-111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69-215 Bq/m³) in the medium category, and 219 Bq/m³ (108-427 Bq/m³) in the highest category. Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development and 0.30 for the validation dataset, respectively. The model explained 20% of the overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was demonstrated to be robust through validation with an independent dataset. The model is appropriate for predicting radon level exposure of the Swiss population in epidemiological research. Nevertheless, some exposure misclassification and regression to the mean is unavoidable and should be taken into account in future applications of the model.
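A compact sketch of the modelling and validation steps described above, using simulated stand-in data (predictor names and coefficients are hypothetical; statsmodels, SciPy and scikit-learn assumed):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 5_000

# Hypothetical predictors standing in for tectonic unit, construction year, floor, etc.
df = pd.DataFrame({
    "tectonic": rng.integers(0, 4, size=n),
    "year":     rng.integers(1900, 2005, size=n),
    "floor":    rng.integers(0, 5, size=n),
})
log_radon = 4.0 + 0.3 * df["tectonic"] - 0.002 * (df["year"] - 1950) - 0.1 * df["floor"]
df["radon"] = np.exp(log_radon + rng.normal(scale=0.8, size=n))

# 80/20 split; log-linear model fitted on the development set
dev, val = df.iloc[: int(0.8 * n)], df.iloc[int(0.8 * n):]
model = smf.ols("np.log(radon) ~ C(tectonic) + year + floor", dev).fit()
pred = np.exp(model.predict(val))

# Spearman correlation between measured and predicted values on the validation set
print("Spearman rho:", spearmanr(val["radon"], pred)[0])

# Weighted kappa on three exposure categories (below 50th, 50th-90th, above 90th percentile)
cuts = np.quantile(dev["radon"], [0.5, 0.9])
obs_cat  = np.digitize(val["radon"], cuts)
pred_cat = np.digitize(pred, cuts)
print("Weighted kappa:", cohen_kappa_score(obs_cat, pred_cat, weights="linear"))
```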
Abstract:
Systematic reviews and meta-analyses allow for a more transparent and objective appraisal of the evidence. They may decrease the number of false-negative results and prevent delays in the introduction of effective interventions into clinical practice. However, as with any other tool, their misuse can result in severely misleading results. In this article, we discuss the main steps that should be taken when conducting systematic reviews and meta-analyses, namely the preparation of a review protocol, identification of eligible trials, data extraction, pooling of treatment effects across trials, investigation of potential reasons for differences in treatment effects across trials, and complete reporting of the review methods and findings. We also discuss common pitfalls that should be avoided, including the use of quality assessment tools to derive summary quality scores, pooling of data across trials as if they belonged to a single large trial, and inappropriate uses of meta-regression that could result in misleading estimates of treatment effects because of regression to the mean or the ecological fallacy. If conducted and reported properly, systematic reviews and meta-analyses will increase our understanding of the strengths and weaknesses of the available evidence, which may eventually facilitate clinical decision making.