856 results for Random regression models
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute substantially to the variability and accuracy of model predictions at the tree, stand, and forest levels. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual-tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the "naïve" approach) and one applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination whenever the competition variable was statistically significant. The effect of RC was most pronounced where the measurement error variance was large and for the more shade-intolerant species. The process of model fitting and variable selection revealed that the past emphasis on DBH as a predictor of mortality, while producing models with strong fit statistics, may make models less generalizable. Evaluating the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that are clustered. The results show a systematic bias even when all of the authors' assumptions are met.
I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables such as diameter and competition, and for the interval-censored nature of data collected from remeasured plots. Its performance is compared with that of the traditional logistic regression model as a tool for predicting individual-tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I show that the proposed techniques do increase the accuracy of individual-tree mortality models and are a promising first step toward the next generation of background mortality models. I also identify the next steps needed to advance mortality models further.
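The core of the RC correction described above can be sketched in a few lines. Under the classical error model W = X + U, the naive slope is attenuated by the reliability ratio, and first-order regression calibration divides it back out. A minimal sketch, with illustrative numbers rather than values fitted in this study:

```python
def regression_calibration_slope(beta_naive, var_x, var_u):
    """First-order regression calibration for classical measurement
    error W = X + U with U independent and mean zero: the naive slope
    is attenuated by the reliability ratio
    lambda = var_x / (var_x + var_u), so dividing by lambda recovers
    (approximately, for logistic models) the slope on the true
    covariate. Equivalent to refitting with E[X|W]."""
    reliability = var_x / (var_x + var_u)
    return beta_naive / reliability

# e.g. a competition coefficient of -0.30 estimated from error-prone
# data, with error variance half the true covariate variance:
corrected = regression_calibration_slope(-0.30, var_x=4.0, var_u=2.0)
```

This simple ratio form is only the linear-approximation version of RC; the dissertation's implementation additionally needs the Stage and Wykoff variance estimate to supply var_u for each tree.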
Abstract:
OBJECTIVES: To synthesize the evidence on the risk of HIV transmission through unprotected sexual intercourse according to viral load and treatment with combination antiretroviral therapy (ART). DESIGN: Systematic review and meta-analysis. METHODS: We searched Medline, Embase, and conference abstracts from 1996 to 2009. We included longitudinal studies of serodiscordant couples reporting on HIV transmission according to plasma viral load or use of ART, and used random-effects Poisson regression models to obtain summary transmission rates with 95% confidence intervals (CI). If there were no transmission events, we estimated an upper 97.5% confidence limit. RESULTS: We identified 11 cohorts reporting on 5021 heterosexual couples and 461 HIV transmission events. The overall transmission rate from ART-treated patients was 0.46 (95% CI 0.19-1.09) per 100 person-years, based on five events. The transmission rate from a seropositive partner with a viral load below 400 copies/ml was zero on ART, with an upper 97.5% confidence limit of 1.27 per 100 person-years (based on two studies), and 0.16 (95% CI 0.02-1.13) per 100 person-years if not on ART (based on five studies and one event). There were insufficient data to calculate rates according to the presence or absence of sexually transmitted infections, condom use, or vaginal or anal intercourse. CONCLUSION: Studies of heterosexual discordant couples observed no transmission in patients treated with ART and with viral load below 400 copies/ml, but the data were compatible with one transmission per 79 person-years. Further studies are needed to better define the risk of HIV transmission from patients on ART.
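The zero-event upper limit quoted above follows from the exact Poisson bound: with no events observed in T person-years, the upper limit solves exp(-rate·T) = 1 - level, i.e. rate = -ln(1 - level)/T. A minimal sketch; the 290 person-years figure below is back-calculated for illustration, not a number reported in the review:

```python
import math

def poisson_zero_event_upper(person_years, level=0.975):
    """Exact upper confidence limit for a Poisson rate when zero
    events are observed in `person_years` of follow-up:
    rate = -ln(1 - level) / person_years."""
    return -math.log(1.0 - level) / person_years

# Roughly 290 person-years with zero events yields an upper 97.5%
# limit near 1.27 per 100 person-years, i.e. the review's headline
# figure of "one transmission per 79 person-years" (100 / 1.27).
upper = poisson_zero_event_upper(290.0) * 100  # per 100 person-years
```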
Abstract:
PURPOSE: To assess the literature on accuracy and clinical performance of computer technology applications in surgical implant dentistry. MATERIALS AND METHODS: Electronic and manual literature searches were conducted to collect information about (1) the accuracy and (2) clinical performance of computer-assisted implant systems. Meta-regression analysis was performed for summarizing the accuracy studies. Failure/complication rates were analyzed using random-effects Poisson regression models to obtain summary estimates of 12-month proportions. RESULTS: Twenty-nine different image guidance systems were included. From 2,827 articles, 13 clinical and 19 accuracy studies were included in this systematic review. The meta-analysis of the accuracy (19 clinical and preclinical studies) revealed a total mean error of 0.74 mm (maximum of 4.5 mm) at the entry point in the bone and 0.85 mm at the apex (maximum of 7.1 mm). For the 5 included clinical studies (total of 506 implants) using computer-assisted implant dentistry, the mean failure rate was 3.36% (0% to 8.45%) after an observation period of at least 12 months. In 4.6% of the treated cases, intraoperative complications were reported; these included limited interocclusal distances to perform guided implant placement, limited primary implant stability, or need for additional grafting procedures. CONCLUSION: Differing levels and quantity of evidence were available for computer-assisted implant placement, revealing high implant survival rates after only 12 months of observation in different indications and a reasonable level of accuracy. However, future long-term clinical data are necessary to identify clinical indications and to justify additional radiation doses, effort, and costs associated with computer-assisted implant surgery.
Abstract:
AIM: The purpose of this study was to systematically review the literature on the survival rates of palatal implants, Onplants®, miniplates and miniscrews. MATERIAL AND METHODS: An electronic MEDLINE search supplemented by manual searching was conducted to identify randomized clinical trials and prospective and retrospective cohort studies on palatal implants, Onplants®, miniplates and miniscrews with a mean follow-up time of at least 12 weeks and at least 10 units per modality examined clinically at a follow-up visit. Assessment of studies and data abstraction was performed independently by two reviewers. Reported failures of the devices used were analyzed using random-effects Poisson regression models to obtain summary estimates and 95% confidence intervals (CI) of failure and survival proportions. RESULTS: The search up to January 2009 provided 390 titles and 71 abstracts, with full-text analysis of 34 articles yielding 27 studies that met the inclusion criteria. In the meta-analysis, the failure rate was 17.2% for Onplants® (95% CI: 5.9-35.8%), 10.5% for palatal implants (95% CI: 6.1-18.1%), 16.4% for miniscrews (95% CI: 13.4-20.1%) and 7.3% for miniplates (95% CI: 5.4-9.9%). Miniplates and palatal implants, representing torque-resisting temporary anchorage devices (TADs), when grouped together showed a 1.92-fold (95% CI: 1.06-2.78) lower clinical failure rate than miniscrews. CONCLUSION: Based on the available evidence in the literature, palatal implants and miniplates showed comparable survival rates of ≥90% over a period of at least 12 weeks, and yielded survival superior to that of miniscrews. Palatal implants and miniplates for temporary anchorage provide reliable absolute orthodontic anchorage. If the intended orthodontic treatment would require multiple miniscrew placements to provide adequate anchorage, the reliability of such systems is questionable.
For patients who are undergoing extensive orthodontic treatment, force vectors may need to be varied or the roots of the teeth to be moved may need to slide past the anchors. In this context, palatal implants or miniplates should be the TADs of choice.
Abstract:
OBJECTIVES: The objective of this systematic review was to assess the 5-year survival rates and incidences of complications associated with ceramic abutments and to compare them with those of metal abutments. METHODS: An electronic Medline search complemented by manual searching was conducted to identify randomized-controlled clinical trials, and prospective and retrospective studies providing information on ceramic and metal abutments with a mean follow-up time of at least 3 years. Patients had to have been examined clinically at the follow-up visit. Assessment of the identified studies and data abstraction was performed independently by three reviewers. Failure rates were analyzed using standard and random-effects Poisson regression models to obtain summary estimates of 5-year survival proportions. RESULTS: Twenty-nine clinical and 22 laboratory studies were selected from an initial yield of 7136 titles and data were extracted. The estimated 5-year survival rate of ceramic abutments was 99.1% [95% confidence interval (CI): 93.8-99.9%] and 97.4% (95% CI: 96-98.3%) for metal abutments. The estimated cumulative incidence of technical complications after 5 years was 6.9% (95% CI: 3.5-13.4%) for ceramic abutments and 15.9% (95% CI: 11.6-21.5%) for metal abutments. Abutment screw loosening was the most frequent technical problem, occurring at an estimated cumulative incidence after 5 years of 5.1% (95% CI: 3.3-7.7%). All-ceramic crowns supported by ceramic abutments exhibited similar annual fracture rates as metal-ceramic crowns supported by metal abutments. The cumulative incidence of biological complications after 5 years was estimated at 5.2% (95% CI: 0.4-52%) for ceramic and 7.7% (95% CI: 4.7-12.5%) for metal abutments. Esthetic complications tended to be more frequent at metal abutments. A meta-analysis of the laboratory data was impossible due to the non-standardized test methods of the studies included. 
CONCLUSION: The 5-year survival rates estimated from annual failure rates appeared to be similar for ceramic and metal abutments. The information included in this review did not provide evidence for differences of the technical and biological outcomes of ceramic and metal abutments. However, the information for ceramic abutments was limited in the number of studies and abutments analyzed as well as the accrued follow-up time. Standardized methods for the analysis of abutment strength are needed.
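Under a constant-rate (exponential) assumption, the annual failure rates and 5-year survival proportions summarized above are related by S(5) = exp(-5·rate). A short sketch of the conversion, using the review's 99.1% estimate for ceramic abutments:

```python
import math

def five_year_survival(annual_failure_rate):
    """Cumulative 5-year survival implied by a constant annual
    failure rate (exponential assumption): S(5) = exp(-5 * rate)."""
    return math.exp(-5.0 * annual_failure_rate)

def annual_rate_from_survival(s5):
    """Inverse: the constant annual failure rate implied by an
    observed 5-year survival proportion."""
    return -math.log(s5) / 5.0

# The 99.1% 5-year survival of ceramic abutments corresponds to
# roughly 0.18 failures per 100 abutment-years under this assumption.
rate_per_100 = annual_rate_from_survival(0.991) * 100
```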
Abstract:
OBJECTIVES Femoroacetabular impingement is proposed to cause early osteoarthritis (OA) in the non-dysplastic hip. We previously reported on the prevalence of femoral deformities in a young asymptomatic male population. The aim of this study was to determine the prevalence of both femoral and acetabular types of impingement in young females. METHODS We conducted a population-based cross-sectional study of asymptomatic young females. All participants completed a set of questionnaires and underwent clinical examination of the hip. A random sample was subsequently invited to undergo magnetic resonance imaging (MRI) of the hip. All MRIs were read for cam-type deformities, increased acetabular depth, labral lesions, and impingement pits. The prevalences of cam-type deformities and increased acetabular depth were estimated, and relationships between deformities and signs of joint damage were examined using logistic regression models. RESULTS The study included 283 subjects; 80 asymptomatic females with a mean age of 19.3 years attended MRI. Fifteen showed some evidence of cam-type deformities, but none were scored as definite. The overall prevalence was therefore 0% [95% confidence interval (95% CI) 0-5%]. The prevalence of increased acetabular depth was 10% (95% CI 5-19%). No association was found between increased acetabular depth and decreased internal rotation of the hip. Increased acetabular depth was not associated with signs of labral damage. CONCLUSIONS Definite cam-type deformities in women are rare compared with men, whereas the prevalence of increased acetabular depth is higher, suggesting that femoroacetabular impingement has different gender-related biomechanical mechanisms.
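The 0-5% interval for a finding seen in 0 of 80 subjects follows from the exact (Clopper-Pearson) bound for a zero numerator, which has the closed form 1 - (alpha/2)^(1/n). A minimal sketch:

```python
def zero_numerator_upper(n, alpha=0.05):
    """Exact (Clopper-Pearson) upper bound of the two-sided
    (1 - alpha) confidence interval for a proportion when 0 of n
    subjects show the finding: 1 - (alpha/2)**(1/n)."""
    return 1.0 - (alpha / 2.0) ** (1.0 / n)

# 0 definite cam-type deformities among the 80 imaged women gives an
# upper bound of about 4.5%, i.e. the reported 0-5% interval.
upper = zero_numerator_upper(80)
```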
Abstract:
Background: Accelerometry has been established as an objective method that can be used to assess physical activity behavior in large groups. The purpose of the current study was to provide a validated equation to translate accelerometer counts of the triaxial GT3X into energy expenditure in young children. Methods: Thirty-two children aged 5–9 years performed locomotor and play activities that are typical for their age group. Children wore a GT3X accelerometer and their energy expenditure was measured with indirect calorimetry. Twenty-one children were randomly selected to serve as development group. A cubic 2-regression model involving separate equations for locomotor and play activities was developed on the basis of model fit. It was then validated using data of the remaining children and compared with a linear 2-regression model and a linear 1-regression model. Results: All 3 regression models produced strong correlations between predicted and measured MET values. Agreement was acceptable for the cubic model and good for both linear regression approaches. Conclusions: The current linear 1-regression model provides valid estimates of energy expenditure for ActiGraph GT3X data for 5- to 9-year-old children and shows equal or better predictive validity than a cubic or a linear 2-regression model.
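The structure of a linear 2-regression model, with separate prediction equations for locomotor and play epochs, can be sketched as follows. The coefficients here are hypothetical placeholders, not the fitted values from this study, and the locomotor/play classification step is taken as given:

```python
def predict_mets(counts, is_locomotor,
                 loco=(1.0, 0.0006), play=(1.5, 0.0004)):
    """Sketch of a linear 2-regression energy-expenditure model:
    one (intercept, slope) pair for locomotor epochs and another for
    play epochs, applied to accelerometer counts per epoch.
    Coefficients are illustrative placeholders only."""
    intercept, slope = loco if is_locomotor else play
    return intercept + slope * counts

met_walk = predict_mets(1000, is_locomotor=True)
met_play = predict_mets(1000, is_locomotor=False)
```

A 1-regression model, as favored in the study's conclusion, is the special case with a single coefficient pair applied to all epochs.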
Abstract:
BACKGROUND Empirical research has illustrated an association between study size and relative treatment effects, but conclusions have been inconsistent about the association of study size with risk of bias items. Small studies generally give imprecisely estimated treatment effects, and study variance can serve as a surrogate for study size. METHODS We conducted a network meta-epidemiological study analyzing 32 networks including 613 randomized controlled trials, and used Bayesian network meta-analysis and meta-regression models to evaluate the impact of trial characteristics and study variance on the results of network meta-analysis. We examined changes in relative effects and between-studies variation in network meta-regression models as a function of the variance of the observed effect size and indicators for the adequacy of each risk of bias item. Adjustment was performed both within and across networks, allowing for between-networks variability. RESULTS Imprecise studies with large variances tended to exaggerate the effects of the active or new intervention in the majority of networks, with a ratio of odds ratios of 1.83 (95% CI: 1.09-3.32). Inappropriate or unclear conduct of random sequence generation and allocation concealment, as well as lack of blinding of patients and outcome assessors, did not materially affect the summary results. Imprecise studies also appeared to be more prone to inadequate conduct. CONCLUSIONS Compared with more precise studies, studies with large variance may give substantially different answers that alter the results of network meta-analyses for dichotomous outcomes.
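The idea of adjusting for study precision can be illustrated by regressing effect sizes on their variances and extrapolating to an ideally precise (zero-variance) study. The paper used Bayesian hierarchical network models, so the unweighted least-squares sketch below, on made-up numbers, only shows the core idea:

```python
def precision_adjusted_effect(effects, variances):
    """Regress effect sizes (e.g. log odds ratios) on their variances
    by ordinary least squares. The intercept is the effect
    extrapolated to a zero-variance study; the slope captures
    small-study effects. Illustration only -- not the paper's
    Bayesian network meta-regression."""
    n = len(effects)
    mx = sum(variances) / n
    my = sum(effects) / n
    sxx = sum((v - mx) ** 2 for v in variances)
    sxy = sum((v - mx) * (e - my) for v, e in zip(variances, effects))
    slope = sxy / sxx
    return my - slope * mx, slope  # (intercept, slope)

# Hypothetical log odds ratios that grow with study variance:
intercept, slope = precision_adjusted_effect([0.2, 0.3, 0.5],
                                             [0.05, 0.1, 0.2])
```

A positive slope, as in this contrived example, corresponds to imprecise studies exaggerating the effect of the newer intervention.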
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared with methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus, 1993). Multilevel models, also known as random-effects or random-components models, can account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining sample sizes at each level. This study investigates the effect of the design and analysis of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution. Results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If the PQL estimation algorithm fails to converge and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.
Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989-996.
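The data-generating model behind these simulations, observations (level 1) nested in clusters (level 2) nested in groups (level 3), with normally distributed random intercepts and a Poisson outcome, can be sketched as follows; the parameter values are illustrative, not those of the dissertation:

```python
import math
import random

def simulate_three_level_poisson(n3=10, n2=5, n1=20, b0=0.5,
                                 sd3=0.3, sd2=0.3, seed=42):
    """Generate (group, cluster, count) triples for a 3-level design:
    y ~ Poisson(exp(b0 + u3 + u2)), with u3 ~ N(0, sd3^2) shared by a
    group and u2 ~ N(0, sd2^2) shared by a cluster within it."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for small rates.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    data = []
    for g in range(n3):
        u3 = rng.gauss(0.0, sd3)
        for c in range(n2):
            u2 = rng.gauss(0.0, sd2)
            lam = math.exp(b0 + u3 + u2)
            for _ in range(n1):
                data.append((g, c, poisson(lam)))
    return data

data = simulate_three_level_poisson()
```

Varying n3, n2, and n1 while holding the total n3·n2·n1 fixed reproduces the kind of sampling-strategy comparison the study performs.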
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs, often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups rather than individuals. The application of random-effects models to group-randomized trials requires the specification of the fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g., number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared with results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power.
Negative biases were detected when estimating the sample ICC, and these increased dramatically in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e., 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
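The sample ICC examined in these simulations is typically computed from one-way ANOVA mean squares; a minimal sketch for equal-sized groups:

```python
def anova_icc(groups):
    """One-way ANOVA estimator of the intraclass correlation for
    equal-sized groups: ICC = (MSB - MSW) / (MSB + (m - 1) * MSW),
    where MSB/MSW are between- and within-group mean squares and m is
    the group size. Can be negative in samples even when the true
    ICC is non-negative, which is the bias discussed above."""
    k = len(groups)           # number of groups
    m = len(groups[0])        # observations per group
    grand = sum(sum(g) for g in groups) / (k * m)
    means = [sum(g) / m for g in groups]
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((x - mu) ** 2
              for g, mu in zip(groups, means) for x in g) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)
```

For example, two groups with all variation between them give ICC = 1, while two groups with purely within-group variation give a negative estimate.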
Abstract:
robreg provides a number of robust estimators for linear regression models. Among them are the high breakdown-point and high efficiency MM-estimator, the Huber and bisquare M-estimator, and the S-estimator, each supporting classic or robust standard errors. Furthermore, basic versions of the LMS/LQS (least median of squares) and LTS (least trimmed squares) estimators are provided. Note that the moremata package, also available from SSC, is required.
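As an illustration of the M-estimation that robreg implements (robreg itself is a Stata package with proper standard errors and many more options), the following is a minimal Python analogue of a Huber M-estimator for simple regression, fitted by iteratively reweighted least squares:

```python
def huber_m_estimate(x, y, k=1.345, iters=30):
    """Huber M-estimator for simple linear regression via IRLS.
    Residuals within k robust-scale units get full weight; larger
    residuals are downweighted by k*s/|r|. Illustrative sketch only,
    not a reimplementation of robreg."""
    n = len(x)
    a, b = 0.0, 0.0  # intercept, slope
    for _ in range(iters):
        r = [yi - a - b * xi for xi, yi in zip(x, y)]
        # Robust scale: median absolute residual / 0.6745 (MAD-type).
        s = sorted(abs(ri) for ri in r)[n // 2] / 0.6745 or 1.0
        w = [1.0 if abs(ri) <= k * s else k * s / abs(ri) for ri in r]
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my)
                  for wi, xi, yi in zip(w, x, y))
        b = sxy / sxx
        a = my - b * mx
    return a, b

# Nine points on the line y = 2x + 1 plus one gross outlier:
xs = list(range(10))
ys = [2.0 * xi + 1.0 for xi in xs]
ys[9] = 100.0
a_hat, b_hat = huber_m_estimate(xs, ys)  # close to (1, 2)
```

Ordinary least squares on the same data would be dragged far from the true line by the single outlier; the Huber weights progressively discount it.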
Abstract:
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution is an alternative to parametric survival models for modeling life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. We then model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
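The skew-normal family used here has density 2φ(x)Φ(αx); negative α produces the left skew typical of age at death, and α = 0 recovers the normal distribution. A minimal sketch of the standard (uncensored, untruncated) density:

```python
import math

def skew_normal_pdf(x, alpha):
    """Density of the standard skew-normal distribution:
    f(x) = 2 * phi(x) * Phi(alpha * x), where phi/Phi are the
    standard normal pdf/cdf. alpha < 0 gives a negatively skewed
    distribution; alpha = 0 is the standard normal."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    big_phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * big_phi

# Riemann-sum check that the density integrates to ~1 for alpha = -4:
area = sum(skew_normal_pdf(-8.0 + i * 0.01, -4.0)
           for i in range(1601)) * 0.01
```

The paper's regression model additionally adds location/scale parameters, covariates, and likelihood contributions for censored and left-truncated observations, which are not shown here.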
Abstract:
AIMS The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). METHODS AND RESULTS Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. CONCLUSION We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients.
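The inverse-probability-of-treatment weighting used to control confounding can be sketched as follows; the "atc" target corresponds to standardizing to the population who actually received APT, the alternative weighting scheme mentioned in the results. This is a simplified sketch of the weighting step only, not the paper's full propensity model:

```python
def iptw_weights(treated, ps, target="ate"):
    """Inverse-probability-of-treatment weights from propensity
    scores `ps` (probability of receiving OAC, say).

    target="ate": reweight everyone to the full population
                  (1/ps for treated, 1/(1-ps) for controls);
    target="atc": standardize to the control (APT) population
                  ((1-ps)/ps for treated, 1 for controls)."""
    weights = []
    for t, p in zip(treated, ps):
        if target == "ate":
            weights.append(1.0 / p if t else 1.0 / (1.0 - p))
        else:  # "atc"
            weights.append((1.0 - p) / p if t else 1.0)
    return weights

w_ate = iptw_weights([1, 0], [0.5, 0.25])
w_atc = iptw_weights([1, 0], [0.2, 0.5], target="atc")
```

The weights are then passed to a weighted Cox model per database, and the database-specific hazard ratios are pooled by random-effects meta-analysis.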
Abstract:
BACKGROUND Anxiety disorders have been linked to an increased risk of incident coronary heart disease in which inflammation plays a key pathogenic role. To date, no studies have looked at the association between proinflammatory markers and agoraphobia. METHODS In a random Swiss population sample of 2890 persons (35-67 years, 53% women), we diagnosed a total of 124 individuals (4.3%) with agoraphobia using a validated semi-structured psychiatric interview. We also assessed socioeconomic status, traditional cardiovascular risk factors (i.e., body mass index, hypertension, blood glucose levels, total cholesterol/high-density lipoprotein-cholesterol ratio), and health behaviors (i.e., smoking, alcohol consumption, and physical activity), and other major psychiatric diseases (other anxiety disorders, major depressive disorder, drug dependence) which were treated as covariates in linear regression models. Circulating levels of inflammatory markers, statistically controlled for the baseline demographic and health-related measures, were determined at a mean follow-up of 5.5 ± 0.4 years (range 4.7 - 8.5). RESULTS Individuals with agoraphobia had significantly higher follow-up levels of C-reactive protein (p = 0.007) and tumor-necrosis-factor-α (p = 0.042) as well as lower levels of the cardioprotective marker adiponectin (p = 0.032) than their non-agoraphobic counterparts. Follow-up levels of interleukin (IL)-1β and IL-6 did not significantly differ between the two groups. CONCLUSIONS Our results suggest an increase in chronic low-grade inflammation in agoraphobia over time. Such a mechanism might link agoraphobia with an increased risk of atherosclerosis and coronary heart disease, and needs to be tested in longitudinal studies.
Abstract:
BACKGROUND Quality of life (QoL) is a subjective perception whose components may vary in importance between individuals. Little is known about which domains of QoL older people deem most important. OBJECTIVE This study investigated in community-dwelling older people the relationships between the importance given to domains defining their QoL and socioeconomic, demographic and health status. METHODS Data were compiled from older people enrolled in the Lc65+ cohort study and two additional, population-based, stratified random samples (n = 5,300). Principal components analysis (PCA) was used to determine the underlying domains among 28 items that participants defined as important to their QoL. The components extracted were used as dependent variables in multiple linear regression models to explore their associations with socioeconomic, demographic and health status. RESULTS PCA identified seven domains that older persons considered important to their QoL. In order of importance (highest to lowest): feeling of safety, health and mobility, autonomy, close entourage, material resources, esteem and recognition, and social and cultural life. A total of six and five domains of importance were significantly associated with education and depressive symptoms, respectively. The importance of material resources was significantly associated with a good financial situation (β = 0.16, P = 0.011), as was close entourage with living with others (β = 0.20, P = 0.007) and as was health and mobility with age (β = -0.16, P = 0.014). CONCLUSION The importance older people give to domains of their QoL appears strongly related to their actual resources and experienced losses. These findings may help clinicians, researchers and policy makers better adapt strategies to individuals' needs.
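The first step of a PCA like the one used here, extracting the leading component of the item covariance matrix, can be sketched with power iteration. This is a minimal sketch of a single unrotated component, not the seven-component solution reported in the paper:

```python
def first_principal_component(rows, iters=200):
    """Leading eigenvector of the sample covariance matrix of `rows`
    (observations x variables), found by power iteration. The entries
    of the returned unit vector are the loadings of the first
    (unrotated) principal component."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(ci[a] * ci[b] for ci in centered) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two perfectly correlated items load equally on the first component:
v = first_principal_component([[1.0, 1.0], [2.0, 2.0],
                               [3.0, 3.0], [4.0, 4.0]])
```

In the study itself, the extracted components then become the dependent variables of the multiple linear regression models on socioeconomic, demographic and health status.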