856 results for Random regression models
Abstract:
BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a first-order random walk was used to account for multiple follow-up outcome data within a trial. Preparations that used different total daily doses were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates, assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes, covering seven different NSAIDs or paracetamol at specific daily doses, or placebo, were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo. For six interventions (diclofenac 150 mg/day; etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day; and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference from placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability of being the best intervention, both with 100% probability of reaching the minimum clinically important difference. Treatment effects increased as drug dose increased, but the corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis, irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients.
FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
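The methods above rest on a Bayesian random-effects model with a random effect at the trial level. As a minimal, purely illustrative sketch of the random-effects pooling idea (not the authors' multivariable network meta-analysis, dose-response covariates, or first-order random walk over follow-up times), the fragment below applies DerSimonian-Laird pooling to a handful of fabricated effect sizes.

```python
# Minimal random-effects meta-analysis sketch (DerSimonian-Laird pooling).
# Synthetic effect sizes; illustrates the trial-level random effect only,
# not the full Bayesian network meta-analysis described in the abstract.
import numpy as np

# Hypothetical standardised mean differences (treatment vs placebo) and variances
es = np.array([-0.45, -0.60, -0.38, -0.55, -0.50])
var = np.array([0.010, 0.015, 0.012, 0.020, 0.018])

# Fixed-effect weights and Cochran's Q statistic
w = 1.0 / var
q = np.sum(w * (es - np.sum(w * es) / np.sum(w)) ** 2)

# Method-of-moments estimate of the between-trial variance tau^2
k = len(es)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights and pooled effect with a 95% interval
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * es) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled ES = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```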
Abstract:
Background. Over the past 30 years, the prevalence of overweight among children and adolescents has increased across the United States (Barlow et al., 2007; Ogden, Flegal, Carroll, & Johnson, 2002). Childhood obesity is linked with adverse physiological and psychological issues in youth and affects ethnic/minority populations at disproportionate rates (Barlow et al., 2007; Butte et al., 2006; Butte, Cai, Cole, Wilson, Fisher, Zakeri, Ellis, & Comuzzie, 2007). More importantly, overweight in children and youth tends to track into adulthood (McNaughton, Ball, Mishra, & Crawford, 2008; Ogden et al., 2002). Childhood obesity affects body systems such as the cardiovascular, respiratory, gastrointestinal, and endocrine systems, as well as emotional health (Barlow et al., 2007; Ogden et al., 2002). Several dietary factors have been associated with the development of obesity in children; however, these factors have not been fully elucidated, especially in ethnic/minority children. In particular, few studies have been done to determine the effects of different meal patterns on the development of obesity in children. Purpose. The purpose of the study was to examine the relationships between the daily proportions of energy consumed and energy derived from fat across breakfast, lunch, dinner, and snacks, and obesity among Hispanic children and adolescents. Methods. A cross-sectional design was used to evaluate the relationship between dietary patterns and overweight status in Hispanic children and adolescents 4-19 years of age who participated in the Viva La Familia Study. The goal of the Viva La Familia Study was to evaluate genetic and environmental factors affecting childhood obesity and its co-morbidities in the Hispanic population (Butte et al., 2006, 2007). The study enrolled 1030 Hispanic children and adolescents from 319 families and examined factors related to increased body weight by focusing on a multilevel analysis of extensive sociodemographic, genetic, metabolic, and behavioral data. Baseline dietary intakes of the children were collected using 24-hour recalls, and body mass index was calculated from measured height and weight and classified using the CDC standards. Dietary data were analyzed using a GEE population-averaged panel-data model with the family identifier as the cluster variable to account for possible correlations within related data sets. A linear regression model was used to analyze associations of dietary patterns with possible covariates, and to examine the percentage of daily energy coming from breakfast, lunch, dinner, and snacks while adjusting for age, sex, and BMI z-score. Random-effects logistic regression models were used to determine the relationship of the dietary variables with obesity status and to examine whether the percent energy intake (%EI) derived from fat across all meals (breakfast, lunch, dinner, and snacks) affected obesity. Results. Among the children aged 4-19 years, older children consumed a higher percentage of energy at lunch and dinner and a lower percentage of energy from snacks than younger children. Age was significantly associated with the percentage of total energy intake (%TEI) for lunch as well as dinner, while no association was found with gender. The percentage of energy consumed at dinner differed significantly by obesity status, with obese children consuming more energy at dinner (p = 0.03), but no associations were found between percent energy from fat and obesity across all meals. Conclusions.
Information from this study can be used to develop interventions that target dietary intake patterns in obesity prevention programs for Hispanic children and adolescents. In particular, intervention programs for children should target dietary patterns in which energy intake is spread throughout the day and shifted earlier in the day. These results indicate that a longitudinal study should be used to further explore the relationship between dietary patterns and BMI in this and other populations (Dubois et al., 2008; Rodriquez & Moreno, 2006; Thompson et al., 2005; Wilson et al., in review, 2008).
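As a rough sketch of the cluster-aware analyses described in the Methods, the fragment below fits a population-averaged (GEE) logistic model with an exchangeable correlation structure over a family identifier, using synthetic data; the variable names (family_id, pct_energy_dinner, obese) are illustrative placeholders, not the Viva La Familia dataset.

```python
# Sketch of a population-averaged (GEE) logistic model with family clustering,
# on synthetic data; variable names are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "family_id": rng.integers(0, 120, n),          # cluster variable
    "age": rng.uniform(4, 19, n),
    "pct_energy_dinner": rng.normal(30, 8, n),     # % of daily energy at dinner
})
logit = -3.0 + 0.05 * df["pct_energy_dinner"] + 0.02 * df["age"]
df["obese"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit.to_numpy())))

model = smf.gee("obese ~ pct_energy_dinner + age",
                groups="family_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```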
Abstract:
Numerous techniques exist for the quantification of nitrates, and there is no consensus among analysts on which is the most suitable. For this reason, four methods for the determination of nitrates in plant samples were compared in order to evaluate the correlation between them and to establish guidelines for their use. A total of 690 lettuce samples (Lactuca sativa L. var. capitata), of the crisphead and butterhead types, collected over one year at the Mercado Cooperativo de Guaymallén (Mendoza, Argentina), were used. Based on the nitrate levels found in the studied population, a proportional stratified random sub-sampling was carried out to obtain a number of samples that represented the variability of the whole population. Four methods were used for nitrate determination: (1) steam distillation, considered the reference method; (2) colorimetry by nitration with salicylic acid; (3) modified colorimetry; and (4) potentiometry with an ion-selective electrode. Different regression models between the reference method and the other three were tested, with the linear model providing the best fit in all cases. The methods studied behaved similarly. The highest correlation (r2 = 93%) was observed between steam distillation and potentiometry; nevertheless, the other methods also showed high correlations. Consequently, the choice of analytical procedure will depend mainly on the number of samples to be analysed, the time required for the analysis, and its cost.
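A minimal sketch of the method-comparison regression described above, assuming synthetic paired measurements in place of the study's lettuce samples: the alternative method is regressed on the reference method (steam distillation) and the slope, intercept and r² are reported.

```python
# Linear regression of an alternative nitrate method against the reference
# method (steam distillation), on synthetic paired measurements.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
reference = rng.uniform(200, 3000, 60)                    # hypothetical nitrate levels
alternative = 0.95 * reference + 50 + rng.normal(0, 80, 60)

fit = linregress(reference, alternative)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.1f}, "
      f"r^2 = {fit.rvalue ** 2:.3f}")
```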
Abstract:
Sediment samples and hydrographic conditions were studied at 28 stations around Iceland. At these sites, Conductivity-Temperature-Depth (CTD) casts were conducted to collect hydrographic data, and multicorer casts were conducted to collect data on sediment characteristics, including grain size distribution, carbon and nitrogen concentration, and chloroplastic pigment concentration. A total of 14 environmental predictors were used to model sediment characteristics around Iceland across regional geographic space. Two approaches were used: Multivariate Adaptive Regression Splines (MARS) and randomForest regression models. RandomForest outperformed MARS in predicting grain size distribution. MARS models had a greater tendency to over- and underpredict sediment values in areas outside the environmental envelope defined by the training dataset. We provide the first GIS layers of sediment characteristics around Iceland, which can be used as predictors in future models. Although the models performed well, more samples, especially from the shelf areas, will be needed to improve the models in the future.
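A compact sketch of the random-forest half of such a workflow, on synthetic data with placeholder predictors (depth, bottom temperature, salinity) rather than the study's 14 environmental layers, using scikit-learn's RandomForestRegressor with a held-out split for evaluation.

```python
# Random forest regression of a sediment property on environmental predictors,
# illustrated on synthetic data with placeholder variable names.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([
    rng.uniform(0, 2000, n),     # depth (m)
    rng.uniform(-1, 8, n),       # bottom temperature (deg C)
    rng.uniform(33, 35.5, n),    # salinity
])
y = 60 - 0.02 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 5, n)  # hypothetical % mud

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("test R^2:", round(r2_score(y_test, rf.predict(X_test)), 3))
```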
Abstract:
Background: Hospital performance reports based on administrative data should distinguish differences in quality of care between hospitals from case-mix related variation and random error effects. A study was undertaken to determine which of 12 diagnosis-outcome indicators measured across all hospitals in one state had significant risk-adjusted systematic (or special cause) variation (SV) suggesting differences in quality of care. For those that did, we determined whether SV persists within hospital peer groups, whether indicator results correlate at the individual hospital level, and how many adverse outcomes would be avoided if all hospitals achieved indicator values equal to the best performing 20% of hospitals. Methods: All patients admitted during a 12 month period to 180 acute care hospitals in Queensland, Australia with heart failure (n = 5745), acute myocardial infarction (AMI) (n = 3427), or stroke (n = 2955) were entered into the study. Outcomes comprised in-hospital deaths, long hospital stays, and 30 day readmissions. Regression models produced standardised, risk-adjusted, diagnosis-specific outcome event ratios for each hospital. Systematic and random variation in ratio distributions for each indicator were then apportioned using hierarchical statistical models. Results: Only five of 12 (42%) diagnosis-outcome indicators showed significant SV across all hospitals (long stays and same-diagnosis readmissions for heart failure; in-hospital deaths and same-diagnosis readmissions for AMI; and in-hospital deaths for stroke). Significant SV was seen for only two indicators within hospital peer groups (same-diagnosis readmissions for heart failure in tertiary hospitals and in-hospital mortality for AMI in community hospitals). Only two pairs of indicators showed significant correlation. If all hospitals emulated the best performers, at least 20% of AMI and stroke deaths, heart failure long stays, and heart failure and AMI readmissions could be avoided. Conclusions: Diagnosis-outcome indicators based on administrative data require validation as markers of significant risk-adjusted SV. Validated indicators allow quantification of realisable outcome benefits if all hospitals achieved best-performer levels. The overall level of quality of care within single institutions cannot be inferred from the results of one or a few indicators.
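One simple analogue of apportioning systematic (between-hospital) and random variation, sketched below on synthetic patient-level data, is a random-intercept model for log length of stay in which the estimated group variance summarises between-hospital variation; the variable names are placeholders and the model is far simpler than the hierarchical models used in the study.

```python
# Random-intercept (variance components) sketch: between-hospital variance
# as a summary of systematic variation, on synthetic patient-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_hosp, n_per = 40, 50
hospital_effect = rng.normal(0, 0.15, n_hosp)          # systematic variation
rows = []
for h in range(n_hosp):
    age = rng.uniform(40, 90, n_per)
    log_los = 1.5 + 0.01 * age + hospital_effect[h] + rng.normal(0, 0.4, n_per)
    rows.append(pd.DataFrame({"hospital": h, "age": age, "log_los": log_los}))
df = pd.concat(rows, ignore_index=True)

md = smf.mixedlm("log_los ~ age", data=df, groups=df["hospital"])
fit = md.fit()
print(fit.summary())          # "Group Var" estimates between-hospital variance
```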
Abstract:
Many studies on birds focus on the collection of data through an experimental design suitable for investigation in a classical analysis of variance (ANOVA) framework. Although many findings are confirmed by one or more experts, expert information is rarely used in conjunction with the survey data to enhance the explanatory and predictive power of the model. We explore this neglected aspect of ecological modelling through a study on Australian woodland birds, focusing on the potential impact of different intensities of commercial cattle grazing on bird density in woodland habitat. We examine a number of Bayesian hierarchical random effects models, implemented in WinBUGS, which cater for overdispersion and a high frequency of zeros in the data, and explore the variation between and within different grazing regimes and species. The impact and value of expert information is investigated through the inclusion of priors that reflect the experience of 20 experts in the field of bird responses to disturbance. Results indicate that expert information moderates the survey data, especially in situations where there are little or no data. When experts agreed, credible intervals for predictions were tightened considerably. When experts failed to agree, results were similar to those evaluated in the absence of expert information. Overall, we found that without expert opinion our knowledge was quite weak. The fact that the survey data are, in general, quite consistent with expert opinion shows that we do know something about birds and grazing, and that we could learn a lot faster if we used this approach more in ecology, where data are scarce. Copyright (c) 2005 John Wiley & Sons, Ltd.
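The moderating effect of expert priors on sparse data can be illustrated with a deliberately simple conjugate normal-normal update (unrelated to the WinBUGS models in the study): with only two fabricated observations, an informative expert prior tightens the posterior considerably, whereas a vague prior leaves it dominated by the sparse data.

```python
# Conjugate normal-normal update: effect of an informative (expert) prior vs a
# vague prior when data are sparse. Purely illustrative numbers.
import numpy as np

def posterior(prior_mean, prior_var, data, obs_var):
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / obs_var)
    return post_mean, post_var

data = np.array([2.1, 3.4])            # very few bird-density observations
obs_var = 4.0

for label, (m0, v0) in {"expert prior": (2.5, 0.5), "vague prior": (0.0, 100.0)}.items():
    m, v = posterior(m0, v0, data, obs_var)
    print(f"{label}: posterior mean = {m:.2f}, "
          f"95% interval half-width = {1.96 * np.sqrt(v):.2f}")
```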
Abstract:
Loading of the femoral neck (FN) is dominated by bending and compressive stresses. We hypothesize that adaptation of the FN to physical activity would be manifested in the cross-sectional area (CSA) and section modulus (Z) of bone, indices of axial and bending strength, respectively. We investigated the influence of physical activity on bone strength during adolescence using 7 years of longitudinal data from 109 boys and 121 girls from the Saskatchewan Paediatric Bone and Mineral Accrual Study (PBMAS). Physical activity data (PAC-Q physical activity inventory) and anthropometric measurements were taken every 6 months, and DXA bone scans were measured annually (Hologic QDR2000, array mode). We applied hip structural analysis to the DXA scans to derive strength and geometric indices of the femoral neck. To control for maturation, we determined a biological maturity age, defined as years from age at peak height velocity (APHV). To account for the repeated-measures-within-individuals nature of longitudinal data, multilevel random effects regression analyses were used to analyze the data. When biological maturity age and body size (height and weight) were controlled, physical activity was a significant positive independent predictor of CSA and Z of the narrow region of the femoral neck in both boys and girls (P < 0.05). There was no independent effect of physical activity on the subperiosteal width of the femoral neck. When leg length and leg lean mass were introduced into the random effects models to control for size and muscle mass of the leg (instead of height and weight), all significant effects of physical activity disappeared. Even among adolescents engaged in normal levels of physical activity, the statistically significant relationship between physical activity and indices of bone strength demonstrates that modifiable lifestyle factors like exercise play an important role in optimizing bone strength during the growing years; the physical activity effects were explained by the interdependence between activity and lean mass. Physical activity is important for the optimal development of bone strength. (c) 2005 Elsevier Inc. All rights reserved.
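A rough sketch of a multilevel (random-intercept) regression for repeated measures of this kind, on synthetic data with placeholder names: a bone-strength index Z is regressed on maturity age, height, weight and a physical activity score, with a random effect for each participant.

```python
# Multilevel random-effects sketch for longitudinal data: repeated annual
# measures nested within participants. Synthetic data, placeholder names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subj, n_visits = 80, 7
rows = []
for s in range(n_subj):
    subj_int = rng.normal(0, 0.3)                             # participant random effect
    maturity = np.arange(n_visits) - 3 + rng.normal(0, 0.2)   # years from APHV
    height = 150 + 5 * maturity + rng.normal(0, 3, n_visits)
    weight = 45 + 4 * maturity + rng.normal(0, 4, n_visits)
    activity = rng.uniform(1, 5, n_visits)                    # activity score
    z = (2.0 + 0.25 * maturity + 0.01 * height + 0.02 * weight
         + 0.05 * activity + subj_int + rng.normal(0, 0.2, n_visits))
    rows.append(pd.DataFrame({"subject": s, "maturity": maturity, "height": height,
                              "weight": weight, "activity": activity, "Z": z}))
df = pd.concat(rows, ignore_index=True)

model = smf.mixedlm("Z ~ maturity + height + weight + activity",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```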
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
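For orientation, the textbook form of such error bars for a linear-in-parameters model with fixed basis functions, a Gaussian weight prior and Gaussian noise is shown below; it is not necessarily the exact bound derived in the paper.

```latex
% Predictive distribution for a Bayesian linear-in-parameters model
% y = w^T phi(x) + eps, with prior w ~ N(0, alpha^{-1} I) and noise precision beta.
\[
  p(y \mid x, \mathcal{D}) = \mathcal{N}\!\left(y \mid m_N^{\top}\phi(x),\; \sigma_N^2(x)\right),
  \qquad
  \sigma_N^2(x) = \underbrace{\beta^{-1}}_{\text{noise}}
    + \underbrace{\phi(x)^{\top} S_N\, \phi(x)}_{\text{error bar from weight uncertainty}},
\]
\[
  S_N^{-1} = \alpha I + \beta\, \Phi^{\top}\Phi,
  \qquad
  m_N = \beta\, S_N \Phi^{\top} \mathbf{t},
\]
% where Phi is the design matrix of basis-function values and t the target vector.
```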
Abstract:
Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights into the domain which can be helpful in developing powerful models, but they need a modelling framework that helps them to use these insights. Data visualisation is an effective technique for presenting data and obtaining feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of the input space, often work better, since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective, and they lack the involvement of domain experts to guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The segmentation output obtained is then used to develop effective local regression models.
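A toy sketch of the local-versus-global contrast on synthetic one-dimensional data: a single global linear fit is compared with two local linear fits on an expert-chosen split of the input space (the split point is an arbitrary placeholder for an expert's segmentation).

```python
# Global linear fit vs two local linear fits on an expert-chosen segmentation.
# Synthetic data whose behaviour differs between the two regions.
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 200)
y = np.where(x < 5, 1.0 + 2.0 * x, 16.0 - 1.0 * x) + rng.normal(0, 0.5, 200)

def fit_and_sse(xs, ys):
    slope, intercept = np.polyfit(xs, ys, 1)
    resid = ys - (slope * xs + intercept)
    return np.sum(resid ** 2)

split = 5.0                       # placeholder for an expert-chosen boundary
left = x < split
global_rmse = np.sqrt(fit_and_sse(x, y) / len(x))
local_rmse = np.sqrt((fit_and_sse(x[left], y[left])
                      + fit_and_sse(x[~left], y[~left])) / len(x))
print("global RMSE:", round(global_rmse, 3))
print("local RMSE :", round(local_rmse, 3))
```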
Abstract:
Based on a statistical mechanics approach, we develop a method for approximately computing average case learning curves and their sample fluctuations for Gaussian process regression models. We give examples for the Wiener process and show that universal relations (that are independent of the input distribution) between error measures can be derived.
Abstract:
The recent history of small shop and independent retailing has been one of decline. The most desirable form of assistance is the provision of information which will increase their efficiency. The main aim of this study is to develop a model of marketing mix effectiveness which may be applied in small-scale retailing; a further aim is to enhance theoretical development in the marketing field. Recent changes in retailing have affected location, product range, pricing and promotion practices. Although a large number of variables representing aspects of the marketing mix may be identified, it is not possible, on the basis of currently available information, to quantify or rank them according to their effect on sales performance. In designing a suitable study, a major issue is that of access to a suitable representative sample of small retailers. The public nature of the retail activities involved facilitates the use of a novel observation approach to data collection. A cross-sectional survey research design was used, focussing on a clustered random sample of greengrocers and gent's fashion outfitters in the West Midlands. Linear multiple regression was the main analytical technique. Powerful regression models were developed for both types of retailing. For greengrocers the major influences on trade are pedestrian traffic and shelf display space. For gent's outfitters they are centrality to other shopping, advertising and shelf display space. The models may be utilised by retailers to determine the relative strength of marketing mix variables. The level of precision is not sufficient to permit cost-benefit analysis. Comparison of the findings for the two distinct kinds of business studied suggests an overall model of marketing mix effectiveness might be based on frequency of purchase, homogeneity of the shopping environment, elasticity of demand and bulk characteristics of the goods sold by a shop.
Abstract:
The Aston Eye Study (AES) was instigated in October 2005 to determine the distribution of refractive error and associated ocular biometry in a sample of UK urban school children. The AES is the first study to compare outcome measures separately in White, South Asian and Black children. Children were selected from two age groups (Year 2 children aged 6/7 years, Year 8 children aged 12/13 years) using random cluster sampling of schools in Birmingham, West Midlands, UK. To date, the AES has examined 598 children (302 Year 2, 296 Year 8). Using open-field cycloplegic autorefraction, the overall prevalence of myopia (≤ -0.50 D SER in either eye) was determined to be 19.6%, with a higher prevalence in older (29.4%) compared to younger (9.9%) children (p<0.001). Using multiple logistic regression models, the risk of myopia was higher in Year 8 South Asian compared to White children and higher in children attending grammar schools relative to comprehensive schools. In addition, the prevalence of uncorrected ametropia was found to be high (Year 8: 12.84%, Year 2: 15.23%), which will be of concern to bodies responsible for the implementation of school vision screening strategies. Biometric data obtained using non-contact partial coherence interferometry revealed a contributory effect of axial length (AL) and central corneal radius (CR) on myopic refraction, resulting in a strong coefficient of determination of the AL/CR ratio on refractive error. Ocular biometric measures did not vary significantly as a function of ethnicity, suggesting a greater miscorrelation of components in susceptible ethnic groups to account for their higher myopia prevalence. Corneal radius was found to be steeper in myopes in both age groups, but was found to flatten with increasing axial length. Given the inextricable link between myopia and axial elongation, this paradoxical corneal finding demands further longitudinal investigation, particularly in relation to myopia onset. Questionnaire analysis revealed a history of myopia in parents and siblings to be significantly associated with myopia in Year 8 children, with a dose-dependent rise in the odds ratio of myopia evident with increasing number of myopic parents. By classifying socioeconomic status (SES) using Index of Multiple Deprivation values, it was found that Year 8 children from moderately deprived backgrounds were more at risk of myopia than children located at both extremities of the deprivation spectrum. However, the main effect of SES weakened following multivariate analysis, with South Asian ethnicity and grammar schooling remaining associated with Year 8 myopia after adjustment.
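A sketch of the kind of multiple logistic regression used in the risk-factor analysis, on synthetic data with placeholder codings for ethnicity, school type and number of myopic parents; fitted coefficients are exponentiated to give odds ratios.

```python
# Multiple logistic regression for myopia risk factors, on synthetic data.
# Categories and effect sizes are placeholders, not the AES results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 600
df = pd.DataFrame({
    "ethnicity": rng.choice(["White", "SouthAsian", "Black"], n),
    "school": rng.choice(["comprehensive", "grammar"], n),
    "myopic_parents": rng.integers(0, 3, n),
})
logit = (-2.0 + 0.8 * (df["ethnicity"] == "SouthAsian")
         + 0.6 * (df["school"] == "grammar") + 0.5 * df["myopic_parents"])
df["myopia"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit.to_numpy())))

fit = smf.logit("myopia ~ C(ethnicity) + C(school) + myopic_parents", data=df).fit()
print(np.exp(fit.params))        # odds ratios
print(np.exp(fit.conf_int()))    # 95% CIs for the odds ratios
```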
Abstract:
Optimal design for parameter estimation in Gaussian process regression models with input-dependent noise is examined. The motivation stems from the area of computer experiments, where computationally demanding simulators are approximated using Gaussian process emulators to act as statistical surrogates. In the case of stochastic simulators, which produce a random output for a given set of model inputs, repeated evaluations are useful, supporting the use of replicate observations in the experimental design. The findings are also applicable to the wider context of experimental design for Gaussian process regression and kriging. Designs are proposed with the aim of minimising the variance of the Gaussian process parameter estimates. A heteroscedastic Gaussian process model is presented which allows for an experimental design technique based on an extension of Fisher information to heteroscedastic models. It is empirically shown that the error of the approximation of the parameter variance by the inverse of the Fisher information is reduced as the number of replicated points is increased. Through a series of simulation experiments on both synthetic data and a systems biology stochastic simulator, optimal designs with replicate observations are shown to outperform space-filling designs both with and without replicate observations. Guidance is provided on best practice for optimal experimental design for stochastic response models. © 2013 Elsevier Inc. All rights reserved.
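A compact numerical sketch of the Fisher-information quantity that such designs seek to optimise, for a plain homoscedastic squared-exponential Gaussian process evaluated on a candidate design; the heteroscedastic extension and the replicate-based designs of the paper are not reproduced here.

```python
# Fisher information for the covariance parameters of a zero-mean GP with a
# squared-exponential kernel plus noise, evaluated on a candidate design.
import numpy as np

def kernel_and_grads(x, signal_var, length, noise_var):
    d2 = (x[:, None] - x[None, :]) ** 2
    e = np.exp(-0.5 * d2 / length ** 2)
    K = signal_var * e + noise_var * np.eye(len(x))
    dK = [e,                                        # d K / d signal_var
          signal_var * e * d2 / length ** 3,        # d K / d length
          np.eye(len(x))]                           # d K / d noise_var
    return K, dK

def fisher_information(x, theta):
    # [I]_{ij} = 0.5 * tr(K^{-1} dK_i K^{-1} dK_j) for a zero-mean Gaussian model
    K, dK = kernel_and_grads(x, *theta)
    Kinv = np.linalg.inv(K)
    p = len(dK)
    I = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            I[i, j] = 0.5 * np.trace(Kinv @ dK[i] @ Kinv @ dK[j])
    return I

design = np.linspace(0, 1, 15)                      # a simple space-filling design
I = fisher_information(design, theta=(1.0, 0.2, 0.1))
print("log det Fisher information:", np.linalg.slogdet(I)[1])   # D-optimality score
```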
Abstract:
Mathematics Subject Classification: 26A33, 45K05, 60J60, 60G50, 65N06, 80-99.