848 results for logistic regression analysis
Abstract:
Areas of the landscape that are priorities for conservation should be those that are both vulnerable to threatening processes and that, if lost or degraded, will result in conservation targets being compromised. While much attention is directed towards understanding the patterns of biodiversity, much less is given to determining the areas of the landscape most vulnerable to threats. We assessed the relative vulnerability of remaining areas of native forest to conversion to plantations in the ecologically significant temperate rainforest region of south central Chile. The area of the study region is 4.2 million ha and the extent of plantations is approximately 200,000 ha. First, the spatial distribution of native forest conversion to plantations was determined. The variables related to the spatial distribution of this threatening process were identified through the development of a classification tree and the generation of a multivariate, spatially explicit, statistical model. The model of native forest conversion explained 43% of the deviance and the discrimination ability of the model was high. Predictions were made of where native forest conversion is likely to occur in the future. Due to patterns of climate, topography, soils and proximity to infrastructure and towns, remaining forest areas differ in their relative risk of being converted to plantations. Another factor that may increase the vulnerability of remaining native forest in a subset of the study region is the proposed construction of a highway. We found that 90% of the area of existing plantations within this region is within 2.5 km of roads. When the predictions of native forest conversion were recalculated accounting for the construction of this highway, it was found that approximately 27,000 ha of native forest had an increased probability of conversion. The areas of native forest identified to be vulnerable to conversion are outside of the existing reserve network. (C) 2004 Elsevier Ltd. All rights reserved.
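A minimal sketch of the kind of workflow this abstract describes: a classification tree to screen spatial predictors of conversion, followed by a spatially explicit logistic (binomial GLM) model, with deviance explained and discrimination (AUC) reported. The file and column names are assumptions for illustration, not the authors' data or code.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

cells = pd.read_csv("forest_cells.csv")          # hypothetical per-cell data (converted = 1/0)
predictors = ["slope", "elevation", "dist_road", "dist_town"]

# Step 1: classification tree to screen candidate spatial predictors
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(cells[predictors], cells["converted"])
print(dict(zip(predictors, tree.feature_importances_)))

# Step 2: multivariate, spatially explicit logistic model (binomial GLM)
X = sm.add_constant(cells[predictors])
glm = sm.GLM(cells["converted"], X, family=sm.families.Binomial()).fit()

# Deviance explained (D^2) and discrimination ability (AUC)
d2 = 1 - glm.deviance / glm.null_deviance
auc = roc_auc_score(cells["converted"], glm.fittedvalues)
print(f"deviance explained = {d2:.2f}, AUC = {auc:.2f}")
```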
Abstract:
Background & Aims: Steatosis is a frequent histologic finding in chronic hepatitis C (CHC), but it is unclear whether steatosis is an independent predictor for liver fibrosis. We evaluated the association between steatosis and fibrosis and their common correlates in persons with CHC and in subgroup analyses according to hepatitis C virus (HCV) genotype and body mass index. Methods: We conducted a meta-analysis on individual data from 3068 patients with histologically confirmed CHC recruited from 10 clinical centers in Italy, Switzerland, France, Australia, and the United States. Results: Steatosis was present in 1561 patients (50.9%) and fibrosis in 2688 (87.6%). HCV genotype was 1 in 1694 cases (55.2%), 2 in 563 (18.4%), 3 in 669 (21.8%), and 4 in 142 (4.6%). By stepwise logistic regression, steatosis was associated independently with genotype 3, the presence of fibrosis, diabetes, hepatic inflammation, ongoing alcohol abuse, higher body mass index, and older age. Fibrosis was associated independently with inflammatory activity, steatosis, male sex, and older age, whereas HCV genotype 2 was associated with reduced fibrosis. In the subgroup analyses, the association between steatosis and fibrosis invariably was dependent on a simultaneous association between steatosis and hepatic inflammation. Conclusions: In this large and geographically diverse group of CHC patients, steatosis is confirmed as significantly and independently associated with fibrosis in CHC. Hepatic inflammation may mediate fibrogenesis in patients with liver steatosis. Control of metabolic factors (such as overweight, via lifestyle adjustments) appears important in the management of CHC.
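An illustrative sketch of stepwise logistic regression of the sort reported above, here implemented as forward selection by AIC; the dataset, variable names, and selection criterion are assumptions, not the authors' actual procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chc_patients.csv")             # hypothetical individual-patient data
candidates = ["genotype3", "fibrosis", "diabetes", "inflammation",
              "alcohol_abuse", "bmi", "age"]
selected = []
best_aic = smf.logit("steatosis ~ 1", data=df).fit(disp=0).aic   # intercept-only baseline

while candidates:
    # AIC of each one-variable extension of the current model
    trial_aics = {var: smf.logit("steatosis ~ " + " + ".join(selected + [var]),
                                 data=df).fit(disp=0).aic
                  for var in candidates}
    best_var = min(trial_aics, key=trial_aics.get)
    if trial_aics[best_var] >= best_aic:         # stop when no addition improves AIC
        break
    best_aic = trial_aics[best_var]
    selected.append(best_var)
    candidates.remove(best_var)

print("Retained predictors:", selected)
```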
Abstract:
This article applies methods of latent class analysis (LCA) to data on lifetime illicit drug use in order to determine whether qualitatively distinct classes of illicit drug users can be identified. Self-report data on lifetime illicit drug use (cannabis, stimulants, hallucinogens, sedatives, inhalants, cocaine, opioids and solvents) collected from a sample of 6265 Australian twins (average age 30 years) were analyzed using LCA. Rates of childhood sexual and physical abuse, lifetime alcohol and tobacco dependence, symptoms of illicit drug abuse/dependence and psychiatric comorbidity were compared across classes using multinomial logistic regression. LCA identified a 5-class model: Class 1 (68.5%) had low risks of the use of all drugs except cannabis; Class 2 (17.8%) had moderate risks of the use of all drugs; Class 3 (6.6%) had high rates of cocaine, other stimulant and hallucinogen use but lower risks for the use of sedatives or opioids. Conversely, Class 4 (3.0%) had relatively low risks of cocaine, other stimulant or hallucinogen use but high rates of sedative and opioid use. Finally, Class 5 (4.2%) had uniformly high probabilities for the use of all drugs. Rates of psychiatric comorbidity were highest in the polydrug class although the sedative/opioid class had elevated rates of depression/suicidal behaviors and exposure to childhood abuse. Aggregation of population-level data may obscure important subgroup differences in patterns of illicit drug use and psychiatric comorbidity. Further exploration of a 'self-medicating' subgroup is needed.
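Once latent classes have been assigned, correlates can be compared across classes with multinomial logistic regression, as the abstract describes. A hedged sketch follows; the file and column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

twins = pd.read_csv("twin_drug_use.csv")         # hypothetical data with an assigned LCA class
# 'drug_class' holds the five latent classes as integer codes (0 = low-use reference class)
model = smf.mnlogit(
    "drug_class ~ childhood_abuse + alcohol_dependence + tobacco_dependence + depression",
    data=twins,
).fit(disp=0)

print(np.exp(model.params).round(2))             # odds ratios for each class vs. the reference
```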
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure and knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide but it does indicate that the variables included should be selected with great care as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
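The two rules of thumb above (a minimum subjects-per-variable ratio and scepticism about low R squared) are easy to check in code. A minimal sketch, with a hypothetical dataset and column names:

```python
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("study_data.csv")             # hypothetical dataset with a continuous outcome
predictors = ["age", "bmi", "blood_pressure", "smoking"]

# Rule of thumb: aim for at least 5-10 subjects per variable
ratio = len(data) / len(predictors)
if ratio < 5:
    print(f"Only {ratio:.1f} subjects per variable; consider dropping predictors.")

# Fit the multiple regression and inspect R-squared (treat values below 0.5 with caution)
ols = sm.OLS(data["outcome"], sm.add_constant(data[predictors])).fit()
print(f"R-squared = {ols.rsquared:.2f}")
```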
Abstract:
The aim of this review was to quantify the global variation in childhood myopia prevalence over time taking account of demographic and study design factors. A systematic review identified population-based surveys with estimates of childhood myopia prevalence published by February 2015. Multilevel binomial logistic regression of log odds of myopia was used to examine the association with age, gender, urban versus rural setting and survey year, among populations of different ethnic origins, adjusting for study design factors. 143 published articles (42 countries, 374 349 subjects aged 1-18 years, 74 847 myopia cases) were included. Increase in myopia prevalence with age varied by ethnicity. East Asians showed the highest prevalence, reaching 69% (95% credible intervals (CrI) 61% to 77%) at 15 years of age (86% among Singaporean-Chinese). Blacks in Africa had the lowest prevalence; 5.5% at 15 years (95% CrI 3% to 9%). Time trends in myopia prevalence over the last decade were small in whites, increased by 23% in East Asians, with a weaker increase among South Asians. Children from urban environments have 2.6 times the odds of myopia compared with those from rural environments. In whites and East Asians sex differences emerge at about 9 years of age; by late adolescence girls are twice as likely as boys to be myopic. Marked ethnic differences in age-specific prevalence of myopia exist. Rapid increases in myopia prevalence over time, particularly in East Asians, combined with a universally higher risk of myopia in urban settings, suggest that environmental factors play an important role in myopia development, which may offer scope for prevention.
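A rough analogue of a multilevel binomial logistic regression on pooled child-level records, with a random intercept for each survey; this is not the authors' Bayesian model, and the file and column names are assumptions.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

children = pd.read_csv("myopia_surveys.csv")     # hypothetical pooled child-level records
model = BinomialBayesMixedGLM.from_formula(
    "myopic ~ age + C(sex) + C(urban) + survey_year",
    vc_formulas={"survey": "0 + C(survey_id)"},  # random intercept for each survey
    data=children,
)
result = model.fit_vb()                          # variational Bayes estimation
print(result.summary())
```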
Abstract:
In this study we have identified key genes that are critical in development of astrocytic tumors. Meta-analysis of microarray studies which compared normal tissue to astrocytoma revealed a set of 646 differentially expressed genes in the majority of astrocytoma. Reverse engineering of these 646 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grade I–IV), and ‘key genes’ within each grade were identified. Genes found to be most influential to development of the highest grade of astrocytoma, Glioblastoma multiforme, were: COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. All of these genes were up-regulated, except MPP2 (down-regulated). These 10 genes were able to predict tumor status with 96–100% confidence when using logistic regression, cross validation, and the support vector machine analysis. Markov genes interact with NF-κB, ERK, MAPK, VEGF, growth hormone and collagen to produce a network whose top biological functions are cancer, neurological disease, and cellular movement. Three of the 10 genes - EGFR, COL4A1, and CDK4, in particular, seemed to be potential ‘hubs of activity’. Modified expression of these 10 Markov Blanket genes increases lifetime risk of developing glioblastoma compared to the normal population. The glioblastoma risk estimates were dramatically increased with joint effects of 4 or more than 4 Markov Blanket genes. Joint interaction effects of 4, 5, 6, 7, 8, 9 or 10 Markov Blanket genes produced 9, 13, 20.9, 26.7, 52.8, 53.2, 78.1 or 85.9%, respectively, increase in lifetime risk of developing glioblastoma compared to normal population. In summary, it appears that modified expression of several ‘key genes’ may be required for the development of glioblastoma. Further studies are needed to validate these ‘key genes’ as useful tools for early detection and novel therapeutic options for these tumors.
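A sketch of the classification step described above: predicting tumor status from the 10 gene expression values with logistic regression and a support vector machine under cross-validation. The expression matrix and column names are assumptions; only the gene symbols come from the abstract.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

genes = ["COL4A1", "EGFR", "BTF3", "MPP2", "RAB31",
         "CDK4", "CD99", "ANXA2", "TOP2A", "SERBP1"]
expr = pd.read_csv("expression_matrix.csv")      # hypothetical sample x gene matrix
X, y = expr[genes], expr["tumor_status"]         # 1 = astrocytoma, 0 = normal

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="linear"))]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2%} accuracy")
```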
Abstract:
Disasters are complex events characterized by damage to key infrastructure and population displacements into disaster shelters. Assessing the living environment in shelters during disasters is a crucial health security concern. Until now, little has been known about jurisdictions' knowledge of and preparedness for these assessment methods, or about the deficiencies found in shelters. A cross-sectional survey (STUSA survey) ascertained knowledge and preparedness for those assessments in all 50 states, DC, and 5 US territories. Descriptive analysis of overall knowledge and preparedness was performed. Fisher's exact statistics analyzed differences between two groups: jurisdiction type and population size. Two logistic regression models analyzed earthquake and hurricane risks as predictors of knowledge and preparedness. A convenience sample of state shelter assessment records (n=116) was analyzed to describe environmental health deficiencies found during selected events. Overall, 55 (98%) of the jurisdictions (states and territories) responded and appeared to be knowledgeable of these assessments (states 92%, territories 100%, p = 1.000), and engaged in disaster planning with shelter partners (states 96%, territories 83%, p = 0.564). Few had shelter assessment procedures (states 53%, territories 50%, p = 1.000) or training in disaster shelter assessments (states 41%, territories 60%, p = 0.638). Knowledge and preparedness were not predicted by disaster risks, population size, or jurisdiction type in either model. Knowledge model: hurricane (adjusted OR 0.69, 95% C.I. 0.06-7.88); earthquake (OR 0.82, 95% C.I. 0.17-4.06); and both risks (OR 1.44, 95% C.I. 0.24-8.63). Preparedness model: hurricane (OR 1.91, 95% C.I. 0.06-20.69); earthquake (OR 0.47, 95% C.I. 0.7-3.17); and both risks (OR 0.50, 95% C.I. 0.06-3.94). Environmental health deficiencies documented in shelter assessments occurred mostly in sanitation (30%), facility (17%), food (15%), and sleeping areas (12%), and mostly during ice storms and tornadoes. More research is needed on environmental health assessments of disaster shelters, particularly in areas that may provide better insight into the living environment of shelter occupants and potential effects on disaster morbidity and mortality, and to evaluate the effectiveness and usefulness of these assessment methods and of the available data on environmental health deficiencies in risk management to protect those at greater risk in shelter facilities during disasters.
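A hedged sketch of the two analysis steps named above, Fisher's exact tests for group differences and a logistic regression of knowledge on disaster-risk predictors; the dataset and variable names are assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact
import statsmodels.formula.api as smf

survey = pd.read_csv("stusa_survey.csv")         # hypothetical jurisdiction-level responses

# Fisher's exact test: knowledge of shelter assessments by jurisdiction type (2 x 2 table)
table = pd.crosstab(survey["jurisdiction_type"], survey["knowledgeable"])
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact p = {p_value:.3f}")

# Logistic regression: do hurricane and earthquake risks predict knowledge?
model = smf.logit("knowledgeable ~ hurricane_risk + earthquake_risk + large_population",
                  data=survey).fit(disp=0)
print(np.exp(model.params))                      # adjusted odds ratios
print(np.exp(model.conf_int()))                  # 95% confidence intervals
```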
Abstract:
BACKGROUND & AIMS: Gluteofemoral obesity (determined by measurement of subcutaneous fat in hip and thigh regions) could reduce risks of cardiovascular and diabetic disorders associated with abdominal obesity. We evaluated whether gluteofemoral obesity also reduces risk of Barrett's esophagus (BE), a premalignant lesion associated with abdominal obesity.
METHODS: We collected data from non-Hispanic white participants in 8 studies in the Barrett's and Esophageal Adenocarcinoma Consortium. We compared measures of hip circumference (as a proxy for gluteofemoral obesity) from cases of BE (n=1559) separately with 2 control groups: 2557 population-based controls and 2064 individuals with gastroesophageal reflux disease (GERD controls). Study-specific odds ratios (OR) and 95% confidence intervals (95% CI) were estimated using individual participant data and multivariable logistic regression and combined using random effects meta-analysis.
RESULTS: We found an inverse relationship between hip circumference and BE (OR per 5 cm increase, 0.88; 95% CI, 0.81-0.96), compared with population-based controls in a multivariable model that included waist circumference. This association was not observed in models that did not include waist circumference. Similar results were observed in analyses stratified by frequency of GERD symptoms. The inverse association with hip circumference was only statistically significant among men (vs population-based controls: OR, 0.85; 95% CI, 0.76-0.96 for men; OR, 0.93; 95% CI, 0.74-1.16 for women). For men, within each category of waist circumference, a larger hip circumference was associated with decreased risk of BE. Increasing waist circumference was associated with increased risk of BE in the mutually adjusted population-based and GERD control models.
CONCLUSIONS: Although abdominal obesity is associated with increased risk of BE, there is an inverse association between gluteofemoral obesity and BE, particularly among men.
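The methods above describe a two-stage approach: a multivariable logistic regression within each study, then random-effects pooling of the study-specific estimates. A minimal sketch, assuming hypothetical pooled participant data and column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.meta_analysis import combine_effects

data = pd.read_csv("pooled_consortium.csv")      # hypothetical pooled participant data
log_ors, variances, names = [], [], []

# Stage 1: multivariable logistic regression within each study
for study, sub in data.groupby("study_id"):
    fit = smf.logit("barretts ~ hip_per_5cm + waist_cm + age + C(sex)", data=sub).fit(disp=0)
    log_ors.append(fit.params["hip_per_5cm"])    # log OR per 5 cm hip circumference
    variances.append(fit.bse["hip_per_5cm"] ** 2)
    names.append(str(study))

# Stage 2: random-effects meta-analysis of the study-specific log odds ratios
pooled = combine_effects(np.array(log_ors), np.array(variances), row_names=names)
print(pooled.summary_frame())                    # fixed- and random-effects estimates (log-OR scale)
```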
Abstract:
Vascular cognitive impairment (VCI), including its severe form, vascular dementia (VaD), is the second most common form of dementia. The genetic etiology of sporadic VCI remains largely unknown. We previously conducted a systematic review and meta-analysis of all published genetic association studies of sporadic VCI prior to 6 July 2012, which demonstrated that APOE (ɛ4, ɛ2) and MTHFR (rs1801133) variants were associated with susceptibility for VCI. De novo genotyping was conducted in a new, independent, relatively large collaborative European cohort of VaD (nmax = 549) and elderly non-demented samples (nmax = 552). Where available, genotype data derived from Illumina's 610-quad array for 1210 GERAD1 control samples were also included in analyses of the genes examined. Associations were tested using the Cochran-Armitage trend test: MTHFR rs1801133 (OR = 1.36, 95% CI 1.16-1.58, p < 0.0001), APOE rs7412 (OR = 0.62, 95% CI 0.42-0.90, p = 0.01), and APOE rs429358 (OR = 1.59, 95% CI 1.17-2.16, p = 0.003). Association was also observed with APOE epsilon alleles; ɛ4 (OR = 1.85, 95% CI 1.35-2.52, p < 0.0001) and ɛ2 (OR = 0.67, 95% CI 0.46-0.98, p = 0.03). Logistic regression with Bonferroni correction in a subgroup of the cohort, adjusted for gender, age, and population, maintained the association of APOE rs429358 and the ɛ4 allele.
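A self-contained sketch of a Cochran-Armitage trend test across genotype groups (0, 1, or 2 risk alleles), as used above; the genotype counts in the example are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(cases, controls, scores=(0, 1, 2)):
    """Two-sided Cochran-Armitage test for trend in proportions across ordered groups."""
    cases, controls, scores = map(np.asarray, (cases, controls, scores))
    totals = cases + controls
    n, r = totals.sum(), cases.sum()
    # Trend statistic and its variance under the null hypothesis of no trend
    t_stat = np.sum(scores * (cases - totals * r / n))
    variance = (r / n) * (1 - r / n) * (np.sum(scores**2 * totals) - np.sum(scores * totals)**2 / n)
    z = t_stat / np.sqrt(variance)
    return z, 2 * norm.sf(abs(z))

# Placeholder counts by number of risk alleles (0, 1, 2)
z, p = cochran_armitage_trend(cases=[120, 250, 90], controls=[200, 260, 60])
print(f"z = {z:.2f}, p = {p:.4f}")
```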
Abstract:
Huntington’s disease (HD) is an autosomal dominant neurodegenerative disorder affecting approximately 5-10 persons per 100,000 worldwide. The pathophysiology of HD is not fully understood, but the age of onset is known to be highly dependent on the number of CAG triplet repeats in the huntingtin gene. Using 1H NMR spectroscopy, this study biochemically profiled 39 brain metabolites in post-mortem striatum (n=14) and frontal lobe (n=14) from HD sufferers and controls (n=28). Striatum metabolites were more perturbed, with 15 significantly affected in HD cases, compared with only 4 in frontal lobe (P<0.05; q<0.3). The metabolite which changed most overall was urea, which decreased 3.25-fold in striatum (P<0.01). Four metabolites were consistently affected in both brain regions. These included the neurotransmitter precursors tyrosine and L-phenylalanine, which were significantly depleted by 1.55-1.58-fold and 1.48-1.54-fold in striatum and frontal lobe, respectively (P=0.02-0.03). They also included L-leucine, which was reduced 1.54-1.69-fold (P=0.04-0.09), and myo-inositol, which was increased 1.26-1.37-fold (P<0.01). Logistic regression analyses performed with MetaboAnalyst demonstrated that data obtained from striatum produced models which were profoundly more sensitive and specific than those produced from frontal lobe. The brain metabolite changes uncovered in this first 1H NMR investigation of human HD offer new insights into the disease pathophysiology. Further investigations of striatal metabolite disturbances are clearly warranted.
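A Python analogue (not MetaboAnalyst itself) of the logistic-regression step: classifying HD versus control samples from metabolite concentrations and reporting cross-validated sensitivity and specificity. The metabolite file and column names are assumptions.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

samples = pd.read_csv("striatum_metabolites.csv")        # hypothetical NMR concentrations
features = ["urea", "tyrosine", "phenylalanine", "leucine", "myo_inositol"]
X, y = samples[features], samples["hd_status"]           # 1 = HD, 0 = control

# Cross-validated predictions, then sensitivity and specificity from the confusion matrix
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```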
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Ultrasound is the first-line examination for the identification and characterization of adnexal tumors. Several methods of differential diagnosis have been described, including subjective assessment by the observer, simple descriptive indices, and mathematically developed indices such as logistic regression models, with subjective assessment by an experienced examiner remaining the best method for discriminating between malignant and benign tumors. However, given the subjectivity inherent in this assessment, it became necessary to establish a standardized nomenclature and a classification that would facilitate the reporting of results and the corresponding surveillance recommendations. The aim of this article is to summarize and compare different methods for the assessment and classification of adnexal tumors, namely the International Ovarian Tumor Analysis (IOTA) group models and the Gynecologic Imaging Reporting and Data System (GI-RADS) classification, in terms of diagnostic performance and usefulness in clinical practice.
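A purely illustrative sketch of how a logistic regression model of this kind converts ultrasound features into a predicted probability of malignancy; the coefficients below are invented for illustration and are not the published IOTA LR1/LR2 values.

```python
import math

def malignancy_probability(age, solid_tumor, ascites, papillary_projections, acoustic_shadows):
    """Hypothetical logistic model: probability = 1 / (1 + exp(-linear_predictor))."""
    z = (-5.0 + 0.05 * age + 1.5 * solid_tumor + 1.2 * ascites
         + 1.0 * papillary_projections - 2.0 * acoustic_shadows)   # invented coefficients
    return 1 / (1 + math.exp(-z))

# Example: 55-year-old with a solid tumor and ascites, no papillae or acoustic shadows
print(f"{malignancy_probability(55, 1, 1, 0, 0):.2f}")
```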
Abstract:
Background: Most large acute stroke trials have been neutral. Functional outcome is usually analysed using a yes or no answer, e.g. death or dependency vs. independence. We assessed which statistical approaches are most efficient in analysing outcomes from stroke trials. Methods: Individual patient data from acute, rehabilitation and stroke unit trials studying the effects of interventions which alter functional outcome were assessed. Outcomes included modified Rankin Scale, Barthel Index, and ‘3 questions’. Data were analysed using a variety of approaches which compare two treatment groups. The results for each statistical test for each trial were then compared. Results: Data from 55 datasets were obtained (47 trials, 54,173 patients). The test results differed substantially so that approaches which use the ordered nature of functional outcome data (ordinal logistic regression, t-test, robust ranks test, bootstrapping the difference in mean rank) were more efficient statistically than those which collapse the data into 2 groups (chi square) (ANOVA p<0.001). The findings were consistent across different types and sizes of trial and for the different measures of functional outcome. Conclusions: When analysing functional outcome from stroke trials, statistical tests which use the original ordered data are more efficient and more likely to yield reliable results. Suitable approaches included ordinal logistic regression, t-test, and robust ranks test.
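A minimal sketch of the comparison above: an ordinal logistic regression on the full modified Rankin Scale versus a chi-square test on the dichotomised outcome. The trial file and column names are assumptions.

```python
import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.miscmodels.ordinal_model import OrderedModel

trial = pd.read_csv("stroke_trial.csv")          # hypothetical patient-level trial data

# Approach 1: ordinal logistic regression on the full 0-6 mRS distribution
mrs = trial["mrs"].astype(pd.CategoricalDtype(ordered=True))
ordinal = OrderedModel(mrs, trial[["treatment"]], distr="logit").fit(method="bfgs", disp=0)
print(ordinal.summary())

# Approach 2: collapse to independent (mRS 0-2) vs. dependent/dead and use a chi-square test
trial["independent"] = (trial["mrs"] <= 2).astype(int)
chi2, p, _, _ = chi2_contingency(pd.crosstab(trial["treatment"], trial["independent"]))
print(f"chi-square p = {p:.3f}")
```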
Abstract:
Background and Purpose—Most large acute stroke trials have been neutral. Functional outcome is usually analyzed using a yes or no answer, eg, death or dependency versus independence. We assessed which statistical approaches are most efficient in analyzing outcomes from stroke trials. Methods—Individual patient data from acute, rehabilitation and stroke unit trials studying the effects of interventions which alter functional outcome were assessed. Outcomes included modified Rankin Scale, Barthel Index, and “3 questions”. Data were analyzed using a variety of approaches which compare 2 treatment groups. The results for each statistical test for each trial were then compared. Results—Data from 55 datasets were obtained (47 trials, 54 173 patients). The test results differed substantially so that approaches which use the ordered nature of functional outcome data (ordinal logistic regression, t test, robust ranks test, bootstrapping the difference in mean rank) were more efficient statistically than those which collapse the data into 2 groups (χ²; ANOVA, P<0.001). The findings were consistent across different types and sizes of trial and for the different measures of functional outcome. Conclusions—When analyzing functional outcome from stroke trials, statistical tests which use the original ordered data are more efficient and more likely to yield reliable results. Suitable approaches included ordinal logistic regression, t test, and robust ranks test.
Abstract:
Background and Aim: Maternal morbidity and mortality statistics remain unacceptably high in Malawi. Prominent among the risk factors in the country is anaemia in pregnancy, which generally results from nutritional inadequacy (particularly iron deficiency) and malaria, among other factors. This warrants concerted efforts to increase iron intake among reproductive-age women. This study, among women in Malawi, examined factors determining intake of supplemental iron for at least 90 days during pregnancy. Methods: A weighted sample of 10,750 women (46.7%) from the 23,020 respondents of the 2010 Malawi Demographic and Health Survey (MDHS) was used for the study. Univariate, bivariate, and regression techniques were employed. While univariate analysis revealed the percent distributions of all variables, bivariate analysis was used to examine the relationships between individual independent variables and adherence to iron supplementation. Chi-square tests of independence were conducted for categorical variables, with the significance level set at P < 0.05. Two binary logistic regression models were used to evaluate the net effect of independent variables on iron supplementation adherence. Results: Thirty-seven percent of the women adhered to the iron supplementation recommendations during pregnancy. Multivariate analysis indicated that younger age, urban residence, higher education, higher wealth status, and attending antenatal care during the first trimester were significantly associated with increased odds of taking iron supplementation for 90 days or more during pregnancy (P < 0.01). Conclusions: The results indicate low adherence to the World Health Organization's iron supplementation recommendations among pregnant women in Malawi, and this contributes to negative health outcomes for both mothers and children. Education interventions that target populations with low rates of iron supplement intake, including campaigns to increase the number of women who attend antenatal care clinics in the first trimester, are recommended to increase adherence to iron supplementation recommendations.
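A hedged sketch of the analysis steps named above: a chi-square test of independence for a categorical correlate, followed by a binary logistic regression for the net effect of each variable. The recoded variable names are assumptions about the MDHS extract.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

mdhs = pd.read_csv("mdhs_2010_women.csv")        # hypothetical recoded survey extract

# Bivariate step: chi-square test of adherence (>= 90 days of iron) by residence
chi2, p, _, _ = chi2_contingency(pd.crosstab(mdhs["residence"], mdhs["adherent"]))
print(f"residence vs adherence: p = {p:.3f}")

# Multivariate step: binary logistic regression for the net effect of each variable
model = smf.logit(
    "adherent ~ age_group + C(residence) + education + wealth_quintile + anc_first_trimester",
    data=mdhs,
).fit(disp=0)
print(np.exp(model.params))                      # adjusted odds ratios
```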