Abstract:
The nuclear accident in Chernobyl in 1986 is a dramatic example of the type of incidents that are characteristic of a risk society. The consequences of the incident are indeterminate, the causes complex and future developments unpredictable. Nothing can compensate for its effects and it affects a broad population indiscriminately. This paper examines the lived experience of residents of the region who underwent biographical disruption, on the basis of qualitative case studies carried out in 2003 in the Chernobyl regions of Russia, Ukraine and Belarus. Our analysis indicates that informants tend to view their future as highly uncertain and unpredictable; they experience uncertainty about whether they are already contaminated, and they have to take hazardous decisions about where to go and what to eat. Fear, rumours and experts compete in supplying information to residents about the actual and potential consequences of the disaster, but there is little trust in, and only limited awareness of, the information that is provided. Most informants continue with their lives and do what they must or even what they like, even where the risks are known. They often describe their behaviour as being due to economic circumstances; where there is extreme poverty, even hazardous food sources are better than none. Unlike previous studies, we identify a pronounced tendency among informants not to separate the problems associated with the disaster from the hardships that have resulted from the break-up of the USSR, with both events creating a deep-seated sense of resignation and fatalism. Although most informants hold their governments to blame for lack of information, support and preventive measures, there is little or no collective action to have these put in place.
This contrasts with previous research which has suggested that populations affected by disasters attribute crucial significance to that incident and, as a consequence, become increasingly politicized with regard to related policy agendas.
Abstract:
Background & Aims: Esophageal adenocarcinoma arises from Barrett's esophagus (BE); patients with this cancer have a poor prognosis. Identification of modifiable lifestyle factors that affect the risk of progression from BE to esophageal adenocarcinoma might prevent its development. We investigated associations among body size, smoking, and alcohol use with progression of BE to neoplasia. Methods: We analyzed data from patients with BE identified from the population-based Northern Ireland BE register, diagnosed between 1993 and 2005 with specialized intestinal metaplasia (n = 3167). Data on clinical, demographic, and lifestyle factors related to diagnosis of BE were collected from hospital case notes. We used the Northern Ireland Cancer Registry to identify which of these patients later developed esophageal adenocarcinoma, adenocarcinomas of the gastric cardia, or esophageal high-grade dysplasia. Cox proportional hazards models were used to associate lifestyle factors with risk of progression.
Results: By December 31, 2008, 117 of the patients with BE developed esophageal high-grade dysplasia or adenocarcinomas of the esophagus or gastric cardia. Current tobacco smoking was significantly associated with an increased risk of progression (hazard ratio = 2.03; 95% confidence interval, 1.29-3.17) compared with never smoking, and across all strata of smoking intensity. Alcohol consumption was not related to risk of progression. Measures of body size were infrequently reported in endoscopy reports, and body size was not associated with risk of progression.
Conclusions: Smoking tobacco increases the risk of progression to cancer or high-grade dysplasia 2-fold among patients with BE, compared with patients with BE who have never smoked. Smoking cessation strategies should be considered for patients with BE.
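As a quick consistency check on ratio estimates like the hazard ratio reported above: a Cox model's confidence interval is symmetric on the log scale, so the reported bounds should recover the point estimate and imply a standard error. A minimal sketch (the numbers come from the abstract; the check itself is ours, not the authors' analysis):

```python
import math

# Reported in the abstract: HR = 2.03, 95% CI 1.29-3.17 (current vs never smokers).
hr, lo, hi = 2.03, 1.29, 3.17

# Midpoint of the CI on the log scale should approximately recover log(HR).
implied_hr = math.exp((math.log(lo) + math.log(hi)) / 2)

# Half-width of the log-scale CI divided by 1.96 gives the implied SE of log(HR).
se_log_hr = (math.log(hi) - math.log(lo)) / (2 * 1.96)

print(round(implied_hr, 2), round(se_log_hr, 2))  # 2.02 0.23
```

The implied point estimate (about 2.02) matches the reported 2.03 up to rounding, as expected for a Wald-type interval from a Cox model.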
Abstract:
OBJECTIVE:
To compare blood pressure between 50-year-old adults who were born at term (37-42 weeks of gestation) with intra-uterine growth restriction (IUGR; birth weight <10th centile) and a control group of similar age born at term without IUGR (birth weight ≥10th centile).
STUDY DESIGN:
Controlled comparative study.
METHODS:
Participants included 232 men and women who were born at the Royal Maternity Hospital, Belfast, a large regional maternity hospital in Northern Ireland, between 1954 and 1956. One hundred and eight subjects who were born with IUGR were compared with 124 controls with normal birth weight for gestation. The main outcome measures were systolic and diastolic blood pressure at approximately 50 years of age, measured according to European recommendations.
RESULTS:
The IUGR group had higher systolic and diastolic blood pressure than the control group: 131.5 [95% confidence interval (CI) 127.9-135.1] vs 127.1 (95% CI 124.3-129.2) mmHg and 82.3 (95% CI 79.6-85.0) vs 79.0 (95% CI 77.0-81.0) mmHg, respectively. After adjustment for gender, the differences between the groups were statistically significant: systolic blood pressure 4.5 (95% CI 0.3-8.7) mmHg and diastolic blood pressure 3.4 (95% CI 0.2-6.5) mmHg (both P < 0.05). More participants in the IUGR group were receiving treatment for high blood pressure compared with the control group [16 (15%) vs 11 (9%)], although this was not statistically significant. The proportion of subjects with blood pressure >140/90 mmHg or currently receiving antihypertensive treatment was 45% (n = 49) for the IUGR group, and 31% (n = 38) for the control group (odds ratio 1.9, 95% CI 1.1-3.3). Adjustment for potential confounders made little difference.
CONCLUSIONS:
IUGR is associated with higher blood pressure at 50 years of age. Individuals born with IUGR should have regular blood pressure screening and early treatment as required. Hypertension remains underdiagnosed and undertreated in adult life.
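The unadjusted odds ratio reported above can be reproduced from the counts given in the results (49 of 108 IUGR participants and 38 of 124 controls had blood pressure >140/90 mmHg or were on antihypertensive treatment). A minimal sketch using the standard Woolf log-OR interval; this approximation is our illustration, not necessarily the authors' exact method:

```python
import math

# 2x2 table from the results section.
a, b = 49, 108 - 49   # IUGR group: hypertensive/treated, not
c, d = 38, 124 - 38   # control group: hypertensive/treated, not

odds_ratio = (a * d) / (b * c)

# Woolf's method: 95% CI on the log scale.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
      math.exp(math.log(odds_ratio) + 1.96 * se_log_or))

print(round(odds_ratio, 1), round(ci[0], 1), round(ci[1], 1))  # 1.9 1.1 3.2
```

This recovers the reported odds ratio of 1.9 and comes very close to the reported interval of 1.1-3.3; the small difference in the upper bound is consistent with rounding or a slightly different interval method in the original analysis.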
Abstract:
Background: Interest in the prevention of osteoporosis is increasing and thus there is a need for an acceptable osteoporosis prevention programme in general practice. AIM. A study was undertaken to identify a cohort of middle-aged women attending a general practice who would be eligible for a longitudinal study looking at bone mineral density, osteoporosis and the effectiveness of hormone replacement therapy. This study aimed to describe the relationship between medical and lifestyle risk factors for osteoporosis and the initial bone density measurements in this group of women. METHOD. A health visitor administered a questionnaire to women aged between 48 and 52 years registered with a Belfast general practice. The main outcome measures were menopausal status, presence of medical and lifestyle risk factors and bone mineral density measurements. RESULTS. A total of 358 women out of 472 (76%) took part in the study, which was conducted in 1991 and 1992. A highly significant difference was found between the mean bone mineral density of premenopausal, menopausal and postmenopausal women within the narrow study age range, postmenopausal women having the lowest bone mineral density. A significant relationship was found between body mass index and bone mineral density, a greater bone mineral density being found among women with a higher body mass index. Risk factors such as smoking and sedentary lifestyle were common (reported by approximately one third of respondents), but neither these nor any of the other risk factors showed a clear relationship with bone mineral density in this age group. CONCLUSION. Risk of osteoporosis cannot be identified by the presence of risk factors in women aged between 48 and 52 years. In terms of a current prevention strategy for general practice it would be better to take a population-based approach, except for those women known to be at high risk of osteoporosis: women with early menopause or those who have had an oophorectomy.
Abstract:
Neprilysin (NEP), also known as membrane metalloendopeptidase (MME), is considered among the most important β-amyloid (Aβ)-degrading enzymes with regard to prevention of Alzheimer's disease (AD) pathology. Variation in the NEP gene (MME) has been suggested as a risk factor for AD. We conducted a genetic association study of seven MME SNPs - rs1836914, rs989692, rs9827586, rs6797911, rs61760379, rs3736187, rs701109 - with respect to AD risk in a cohort of 1057 probable and confirmed AD cases and 424 age-matched non-demented controls from the United Kingdom, Italy and Sweden. We also examined the association of these MME SNPs with NEP protein level and enzyme activity, and with biochemical measures of Aβ accumulation in frontal cortex - levels of total soluble Aβ, oligomeric Aβ(1-42), and guanidine-extractable (insoluble) Aβ - in a sub-group of AD and control cases with post-mortem brain tissue. On multivariate logistic regression analysis, one of the MME variants (rs6797911) was associated with AD risk (P = 0.00052; odds ratio = 1.40, 95% confidence interval 1.16-1.70). None of the SNPs had any association with Aβ levels; however, rs9827586 was significantly associated with NEP protein level (p = 0.014) and enzyme activity (p = 0.006). An association was also found between rs701109 and NEP protein level (p = 0.026), and a marginally non-significant association was found for rs989692 (p = 0.055). These data suggest that MME variation may be associated with AD risk, but we have not found evidence that this is mediated through modification of NEP protein level or activity.
Abstract:
OBJECTIVE:
To study associations between severity stages of early and late age-related macular degeneration (AMD) and genetic variations in age-related maculopathy susceptibility 2 (ARMS2) and complement factor H (CFH) and to investigate potential interactions between smoking and ARMS2.
DESIGN:
Population-based, cross-sectional European Eye Study in 7 countries in Europe.
PARTICIPANTS:
Four thousand seven hundred fifty participants, 65 years of age and older, recruited through random sampling.
METHODS:
Participants were classified on the basis of the more severely affected eye into 5 mutually exclusive AMD severity stages ranging from no AMD, 3 categories of early AMD, and late AMD. History of cigarette smoking was available and allowed classification into never, former, and current smokers, with the latter 2 groups combined into a single category of ever smokers for analysis. Genotyping was performed for single nucleotide polymorphisms rs10490924 and rs4146894 in ARMS2 and rs1061170 in CFH. Associations were analyzed by logistic regression.
MAIN OUTCOME MEASURES:
Odds ratios (ORs) for stage of AMD associated with genetic variations in ARMS2 and CFH and interactions between ARMS2 and smoking status.
RESULTS:
Early AMD was present in 36.4% and late AMD in 3.3% of participants. Data on both genotype and AMD were available for 4276 people. The ORs for associations between AMD stage and ARMS2 increased monotonically with more severe stages of early AMD and were altered little by adjustment for potential confounders. Compared with persons with no AMD, carriers of the TT genotype for rs10490924 in ARMS2 had a 10-fold increase in risk of late AMD (P < 3 × 10^-20). The ORs for associations with CFH were similar for stage 3 early AMD and late AMD. Interactions between rs10490924 in ARMS2 and smoking status were significant in both unadjusted and adjusted models (P = 0.001). The highest risk was observed in those doubly homozygous for rs10490924 and rs1061170 in CFH (OR, 62.3; 95% confidence interval, 16-242), with P values for trend ranging from 0.03 (early AMD, stage 1) to 1 × 10^-26 (late AMD).
CONCLUSIONS:
A strong association was demonstrated between all stages of AMD and genetic variation in ARMS2, and a significant gene-environment interaction with cigarette smoking was confirmed.
Abstract:
Objective: To evaluate the impact of a provider-initiated primary care outreach intervention compared with usual care among older adults at risk of functional decline. Design: Randomised controlled trial. Setting: Patients enrolled with 35 family physicians in five primary care networks in Hamilton, Ontario, Canada. Participants: Patients were eligible if they were 75 years of age or older and were not receiving home care services. Of 3166 potentially eligible patients, 2662 (84%) completed the validated postal questionnaire used to determine risk of functional decline. Of 1724 patients who met the risk criteria, 769 (45%) agreed to participate and 719 were randomised. Intervention: The 12 month intervention, provided by experienced home care nurses in 2004-6, consisted of a comprehensive initial assessment using the resident assessment instrument for home care; collaborative care planning with patients, their families, and family physicians; health promotion; and referral to community health and social support services. Main outcome measures: Quality adjusted life years (QALYs), use and costs of health and social services, functional status, self rated health, and mortality. Results: The mean difference in QALYs between intervention and control patients during the study period was not statistically significant (0.017, 95% confidence interval -0.022 to 0.056; P=0.388). The mean difference in overall cost of prescription drugs and services between the intervention and control groups was not statistically significant (-$C165 (£107; €118; $162), 95% confidence interval -$C16 545 to $C16 214; P=0.984). Changes over 12 months in functional status and self rated health were not significantly different between the intervention and control groups. Ten patients died in each group. Conclusions: The results of this study do not support adoption of this preventive primary care intervention for this target population of high risk older adults.
Trial registration: Clinical trials NCT00134836.
Abstract:
Background Aflatoxins are fungal metabolites that frequently contaminate staple foods in much of sub-Saharan Africa, and are associated with increased risk of liver cancer and impaired growth in young children. We aimed to assess whether postharvest measures to restrict aflatoxin contamination of groundnut crops could reduce exposure in west African villages.
Methods We undertook an intervention study at subsistence farms in the lower Kindia region of Guinea. Farms from 20 villages were included, ten of which implemented a package of postharvest measures to restrict aflatoxin contamination of the groundnut crop; ten controls followed usual postharvest practices. We measured the concentrations of blood aflatoxin-albumin adducts from 600 people immediately after harvest and at 3 months and 5 months postharvest to monitor the effectiveness of the intervention.
Findings In control villages mean aflatoxin-albumin concentration increased postharvest (from 5.5 pg/mg [95% CI 4.7-6.1] immediately after harvest to 18.7 pg/mg [17.0-20.6] 5 months later). By contrast, mean aflatoxin-albumin concentration in intervention villages after 5 months of groundnut storage was much the same as that immediately postharvest (7.2 pg/mg [6.2-8.4] vs 8.0 pg/mg [7.0-9.2]). At 5 months, mean adduct concentration in intervention villages was less than 50% of that in control villages (8.0 pg/mg [7.2-9.2] vs 18.7 pg/mg [17.0-20.6], p<0.0001). About a third of participants had non-detectable aflatoxin-albumin concentrations at harvest. At 5 months, five (2%) people in the control villages had non-detectable adduct concentrations compared with 47 (20%) of those in the intervention group (p<0.0001). Mean concentrations of aflatoxin B1 in groundnuts in household stores in intervention and control villages were consistent with measurements of aflatoxin-albumin adducts.
Interpretation Use of low-technology approaches at the subsistence-farm level in sub-Saharan Africa could substantially reduce the disease burden caused by aflatoxin exposure.
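The headline comparisons in the findings reduce to simple arithmetic on the reported means, which can be restated explicitly (values from the abstract; the calculation is ours):

```python
# Mean aflatoxin-albumin adduct concentrations reported in the findings (pg/mg).
control_harvest, control_5m = 5.5, 18.7
intervention_5m = 8.0   # value given in the "At 5 months" comparison

# Control villages: roughly 3.4-fold rise over 5 months of storage.
fold_rise_control = control_5m / control_harvest

# Intervention villages at 5 months: under half the control level.
relative_at_5m = intervention_5m / control_5m

print(round(fold_rise_control, 1), round(relative_at_5m * 100))  # 3.4 43
```

So the intervention group's 5-month mean is about 43% of the control mean, which is what the abstract summarises as "less than 50%".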
Abstract:
Predictive validity of the Stanford-Binet Intelligence Scale Fourth Edition (S-B IV) from age 3 years to ages 4-5 years was evaluated with biologically "at risk" children without major sensory or motor impairments (n = 236). Using the standard scoring, children with full-scale IQ ≤ 84 on the Wechsler Preschool and Primary Scale of Intelligence at age 4-5 years were poorly identified (sensitivity 54%) from the composite S-B IV score at age 3. However, sensitivity improved greatly, to 78%, by including as a predictor the number of subtests the child was actually able to perform at age 3 years. Measures from the Home Screening Questionnaire and ratings of mother-child interaction further improved sensitivity to 83%. The standard method for calculating the composite score on the S-B IV excludes subtests with a raw score of 0, which overestimates cognitive functioning in young biologically high-risk children. Accuracy of early identification was improved significantly by considering the number of subtests the child did not perform at age 3 years.
Abstract:
Objective: Establish maternal preferences for a third-trimester ultrasound scan in a healthy, low-risk pregnant population.
Design: Cross-sectional study incorporating a discrete choice experiment.
Setting: A large, urban maternity hospital in Northern Ireland.
Participants: One hundred and forty-six women in their second trimester of pregnancy.
Methods: A discrete choice experiment was designed to elicit preferences for four attributes of a third-trimester ultrasound scan: health-care professional conducting the scan, detection rate for abnormal foetal growth, provision of non-medical information, cost. Additional data collected included age, marital status, socio-economic status, obstetric history, pregnancy-specific stress levels, perceived health and whether pregnancy was planned. Analysis was undertaken using a mixed logit model with interaction effects.
Main outcome measures: Women's preferences for, and trade-offs between, the attributes of a hypothetical scan and indirect willingness-to-pay estimates.
Results: Women had significant positive preference for a higher rate of detection, lower cost and provision of non-medical information, with no significant value placed on scan operator. Interaction effects revealed the subgroups that valued the scan most: women experiencing their first pregnancy, women reporting higher levels of stress, women with an adverse obstetric history, and older women.
Conclusions: Women were able to trade on aspects of care and place relative importance on clinical, non-clinical outcomes and processes of service delivery, thus highlighting the potential of using health utilities in the development of services from a clinical, economic and social perspective. Specifically, maternal preferences exhibited provide valuable information for designing a randomized trial of effectiveness and insight for clinical and policy decision makers to inform woman-centred care.
Abstract:
PURPOSE: To describe fundus autofluorescence (AF) patterns and their change over time in patients with age-related macular degeneration (AMD) and high risk of visual loss participating in the drusen laser study (DLS). DESIGN: Randomized clinical trial. METHODS: The study population consisted of 29 patients (35 eyes) participating in the DLS, which is a prospective, randomized, controlled clinical trial of prophylactic laser therapy in patients with AMD and high risk of neovascular complications. The intervention consisted of 16 eyes having prophylactic laser and 19 receiving no treatment. The main outcome measures were changes in the distribution of drusen and AF. Patients were reviewed for a median follow-up of 24 months (range 12-36 months). RESULTS: At baseline, four patterns of fundus AF were recognized: focal increased AF (n = 18), reticular AF (n = 3), combined focal and reticular AF (n = 2), and homogeneous AF (n = 12). At last follow-up, fundus AF remained unchanged in 15 untreated (78%) and in seven treated (43%) eyes. In only one untreated eye, focal areas of increased AF returned to background levels and were no longer detectable at last follow-up, compared with six treated eyes. This difference was statistically significant (P = .03). Only large foveal soft drusen (drusenoid pigment epithelium detachments) consistently corresponded with focal changes in AF, whereas no obvious correspondence was found between small soft drusen located elsewhere and changes in AF. CONCLUSION: The lack of obvious correspondence between the distribution of drusen and of AF found in this study appears to indicate that drusen and AF represent independent measures of aging in the posterior pole.
Abstract:
Context: Family carers of palliative care patients report high levels of psychological distress throughout the caregiving phase and during bereavement. Palliative care providers are required to provide psychosocial support to family carers; however, determining which carers are more likely to develop prolonged grief (PG) is currently unclear.
Objectives: To ascertain whether family carers reporting high levels of PG symptoms and those who develop PG disorder (PGD) by six and 13 months postdeath can be predicted from predeath information.
Methods: A longitudinal study of 301 carers of patients receiving palliative care was conducted across three palliative care services. Data were collected on entry to palliative care (T1) on a variety of sociodemographic variables, carer-related factors, and psychological distress measures. The measures of psychological distress were then readministered at six (T2; n = 167) and 13 months postdeath (T3; n = 143).
Results: The PG symptoms at T1 were a strong predictor of both PG symptoms and PGD at T2 and T3. Greater bereavement dependency, a spousal relationship to the patient, greater impact of caring on schedule, poor family functioning, and low levels of optimism also were risk factors for PG symptoms.
Conclusion: Screening family carers on entry to palliative care seems to be the most effective way of identifying who has a higher risk of developing PG. We recommend screening carers six months after the death of their relative to identify most carers with PG.
Abstract:
Aims
Our aim was to test the prediction and clinical applicability of high-sensitivity assayed troponin I for incident cardiovascular events in a general middle-aged European population.
Methods and results
High-sensitivity assayed troponin I was measured in the Scottish Heart Health Extended Cohort (n = 15 340) with 2171 cardiovascular events (including acute coronary heart disease and probable ischaemic strokes), 714 coronary deaths (25% of all deaths), 1980 myocardial infarctions, and 797 strokes of all kinds during an average of 20 years of follow-up. The detection rate above the limit of detection (LoD) was 74.8% in the overall population, 82.6% in men and 67.0% in women. Troponin I assayed by the high-sensitivity method was associated with future cardiovascular risk after full adjustment, such that individuals in the fourth category had 2.5 times the risk compared with those without detectable troponin I (P < 0.0001). These associations remained significant even for those individuals in whom levels of contemporary-sensitivity troponin I measures were not detectable. Addition of troponin I levels to clinical variables led to significant increases in risk prediction, with significant improvement of the c-statistic (P < 0.0001) and net reclassification (P < 0.0001). A threshold of 4.7 pg/mL in women and 7.0 pg/mL in men is suggested to detect individuals at high risk for future cardiovascular events.
Conclusion
Troponin I, measured with a high-sensitivity assay, is an independent predictor of cardiovascular events and might support selection of at risk individuals.
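The suggested sex-specific thresholds amount to a simple decision rule, which can be sketched as follows. The function name, interface, and the use of ≥ at the boundary are our assumptions for illustration; only the two threshold values come from the abstract, and this is not a clinical decision tool:

```python
def high_risk(troponin_i_pg_ml: float, sex: str) -> bool:
    """Flag a high-sensitivity troponin I measurement above the
    sex-specific thresholds suggested in the abstract
    (4.7 pg/mL for women, 7.0 pg/mL for men).

    Hypothetical helper for illustration only; whether the boundary
    is inclusive is an assumption, not stated in the source.
    """
    threshold = 7.0 if sex == "M" else 4.7
    return troponin_i_pg_ml >= threshold

print(high_risk(8.1, "M"), high_risk(5.0, "M"), high_risk(5.0, "F"))
```

Note that a value of 5.0 pg/mL would be flagged for a woman but not for a man, which is the point of sex-specific cut-offs.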
Abstract:
Objective To determine if a high umbilical artery Doppler (UAD) pulsatility index (PI) is associated with cardiovascular (CV) risk factors in children at age 12 years. Methods We studied 195 children at age 12 years who had had in-utero UAD studies performed at 28 weeks' gestation. The children were grouped according to whether their umbilical Doppler PI was high (indicating poor feto-placental circulation) or normal. At age 12 years we assessed CV risk factors, including anthropometric measures, blood pressure, pulse wave velocity (a measure of arterial compliance), cardio-respiratory fitness, and serum homocysteine and cholesterol levels. Results Compared with children with a normal UAD PI (N=88), the children (N=107) with high UAD PI had a higher resting pulse rate (p=0.04), higher pulse wave velocity (p=0.046), higher serum homocysteine levels (p=0.032) and reduced arterial compliance (7.58 vs 8.50 m/sec, p=0.029) on univariate analysis. These differences were not present when adjustment for confounders was modelled. Conclusion A high PI on UAD testing in-utero may be associated with an increased likelihood of some cardiovascular risk factors at age 12 years, but confounding variables may be as important. Our study suggests possible long-term benefits of in-utero UAD measurements.
Abstract:
Explanations for the causes of famine and food insecurity often reside at a high level of aggregation or abstraction. Popular models within famine studies have often emphasised the role of prime movers such as population stress, or the political-economic structure of access channels, as key determinants of food security. Explanation typically resides at the macro level, obscuring the presence of substantial within-country differences in the manner in which such stressors operate. This study offers an alternative approach to analyse the uneven nature of food security, drawing on the Great Irish famine of 1845–1852. Ireland is often viewed as a classical case of Malthusian stress, whereby population outstripped food supply under a pre-famine demographic regime of expanded fertility. Many have also pointed to Ireland's integration with capitalist markets through its colonial relationship with the British state, and country-wide system of landlordism, as key determinants of local agricultural activity. Such models are misguided, ignoring both substantial complexities in regional demography, and the continuity of non-capitalistic, communal modes of land management long into the nineteenth century. Drawing on resilience ecology and complexity theory, this paper subjects a set of aggregate data on pre-famine Ireland to an optimisation clustering procedure, in order to discern the potential presence of distinctive social–ecological regimes. Based on measures of demography, social structure, geography, and land tenure, this typology reveals substantial internal variation in regional social–ecological structure, and vastly differing levels of distress during the peak famine months. This exercise calls into question the validity of accounts which emphasise uniformity of structure, by revealing a variety of regional regimes, which profoundly mediated local conditions of food security. 
Future research should therefore consider the potential presence of internal variations in resilience and risk exposure, rather than seeking to characterise cases based on singular macro-dynamics and stressors alone.