64 results for Burr Conspiracy, 1805-1807.


Relevance: 10.00%

Abstract:

BACKGROUND: The relationship between work-related stress and alcohol intake is uncertain. In order to add to the thus far inconsistent evidence from relatively small studies, we conducted individual-participant meta-analyses of the association between work-related stress (operationalised as self-reported job strain) and alcohol intake. METHODOLOGY AND PRINCIPAL FINDINGS: We analysed cross-sectional data from 12 European studies (n = 142 140) and longitudinal data from four studies (n = 48 646). Job strain and alcohol intake were self-reported. Job strain was analysed as a binary variable (strain vs. no strain). Alcohol intake was harmonised into the following categories: none, moderate (women: 1-14, men: 1-21 drinks/week), intermediate (women: 15-20, men: 22-27 drinks/week) and heavy (women: >20, men: >27 drinks/week). Cross-sectional associations were modelled using logistic regression and the results pooled in random effects meta-analyses. Longitudinal associations were examined using mixed effects logistic and modified Poisson regression. Compared to moderate drinkers, non-drinkers (random effects odds ratio (OR): 1.10, 95% CI: 1.05, 1.14) and heavy drinkers (OR: 1.12, 95% CI: 1.00, 1.26) had higher odds of job strain. Intermediate drinkers, on the other hand, had lower odds of job strain (OR: 0.92, 95% CI: 0.86, 0.99). We found no clear evidence for longitudinal associations between job strain and alcohol intake. CONCLUSIONS: Our findings suggest that compared to moderate drinkers, non-drinkers and heavy drinkers are more likely and intermediate drinkers less likely to report work-related stress.
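Several abstracts in this listing pool study-level odds ratios with random effects meta-analysis. As an illustration only (the study estimates below are made up, not the paper's data, and the published analyses also adjust for covariates), the DerSimonian-Laird pooling step could be sketched as:

```python
import math

# Hypothetical per-study odds ratios with 95% CIs (NOT the paper's data)
studies = [(1.15, 1.02, 1.30), (1.05, 0.90, 1.22), (1.12, 0.98, 1.28)]

# Work on the log scale; SE recovered from CI width / (2 * 1.96)
y = [math.log(or_) for or_, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for or_, lo, hi in studies]
w = [1 / s**2 for s in se]  # fixed-effect (inverse-variance) weights

# DerSimonian-Laird between-study variance tau^2
ybar_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - ybar_fe) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights and pooled OR with 95% CI
w_re = [1 / (s**2 + tau2) for s in se]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}, "
      f"{math.exp(pooled + 1.96 * se_pooled):.2f})")
```

With homogeneous inputs like these, tau^2 collapses to zero and the random-effects estimate coincides with the fixed-effect one; heterogeneous studies inflate tau^2 and widen the pooled interval.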

Relevance: 10.00%

Abstract:

BACKGROUND: Tobacco smoking is a major contributor to the public health burden and healthcare costs worldwide, but the determinants of smoking behaviours are poorly understood. We conducted a large individual-participant meta-analysis to examine the extent to which work-related stress, operationalised as job strain, is associated with tobacco smoking in working adults. METHODOLOGY AND PRINCIPAL FINDINGS: We analysed cross-sectional data from 15 European studies comprising 166 130 participants; longitudinal data from six studies were also used. Job strain and smoking were self-reported. Smoking was harmonised into three categories: never, ex- and current. We modelled the cross-sectional associations using logistic regression, with the results pooled in random effects meta-analyses. Mixed effects logistic regression was used to examine longitudinal associations. Of the 166 130 participants, 17% reported job strain, 42% were never-smokers, 33% ex-smokers and 25% current smokers. In the analyses of the cross-sectional data, current smokers had higher odds of job strain than never-smokers (age-, sex- and socioeconomic position-adjusted odds ratio: 1.11, 95% confidence interval: 1.03, 1.18). Current smokers with job strain smoked, on average, three cigarettes per week more than current smokers without job strain. In the analyses of longitudinal data (1 to 9 years of follow-up), there was no clear evidence for longitudinal associations between job strain and taking up or quitting smoking. CONCLUSIONS: Our findings show that smokers are slightly more likely than non-smokers to report work-related stress. In addition, smokers who reported work stress smoked, on average, slightly more cigarettes than stress-free smokers.

Relevance: 10.00%

Abstract:

BACKGROUND: Published work assessing psychosocial stress (job strain) as a risk factor for coronary heart disease is inconsistent and subject to publication bias and reverse causation bias. We analysed the relation between job strain and coronary heart disease with a meta-analysis of published and unpublished studies. METHODS: We used individual records from 13 European cohort studies (1985-2006) of men and women without coronary heart disease who were employed at time of baseline assessment. We measured job strain with questions from validated job-content and demand-control questionnaires. We extracted data in two stages such that acquisition and harmonisation of the job strain measure and covariables occurred before linkage to records for coronary heart disease. We defined incident coronary heart disease as the first non-fatal myocardial infarction or coronary death. FINDINGS: 30 214 (15%) of 197 473 participants reported job strain. In 1·49 million person-years at risk (mean follow-up 7·5 years [SD 1·7]), we recorded 2358 events of incident coronary heart disease. After adjustment for sex and age, the hazard ratio for job strain versus no job strain was 1·23 (95% CI 1·10-1·37). This effect estimate was higher in published (1·43, 1·15-1·77) than unpublished (1·16, 1·02-1·32) studies. Hazard ratios were likewise raised in analyses addressing reverse causality by exclusion of events of coronary heart disease that occurred in the first 3 years (1·31, 1·15-1·48) and 5 years (1·30, 1·13-1·50) of follow-up. The association between job strain and coronary heart disease was evident across sexes, age groups, socioeconomic strata, and regions, and persisted after adjustment for socioeconomic status, lifestyle, and conventional risk factors. The population attributable risk for job strain was 3·4%.
INTERPRETATION: Our findings suggest that prevention of workplace stress might decrease disease incidence; however, this strategy would have a much smaller effect than would tackling of standard risk factors, such as smoking. FUNDING: Finnish Work Environment Fund, the Academy of Finland, the Swedish Research Council for Working Life and Social Research, the German Social Accident Insurance, the Danish National Research Centre for the Working Environment, the BUPA Foundation, the Ministry of Social Affairs and Employment, the Medical Research Council, the Wellcome Trust, and the US National Institutes of Health.

Relevance: 10.00%

Abstract:

To investigate whether work-related stress, measured and defined as job strain, is associated with the overall risk of cancer and the risk of colorectal, lung, breast, or prostate cancers.

Relevance: 10.00%

Abstract:

Aflatoxin B1 (AFB1), a mycotoxin produced by Aspergillus flavus or A. parasiticus, is a frequent contaminant of food and feed. This toxin is hepatotoxic and immunotoxic. The present study analyzed the influence of AFB1 on humoral and cellular responses in pigs, and investigated whether the immunomodulation observed is produced through interference with cytokine expression. For 28 days, pigs were fed a control diet or a diet contaminated with 385, 867 or 1807 μg pure AFB1/kg feed. At days 4 and 15, pigs were vaccinated with ovalbumin. AFB1 exposure, confirmed by an observed dose-response in blood aflatoxin-albumin adduct, had no major effect on humoral immunity as measured by plasma concentrations of total IgA, IgG and IgM and of anti-ovalbumin IgG. Toxin exposure did not impair the mitogenic response of lymphocytes but delayed and decreased their specific proliferation in response to the vaccine antigen, suggesting impaired lymphocyte activation in pigs exposed to AFB1. The expression level of pro-inflammatory (TNF-alpha, IL-1 beta, IL-6, IFN-gamma) and regulatory (IL-10) cytokines was assessed by real-time PCR in spleen. A significant up-regulation of all 5 cytokines was observed in spleen from pigs exposed to the highest dose of AFB1. In pigs exposed to the medium dose, IL-6 expression was increased and a trend towards increased IFN-gamma and IL-10 was observed. In addition, we demonstrate that IL-6 impaired in vitro the antigen-induced but not the mitogen-induced proliferation of lymphocytes from control pigs vaccinated with ovalbumin. These results indicate that AFB1 dietary exposure decreases cell-mediated immunity while inducing an inflammatory response. These impairments in the immune response could participate in failure of vaccination protocols and increased susceptibility to infections described in pigs exposed to AFB1. © 2008 Elsevier Inc. All rights reserved.

Relevance: 10.00%

Abstract:

Unfavorable work characteristics, such as low job control and too high or too low job demands, have been suggested to increase the likelihood of physical inactivity during leisure time, but this has not been verified in large-scale studies. The authors combined individual-level data from 14 European cohort studies (baseline years from 1985-1988 to 2006-2008) to examine the association between unfavorable work characteristics and leisure-time physical inactivity in a total of 170,162 employees (50% women; mean age, 43.5 years). Of these employees, 56,735 were re-examined after 2-9 years. In cross-sectional analyses, the odds for physical inactivity were 26% higher (odds ratio = 1.26, 95% confidence interval: 1.15, 1.38) for employees with high-strain jobs (low control/high demands) and 21% higher (odds ratio = 1.21, 95% confidence interval: 1.11, 1.31) for those with passive jobs (low control/low demands) compared with employees in low-strain jobs (high control/low demands). In prospective analyses restricted to physically active participants, the odds of becoming physically inactive during follow-up were 21% and 20% higher for those with high-strain (odds ratio = 1.21, 95% confidence interval: 1.11, 1.32) and passive (odds ratio = 1.20, 95% confidence interval: 1.11, 1.30) jobs at baseline. These data suggest that unfavorable work characteristics may have a spillover effect on leisure-time physical activity.

Relevance: 10.00%

Abstract:

Background: A full-thickness macular hole (FTMH) is a common retinal condition associated with impaired vision. Randomised controlled trials (RCTs) have demonstrated that surgery, by means of pars plana vitrectomy and post-operative intraocular tamponade with gas, is effective for stage 2, 3 and 4 FTMH. Internal limiting membrane (ILM) peeling has been introduced as an additional surgical manoeuvre to increase the success of the surgery, i.e. to increase rates of hole closure and visual improvement. However, little robust evidence exists supporting the superiority of ILM peeling compared with no-peeling techniques. The purpose of FILMS (Full-thickness macular hole and Internal Limiting Membrane peeling Study) is to determine whether ILM peeling improves the visual function, the anatomical closure of FTMH, and the quality of life of patients affected by this disorder, and the cost-effectiveness of the surgery. Methods/Design: Patients with stage 2-3 idiopathic FTMH of 18 months' duration or less (based on symptoms reported by the participant) and with a visual acuity ≤ 20/40 in the study eye will be enrolled in FILMS from eight sites across the UK and Ireland. Participants will be randomised to receive combined cataract surgery (phacoemulsification and intraocular lens implantation) and pars plana vitrectomy with postoperative intraocular tamponade with gas, with or without ILM peeling. The primary outcome is distance visual acuity at 6 months. Secondary outcomes include distance visual acuity at 3 and 24 months, near visual acuity at 3, 6, and 24 months, contrast sensitivity at 6 months, reading speed at 6 months, anatomical closure of the macular hole at each time point (1, 3, 6, and 24 months), health-related quality of life (HRQOL) at six months, costs to the health service and the participant, incremental costs per quality-adjusted life year (QALY) and adverse events.
Discussion: FILMS will provide high-quality evidence on the role of ILM peeling in FTMH surgery. © 2008 Lois et al; licensee BioMed Central Ltd.

Relevance: 10.00%

Abstract:

PURPOSE. To determine whether internal limiting membrane (ILM) peeling is effective and cost-effective compared with no peeling in patients with idiopathic stage 2 or 3 full-thickness macular hole (FTMH). METHODS. This was a pragmatic multicenter randomized controlled trial. Eligible participants from nine centers were randomized to ILM peeling or no peeling (1:1 ratio) in addition to phacovitrectomy, including detachment and removal of the posterior hyaloid and gas tamponade. The primary outcome was distance visual acuity (VA) at 6 months after surgery. Secondary outcomes included hole closure, distance VA at other time points, near VA, contrast sensitivity, reading speed, reoperations, complications, resource use, and participant-reported health status, visual function, and costs. RESULTS. Of 141 participants randomized in nine centers, 127 (90%) completed the 6-month follow-up. The difference in distance VA at 6 months between groups was not statistically significant (mean difference, 4.8; 95% confidence interval [CI], -0.3 to 9.8; P = 0.063). There was a significantly higher rate of hole closure in the ILM-peel group (56 [84%] vs. 31 [48%]) at 1 month (odds ratio [OR], 6.23; 95% CI, 2.64-14.73; P < 0.001), with fewer reoperations (8 [12%] vs. 31 [48%]) performed by 6 months (OR, 0.14; 95% CI, 0.05-0.34; P < 0.001). Peeling the ILM is likely to be cost-effective. CONCLUSIONS. There was no evidence of a difference in distance VA between the ILM-peeling and no-ILM-peeling techniques, and an important benefit in favor of no ILM peeling was ruled out. Given the higher anatomic closure and lower reoperation rates in the ILM-peel group, ILM peeling seems to be the treatment of choice for idiopathic stage 2 to 3 FTMH. © 2011 The Association for Research in Vision and Ophthalmology, Inc.

Relevance: 10.00%

Abstract:

Aim: To determine whether internal limiting membrane (ILM) peeling is cost-effective compared with no peeling for patients with an idiopathic stage 2 or 3 full-thickness macular hole. Methods: A cost-effectiveness analysis was performed alongside a randomised controlled trial. 141 participants were randomly allocated to receive macular-hole surgery, with either ILM peeling or no peeling. Health-service resource use, costs and quality of life were calculated for each participant. The incremental cost per quality-adjusted life year (QALY) gained was calculated at 6 months. Results: At 6 months, the total costs were on average higher (£424, 95% CI -£182 to £1045) in the no-peel arm, primarily owing to the higher reoperation rate in that arm. The mean additional QALYs from ILM peeling at 6 months were 0.002 (95% CI -0.01 to 0.013), adjusting for baseline EQ-5D and other minimisation factors. A mean incremental cost per QALY was not computed, as peeling was on average less costly and slightly more effective. A stochastic analysis suggested that there was more than a 90% probability that peeling would be cost-effective at a willingness-to-pay threshold of £20 000 per QALY. Conclusion: Although there is no evidence of a statistically significant difference in either costs or QALYs between macular-hole surgery with or without ILM peeling, the balance of probabilities is that ILM peeling is likely to be a cost-effective option for the treatment of macular holes. Further long-term follow-up data are needed to confirm these findings.
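The incremental cost-effectiveness logic in this abstract can be sketched in a few lines. The per-arm costs and QALYs below are hypothetical placeholders chosen so the differences mirror the reported means (a £424 cost difference and a 0.002 QALY gain in favour of peeling), not trial data:

```python
# Hypothetical mean cost (GBP) and QALYs per arm -- illustrative only
cost_peel, qaly_peel = 2500.0, 0.402      # ILM-peel arm
cost_nopeel, qaly_nopeel = 2924.0, 0.400  # no-peel arm (more reoperations)

d_cost = cost_peel - cost_nopeel          # incremental cost of peeling
d_qaly = qaly_peel - qaly_nopeel          # incremental QALYs from peeling

if d_cost <= 0 and d_qaly >= 0:
    # Peeling "dominates": cheaper and at least as effective, so no
    # ICER (incremental cost per QALY) is computed -- as in the abstract.
    print("ILM peeling dominates: ICER not computed")
else:
    icer = d_cost / d_qaly                # cost per QALY gained
    print(f"ICER = £{icer:,.0f} per QALY")
```

With both differences favouring peeling, the dominant branch fires; the stochastic (probabilistic) analysis in the abstract essentially repeats this comparison over many resampled cost/QALY pairs and reports the share falling below the £20 000-per-QALY threshold.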

Relevance: 10.00%

Abstract:

PURPOSE: To assess the comparative accuracy of potential screening tests for open angle glaucoma (OAG).

METHODS: Medline, Embase, Biosis (to November 2005), Science Citation Index (to December 2005), and The Cochrane Library (Issue 4, 2005) were searched. Studies assessing candidate screening tests for detecting OAG in persons older than 40 years that reported true and false positives and negatives were included. Meta-analysis was undertaken using the hierarchical summary receiver operating characteristic model.

RESULTS: Forty studies enrolling over 48,000 people reported nine tests. Most tests were reported by only a few studies. Frequency-doubling technology (FDT; C-20-1) was significantly more sensitive than ophthalmoscopy (30, 95% credible interval [CrI] 0-62) and Goldmann applanation tonometry (GAT; 45, 95% CrI 17-68), whereas threshold standard automated perimetry (SAP) and Heidelberg Retinal Tomograph (HRT II) were both more sensitive than GAT (41, 95% CrI 14-64 and 39, 95% CrI 3-64, respectively). GAT was more specific than both FDT C-20-5 (19, 95% CrI 0-53) and threshold SAP (14, 95% CrI 1-37). Judging performance by diagnostic odds ratio, FDT, oculokinetic perimetry, and HRT II are promising tests. Ophthalmoscopy, SAP, retinal photography, and GAT had relatively poor performance as single tests. These findings are based on heterogeneous data of limited quality and as such are associated with considerable uncertainty.

CONCLUSIONS: No test or group of tests was clearly superior for glaucoma screening. Further research is needed to evaluate the comparative accuracy of the most promising tests.
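The diagnostic odds ratio used above to judge test performance comes from the same 2x2 table as sensitivity and specificity. A minimal sketch with hypothetical screening counts (not figures from the review):

```python
# Hypothetical 2x2 counts for one screening test (illustrative only)
tp, fp, fn, tn = 80, 150, 20, 850

sensitivity = tp / (tp + fn)   # P(test positive | glaucoma)
specificity = tn / (tn + fp)   # P(test negative | no glaucoma)

# Diagnostic odds ratio: odds of a positive test in diseased
# versus non-diseased participants; higher is better
dor = (tp / fn) / (fp / tn)

print(f"sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}, DOR = {dor:.1f}")
```

Because the DOR collapses sensitivity and specificity into one number, two tests with the same DOR can trade them off very differently, which is why the review also reports the two components separately.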

Relevance: 10.00%

Abstract:

BACKGROUND: Open angle glaucoma (OAG) is a common cause of blindness.

OBJECTIVES: To assess the effects of medication compared with initial surgery in adults with OAG.

SEARCH METHODS: We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2012, Issue 7), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to August 2012), EMBASE (January 1980 to August 2012), Latin American and Caribbean Literature on Health Sciences (LILACS) (January 1982 to August 2012), Biosciences Information Service (BIOSIS) (January 1969 to August 2012), Cumulative Index to Nursing and Allied Health Literature (CINAHL) (January 1937 to August 2012), OpenGrey (System for Information on Grey Literature in Europe) (www.opengrey.eu/), Zetoc, the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 1 August 2012. The National Research Register (NRR) was last searched in 2007 after which the database was archived. We also checked the reference lists of articles and contacted researchers in the field.

SELECTION CRITERIA: We included randomised controlled trials (RCTs) comparing medications with surgery in adults with OAG.

DATA COLLECTION AND ANALYSIS: Two authors independently assessed trial quality and extracted data. We contacted study authors for missing information.

MAIN RESULTS: Four trials involving 888 participants with previously untreated OAG were included. Surgery was Scheie's procedure in one trial and trabeculectomy in three trials. In three trials, primary medication was usually pilocarpine; in one trial it was a beta-blocker. The most recent trial included participants with on average mild OAG. At five years, the risk of progressive visual field loss, based on a three-unit change of a composite visual field score, was not significantly different according to initial medication or initial trabeculectomy (odds ratio (OR) 0.74, 95% confidence interval (CI) 0.54 to 1.01). In an analysis based on mean difference (MD) as a single index of visual field loss, the between-treatment-group difference in MD was -0.20 decibel (dB) (95% CI -1.31 to 0.91). For a subgroup with more severe glaucoma (MD -10 dB), findings from an exploratory analysis suggest that initial trabeculectomy was associated with marginally less visual field loss at five years than initial medication (mean difference 0.74 dB, 95% CI -0.00 to 1.48). Initial trabeculectomy was associated with lower average intraocular pressure (IOP) (mean difference 2.20 mmHg, 95% CI 1.63 to 2.77) but more eye symptoms than medication (P = 0.0053). Beyond five years, visual acuity did not differ according to initial treatment (OR 1.48, 95% CI 0.58 to 3.81). From three trials in more severe OAG, there is some evidence that medication was associated with more progressive visual field loss and 3 to 8 mmHg less IOP lowering than surgery. In the longer term (two trials) the risk of failure of the randomised treatment was greater with medication than trabeculectomy (OR 3.90, 95% CI 1.60 to 9.53; hazard ratio (HR) 7.27, 95% CI 2.23 to 25.71). Medications and surgery have evolved since these trials were undertaken. In three trials the risk of developing cataract was higher with trabeculectomy (OR 2.69, 95% CI 1.64 to 4.42).
Evidence from one trial suggests that, beyond five years, the risk of needing cataract surgery did not differ according to initial treatment policy (OR 0.63, 95% CI 0.15 to 2.62). Methodological weaknesses were identified in all the trials.

AUTHORS' CONCLUSIONS: Primary surgery lowers IOP more than primary medication but is associated with more eye discomfort. One trial suggests that visual field restriction at five years is not significantly different whether initial treatment is medication or trabeculectomy. There is some evidence from two small trials in more severe OAG that initial medication (pilocarpine, now rarely used as first-line medication) is associated with more glaucoma progression than surgery. Beyond five years, there is no evidence of a difference in the need for cataract surgery according to initial treatment. The clinical and cost-effectiveness of contemporary medication (prostaglandin analogues, alpha2-agonists and topical carbonic anhydrase inhibitors) compared with primary surgery is not known. Further RCTs of current medical treatments compared with surgery are required, particularly for people with severe glaucoma and in black ethnic groups. Outcomes should include those reported by patients. Economic evaluations are required to inform treatment policy.

Relevance: 10.00%

Abstract:

OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.

DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.

SETTING: Primary and secondary care.

PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).

INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.

MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).

RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings, by tonometry, true change was detected at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.

LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.

CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.

FUNDING: The National Institute for Health Research Health Technology Assessment Programme.

Relevance: 10.00%

Abstract:

BACKGROUND:
Glaucoma is a leading cause of blindness. Early detection is advocated but there is insufficient evidence from randomized controlled trials (RCTs) to inform health policy on population screening. Primarily, there is no agreed screening intervention. For a screening programme, agreement is required on the screening tests to be used, either individually or in combination, the person to deliver the test and the location where testing should take place. This study aimed to use ophthalmologists (who were experienced glaucoma subspecialists), optometrists, ophthalmic nurses and patients to develop a reduced set of potential screening tests and testing arrangements that could then be explored in depth in a further study of their feasibility for evaluation in a glaucoma screening RCT.
METHODS:
A two-round Delphi survey involving 38 participants was conducted. Materials were developed from a prior evidence synthesis. For round one, after some initial priming questions in four domains, specialists were asked to nominate three screening interventions, each intervention being a combination of the four domains: target population (age and higher-risk groups), site, screening test and test operator (provider). More than 250 screening interventions were identified. For round two, responses were condensed into 72 interventions and each was rated by participants on a 0-10 scale in terms of feasibility.
RESULTS:
Using a cut-off of a median feasibility rating of ≥5.5 as evidence of agreement on intervention feasibility, six interventions were identified from round two. These were initiating screening at age 50, with a combination of two or three screening tests (varying combinations of tonometry/measures of visual function/optic nerve damage) organized in a community setting with an ophthalmic-trained technical assistant delivering the tests. An alternative intervention was a 'glaucoma risk score' ascertained by questionnaire. The advisory panel recommended that further exploration of the feasibility of screening higher-risk populations and detailed specification of the screening tests was required.
CONCLUSIONS:
With systematic use of expert opinions, a shortlist of potential screening interventions was identified. Views of users, service providers and cost-effectiveness modeling are now required to identify a feasible intervention to evaluate in a future glaucoma screening trial.

Relevance: 10.00%

Abstract:

Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research. Data sources: Major electronic databases were searched up to December 2005. Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored highly sensitive electronic searches were undertaken. Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, without treatment the mean time to blindness in at least one eye was approximately 23 years, compared to 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds with a screening interval of 10 years to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age, thus, appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective.
Screening using a test with initial automated classification followed by assessment by a specialised optometrist, for test positives, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed. However, a highly specific test is required to reduce large numbers of false-positive referrals. The findings that population screening is unlikely to be cost-effective are based on an economic model whose parameter estimates have considerable uncertainty, in particular, if rate of progression and/or costs of visual impairment are higher than estimated then screening could be cost-effective. Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be. Procedures for identifying those at risk, for quality assuring the programme, as well as adequate service provision for those screened positive would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination, and improving the performance of current testing by either refining practice or adding in a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. 
Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.

Relevance: 10.00%

Abstract:

BACKGROUND: Measures that reflect patients' assessment of their health are of increasing importance as outcome measures in randomised controlled trials. The methodological approach used in the pre-validation development of new instruments (item generation, item reduction and question formatting) should be robust and transparent. The totality of the content of existing PRO instruments for a specific condition provides a valuable resource (pool of items) that can be utilised to develop new instruments. Such 'top down' approaches are common, but the explicit pre-validation methods are often poorly reported. This paper presents a systematic and generalisable 5-step pre-validation PRO instrument methodology.

METHODS: The method is illustrated using the example of the Aberdeen Glaucoma Questionnaire (AGQ). The five steps are: 1) Generation of a pool of items; 2) Item de-duplication (three phases); 3) Item reduction (two phases); 4) Assessment of the remaining items' content coverage against a pre-existing theoretical framework appropriate to the objectives of the instrument and the target population (e.g. ICF); and 5) Qualitative exploration of the target population's views of the new instrument and the items it contains.

RESULTS: The AGQ 'item pool' contained 725 items. Three de-duplication phases resulted in reduction of 91, 225 and 48 items respectively. The item reduction phases discarded 70 items and 208 items respectively. The draft AGQ contained 83 items with good content coverage. The qualitative exploration ('think aloud' study) resulted in removal of a further 15 items and refinement to the wording of others. The resultant draft AGQ contained 68 items.

CONCLUSIONS: This study presents a novel methodology for developing a PRO instrument, based on three sources: literature reporting what is important to patients; a theoretically coherent framework; and patients' experience of completing the instrument. By systematically accounting for all items dropped after the item generation phase, our method ensures that the AGQ is developed in a transparent, replicable manner and is fit for validation. We recommend this method to enhance the likelihood that new PRO instruments will be appropriate to the research context in which they are used, acceptable to research participants and likely to generate valid data.