Abstract:
Unfavorable work characteristics, such as low job control and too high or too low job demands, have been suggested to increase the likelihood of physical inactivity during leisure time, but this has not been verified in large-scale studies. The authors combined individual-level data from 14 European cohort studies (baseline years from 1985-1988 to 2006-2008) to examine the association between unfavorable work characteristics and leisure-time physical inactivity in a total of 170,162 employees (50% women; mean age, 43.5 years). Of these employees, 56,735 were reexamined after 2-9 years. In cross-sectional analyses, the odds for physical inactivity were 26% higher (odds ratio 1.26, 95% confidence interval: 1.15, 1.38) for employees with high-strain jobs (low control/high demands) and 21% higher (odds ratio 1.21, 95% confidence interval: 1.11, 1.31) for those with passive jobs (low control/low demands) compared with employees in low-strain jobs (high control/low demands). In prospective analyses restricted to physically active participants, the odds of becoming physically inactive during follow-up were 21% and 20% higher for those with high-strain (odds ratio 1.21, 95% confidence interval: 1.11, 1.32) and passive (odds ratio 1.20, 95% confidence interval: 1.11, 1.30) jobs at baseline. These data suggest that unfavorable work characteristics may have a spillover effect on leisure-time physical activity.
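As an illustrative aside (not part of the study above), odds ratios and Wald-type 95% confidence intervals of the kind reported can be reproduced from a simple 2x2 exposure-by-outcome table; the counts below are hypothetical and are not the cohort data.

```python
import math

# Hypothetical counts: rows = job type, columns = inactive / active at leisure.
# These numbers are made up for illustration; they are not the cohort data.
high_strain_inactive, high_strain_active = 300, 700
low_strain_inactive, low_strain_active = 260, 760

# Odds ratio comparing high-strain with low-strain jobs.
odds_ratio = (high_strain_inactive / high_strain_active) / (
    low_strain_inactive / low_strain_active
)

# 95% Wald confidence interval on the log-odds scale.
se_log_or = math.sqrt(
    1 / high_strain_inactive + 1 / high_strain_active
    + 1 / low_strain_inactive + 1 / low_strain_active
)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```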
Abstract:
Background: A full-thickness macular hole (FTMH) is a common retinal condition associated with impaired vision. Randomised controlled trials (RCTs) have demonstrated that surgery, by means of pars plana vitrectomy and post-operative intraocular tamponade with gas, is effective for stage 2, 3 and 4 FTMH. Internal limiting membrane (ILM) peeling has been introduced as an additional surgical manoeuvre to increase the success of the surgery, i.e. to increase rates of hole closure and visual improvement. However, little robust evidence exists supporting the superiority of ILM peeling compared with no-peeling techniques. The purpose of FILMS (Full-thickness macular hole and Internal Limiting Membrane peeling Study) is to determine whether ILM peeling improves visual function, the anatomical closure of FTMH and the quality of life of patients affected by this disorder, and whether the surgery is cost-effective. Methods/Design: Patients with stage 2-3 idiopathic FTMH of 18 months' duration or less (based on symptoms reported by the participant) and with a visual acuity ≤ 20/40 in the study eye will be enrolled in FILMS at eight sites across the UK and Ireland. Participants will be randomised to receive combined cataract surgery (phacoemulsification and intraocular lens implantation) and pars plana vitrectomy with postoperative intraocular tamponade with gas, with or without ILM peeling. The primary outcome is distance visual acuity at 6 months. Secondary outcomes include distance visual acuity at 3 and 24 months, near visual acuity at 3, 6, and 24 months, contrast sensitivity at 6 months, reading speed at 6 months, anatomical closure of the macular hole at each time point (1, 3, 6, and 24 months), health-related quality of life (HRQOL) at 6 months, costs to the health service and the participant, incremental costs per quality-adjusted life year (QALY) and adverse events. Discussion: FILMS will provide high quality evidence on the role of ILM peeling in FTMH surgery. © 2008 Lois et al; licensee BioMed Central Ltd.
Abstract:
PURPOSE. To determine whether internal limiting membrane (ILM) peeling is effective and cost effective compared with no peeling in patients with idiopathic stage 2 or 3 full-thickness macular hole (FTMH). METHODS. This was a pragmatic multicenter randomized controlled trial. Eligible participants from nine centers were randomized to ILM peeling or no peeling (1:1 ratio) in addition to phacovitrectomy, including detachment and removal of the posterior hyaloid and gas tamponade. The primary outcome was distance visual acuity (VA) at 6 months after surgery. Secondary outcomes included hole closure, distance VA at other time points, near VA, contrast sensitivity, reading speed, reoperations, complications, resource use, and participant-reported health status, visual function, and costs. RESULTS. Of 141 participants randomized in nine centers, 127 (90%) completed the 6-month follow-up. The difference in distance visual acuity at 6 months between groups was not statistically significant (mean difference, 4.8; 95% confidence interval [CI], -0.3 to 9.8; P = 0.063). There was a significantly higher rate of hole closure in the ILM-peel group (56 [84%] vs. 31 [48%]) at 1 month (odds ratio [OR], 6.23; 95% CI, 2.64-14.73; P < 0.001) with fewer reoperations (8 [12%] vs. 31 [48%]) performed by 6 months (OR, 0.14; 95% CI, 0.05-0.34; P < 0.001). Peeling the ILM is likely to be cost effective. CONCLUSIONS. There was no evidence of a difference in distance VA after the ILM peeling and no-ILM peeling techniques. An important benefit in favor of no ILM peeling was ruled out. Given the higher anatomic closure and lower reoperation rates in the ILM-peel group, ILM peeling seems to be the treatment of choice for idiopathic stage 2 to 3 FTMH. © 2011 The Association for Research in Vision and Ophthalmology, Inc.
Abstract:
Aim: To determine whether internal limiting membrane (ILM) peeling is cost-effective compared with no peeling for patients with an idiopathic stage 2 or 3 full-thickness macular hole. Methods: A cost-effectiveness analysis was performed alongside a randomised controlled trial. 141 participants were randomly allocated to receive macular-hole surgery, with either ILM peeling or no peeling. Health-service resource use, costs and quality of life were calculated for each participant. The incremental cost per quality-adjusted life year (QALY) gained was calculated at 6 months. Results: At 6 months, the total costs were on average higher (£424, 95% CI -182 to 1045) in the No Peel arm, primarily owing to the higher reoperation rate in the No Peel arm. The mean additional QALYs from ILM peel at 6 months were 0.002 (95% CI -0.01 to 0.013), adjusting for baseline EQ-5D and other minimisation factors. A mean incremental cost per QALY was not computed, as Peeling was on average less costly and slightly more effective. A stochastic analysis suggested that there was more than a 90% probability that Peeling would be cost-effective at a willingness-to-pay threshold of £20 000 per QALY. Conclusion: Although there is no evidence of a statistically significant difference in either costs or QALYs between macular hole surgery with or without ILM peeling, the balance of probabilities is that ILM peeling is likely to be a cost-effective option for the treatment of macular holes. Further long-term follow-up data are needed to confirm these findings.
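To make the cost-effectiveness logic above concrete, here is a minimal sketch of an incremental net monetary benefit calculation at a £20,000 per QALY threshold, the quantity a cost-effectiveness acceptability analysis rests on. The bootstrap draws are simulated placeholders centred loosely on the point estimates quoted above; they are not the trial data, and the spreads are invented.

```python
import random

random.seed(1)

WTP = 20_000  # willingness-to-pay threshold, £ per QALY

def incremental_net_benefit(delta_qaly, delta_cost, wtp=WTP):
    """Net monetary benefit of ILM peeling versus no peeling."""
    return wtp * delta_qaly - delta_cost

# Simulated replicates of incremental QALYs and incremental costs of
# Peel vs No Peel (illustrative only: centred near the point estimates
# reported above, with made-up spread; negative cost = Peel cheaper).
draws = [
    (random.gauss(0.002, 0.006), random.gauss(-424, 300))
    for _ in range(5_000)
]

prob_cost_effective = sum(
    incremental_net_benefit(dq, dc) > 0 for dq, dc in draws
) / len(draws)

print(f"P(peeling cost-effective at £{WTP:,}/QALY) ≈ {prob_cost_effective:.2f}")
```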
Abstract:
PURPOSE: To assess the comparative accuracy of potential screening tests for open angle glaucoma (OAG).
METHODS: Medline, Embase, Biosis (to November 2005), Science Citation Index (to December 2005), and The Cochrane Library (Issue 4, 2005) were searched. Studies assessing candidate screening tests for detecting OAG in persons older than 40 years that reported true and false positives and negatives were included. Meta-analysis was undertaken using the hierarchical summary receiver operating characteristic model.
RESULTS: Forty studies enrolling over 48,000 people reported nine tests. Most tests were reported by only a few studies. Frequency-doubling technology (FDT; C-20-1) was significantly more sensitive than ophthalmoscopy (difference 30%, 95% credible interval [CrI] 0-62%) and Goldmann applanation tonometry (GAT; difference 45%, 95% CrI 17-68%), whereas threshold standard automated perimetry (SAP) and the Heidelberg Retinal Tomograph (HRT II) were both more sensitive than GAT (differences 41%, 95% CrI 14-64%, and 39%, 95% CrI 3-64%, respectively). GAT was more specific than both FDT C-20-5 (difference 19%, 95% CrI 0-53%) and threshold SAP (difference 14%, 95% CrI 1-37%). Judging performance by diagnostic odds ratio, FDT, oculokinetic perimetry, and HRT II are promising tests. Ophthalmoscopy, SAP, retinal photography, and GAT had relatively poor performance as single tests. These findings are based on heterogeneous data of limited quality and as such are associated with considerable uncertainty.
CONCLUSIONS: No test or group of tests was clearly superior for glaucoma screening. Further research is needed to evaluate the comparative accuracy of the most promising tests.
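For readers unfamiliar with the metrics used to rank the tests above, a minimal sketch of sensitivity, specificity and the diagnostic odds ratio for a single 2x2 screening-test table follows; the counts are hypothetical and do not come from the included studies.

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Return sensitivity, specificity and diagnostic odds ratio
    for a single screening-test 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # DOR = odds of a positive test in diseased / odds in non-diseased.
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# Hypothetical counts for a frequency-doubling technology screen.
sens, spec, dor = diagnostic_summary(tp=86, fp=120, fn=14, tn=780)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, DOR {dor:.1f}")
```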
Abstract:
BACKGROUND: Open angle glaucoma (OAG) is a common cause of blindness.
OBJECTIVES: To assess the effects of medication compared with initial surgery in adults with OAG.
SEARCH METHODS: We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2012, Issue 7), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to August 2012), EMBASE (January 1980 to August 2012), Latin American and Caribbean Literature on Health Sciences (LILACS) (January 1982 to August 2012), Biosciences Information Service (BIOSIS) (January 1969 to August 2012), Cumulative Index to Nursing and Allied Health Literature (CINAHL) (January 1937 to August 2012), OpenGrey (System for Information on Grey Literature in Europe) (www.opengrey.eu/), Zetoc, the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 1 August 2012. The National Research Register (NRR) was last searched in 2007 after which the database was archived. We also checked the reference lists of articles and contacted researchers in the field.
SELECTION CRITERIA: We included randomised controlled trials (RCTs) comparing medications with surgery in adults with OAG.
DATA COLLECTION AND ANALYSIS: Two authors independently assessed trial quality and extracted data. We contacted study authors for missing information.
MAIN RESULTS: Four trials involving 888 participants with previously untreated OAG were included. Surgery was Scheie's procedure in one trial and trabeculectomy in three trials. In three trials, primary medication was usually pilocarpine; in one trial it was a beta-blocker. The most recent trial included participants with on average mild OAG. At five years, the risk of progressive visual field loss, based on a three unit change of a composite visual field score, was not significantly different according to initial medication or initial trabeculectomy (odds ratio (OR) 0.74, 95% confidence interval (CI) 0.54 to 1.01). In an analysis based on mean difference (MD) as a single index of visual field loss, the between treatment group difference in MD was -0.20 decibel (dB) (95% CI -1.31 to 0.91). For a subgroup with more severe glaucoma (MD -10 dB), findings from an exploratory analysis suggest that initial trabeculectomy was associated with marginally less visual field loss at five years than initial medication (mean difference 0.74 dB, 95% CI -0.00 to 1.48). Initial trabeculectomy was associated with lower average intraocular pressure (IOP) (mean difference 2.20 mmHg, 95% CI 1.63 to 2.77) but more eye symptoms than medication (P = 0.0053). Beyond five years, visual acuity did not differ according to initial treatment (OR 1.48, 95% CI 0.58 to 3.81). From three trials in more severe OAG, there is some evidence that medication was associated with more progressive visual field loss and 3 to 8 mmHg less IOP lowering than surgery. In the longer term (two trials) the risk of failure of the randomised treatment was greater with medication than trabeculectomy (OR 3.90, 95% CI 1.60 to 9.53; hazard ratio (HR) 7.27, 95% CI 2.23 to 25.71). Medications and surgery have evolved since these trials were undertaken. In three trials the risk of developing cataract was higher with trabeculectomy (OR 2.69, 95% CI 1.64 to 4.42). Evidence from one trial suggests that, beyond five years, the risk of needing cataract surgery did not differ according to initial treatment policy (OR 0.63, 95% CI 0.15 to 2.62). Methodological weaknesses were identified in all the trials.
AUTHORS' CONCLUSIONS: Primary surgery lowers IOP more than primary medication but is associated with more eye discomfort. One trial suggests that visual field restriction at five years is not significantly different whether initial treatment is medication or trabeculectomy. There is some evidence from two small trials in more severe OAG that initial medication (pilocarpine, now rarely used as first line medication) is associated with more glaucoma progression than surgery. Beyond five years, there is no evidence of a difference in the need for cataract surgery according to initial treatment. The clinical and cost-effectiveness of contemporary medication (prostaglandin analogues, alpha2-agonists and topical carbonic anhydrase inhibitors) compared with primary surgery is not known. Further RCTs of current medical treatments compared with surgery are required, particularly for people with severe glaucoma and in black ethnic groups. Outcomes should include those reported by patients. Economic evaluations are required to inform treatment policy.
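As a generic illustration of how trial-level odds ratios like those reported above are combined in a meta-analysis (this is not the review's actual data or software), a fixed-effect inverse-variance pooling of log odds ratios might look like the sketch below; the per-trial estimates are invented.

```python
import math

# Invented per-trial odds ratios and 95% CIs (OR, lower, upper).
trials = [
    (3.2, 1.1, 9.4),
    (4.5, 1.3, 15.6),
]

weights, weighted_logs = [], []
for or_, lo, hi in trials:
    log_or = math.log(or_)
    # Back out the standard error from the reported 95% CI width.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1 / se**2                      # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(f"pooled OR {pooled_or:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```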
Abstract:
OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways.
DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model.
SETTING: Primary and secondary care.
PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences).
INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community.
MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs).
RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings by tonometry, true change was detected at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence.
LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates.
CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
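To give a flavour of the pathway comparison described above, here is a heavily simplified, annual-cycle patient simulation; the real evaluation used a much richer discrete event simulation, and every rate, cost and utility below is an invented placeholder rather than a model input.

```python
import random

random.seed(0)
HORIZON_YEARS = 20  # matches the 20-year horizon noted in the limitations

def simulate_patient(monitor_every, treat_from_start, annual_conversion_risk,
                     visit_cost, drug_cost, conversion_cost):
    """Follow one ocular-hypertension patient over the model horizon and
    return (cost, QALYs). Every number here is an invented placeholder."""
    cost, qalys = 0.0, 0.0
    treated, converted = treat_from_start, False
    for year in range(HORIZON_YEARS):
        if monitor_every and year % monitor_every == 0:
            cost += visit_cost                      # monitoring visit
        if treated:
            cost += drug_cost                       # IOP-lowering drops
        risk = annual_conversion_risk * (0.5 if treated else 1.0)
        if not converted and random.random() < risk:
            converted = True                        # conversion to glaucoma
            treated = True                          # treatment starts once detected
            cost += conversion_cost
        qalys += 0.75 if converted else 0.85        # invented utility values
    return cost, qalys

def pathway_mean(n, **kwargs):
    runs = [simulate_patient(**kwargs) for _ in range(n)]
    return (sum(c for c, _ in runs) / n, sum(q for _, q in runs) / n)

treat_all = pathway_mean(5000, monitor_every=1, treat_from_start=True,
                         annual_conversion_risk=0.02, visit_cost=30,
                         drug_cost=120, conversion_cost=2000)
biennial = pathway_mean(5000, monitor_every=2, treat_from_start=False,
                        annual_conversion_risk=0.02, visit_cost=90,
                        drug_cost=120, conversion_cost=2000)
print("treat all (mean cost, mean QALYs):", treat_all)
print("biennial  (mean cost, mean QALYs):", biennial)
```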
Abstract:
BACKGROUND:
Glaucoma is a leading cause of blindness. Early detection is advocated but there is insufficient evidence from randomized controlled trials (RCTs) to inform health policy on population screening. Primarily, there is no agreed screening intervention. For a screening programme, agreement is required on the screening tests to be used, either individually or in combination, the person to deliver the test and the location where testing should take place. This study aimed to use ophthalmologists (who were experienced glaucoma subspecialists), optometrists, ophthalmic nurses and patients to develop a reduced set of potential screening tests and testing arrangements that could then be explored in depth in a further study of their feasibility for evaluation in a glaucoma screening RCT.
METHODS:
A two-round Delphi survey involving 38 participants was conducted. Materials were developed from a prior evidence synthesis. For round one, after some initial priming questions in four domains, specialists were asked to nominate three screening interventions, the intervention being a combination of the four domains: target population (age and higher-risk groups), site, screening test and test operator (provider). More than 250 screening interventions were identified. For round two, responses were condensed into 72 interventions and each was rated by participants on a 0-10 scale in terms of feasibility.
RESULTS:
Using a median feasibility rating of ≥ 5.5 as the cut-off for agreement on intervention feasibility, six interventions were identified from round two. These were initiating screening at age 50, with a combination of two or three screening tests (varying combinations of tonometry/measures of visual function/optic nerve damage) organized in a community setting with an ophthalmic-trained technical assistant delivering the tests. An alternative intervention was a 'glaucoma risk score' ascertained by questionnaire. The advisory panel recommended that further exploration of the feasibility of screening higher-risk populations and detailed specification of the screening tests was required.
CONCLUSIONS:
With systematic use of expert opinions, a shortlist of potential screening interventions was identified. Views of users, service providers and cost-effectiveness modeling are now required to identify a feasible intervention to evaluate in a future glaucoma screening trial.
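As a small illustrative aside, the round-two selection rule described above (retain interventions whose median feasibility rating reaches the 5.5 cut-off on the 0-10 scale) amounts to the following; the ratings and intervention labels shown are invented, not the panel's data.

```python
from statistics import median

# Invented 0-10 feasibility ratings for two hypothetical candidate
# screening interventions; real round-two data are not reproduced here.
ratings = {
    "age 50+, community, technician, tonometry + visual function": [7, 8, 6, 5, 9, 7, 6],
    "age 40+, hospital, ophthalmologist, optic nerve imaging only": [4, 5, 3, 6, 4, 5, 2],
}

CUT_OFF = 5.5
feasible = [name for name, scores in ratings.items() if median(scores) >= CUT_OFF]
print(feasible)
```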
Abstract:
Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research. Data sources: Major electronic databases were searched up to December 2005. Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored highly sensitive electronic searches were undertaken. Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, without treatment the mean time to blindness in at least one eye was approximately 23 years, compared to 35 years with treatment. Prevalence would have to be about 3-4% in 40 year olds with a screening interval of 10 years to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age, thus, appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification followed by assessment by a specialised optometrist, for test positives, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed. However, a highly specific test is required to reduce large numbers of false-positive referrals. The findings that population screening is unlikely to be cost-effective are based on an economic model whose parameter estimates have considerable uncertainty, in particular, if rate of progression and/or costs of visual impairment are higher than estimated then screening could be cost-effective. Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be. 
Procedures for identifying those at risk, for quality assuring the programme, as well as adequate service provision for those screened positive would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination, and improving the performance of current testing by either refining practice or adding in a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
Abstract:
BACKGROUND: Measures that reflect patients' assessment of their health are of increasing importance as outcome measures in randomised controlled trials. The methodological approach used in the pre-validation development of new instruments (item generation, item reduction and question formatting) should be robust and transparent. The totality of the content of existing patient-reported outcome (PRO) instruments for a specific condition provides a valuable resource (pool of items) that can be utilised to develop new instruments. Such 'top down' approaches are common, but the explicit pre-validation methods are often poorly reported. This paper presents a systematic and generalisable 5-step pre-validation PRO instrument methodology.
METHODS: The method is illustrated using the example of the Aberdeen Glaucoma Questionnaire (AGQ). The five steps are: 1) Generation of a pool of items; 2) Item de-duplication (three phases); 3) Item reduction (two phases); 4) Assessment of the remaining items' content coverage against a pre-existing theoretical framework appropriate to the objectives of the instrument and the target population (e.g. ICF); and 5) qualitative exploration of the target populations' views of the new instrument and the items it contains.
RESULTS: The AGQ 'item pool' contained 725 items. Three de-duplication phases resulted in reduction of 91, 225 and 48 items respectively. The item reduction phases discarded 70 items and 208 items respectively. The draft AGQ contained 83 items with good content coverage. The qualitative exploration ('think aloud' study) resulted in removal of a further 15 items and refinement to the wording of others. The resultant draft AGQ contained 68 items.
CONCLUSIONS: This study presents a novel methodology for developing a PRO instrument, based on three sources: literature reporting what is important to patients; a theoretically coherent framework; and patients' experience of completing the instrument. By systematically accounting for all items dropped after the item generation phase, our method ensures that the AGQ is developed in a transparent, replicable manner and is fit for validation. We recommend this method to enhance the likelihood that new PRO instruments will be appropriate to the research context in which they are used, acceptable to research participants and likely to generate valid data.
Abstract:
Aim: To evaluate the quality of reporting of all diagnostic studies published in five major ophthalmic journals in the year 2002 using the Standards for Reporting of Diagnostic Accuracy (STARD) initiative parameters. Methods: Manual searching was used to identify diagnostic studies published in 2002 in five leading ophthalmic journals, the American Journal of Ophthalmology (AJO), Archives of Ophthalmology (Archives), British Journal of Ophthalmology (BJO), Investigative Ophthalmology and Visual Science (IOVS), and Ophthalmology. The STARD checklist of 25 items and flow chart was used to evaluate the quality of each publication. Results: A total of 16 publications were included (AJO = 5, Archives = 1, BJO = 2, IOVS = 2, and Ophthalmology = 6). More than half of the studies (n = 9) were related to glaucoma diagnosis. Other specialties included retina (n = 4), cornea (n = 2), and neuro-ophthalmology (n = 1). The most common description of diagnostic accuracy was sensitivity and specificity values, published in 13 articles. The number of fully reported items in evaluated studies ranged from 8 to 19. Seven studies reported more than 50% of the STARD items. Conclusions: The current standards of reporting of diagnostic accuracy tests are highly variable. The STARD initiative may be a useful tool for appraising the strengths and weaknesses of diagnostic accuracy studies.
Abstract:
Aim: To compare the diagnostic performance of accredited glaucoma optometrists (AGO) for both the diagnosis of glaucoma and the decision to treat with that of routine hospital eye care, against a reference standard of expert opinion (a consultant ophthalmologist with a special interest in glaucoma). Methods: A directly comparative, masked, performance study was undertaken in Grampian, Scotland. Of 165 people invited to participate, 100 (61%) were examined. People suspected of having glaucoma underwent, within one month, a full ophthalmic assessment in both a newly established community optometry-led glaucoma management scheme and a consultant-led hospital eye service. Results: Agreement between the AGO and the consultant ophthalmologist in diagnosing glaucoma was substantial (89%; κ = 0.703, SE = 0.083). Agreement over the need for treatment was also substantial (88%; κ = 0.716, SE = 0.076). The agreement between the trainee ophthalmologists and the consultant ophthalmologist in the diagnosis of glaucoma and treatment recommendation was moderate (83%, κ = 0.541, SE = 0.098; and 81%, κ = 0.553, SE = 0.090, respectively). The diagnostic accuracy of the optometrists in detecting glaucoma in this population was high for specificity (0.93 (95% confidence interval, 0.85 to 0.97)) but lower for sensitivity (0.76 (0.57 to 0.89)). Performance was similar when accuracy was assessed for treatment recommendation (sensitivity 0.73 (0.57 to 0.85); specificity 0.96 (0.88 to 0.99)). The differences in sensitivity and specificity between AGO and junior ophthalmologist were not statistically significant. Conclusions: Community optometrists trained in glaucoma provided satisfactory decisions regarding diagnosis and initiation of treatment for glaucoma. With such additional training in glaucoma, optometrists are at least as accurate as junior ophthalmologists, but some cases of glaucoma are missed.
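The kappa statistics quoted above follow the standard Cohen's kappa formula, κ = (observed agreement - chance agreement) / (1 - chance agreement); a minimal sketch with a hypothetical 2x2 agreement table follows (the counts are not the Grampian data).

```python
def cohens_kappa(both_yes, rater1_only, rater2_only, both_no):
    """Cohen's kappa for two raters making a yes/no judgement
    (e.g. 'glaucoma present' by optometrist vs consultant)."""
    total = both_yes + rater1_only + rater2_only + both_no
    observed = (both_yes + both_no) / total
    # Chance agreement from each rater's marginal 'yes'/'no' proportions.
    p_yes = ((both_yes + rater1_only) / total) * ((both_yes + rater2_only) / total)
    p_no = ((both_no + rater2_only) / total) * ((both_no + rater1_only) / total)
    expected = p_yes + p_no
    return (observed - expected) / (1 - expected)

# Hypothetical table: 25 agreed glaucoma, 64 agreed no glaucoma, 11 disagreements.
print(round(cohens_kappa(both_yes=25, rater1_only=5, rater2_only=6, both_no=64), 3))
```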
Abstract:
PURPOSE: To identify vision Patient-Reported Outcomes instruments relevant to glaucoma and assess their content validity.
METHODS: MEDLINE, MEDLINE in Process, EMBASE and SCOPUS (to January 2009) were systematically searched. Observational studies or randomised controlled trials, published in English, reporting use of vision instruments in glaucoma studies involving adults were included. In addition, reference lists were scanned to identify additional studies describing development and/or validation to ascertain the final version of the instruments. Instruments' content was then mapped onto a theoretical framework, the World Health Organization International Classification of Functioning, Disability and Health. Two reviewers independently evaluated studies for inclusion and quality assessed instrument content.
RESULTS: Thirty-three instruments were identified. Instruments were categorised into thirteen vision status, two vision disability, one vision satisfaction, five glaucoma status, one glaucoma medication related to health status, five glaucoma medication side effects and six glaucoma medication satisfaction measures according to each instrument's content. The National Eye Institute Visual Function Questionnaire-25, the Impact of Vision Impairment and the Treatment Satisfaction Survey-Intraocular Pressure had the highest number of positive ratings in the content validity assessment.
CONCLUSION: This study provides a descriptive catalogue of vision-specific PRO instruments, to inform the choice of an appropriate measure of patient-reported outcomes in a glaucoma context.
Abstract:
PURPOSE: To evaluate the relative benefits and to identify any adverse effects of surgical interventions for limbal stem cell deficiency (LSCD).
DESIGN: Systematic literature review.
METHODS: We searched the following electronic databases from January 1, 1989 through September 30, 2006: MEDLINE, EMBASE, Science Citation Index, BIOSIS, and the Cochrane Library. In addition, reference lists were scanned to identify any additional reports. The quality of published reports was assessed using standard methods. The main outcome measure was improvement in vision of at least two Snellen lines of best-corrected visual acuity (BCVA). Data on adverse outcomes also were collected.
RESULTS: Twenty-six studies met the inclusion criteria. There were no randomized controlled studies. All 26 studies were either prospective or retrospective case series. For bilateral severe LSCD, keratolimbal allograft was the most common intervention with systemic immunosuppression. Other interventions included eccentric penetrating keratolimbal allografts and cultivated autologous oral mucosal epithelial grafts. An improvement in BCVA of two lines or more was reported in 31% to 67% of eyes. For unilateral severe LSCD, the most common surgical intervention was contralateral conjunctival limbal autograft, with 35% to 88% of eyes gaining an improvement in BCVA of two lines or more. The only study evaluating partial LSCD showed an improvement in BCVA of two lines or more in 39% of eyes.
CONCLUSIONS: Studies to date have not provided strong evidence to guide clinical practice on which surgery is most beneficial to treat various types of LSCD. Standardized data collection in a multicenter LSCD register is suggested.
Abstract:
OBJECTIVE: To assess the agreement of tonometers available for clinical practice with the Goldmann applanation tonometer (GAT), the most commonly accepted reference device.
DESIGN: A systematic review and meta-analysis of directly comparative studies assessing the agreement of 1 or more tonometers with the reference tonometer (GAT).
PARTICIPANTS: A total of 11 582 participants (15 525 eyes) were included.
METHODS: Summary 95% limits of agreement (LoA) were produced for each comparison.
MAIN OUTCOME MEASURES: Agreement, recordability, and reliability.
RESULTS: A total of 102 studies, including 130 paired comparisons, were included, representing 8 tonometers: dynamic contour tonometer, noncontact tonometer (NCT), ocular response analyzer, Ocuton S, handheld applanation tonometer (HAT), rebound tonometer, transpalpebral tonometer, and Tono-Pen. The agreement (95% limits) seemed to vary across tonometers, from 0.2 mmHg (-3.8 to 4.3 mmHg) for the NCT to 2.7 mmHg (-4.1 to 9.6 mmHg) for the Ocuton S. The estimated proportion within 2 mmHg of the GAT ranged from 33% (Ocuton S) to 66% and 59% (NCT and HAT, respectively). Substantial inter- and intraobserver variability was observed for all tonometers.
CONCLUSIONS: The NCT and HAT seem to achieve a measurement closest to the GAT. However, there was substantial variability in measurements both within and between studies.
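The 95% limits of agreement summarised above follow the usual Bland-Altman recipe (mean difference ± 1.96 × SD of the paired differences); a minimal sketch with made-up paired IOP readings follows.

```python
from statistics import mean, stdev

# Made-up paired intraocular pressure readings (mmHg): test tonometer vs GAT.
test_tonometer = [16.2, 18.4, 21.0, 14.8, 19.5, 23.1, 17.3, 20.2]
gat =            [15.8, 18.9, 20.1, 15.2, 18.7, 22.0, 17.9, 19.4]

differences = [t - g for t, g in zip(test_tonometer, gat)]
bias = mean(differences)                     # mean difference (bias)
loa_half_width = 1.96 * stdev(differences)   # 1.96 x SD of the differences

print(f"bias {bias:.2f} mmHg, "
      f"95% LoA {bias - loa_half_width:.2f} to {bias + loa_half_width:.2f} mmHg")
```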