126 results for Patient-reported outcomes
Abstract:
Allogeneic blood or bone marrow transplantation is a successful treatment for leukaemia and severe aplastic anaemia (SAA). Graft rejection following transplantation for leukaemia is a rare event, but leukaemic relapse may occur at varying rates, depending upon the type of leukaemia and the stage at which the transplant was undertaken. Relapse is generally assumed to occur in residual host cells that are refractory to, or escape, the myeloablative conditioning therapy. Rare cases have been described, however, in which the leukaemia recurs in cells of donor origin. In SAA, by contrast, failure of blood or bone marrow transplantation is due to late graft rejection or graft-versus-host disease, and leukaemia in cells of donor origin has rarely been reported following allogeneic bone marrow transplantation. This report describes leukaemic transformation in donor cells following a second allogeneic BMT for SAA. PCR of short tandem repeats in bone marrow aspirates and in colonies derived from BFU-E and CFU-GM indicated the donor origin of the leukaemia. Donor leukaemia is a rare event following transplantation for SAA but may represent the persistence or perturbation of a stromal defect in these patients that induces leukaemic change in donor haemopoietic stem cells.
Abstract:
Background A 2014 national audit used the English General Practice Patient Survey (GPPS) to compare service users’ experience of out-of-hours general practitioner (GP) services, yet there is no published evidence on the validity of these GPPS items. Objectives To establish the construct and concurrent validity of the GPPS items evaluating service users’ experience of GP out-of-hours care. Methods Cross-sectional postal survey of service users (n=1396) of six English out-of-hours providers. Participants reported on four GPPS items evaluating out-of-hours care (three items modified following cognitive interviews with service users) and 14 evaluative items from the Out-of-hours Patient Questionnaire (OPQ). Construct validity was assessed through correlations between any reliable (Cronbach's α>0.7) scales, as suggested by a principal component analysis of the modified GPPS items, and the ‘entry access’ (four items) and ‘consultation satisfaction’ (10 items) OPQ subscales. Concurrent validity was determined by investigating whether each modified GPPS item was associated with thematically related items from the OPQ using linear regressions. Results The modified GPPS item set formed a single scale (α=0.77), which summarised the two-component structure of the OPQ moderately well, explaining 39.7% of variation in the ‘entry access’ scores (r=0.63) and 44.0% of variation in the ‘consultation satisfaction’ scores (r=0.66), demonstrating acceptable construct validity. Concurrent validity was verified, as each modified GPPS item was highly associated with a distinct set of related items from the OPQ. Conclusions Minor modifications to the English GPPS items evaluating out-of-hours care are required to improve comprehension by service users. The modified question set was demonstrated to comprise a valid measure of service users’ overall satisfaction with the out-of-hours care received. This demonstrates the potential for as few as four items to be used in benchmarking providers and assisting services in identifying, implementing and assessing quality improvement initiatives.
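The reliability and construct-validity figures above (Cronbach's α = 0.77; r = 0.63 explaining 39.7% of variance, r = 0.66 explaining 44.0%) follow standard formulas. The sketch below is a minimal illustration of those two calculations on fabricated data; it is not the study's analysis code, and every variable name is an assumption.

```python
# Minimal sketch, assuming fabricated data: Cronbach's alpha for a short item
# set and the share of variance in an external subscale explained by the
# summed score (r squared). Item and scale names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical responses: 200 service users x 4 items sharing one latent factor
latent = rng.normal(0, 1, size=200)
items = latent[:, None] + rng.normal(0, 1.0, size=(200, 4))

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

gpps_score = items.sum(axis=1)
# Hypothetical OPQ 'entry access' subscale scores for the same respondents
opq_entry_access = latent + rng.normal(0, 1.0, size=200)

alpha = cronbach_alpha(items)
r, _ = stats.pearsonr(gpps_score, opq_entry_access)
print(f"Cronbach's alpha = {alpha:.2f}")
print(f"r = {r:.2f}; variance explained = {r**2:.1%}")  # e.g. r=0.63 -> 39.7%
```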
Abstract:
Background: Rapid Response Systems (RRS) have been implemented nationally and internationally to improve patient safety in hospital. To date, however, the majority of RRS research has focused on measuring the effectiveness of the intervention on patient outcomes. It has been recommended that evaluation of RRS requires a multimodal approach addressing the broad range of process and outcome measures needed to determine the effectiveness of the RRS concept. Aim: The aim of this paper is to evaluate the official RRS programme theory (the assumptions about how the programme is meant to work) against actual practice, in order to determine what works. Methods: The research design was a multiple case study of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine the RRS programme theory [1]. This approach used a variety of mixed methods to test the programme theories, including individual and focus group interviews with a purposive sample of 75 nurses and doctors, observation of ward practices and documentary analysis. The findings from the case studies were analysed and compared within and across cases to identify what works, for whom and in what circumstances. Results: The RRS programme theories were critically evaluated and compared with the study findings to develop a mid-range theory explaining what works, for whom and in what circumstances. The findings suggest that clinical experience, established working relationships, flexible implementation of protocols, ongoing experiential learning, empowerment and pre-emptive management are key to the success of RRS implementation. Conclusion: These findings highlight the combination of factors that can improve the implementation of RRS, and in light of this evidence several recommendations are made to provide policymakers with guidance and direction for their success and sustainability. References: 1. Pawson R and Tilley N (1997) Realistic Evaluation. Sage Publications, London. Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen’s University of Belfast.
Abstract:
BACKGROUND: This study investigated the effect of socioeconomic deprivation on preoperative disease and outcome following unicompartmental knee replacement (UKR).
METHODS: 307 Oxford UKRs implanted between 2008 and 2013 under the care of one surgeon using the same surgical technique were analysed. Deprivation was quantified using the Northern Ireland Multiple Deprivation Measure. Preoperative disease severity and postoperative outcome were measured using the Oxford Knee Score (OKS).
RESULTS: There was no difference in preoperative OKS between deprivation groups. Preoperative knee range of motion (ROM) was significantly reduced in the more deprived patients, who had 10° less ROM than the least deprived patients. Postoperatively there was no difference in OKS improvement between deprivation groups (p=0.46), with improvements of 19.5 and 21.0 units in the most and least deprived groups respectively. There was no significant association between deprivation and OKS improvement on unadjusted or adjusted analysis. Preoperative OKS, Short Form 12 mental component score and length of stay were significant independent predictors of OKS improvement. One year postoperatively, a significantly lower proportion of the most deprived group (15%) reported being able to walk an unlimited distance compared with the least deprived group (41%).
CONCLUSION: More deprived patients can achieve similar improvements in OKS to less deprived patients following UKR.
LEVEL OF EVIDENCE: 2b - retrospective cohort study of prognosis.
Abstract:
Background:
Prolonged mechanical ventilation is associated with a longer intensive care unit (ICU) length of stay and higher mortality. Consequently, methods to improve ventilator weaning processes have been sought. Two recent Cochrane systematic reviews in ICU adult and paediatric populations concluded that protocols can be effective in reducing the duration of mechanical ventilation, but there was significant heterogeneity in study findings. Growing awareness of the benefits of understanding the contextual factors impacting on effectiveness has encouraged the integration of qualitative evidence syntheses with effectiveness reviews, which has delivered important insights into the reasons underpinning (differential) effectiveness of healthcare interventions.
Objectives:
1. To locate, appraise and synthesize qualitative evidence concerning the barriers and facilitators of the use of protocols for weaning critically-ill adults and children from mechanical ventilation;
2. To integrate this synthesis with two Cochrane effectiveness reviews of protocolized weaning to help explain observed heterogeneity by identifying contextual factors that impact on the use of protocols for weaning critically-ill adults and children from mechanical ventilation;
3. To use the integrated body of evidence to suggest the circumstances in which weaning protocols are most likely to be used.
Search methods:
We used a range of search terms identified with the help of the SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) mnemonic. Where available, we used appropriate methodological filters for specific databases. We searched the following databases: Ovid MEDLINE, Embase, OVID, PsycINFO, CINAHL Plus, EBSCOHost, Web of Science Core Collection, ASSIA, IBSS, Sociological Abstracts, ProQuest and LILACS on the 26th February 2015. In addition, we searched: the grey literature; the websites of professional associations for relevant publications; and the reference lists of all publications reviewed. We also contacted authors of the trials included in the effectiveness reviews as well as of studies (potentially) included in the qualitative synthesis, conducted citation searches of the publications reporting these studies, and contacted content experts.
We reran the search on 3rd July 2016 and found three studies, which are awaiting classification.
Selection criteria:
We included qualitative studies that described: the circumstances in which protocols are designed, implemented or used, or both, and the views and experiences of healthcare professionals either involved in the design, implementation or use of weaning protocols or involved in the weaning of critically-ill adults and children from mechanical ventilation not using protocols. We included studies that: reflected on any aspect of the use of protocols, explored contextual factors relevant to the development, implementation or use of weaning protocols, and reported contextual phenomena and outcomes identified as relevant to the effectiveness of protocolized weaning from mechanical ventilation.
Data collection and analysis:
At each stage, two review authors undertook designated tasks, with the results shared amongst the wider team for discussion and final development. We independently reviewed all retrieved titles, abstracts and full papers for inclusion, and independently extracted selected data from included studies. We used the findings of the included studies to develop a new set of analytic themes focused on the barriers and facilitators to the use of protocols, and further refined them to produce a set of summary statements. We used the Confidence in the Evidence from Reviews of Qualitative Research (CERQual) framework to arrive at a final assessment of the overall confidence of the evidence used in the synthesis. We included all studies but undertook two sensitivity analyses to determine how the removal of certain bodies of evidence impacted on the content and confidence of the synthesis. We deployed a logic model to integrate the findings of the qualitative evidence synthesis with those of the Cochrane effectiveness reviews.
Main results:
We included 11 studies in our synthesis, involving 267 participants (one study did not report the number of participants). Five more studies are awaiting classification and will be dealt with when we update the review.
The quality of the evidence was mixed; of the 35 summary statements, we assessed 17 as ‘low’, 13 as ‘moderate’ and five as ‘high’ confidence. Our synthesis produced nine analytical themes, which report potential barriers and facilitators to the use of protocols. The themes are: the need for continual staff training and development; clinical experience as this promotes felt and perceived competence and confidence to wean; the vulnerability of weaning to disparate interprofessional working; an understanding of protocols as militating against a necessary proactivity in clinical practice; perceived nursing scope of practice and professional risk; ICU structure and processes of care; the ability of protocols to act as a prompt for shared care and consistency in weaning practice; maximizing the use of protocols through visibility and ease of implementation; and the ability of protocols to act as a framework for communication with parents.
Authors' conclusions:
There is a clear need for weaning protocols to take account of the social and cultural environment in which they are to be implemented. Irrespective of its inherent strengths, a protocol will not be used if it does not accommodate these complexities. In terms of protocol development, comprehensive interprofessional input will help to ensure broad-based understanding and a sense of ‘ownership’. In terms of implementation, all relevant ICU staff will benefit from general weaning as well as protocol-specific training; not only will this help secure a relevant clinical knowledge base and operational understanding, but will also demonstrate to others that this knowledge and understanding is in place. In order to maximize relevance and acceptability, protocols should be designed with the patient profile and requirements of the target ICU in mind. Predictably, an under-resourced ICU will impact adversely on protocol implementation, as staff will prioritize management of acutely deteriorating and critically-ill patients.
Abstract:
The last 20 years have seen significant advances in cancer care in Northern Ireland, leading to measurable improvements in patient outcomes. Crucial to this transformation has been an ethos that recognizes the primacy of research in effecting health care change. The authors' model of a cross-sectoral partnership that unites patients, scientists, health care professionals, hospital trusts, bioindustry, and government agencies can be truly transformative, empowering tripartite clinical-academic-industry efforts that have already yielded significant benefit and will continue to inform strategy and its implementation going forward.
Abstract:
Background: Outwith clinical trials, patient outcomes specifically related to SACT (systemic anti-cancer therapy) are not well reported, despite a significant proportion of patients receiving active treatment at the end of life. The NCEPOD review of deaths within 30 days of SACT found that SACT caused or hastened death in 27% of cases.
Method: Across the Northern Ireland cancer network, 95 patients who died within 30 days of SACT for solid tumours were discussed at the Morbidity and Mortality monthly meeting during 2013. Using a structured template, each case was independently reviewed, with particular focus on whether SACT caused or hastened death.
Results: Lung, GI and breast cancers were the most common sites. Performance status was recorded in 92% at the time of the final SACT cycle (ECOG PS 0-2 in 89%).
In 57% the cause of death was progressive disease. Other causes included thromboembolism (13%) and infection (5% neutropenic sepsis, 6% non-neutropenic sepsis). In 26% of those who died from progressive disease, the patient was receiving the first cycle of first-line treatment for metastatic disease. In the majority of cases, discussion regarding treatment aims and risks was documented. Only one patient was receiving SACT with curative intent; this patient died from appropriately managed neutropenic sepsis.
A definitive decision regarding SACT's role in death was made in 60% of cases: in 49% SACT was deemed non-contributory and in 11% SACT was deemed the cause of death. In the remaining 40%, SACT was not thought to have played a major role, but its contribution could not be definitively excluded.
Conclusion: Development of a robust review process of 30-day mortality after SACT established a benchmark for SACT delivery for future comparisons and identified areas for SACT service organisation improvement. Moreover it encourages individual practice reflection and highlights the importance of balancing patients' needs and concerns with realistic outcomes and risks, particularly in heavily pre-treated patients or those of poor performance status.
Abstract:
Background The diagnosis of gestational diabetes (GDM) during pregnancy can lead to anxiety. Little research has focused on the education these women receive and how this is best delivered in a busy clinic. Aim This study evaluated the impact of an innovative patient-centred educational DVD on anxiety and glycaemic control in newly diagnosed women with GDM. Method 150 multi-ethnic women, aged 19-44 years, from three UK hospitals were randomised at GDM diagnosis to either standard care plus DVD (DVD group, n=77) or standard care alone (control group, n=73). Women were followed up at their next clinic visit, a mean ± SD of 2.5 ± 1.6 weeks later. Primary outcomes were anxiety (State-Trait Anxiety Inventory) and mean 1-hour postprandial capillary self-monitored blood glucose across all meals on the day prior to follow-up. Secondary outcomes included pregnancy-specific stress (Pregnancy Distress Questionnaire), emotional adjustment to diabetes (Appraisal of Diabetes Scale), self-efficacy (Diabetes Empowerment Scale) and GDM knowledge (non-validated questionnaire). Other outcomes included mean fasting and 1-hour postprandial blood glucose at each meal on the day prior to follow-up. Women in the DVD group completed a feedback questionnaire on the resource. Results No significant differences between the DVD and control groups were reported for anxiety (37.7 ± 11.7 vs 36.2 ± 10.9; mean difference after adjustment for covariates (95% CI) 2.5 (-0.8 to 5.9)) or for mean 1-hour postprandial glucose (6.9 ± 0.9 vs 7.0 ± 1.2 mmol/L; -0.2 (-0.5 to 0.2)). Similarly, no significant differences in the other psychosocial variables were identified between the groups. However, the DVD group had significantly lower postprandial breakfast glucose than the control group (6.8 ± 1.2 vs 7.4 ± 1.9 mmol/L; -0.5 (-1.1 to <-0.1); p=0.04). Using a scale of 0-10, 84% rated the DVD 7 or above for usefulness (10 being very useful), and 88% rated it 7 or above when asked if they would recommend it to a friend (10 being very strongly recommend). Women described the DVD as ‘reassuring’ and ‘a fantastic tool’ that ‘provided a lot of information in a quick and easy way’ and ‘helped reinforce all the information from clinic’. Discussion While no significant change was observed in anxiety or mean postprandial glucose, the DVD was rated highly by women with GDM and may be a useful resource to assist with educating newly diagnosed women. This project is supported by BRIDGES, an IDF programme supported by an educational grant from Lilly Diabetes.
Abstract:
Objective: Guidelines recommend the creation of a wrist radiocephalic arteriovenous fistula (RAVF) as initial hemodialysis vascular access. This study explored the potential of preoperative ultrasound vessel measurements to predict AVF failure to mature (FTM) in a cohort of patients with end-stage renal disease in Northern Ireland.
Methods: A retrospective analysis was performed of all patients who had preoperative ultrasound mapping of upper limb blood vessels carried out from August 2011 to December 2014 and whose AVF reached a functional outcome by March 2015.
Results: There were 152 patients (97% white) who had ultrasound mapping and an AVF functional outcome recorded; 80 (54%) had an upper arm AVF created, and 69 (46%) had a RAVF formed. Logistic regression revealed that female gender (odds ratio [OR], 2.5; 95% confidence interval [CI], 1.12-5.55; P = .025), minimum venous diameter (OR, 0.6; 95% CI, 0.39-0.95; P = .029), and RAVF (OR, 0.4; 95% CI, 0.18-0.89; P = .026) were associated with FTM. On subgroup analysis of the RAVF group, RAVFs with an arterial volume flow <50 mL/min were seven times as likely to fail as RAVFs with higher volume flows (OR, 7.0; 95% CI, 2.35-20.87; P < .001).
Conclusions: In this cohort, a radial artery flow rate <50 mL/min was associated with a sevenfold increased risk of FTM in RAVF, which to our knowledge has not been previously reported in the literature. Preoperative ultrasound mapping adds objective assessment in the clinical prediction of AVF FTM.
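As a rough illustration of the analysis style reported above (logistic regression yielding odds ratios with 95% confidence intervals for failure to mature), the sketch below fits the same kind of model on synthetic data with statsmodels. The predictors, effect sizes and outcome coding are placeholders, not the study dataset.

```python
# Hedged sketch, not the study's code: logistic regression reporting ORs with
# 95% CIs for fistula failure to mature (FTM). All data are synthetic and the
# variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 152
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),              # 1 = female
    "min_vein_diam_mm": rng.normal(2.5, 0.6, n),  # minimum venous diameter
    "ravf": rng.integers(0, 2, n),                # 1 = radiocephalic AVF
})
# Hypothetical outcome model used only to generate example data
logit = 0.5 + 0.9 * df["female"] - 0.5 * df["min_vein_diam_mm"] - 0.9 * df["ravf"]
df["ftm"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["female", "min_vein_diam_mm", "ravf"]])
fit = sm.Logit(df["ftm"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)                  # coefficients on the OR scale
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```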
Abstract:
Objective: To assess the effect of provision of free glasses on academic performance in rural Chinese children with myopia. Design: Cluster randomized, investigator-masked, controlled trial. Setting: 252 primary schools in two prefectures in western China, 2012-13. Participants: 3177 of 19 934 children in fourth and fifth grades (mean age 10.5 years) with visual acuity <6/12 in either eye without glasses, correctable to >6/12 with glasses; 3052 (96.0%) completed the study. Interventions: Children were randomized by school (84 schools per arm) to one of three interventions at the beginning of the school year: prescription for glasses only (control group), vouchers for free glasses at a local facility, or free glasses provided in class. Main outcome measures: Spectacle wear at endline examination and end of year score on a specially designed mathematics test, adjusted for baseline score and expressed in standard deviations. Results: Among 3177 eligible children, 1036 (32.6%) were randomized to control, 988 (31.1%) to vouchers, and 1153 (36.3%) to free glasses in class. All eligible children would benefit from glasses, but only 15% wore them at baseline. At closeout, glasses wear was 41% (observed) and 68% (self-reported) in the free glasses group, and 26% (observed) and 37% (self-reported) in the controls. The effect on test score was 0.11 SD (95% confidence interval 0.01 to 0.21) when the free glasses group was compared with the control group. The adjusted effect of providing free glasses (0.10, 0.002 to 0.19) was greater than that of parental education (0.03, −0.04 to 0.09) or family wealth (0.01, −0.06 to 0.08). This difference between groups was significant, but was smaller than the prespecified 0.20 SD difference that the study was powered to detect. Conclusions: The provision of free glasses to Chinese children with myopia improves children’s performance on mathematics testing to a statistically significant degree, despite imperfect compliance, although the observed difference between groups was smaller than the study was originally designed to detect. Myopia is common and rarely corrected in this setting. Trial registration: Current Controlled Trials ISRCTN03252665.
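A minimal sketch of the type of analysis described above (endline mathematics score expressed in SD units, adjusted for baseline score, with randomization and clustering by school) is given below, assuming a simple OLS model with school-clustered standard errors. All data and parameter values are fabricated for illustration and are not the trial's dataset.

```python
# Hedged sketch: baseline-adjusted, standardized test-score comparison across
# trial arms with standard errors clustered by school. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, pupils = 252, 12
school = np.repeat(np.arange(n_schools), pupils)
arm = np.repeat(rng.choice(["control", "voucher", "free"], n_schools), pupils)
baseline = rng.normal(0, 1, n_schools * pupils)
school_effect = np.repeat(rng.normal(0, 0.3, n_schools), pupils)
endline = (0.5 * baseline + school_effect
           + 0.10 * (arm == "free") + 0.05 * (arm == "voucher")
           + rng.normal(0, 0.8, n_schools * pupils))

df = pd.DataFrame({"school": school, "arm": arm,
                   "baseline": baseline, "endline": endline})
# Express the endline score in SD units (one plausible standardization choice)
df["endline_sd"] = (df["endline"] - df["endline"].mean()) / df["endline"].std()

model = smf.ols("endline_sd ~ C(arm, Treatment('control')) + baseline", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["school"]})
print(fit.summary().tables[1])  # arm coefficients are effects in SD units
```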
Abstract:
Background: Poor follow-up after cataract surgery in developing countries makes assessment of operative quality uncertain. We aimed to assess two strategies to measure visual outcome: recording the visual acuity of all patients 3 or fewer days postoperatively (early postoperative assessment), and recording that of only those patients who returned for the final follow-up examination after 40 or more days without additional prompting. Methods: Each of 40 centres in ten countries in Asia, Africa, and Latin America recruited 40-120 consecutive surgical cataract patients. Operative-eye best-corrected visual acuity and uncorrected visual acuity were recorded before surgery, 3 or fewer days postoperatively, and 40 or more days postoperatively. Clinics logged whether each patient had returned for the final follow-up examination without additional prompting, had to be actively encouraged to return, or had to be examined at home. Visual outcome for each centre was defined as the proportion of patients with uncorrected visual acuity of 6/18 or better minus the proportion with uncorrected visual acuity of 6/60 or worse, and was calculated for each participating hospital with results from the early assessment of all patients and the late assessment of only those returning unprompted, with results from the final follow-up assessment for all patients used as the standard. Findings: Of 3708 participants, 3441 (93%) had final follow-up vision data recorded 40 or more days after surgery, 1831 of whom (51% of the 3581 total participants for whom mode of follow-up was recorded) had returned to the clinic without additional prompting. Visual outcome by hospital from early postoperative and final follow-up assessment for all patients were highly correlated (Spearman's rs=0·74, p<0·0001). Visual outcome from final follow-up assessment for all patients and for only those who returned without additional prompting were also highly correlated (rs=0·86, p<0·0001), even for the 17 hospitals with unprompted return rates of less than 50% (rs=0·71, p=0·002). When we divided hospitals into top 25%, middle 50%, and bottom 25% by visual outcome, classification based on final follow-up assessment for all patients was the same as that based on early postoperative assessment for 27 (68%) of 40 centres, and the same as that based on data from patients who returned without additional prompting in 31 (84%) of 37 centres. Use of glasses to optimise vision at the time of the early and late examinations did not further improve the correlations. Interpretation: Early vision assessment for all patients and follow-up assessment only for patients who return to the clinic without prompting are valid measures of operative quality in settings where follow-up is poor. Funding: ORBIS International, Fred Hollows Foundation, Helen Keller International, International Association for the Prevention of Blindness Latin American Office, Aravind Eye Care System. © 2013 Congdon et al. Open Access article distributed under the terms of CC BY.
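The centre-level 'visual outcome' metric defined above (proportion with uncorrected acuity 6/18 or better minus proportion 6/60 or worse) and the Spearman correlations between assessment strategies are simple to compute. The sketch below shows one possible implementation on fabricated acuity data; it is not the study's code, and the sample sizes are arbitrary.

```python
# Hedged sketch: the centre-level visual outcome metric and a Spearman
# correlation between two assessment strategies, on fabricated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def visual_outcome(va_decimal):
    """Proportion with VA >= 6/18 (decimal 0.33) minus proportion with VA <= 6/60 (decimal 0.1)."""
    good = np.mean(va_decimal >= 6 / 18)
    poor = np.mean(va_decimal <= 6 / 60)
    return good - poor

# Hypothetical decimal acuities for one centre at early and final follow-up
early = rng.uniform(0.05, 1.0, size=40)
final = np.clip(early + rng.normal(0.05, 0.1, size=40), 0.02, 1.2)
print(f"early outcome = {visual_outcome(early):+.2f}, final = {visual_outcome(final):+.2f}")

# Centre-level comparison across 40 hypothetical hospitals
early_scores = rng.uniform(-0.2, 0.8, size=40)
final_scores = np.clip(early_scores + rng.normal(0, 0.15, size=40), -1, 1)
rho, p = stats.spearmanr(early_scores, final_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.4g}")
```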
Abstract:
Objective: To determine the prevalence of systemic corticosteroid-induced morbidity in severe asthma.
Design: Cross-sectional observational study. Setting: The primary care Optimum Patient Care Research Database and the British Thoracic Society Difficult Asthma Registry.
Participants: Optimum Patient Care Research Database (7195 subjects in three age- and gender-matched groups)—severe asthma (Global Initiative for Asthma (GINA) treatment step 5 with four or more prescriptions/year of oral corticosteroids, n=808), mild/moderate asthma (GINA treatment step 2/3, n=3975) and non-asthma controls (n=2412). 770 subjects with severe asthma from the British Thoracic Society Difficult Asthma Registry (442 receiving daily oral corticosteroids to maintain disease control).
Main outcome measures: Prevalence rates of morbidities associated with systemic steroid exposure were evaluated and reported separately for each group.
Results: 748/808 (93%) subjects with severe asthma had one or more conditions linked to systemic corticosteroid exposure (mild/moderate asthma 3109/3975 (78%), non-asthma controls 1548/2412 (64%); p<0.001 for severe asthma versus non-asthma controls). Compared with mild/moderate asthma, morbidity rates for severe asthma were significantly higher for conditions associated with systemic steroid exposure (type II diabetes 10% vs 7%, OR=1.46 (95% CI 1.11 to 1.91), p<0.01; osteoporosis 16% vs 4%, OR=5.23 (95% CI 3.97 to 6.89), p<0.001; dyspeptic disorders (including gastric/duodenal ulceration) 65% vs 34%, OR=3.99 (95% CI 3.37 to 4.72), p<0.001; cataracts 9% vs 5%, OR=1.89 (95% CI 1.39 to 2.56), p<0.001). In the British Thoracic Society Difficult Asthma Registry, similar prevalence rates were found, although additionally high rates of osteopenia (35%) and obstructive sleep apnoea (11%) were identified.
Conclusions: Oral corticosteroid-related adverse events are common in severe asthma. New treatments which reduce exposure to oral corticosteroids may reduce the prevalence of these conditions and this should be considered in cost-effectiveness analyses of these new treatments.
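For readers wanting to see how odds ratios of the kind reported above are derived, the sketch below computes a crude OR with a Woolf (log-scale) 95% CI from a 2x2 prevalence table. The counts are back-calculated from the rounded percentages in the abstract and are illustrative only; the published OR of 5.23 will differ because it is based on the exact (and possibly adjusted) data.

```python
# Hedged sketch: crude odds ratio and Woolf 95% CI from a 2x2 table built from
# rounded prevalences (osteoporosis: ~16% of 808 severe vs ~4% of 3975
# mild/moderate asthma). Illustrative, not the study's exact analysis.
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR = (a*d)/(b*c); a,b = exposed with/without outcome, c,d = unexposed."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, lo, hi

severe_osteo, severe_no = 129, 808 - 129   # ~16% of 808
mild_osteo, mild_no = 159, 3975 - 159      # ~4% of 3975
or_, lo, hi = odds_ratio_ci(severe_osteo, severe_no, mild_osteo, mild_no)
# Crude OR from rounded counts; the paper's 5.23 reflects the exact data.
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```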
Abstract:
OBJECTIVE:
To study the postoperative visual function and uptake of refraction and second-eye surgery among persons undergoing cataract surgery in rural China.
METHODS:
Self-reported visual function was measured 10 to 14 months after surgery. Subjects with improvement of 2 or more lines with refraction were offered glasses, and those with significant cataract were offered second-eye surgery.
RESULTS:
Among 313 eligible subjects, 242 (77%) could be contacted; 176 (73%) of those contacted were examined. Interviewed subjects had a mean ± SD age of 69.9 ± 10.2 years, and 63.6% were female. The mean ± SD visual function score was 88.4 ± 12.3, higher than previously reported for cataract programs in rural China and significantly (P = .03) correlated with presenting vision. Forty-two percent of subjects had spectacles, more than half being reading glasses. Though 87% of subjects' vision improved with refraction, only 35% accepted prescriptions, the most common reason for refusal being lack of perceived need. Second-eye surgery was accepted by a total of 48% (85 of 176) of patients, cost being the biggest reason for refusal.
CONCLUSIONS:
Visual function was high in this cohort. Potential benefit of refraction and second-eye surgery was substantial, but uptake of services was modest. Programs to improve service uptake should focus on reading glasses and cost-reduction strategies such as tiered pricing.
Abstract:
PURPOSE: To model the possible impact of using average-power intraocular lenses (IOLs) and to evaluate the postoperative refractive error in patients having cataract surgery in rural China. SETTING: Rural Guangdong, China. METHODS: Patients having cataract surgery by local surgeons were examined and visual function was assessed 10 to 14 months after surgery. Subjective refraction at near and distance was performed bilaterally by an ophthalmologist. Patients had a target refraction of -0.50 diopter (D) based on ocular biometry. RESULTS: Of the 313 eligible patients, 242 (77%) could be contacted and 176 (74% of contacted patients, 56% overall) were examined. Examined patients had a mean age of 69.4 ± 10.5 years. Of the 211 operated eyes, 73.2% were within ±1.0 D of the target refraction after surgery. The best presenting distance vision was in patients within ±1.0 D of plano, and the best presenting near vision was in those with mild myopia (<-1.0 D to ≥-2.0 D) (P = .005). However, patients with hyperopia (>+1.0 D) reported significantly better adjusted visual function than those with emmetropia or myopia (<-1.0 D). When use of an average-power IOL (median +21.5 D) was modeled, predicted visual acuity was significantly reduced (P = .001); however, predicted visual function was not significantly altered (P > .3). CONCLUSIONS: Accurate selection of postoperative refractive error was achieved by local surgeons in this rural area. Based on the visual function results, aiming for mild postoperative myopia may not be suitable in this setting. Implanting average-power IOLs significantly reduced postoperative presenting vision, but not visual function.
Abstract:
AIM: To study the effect of posterior capsular opacification (PCO) on vision and visual function in patients undergoing cataract surgery in rural China, and to compare this with the effect of refractive error. METHODS: Patients undergoing cataract surgery in at least one eye by local surgeons in a rural setting between 8 August and 31 December 2005 were examined with slit-lamp grading of PCO 10-14 months after surgery. Subjects with any PCO associated with best-corrected visual acuity of 6/7.5 or worse, or with grade 2+ or worse PCO without visual decrement, were offered YAG laser capsulotomy. Vision and self-reported visual function were assessed, and various demographic and clinical factors potentially associated with PCO were recorded. RESULTS: Of 313 patients operated on within the study window, 239 (76%) could be contacted by telephone; study examinations were performed on 176 (74%). Examined subjects had a mean (SD) age of 69.4 (10.5) years, 116 (67%) were female, and 149 (86%) had been blind (presenting visual acuity ≤6/60) in the operated eye before surgery. PCO of grade 1 or above was present in 34 of 204 operated eyes (16.7%). Those with PCO had significantly worse presenting vision (p = 0.007) but not visual function (p>0.3) than those without PCO. Women had a significantly higher prevalence of PCO (20.9%) than did men (8.6%, p<0.05). Of 19 eyes undergoing capsulotomy with best-corrected visual acuity measured the next day, 13 (68%) improved by one or more lines, and seven (37%) improved by two or more lines. Despite a higher uptake of capsulotomy (95%) as opposed to refraction (35%) in this cohort, the yield in terms of eyes with poor presenting visual acuity (<6/18) that could be improved was higher for refraction (9/35; 26%) than for capsulotomy (3/35; 9%). CONCLUSION: The prevalence of PCO and its impact on vision and visual function in this cohort were modest 1 year after surgery. However, PCO prevalence increases with time. Follow-up of this cohort is underway to determine the effectiveness of this early intervention in identifying and treating subjects who will eventually experience clinically significant PCO.