Abstract:
Objective: This review examines current research on economic evaluation in the delivery of dental care. Economic evaluation is an established method of analysing dental care systems in terms of efficiency, effectiveness, efficacy and availability. Although widely used in medicine, it is underused in dentistry. As the delivery of health care changes, the public, the profession, and those funding dental treatment have increasingly demanded scrutiny of current practice, both of the programs themselves and of how resources are allocated.
Methods: A meta-analysis of the health economics literature was conducted. The initial search was carried out using PubMed, Google Scholar, Science Direct, and The Cochrane Library with the search terms “health AND economics AND dentistry”. A secondary search was conducted with the terms “health care AND dentistry AND”, where the third term was varied to address the aims and included the following: “cost benefit analysis”, “efficiency criteria”, “supply & demand”, “cost-effectiveness”, “cost minimisation”, “cost utility”, “resource allocation”, “QALY”, and “delivery and economics”. All searches were limited to papers published in English within the last eight years.
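As an illustration of the search strategy just described, here is a minimal Python sketch of how the rotating third term could be combined with the fixed base string; the `build_queries` helper and output format are assumptions for illustration, not part of the original protocol:

```python
# Terms are copied from the abstract; the helper itself is hypothetical.
base = "health care AND dentistry AND"
third_terms = [
    "cost benefit analysis",
    "efficiency criteria",
    "supply & demand",
    "cost-effectiveness",
    "cost minimisation",
    "cost utility",
    "resource allocation",
    "QALY",
    "delivery and economics",
]

def build_queries(base: str, terms: list[str]) -> list[str]:
    """Combine the fixed base string with each rotating third term."""
    return [f'{base} "{term}"' for term in terms]

for query in build_queries(base, third_terms):
    print(query)
```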
Results: Preliminary results demonstrated that only a limited number of economic evaluations have been conducted in dentistry, and those that were carried out were mainly confined to the United Kingdom. Furthermore, analysis was largely restricted to restorative dentistry, followed by orthodontics and maxillofacial surgery, demonstrating a need for investigation across all fields of dentistry.
Conclusion: Health economics has been overlooked in the past regarding delivery of dental care and resource allocation. Economic appraisal is a crucial part of generating an effective and efficient dental care system. It is becoming increasingly evident that there is a need for economic evaluation in all dental fields.
Abstract:
Background: Serious case reviews and research studies have indicated weaknesses in risk assessments conducted by child protection social workers. Social workers are adept at gathering information but struggle with analysis and assessment of risk. The Department for Education wants to know if the use of a structured decision-making tool can improve child protection assessments of risk.
Methods/design: This multi-site, cluster-randomised trial will assess the effectiveness of the Safeguarding Children Assessment and Analysis Framework (SAAF), a structured decision-making tool that aims to improve social workers' assessments of harm, of future risk, and of parents' capacity to change. The comparison is management as usual.
Inclusion criteria: Children's Services Departments (CSDs) in England willing to make relevant teams available to be randomised, and willing to meet the trial's training and data collection requirements.
Exclusion criteria: CSDs where there were concerns about performance; where a major organisational restructuring was planned or under way; or where other risk assessment tools were in use.
Six CSDs are participating in this study. Social workers in the experimental arm will receive two days' training in SAAF, together with a range of support materials and access to limited telephone consultation post-training. The primary outcome is child maltreatment, assessed using nationally collected data on two key performance indicators: the number of children in a year who have been made subject to a second Child Protection Plan (CPP), and the number of re-referrals of children because of related concerns about maltreatment. Secondary outcomes are: i) the quality of assessments judged against a schedule of quality criteria and ii) the relationship between the three assessments required by the structured decision-making tool (level of harm, risk of (re)abuse and prospects for successful intervention).
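As a rough sketch of how those two indicators might be derived from case-level records (the table layout, column names and example data here are hypothetical, not the national data specification):

```python
import pandas as pd

# Invented case-level records for illustration only.
cases = pd.DataFrame({
    "child_id": [1, 1, 2, 3, 3, 3],
    "event":    ["CPP", "CPP", "referral", "referral", "referral", "CPP"],
    "year":     [2014, 2015, 2015, 2014, 2015, 2015],
})

# KPI 1: children made subject to a second (or later) CPP in a given year.
cpp = cases[cases["event"] == "CPP"].sort_values("year").copy()
cpp["plan_number"] = cpp.groupby("child_id").cumcount() + 1
second_cpp_2015 = cpp[(cpp["plan_number"] >= 2) & (cpp["year"] == 2015)]["child_id"].nunique()

# KPI 2: re-referrals, i.e. any referral after a child's first referral.
referrals = cases[cases["event"] == "referral"]
re_referrals = (referrals.groupby("child_id").size() - 1).clip(lower=0).sum()

print(second_cpp_2015, re_referrals)
```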
Discussion: This is the first study to examine the effectiveness of SAAF. It will contribute to a very limited literature on the contribution that structured decision-making tools can make to improving risk assessment and case planning in child protection and on what is involved in their effective implementation.
Abstract:
BACKGROUND: Glaucoma is a leading cause of avoidable blindness worldwide. Open angle glaucoma is the most common type of glaucoma. No randomised controlled trials have been conducted evaluating the effectiveness of glaucoma screening for reducing sight loss. It is unclear what the most appropriate intervention to be evaluated in any glaucoma screening trial would be. The purpose of this study was to develop the clinical components of an intervention for evaluation in a glaucoma (open angle) screening trial that would be feasible and acceptable in a UK eye-care service.
METHODS: A mixed-methods study, based on the Medical Research Council (MRC) framework for complex interventions, integrating qualitative (semi-structured interviews with 46 UK eye-care providers, policy makers and health service commissioners) and quantitative (economic modelling) methods. Interview data were synthesised and used to revise the screening interventions compared within an existing economic model.
RESULTS: The qualitative data indicated broad-based support for a glaucoma screening trial to take place in primary care, using ophthalmic-trained technical assistants supported by optometry input; the precise location should be tailored to local circumstances. There was variability in opinion around the choice of screening test and target population. Integrating the interview findings with cost-effectiveness criteria reduced 189 potential components to a two-test intervention comprising either optic nerve photography or screening-mode perimetry (a measure of visual field sensitivity), with or without tonometry (a measure of intraocular pressure). It would be more cost-effective, and thus acceptable in a policy context, to target screening for open angle glaucoma at those at highest risk, but on both practicality and equity grounds the optimal strategy was screening a general population cohort beginning at age 40.
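A minimal sketch of the kind of cost-effectiveness filtering described, with invented strategy names, costs, effects and threshold; the study's actual economic model is not reproduced here:

```python
# Candidate screening strategies are compared on cost and effect; only those
# whose incremental cost-effectiveness ratio (ICER) versus no screening falls
# under a willingness-to-pay threshold are retained. All figures are invented.
strategies = [
    # (name, cost per person in GBP, QALYs gained per person)
    ("no screening",             0.0, 0.000),
    ("perimetry only",          25.0, 0.004),
    ("photography + perimetry", 40.0, 0.007),
    ("photography + tonometry", 45.0, 0.006),
]
threshold = 20_000  # GBP per QALY, a commonly cited UK benchmark

def icer(strategy, baseline):
    """Incremental cost per incremental QALY versus the baseline strategy."""
    d_cost = strategy[1] - baseline[1]
    d_qaly = strategy[2] - baseline[2]
    return float("inf") if d_qaly <= 0 else d_cost / d_qaly

baseline = strategies[0]
acceptable = [s[0] for s in strategies[1:] if icer(s, baseline) <= threshold]
print(acceptable)
```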
CONCLUSIONS: Interventions for screening for open angle glaucoma that would be feasible from a service delivery perspective were identified. Integration within an economic modelling framework explicitly highlighted the trade-off between cost-effectiveness, feasibility and equity. This study exemplifies the MRC recommendation to integrate qualitative and quantitative methods in developing complex interventions. The next step in the development pathway should encompass the views of service users.
Abstract:
Background: Chronic kidney disease (CKD) and hypertension are global public health problems associated with considerable morbidity, premature mortality and attendant healthcare costs. Previous studies have highlighted that non-invasive examination of the retinal microcirculation can detect microvascular pathology that is associated with systemic disorders of the circulatory system such as hypertension. We examined the associations between retinal vessel caliber (RVC) and fractal dimension (DF), with both hypertension and CKD in elderly Irish nuns.
Methods: Data from 1233 participants in the cross-sectional observational Irish Nun Eye Study (INES) were assessed from digital photographs with a standardized protocol using computer-assisted software. Multivariate regression analyses were used to assess associations with hypertension and CKD, with adjustment for age, body mass index (BMI), refraction, fellow eye RVC, smoking, alcohol consumption, ischemic heart disease (IHD), cerebrovascular accident (CVA), diabetes and medication use.
Results: In total, 1122 (91%) participants (mean age: 76.3 [range: 56-100] years) had gradable retinal images of sufficient quality for blood vessel assessment. Hypertension was significantly associated with a narrower central retinal arteriolar equivalent (CRAE) in a fully adjusted analysis (P = 0.002; effect size = -2.16 μm; 95% confidence interval [CI]: -3.51 to -0.81 μm). No significant associations were found between other retinal vascular parameters and hypertension, or between any retinal vascular parameters and CKD.
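For readers unfamiliar with this kind of adjusted analysis, here is a hypothetical sketch of a covariate-adjusted linear model of arteriolar caliber on hypertension; the data, variable names and covariate set are invented, and this is not the study's analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for the cohort; columns are illustrative.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "crae":         rng.normal(140, 10, n),   # arteriolar caliber, in micrometres
    "hypertension": rng.integers(0, 2, n),
    "age":          rng.normal(76, 8, n),
    "bmi":          rng.normal(27, 4, n),
    "diabetes":     rng.integers(0, 2, n),
})

# Linear model: the hypertension coefficient is the adjusted effect size in μm.
model = smf.ols("crae ~ hypertension + age + bmi + diabetes", data=df).fit()
print(model.params["hypertension"])          # adjusted effect size
print(model.conf_int().loc["hypertension"])  # 95% confidence interval
```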
Conclusions: Individuals with hypertension have significantly narrower retinal arterioles, which may afford an earlier opportunity for tailored prevention and treatment to optimize the structure and function of the microvasculature, providing additional clinical utility. No significant associations between retinal vascular parameters and CKD were detected.
Abstract:
The influence of mixed hematopoietic chimerism (MC) after allogeneic bone marrow transplantation (BMT) remains unknown. Increasingly sensitive detection methods have shown that MC occurs frequently. We report a highly sensitive novel method of assessing MC based on the polymerase chain reaction (PCR). Simple dinucleotide repeat sequences called microsatellites vary in repeat number between individuals, and we used this variation to type donor-recipient pairs following allogeneic BMT. A panel of seven microsatellites was used to distinguish between donor and recipient cells in 32 transplants. Informative microsatellites were subsequently used to assess MC after BMT in this group of patients. Seventeen of the 32 transplants involved a donor of the opposite sex; hence, cytogenetics and Y chromosome-specific PCR were also used as an index of chimerism in these patients. MC was detected in bone marrow aspirates and peripheral blood in 18 of 32 patients (56%) by PCR. In several cases, only stored slide material was available for analysis, but PCR of microsatellites or Y chromosomal material could still be used to assess the origin of cells in this archival material. Cytogenetic analysis was possible in 17 patients and detected MC in three. Twelve patients received T-cell-depleted marrow and showed a high incidence of MC by PCR (greater than 80%). Twenty patients received unmanipulated marrow; while the incidence of MC was lower (44%), this was still high compared with other studies. Once MC was detected, the percentage of recipient cells tended to increase; in patients exhibiting MC who subsequently relapsed, this increase was relatively sudden. The overall level of recipient cells in the group of MC patients who subsequently relapsed was higher than in those who exhibited stable MC. Thus, while the occurrence of MC was not indicative of a poor prognosis per se, sudden increases in the proportion of recipient cells may be a prelude to graft rejection or relapse.
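A hedged sketch of how percent recipient chimerism might be quantified from an informative microsatellite and monitored for sudden rises; the peak-area inputs, function names and jump threshold are illustrative assumptions, not the authors' published method:

```python
# Recipient-specific allele signal expressed as a fraction of total signal.
def percent_recipient(recipient_peak_area: float, donor_peak_area: float) -> float:
    """Recipient cells as a percentage of the combined allele signal."""
    total = recipient_peak_area + donor_peak_area
    return 100.0 * recipient_peak_area / total if total > 0 else 0.0

def sudden_rise(series: list[float], jump: float = 10.0) -> bool:
    """Flag any between-sample increase larger than `jump` percentage points."""
    return any(later - earlier > jump for earlier, later in zip(series, series[1:]))

# Hypothetical follow-up samples: (recipient peak area, donor peak area).
follow_up = [percent_recipient(r, d) for r, d in [(2, 98), (5, 95), (30, 70)]]
print(follow_up, sudden_rise(follow_up))  # [2.0, 5.0, 30.0] True
```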
Abstract:
Solar heating systems have the potential to be an efficient renewable energy technology, provided they are sized correctly. For domestic applications, the cost of a full simulation is rarely warranted, so simplified sizing procedures are required. The size of a system depends on a number of variables, including the efficiency of the collector itself, the hot water demand, and the solar radiation at a given location. First, domestic hot water (DHW) demand, which varies with time, is assessed using a detailed multi-parameter model. Second, the national energy evaluation methodologies are examined from the perspective of solar thermal system sizing. Based on this assessment of the standards, limitations in the evaluation method for solar thermal systems are outlined and an adapted method, specific to the sizing of solar thermal systems, is proposed. The methodology is presented for two common dwelling scenarios. The results showed that it is difficult to achieve a high solar fraction with practical sizes of system infrastructure (storage tanks) for standard domestic properties. However, solar thermal systems can significantly offset the energy loads associated with DHW consumption, particularly when sized appropriately. The presented methodology is valuable for simple solar system design and for quick comparison of salient criteria.
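As a worked illustration of the sizing variables named above (collector efficiency, hot water demand, local solar radiation), here is a minimal Python sketch relating annual DHW demand, target solar fraction and collector area; the formula, efficiencies and figures are generic textbook assumptions rather than the paper's adapted method:

```python
# Water heating constant: 4.186 kJ/(L·K) converted to kWh per litre per kelvin.
RHO_CP = 4.186 / 3600

def annual_dhw_demand_kwh(litres_per_day: float, delta_t: float = 50.0) -> float:
    """Energy needed to heat the daily DHW draw-off by delta_t, over a year."""
    return litres_per_day * delta_t * RHO_CP * 365

def collector_area_m2(demand_kwh: float, solar_fraction: float,
                      annual_irradiation_kwh_m2: float = 1000.0,
                      system_efficiency: float = 0.4) -> float:
    """Collector area needed to cover `solar_fraction` of the annual demand."""
    return solar_fraction * demand_kwh / (system_efficiency * annual_irradiation_kwh_m2)

demand = annual_dhw_demand_kwh(200)            # a 200 L/day household
print(demand, collector_area_m2(demand, 0.6))  # ~4244 kWh/yr, ~6.4 m2
```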
Abstract:
Background: Rapid Response Systems (RRS) have been implemented nationally and internationally to improve patient safety in hospital. However, to date the majority of RRS research has focused on measuring the effectiveness of the intervention on patient outcomes. It has been recommended that evaluating RRS requires a multimodal approach addressing the broad range of process and outcome measures needed to determine the effectiveness of the RRS concept. Aim: The aim of this paper is to evaluate the official RRS programme theoretical assumptions about how the programme is meant to work against actual practice, in order to determine what works. Methods: The research design was a multiple case study approach of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory [1]. This approach used a variety of mixed methods to test the programme theories, including individual and focus group interviews with a purposive sample of 75 nurses and doctors, observation of ward practices, and documentary analysis. The findings from the case studies were analysed and compared within and across cases to identify what works, for whom, and in what circumstances. Results: The RRS programme theories were critically evaluated and compared with study findings to develop a mid-range theory explaining what works, for whom, and in what circumstances. The findings suggest that clinical experience, established working relationships, flexible implementation of protocols, ongoing experiential learning, empowerment and pre-emptive management are key to the success of RRS implementation. Conclusion: These findings highlight the combination of factors that can improve the implementation of RRS and, in light of this evidence, several recommendations are made to provide policymakers with guidance and direction for their success and sustainability. References: 1. Pawson R and Tilley N. (1997) Realistic Evaluation. Sage Publications; London. Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University of Belfast.
Abstract:
Realistic Evaluation of EWS and ALERT: factors enabling and constraining implementation. Background: The implementation of EWS and ALERT in practice is essential to the success of Rapid Response Systems but depends on nurses utilising EWS protocols and applying ALERT best-practice guidelines. To date there is limited evidence on the effectiveness of EWS or ALERT, as research has primarily focused on measuring patient outcomes (cardiac arrests, ICU admissions) following the implementation of a Rapid Response Team. Complex interventions in healthcare aimed at changing service delivery and the related behaviour of health professionals require a different research approach to evaluate the evidence. To understand how and why EWS and ALERT work, or might not work, research needs to consider the social, cultural and organisational influences that affect successful implementation in practice. This requires a research approach that considers both the processes and outcomes of complex interventions, such as EWS and ALERT, implemented in practice. Realistic Evaluation is such an approach and was used to explain the factors that enable and constrain the implementation of EWS and ALERT in practice [1]. Aim: The aim of this study was to evaluate factors that enabled and constrained the implementation and service delivery of early warning systems (EWS) and ALERT in practice, in order to provide direction for enabling their success and sustainability. Methods: The research design was a multiple case study approach of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory. This approach used a variety of mixed methods to test the programme theories, including individual and focus group interviews, observation, and documentary analysis in a two-stage process. A purposive sample of 75 key informants participated in individual and focus group interviews. Observation and documentary analysis of EWS compliance data and ALERT training records provided further evidence to support or refute the interview findings. Data were analysed using NVivo 8 to categorise interview findings and SPSS for ALERT documentary data. These findings were further synthesised through a within- and cross-case comparison to explain the factors enabling and constraining EWS and ALERT. Results: A cross-case analysis highlighted similarities, differences and factors enabling or constraining successful implementation across the case study sites. Findings showed that personal (confidence; clinical judgement; personality), social (ward leadership; communication), organisational (workload and staffing issues; pressure from managers to complete EWS audits and targets), educational (constraints on training; no clinical educator on the ward) and cultural (routine task delegated) influences impact on EWS and acute care training outcomes. Differences were also noted between medical and surgical wards across both case sites. Conclusions: Realist Evaluation allows refinement and development of the RRS programme theory to explain the realities of practice. These refined RRS programme theories are capable of informing the planning of future service provision and provide direction for enabling their success and sustainability. References: 1. McGaughey J, Blackwood B, O'Halloran P, Trinder T.J. & Porter S. (2010) A realistic evaluation of Track and Trigger systems and acute care training for early recognition and management of deteriorating ward-based patients. Journal of Advanced Nursing 66(4), 923-932. Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University of Belfast.
Abstract:
Aim: The aim of the study is to evaluate factors that enable or constrain the implementation and service delivery of early warning systems and acute care training in practice. Background: To date there is limited evidence to support the effectiveness of acute care initiatives (early warning systems, acute care training, outreach) in reducing the number of adverse events (cardiac arrest, death, unanticipated intensive care admission) through increased recognition and management of deteriorating ward-based patients in hospital [1-3]. The reasons posited are that previous research primarily focused on measuring patient outcomes following the implementation of an intervention or programme without considering the social factors (the organisation, the people, external influences) that may have affected the process of implementation and hence the measured end-points. Further research that considers these social processes is required in order to understand why a programme works, or does not work, in particular circumstances [4]. Method: The design is a multiple case study approach of four general wards in two acute hospitals where Early Warning Systems (EWS) and the Acute Life-threatening Events Recognition and Treatment (ALERT) course have been implemented. Various methods are being used to collect data about individual capacities, interpersonal relationships, and institutional balance and infrastructures, in order to understand the intended and unintended process outcomes of implementing EWS and ALERT in practice. This information will be gathered from individual and focus group interviews with key participants (ALERT facilitators, nursing and medical ALERT instructors, ward managers, doctors, ward nurses and health care assistants from each hospital); non-participant observation of ward organisation and structure; audit of patients' EWS charts; and audit of the medical notes of patients who deteriorated during the study period, to ascertain whether ALERT principles were followed. Discussion and progress to date: This study commenced in January 2007. Ethical approval has been granted and data collection is ongoing, with interviews being conducted with key stakeholders. The findings from this study will provide evidence for policy-makers to make informed decisions regarding the direction of strategic and service planning of acute care services, to improve the level of care provided to acutely ill patients in hospital. References: 1. Esmonde L, McDonnell A, Ball C, Waskett C, Morgan R, Rashidain A et al. Investigating the effectiveness of Critical Care Outreach Services: A systematic review. Intensive Care Medicine 2006; 32: 1713-1721. 2. McGaughey J, Alderdice F, Fowler R, Kapila A, Mayhew A, Moutray M. Outreach and Early Warning Systems for the prevention of Intensive Care admission and death of critically ill patients on general hospital wards. Cochrane Database of Systematic Reviews 2007, Issue 3. www.thecochranelibrary.com. 3. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid Response Systems: A systematic review. Critical Care Medicine 2007; 35(5): 1238-43. 4. Pawson R and Tilley N. Realistic Evaluation. London; Sage: 1997.
Abstract:
Statement of purpose: The purpose of this concurrent session is to present the main findings and recommendations from a five-year study evaluating the implementation of Early Warning Systems (EWS) and the Acute Life-threatening Events: Recognition and Treatment (ALERT) course in Northern Ireland. The presentation will provide delegates with an understanding of the factors that enable and constrain successful implementation of EWS and ALERT in practice, in order to provide an impetus for change. Methods: The research design was a multiple case study approach of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory [1]. The stages included identifying the programme theories underpinning EWS and ALERT, generating hypotheses, gathering empirical evidence and refining the programme theories. This approach used a variety of mixed methods, including individual and focus group interviews, observation, and documentary analysis of EWS compliance data and ALERT training records. A within- and across-case comparison facilitated the development of mid-range theories from the research evidence. Results: The official RRS theories developed from the realist synthesis were critically evaluated and compared with the study findings to develop a mid-range theory explaining what works, for whom, and in what circumstances. The findings suggest that clinical experience, established working relationships, flexible implementation of protocols, ongoing experiential learning, empowerment and pre-emptive management are key to the success of EWS and ALERT implementation. Each concept is presented as ‘context, mechanism and outcome configurations’ to provide an understanding of how context impacts on individual reasoning or behaviour to produce certain outcomes. Conclusion: These findings highlight the combination of factors that can improve the implementation and sustainability of EWS and ALERT and, in light of this evidence, several recommendations are made to provide policymakers with guidance and direction for future policy development. References: 1. Pawson R and Tilley N. (1997) Realistic Evaluation. Sage Publications; London. Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University of Belfast.
Abstract:
Background: People with intellectual disabilities often present with unique challenges that make it more difficult to meet their palliative care needs.
Aim: To define consensus norms for palliative care of people with intellectual disabilities in Europe.
Design: Delphi study in four rounds: (1) a taskforce of 12 experts from seven European countries drafted the norms, based on available empirical knowledge and regional/national guidelines; (2) using an online survey, 34 experts from 18 European countries evaluated the draft norms, provided feedback and distributed the survey within their professional networks (criteria for consensus were clearly defined); (3) modifications and recommendations were made by the taskforce; and (4) the European Association for Palliative Care reviewed and approved the final version.
Setting and participants: Taskforce members: identified through international networking strategies. Expert panel: a purposive sample identified through taskforce members’ networks.
Results: A total of 80 experts from 15 European countries evaluated 52 items within the following 13 norms: equity of access, communication, recognising the need for palliative care, assessment of total needs, symptom management, end-of-life decision making, involving those who matter, collaboration, support for family/carers, preparing for death, bereavement support, education/training and developing/managing services. None of the items scored less than 86% agreement, making a further round unnecessary. In light of respondents’ comments, several items were modified and one item was deleted.
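A small sketch of the per-item consensus check implied here; only the 86% floor comes from the abstract, while the ratings, item subset and threshold variable are invented:

```python
# Percentage agreement across expert ratings, flagged against a threshold.
def percent_agreement(ratings: list[bool]) -> float:
    """Share of experts endorsing an item, as a percentage."""
    return 100.0 * sum(ratings) / len(ratings)

# Hypothetical ratings from 80 experts for two of the 52 items.
items = {
    "equity of access": [True] * 74 + [False] * 6,   # 92.5%
    "communication":    [True] * 69 + [False] * 11,  # 86.25%
}
threshold = 86.0
for name, ratings in items.items():
    agree = percent_agreement(ratings)
    print(f"{name}: {agree:.1f}% {'consensus' if agree >= threshold else 'revise'}")
```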
Conclusion: This White Paper presents the first guidance for clinical practice, policy and research related to palliative care for people with intellectual disabilities based on evidence and European consensus, setting a benchmark for changes in policy and practice.
Abstract:
BACKGROUND: Diabetic retinopathy is an important cause of visual loss. Laser photocoagulation preserves vision in diabetic retinopathy but is currently used at the stage of proliferative diabetic retinopathy (PDR).
OBJECTIVES: The primary aim was to assess the clinical effectiveness and cost-effectiveness of pan-retinal photocoagulation (PRP) given at the non-proliferative stage of diabetic retinopathy (NPDR) compared with waiting until the high-risk PDR (HR-PDR) stage was reached. There have been recent advances in laser photocoagulation techniques, and in the use of laser treatments combined with anti-vascular endothelial growth factor (VEGF) drugs or injected steroids. Our secondary questions were: (1) If PRP were to be used in NPDR, which form of laser treatment should be used? and (2) Is adjuvant therapy with intravitreal drugs clinically effective and cost-effective in PRP?
ELIGIBILITY CRITERIA: Randomised controlled trials (RCTs) for efficacy, but other designs were also used.
REVIEW METHODS: Systematic review and economic modelling.
RESULTS: The Early Treatment Diabetic Retinopathy Study (ETDRS), published in 1991, was the only trial designed to determine the best time to initiate PRP. It randomised one eye of 3711 patients with mild-to-severe NPDR or early PDR to early photocoagulation, and the other to deferral of PRP until HR-PDR developed. The risk of severe visual loss after 5 years for eyes assigned to PRP for NPDR or early PDR, compared with deferral of PRP, was reduced by 23% (relative risk 0.77, 99% confidence interval 0.56 to 1.06). However, the ETDRS did not provide results separately for NPDR and early PDR. In economic modelling, the base case found that early PRP could be more effective and less costly than deferred PRP. Sensitivity analyses gave similar results, with early PRP continuing to dominate or having a low incremental cost-effectiveness ratio. However, there are substantial uncertainties. For our secondary aims we found 12 trials of lasers in DR, with 982 patients in total (trial sizes ranged from 40 to 150). Most were in PDR, but five included some patients with severe NPDR. Three compared multi-spot pattern lasers against the argon laser. RCTs comparing laser applied in a lighter manner (less intense burns) with conventional methods (more intense burns) reported little difference in efficacy but fewer adverse effects. One RCT suggested that selective laser treatment targeting only ischaemic areas was effective. Observational studies showed that the most important adverse effect of PRP was macular oedema (MO), which can cause visual impairment, usually temporary. Ten trials of laser and anti-VEGF or steroid drug combinations consistently reported a reduction in the risk of PRP-induced MO.
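Written out, the 23% figure follows directly from the reported relative risk, and the modelling comparison rests on the standard incremental cost-effectiveness ratio (a dominant strategy is both cheaper and more effective, so no ratio need be computed); the subscripted cost and effect symbols below are generic notation, not the report's:

```latex
% Relative risk reduction implied by the reported relative risk
\mathrm{RRR} = 1 - \mathrm{RR} = 1 - 0.77 = 0.23 \;(23\%)

% Incremental cost-effectiveness ratio comparing early with deferred PRP
\mathrm{ICER} = \frac{C_{\text{early}} - C_{\text{deferred}}}{E_{\text{early}} - E_{\text{deferred}}}
```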
LIMITATION: The current evidence is insufficient to recommend PRP for severe NPDR.
CONCLUSIONS: There is, as yet, no convincing evidence that modern laser systems are more effective than the argon laser used in the ETDRS, but they appear to have fewer adverse effects. We recommend a trial of PRP for severe NPDR and early PDR compared with deferring PRP until the HR-PDR stage. The trial would use modern laser technologies, and investigate the value of adjuvant prophylactic anti-VEGF or steroid drugs.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005408.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
Abstract:
PURPOSE: To establish the relationship between myopia and lens opacity. DESIGN: Population-based cross-sectional study. PARTICIPANTS: Two thousand five hundred twenty participants from the Salisbury Eye Evaluation aged 65 to 84 years. METHODS: Participants filled out questionnaires regarding medical history, social habits, and a detailed history of distance spectacle wear. They underwent a full ocular examination. Lens photographs were taken for assessment of lens opacity using the Wilmer grading system. Multivariate logistic regression models using generalized estimating equations were used to analyze the relationship between lens opacity type and degree of myopia, while accounting for potential confounders. MAIN OUTCOME MEASURES: Presence of posterior subcapsular opacity, cortical opacity, or nuclear opacity. RESULTS: Significant associations were found between myopia and both nuclear and posterior subcapsular opacities. For nuclear opacity, the odds ratios (ORs) were 2.25 for myopia between -0.50 diopters (D) and -1.99 D (P<0.001), 3.65 for myopia between -2.00 D and -3.99 D (P<0.001), 4.54 for myopia between -4.00 D and -5.99 D (P<0.001), and 3.61 for myopia -6.00 D or more (P = 0.002). For posterior subcapsular cataracts, ORs were 1.59 for myopia between -0.50 D and -1.99 D (P = 0.11), 3.22 for myopia between -2.00 D and -3.99 D (P = 0.002), 5.36 for myopia between -4.00 D and -5.99 D (P<0.001), and 12.34 for myopia -6.00 D or more (P<0.001). No association was found between myopia and cortical opacity. The association between posterior subcapsular opacity and myopia was equally strong for those wearing glasses by age 21 years and for those without glasses; for nuclear opacity, significantly higher ORs were found for myopes who started wearing glasses after age 21. CONCLUSIONS: These results confirm the previously reported association between myopia, posterior subcapsular opacity, and nuclear opacity. Furthermore, the strong association between early spectacle wear and posterior subcapsular opacity among myopes, absent for nuclear opacity, suggests that myopia may precede opacity in the case of posterior subcapsular opacity, but not nuclear opacity. Measures of association between posterior subcapsular opacity and myopia were stronger in the current study than have previously been found. Longitudinal studies to confirm the association are warranted.
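A hypothetical sketch of the analysis described above: logistic regression fitted with generalized estimating equations so that the two eyes of one participant are not treated as independent observations. The data, variable names and covariate set are invented, and this is not the study's analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated two-eyes-per-subject data standing in for the cohort.
rng = np.random.default_rng(1)
n_subjects, eyes = 300, 2
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n_subjects), eyes),
    "opacity":    rng.integers(0, 2, n_subjects * eyes),      # nuclear opacity present
    "myopia_d":   rng.normal(-2.0, 2.5, n_subjects * eyes),   # spherical equivalent, D
    "age":        np.repeat(rng.normal(74, 6, n_subjects), eyes),
})

# GEE with a binomial family; `groups` clusters the two eyes of each subject.
results = sm.GEE.from_formula(
    "opacity ~ myopia_d + age",
    groups="subject_id",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(np.exp(results.params))  # odds ratios per unit change in each covariate
```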
Abstract:
PURPOSE:
To compare the outcomes of cataract surgery performed in 3 groups defined by phacoemulsification incision size (1.8, 2.2, and 3.0 mm).
DESIGN:
Prospective randomized comparative study.
METHODS:
One hundred twenty eyes of 120 patients with age-related cataract (grades 2 to 4) were categorized according to the Lens Opacities Classification System III. Eligible subjects were randomly assigned to 3 surgical groups using coaxial phacoemulsification through 3 clear corneal incision sizes (1.8, 2.2, and 3.0 mm). Different intraoperative and postoperative outcome measures were obtained, with corneal incision size and surgically induced astigmatism as the main clinical outcomes.
RESULTS:
There were no statistically significant differences in most of the intraoperative and postoperative outcome measures among the 3 groups. However, the mean cord length of the clear corneal incision was increased in each group after surgery. The mean maximal clear corneal incision thickness in the 1.8-mm group was significantly greater than for the other groups at 1 month. The mean surgically induced astigmatism in the 1.8- and 2.2-mm groups was significantly less than that in the 3.0-mm group after 1 month, without significant difference between the 1.8- and 2.2-mm groups.
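One common way to compute surgically induced astigmatism is double-angle vector analysis of the pre- and postoperative keratometric cylinders; the sketch below is a generic illustration of that technique and not necessarily the calculation used in this study:

```python
import math

def to_double_angle(cyl: float, axis_deg: float) -> tuple[float, float]:
    """Map cylinder magnitude/axis to Cartesian double-angle components."""
    theta = math.radians(2 * axis_deg)
    return cyl * math.cos(theta), cyl * math.sin(theta)

def sia(pre_cyl, pre_axis, post_cyl, post_axis):
    """Magnitude (D) and axis (deg) of the surgically induced astigmatism,
    taken as the pre/post vector difference in double-angle space."""
    x1, y1 = to_double_angle(pre_cyl, pre_axis)
    x2, y2 = to_double_angle(post_cyl, post_axis)
    dx, dy = x2 - x1, y2 - y1
    magnitude = math.hypot(dx, dy)
    axis = math.degrees(math.atan2(dy, dx)) / 2 % 180
    return magnitude, axis

# Example: preop 1.00 D @ 90 degrees, postop 1.25 D @ 85 degrees (invented values).
print(sia(1.00, 90, 1.25, 85))
```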
CONCLUSIONS:
With appropriate equipment, smaller incisions may result in less astigmatism, but the particular system used will influence incision stress and wound integrity, and may thus limit the reduction in incision size and astigmatism that is achievable.
Abstract:
Background
Specialty Registrars in Restorative Dentistry (StRs) should be competent in the independent restorative management of patients with developmental disorders, including hypodontia and cleft lip/palate, upon completion of their specialist training [1]. Knowledge and management may be assessed via the Intercollegiate Specialty Fellowship Examination (ISFE) in Restorative Dentistry [2].
Objective
The aim of this study was to collate and compare data on the training and experience of StRs in the management of patients with developmental disorders across different training units within the British Isles.
Methods
Questionnaires were distributed to all StRs attending the Annual General Meeting of the Specialty Registrars in Restorative Dentistry Group, Belfast, in October 2015. Participants were asked to rate their confidence and experience of assessing and planning treatment for patients with developmental disorders, construction of appropriate prostheses, and provision of dental implants. Respondents were also asked to record clinical supervision and didactic teaching at their unit, and to rate their confidence of passing a future ISFE station assessing knowledge of developmental disorders.
Results
Responses were obtained from 32 StRs training within all five countries of the British Isles. The majority of respondents were based in England (72%), with three in Wales and two in each of Scotland, Northern Ireland, and the Republic of Ireland. Approximately one third of respondents (34%) were in the final years of training (years 4-6). Almost half of the StRs (44%) reported that they were not confident in independently assessing new patients with a developmental disorder, and more (72%) indicated a lack of confidence in treatment planning. Six respondents rated their experience of treating obturator patients as ‘poor’ or ‘very poor’. The majority (56%) rated their experience of implant provision in these cases as ‘good’ or ‘excellent’, and three-quarters (75%) rated clinical supervision at their unit as ‘good’ or ‘excellent’. Fewer than half (41%) rated the didactic teaching at their unit as ‘good’ or ‘excellent’, and only 8 StRs indicated that they were confident of passing an ISFE station focused on developmental disorders.
Conclusion
Experience and training regarding patients with developmental disorders are inconsistent for StRs across the British Isles, with a number of trainees reporting a lack of clinical exposure.