117 results for Research -- Evaluation
Abstract:
This research provides new insights into the measurement of students’ authorial identity and its potential for minimising the incidence of unintentional plagiarism by providing evidence about the psychometric properties of the Student Authorship Questionnaire (SAQ). Exploratory and confirmatory factor analyses (EFA and CFA) are employed to investigate the measurement properties of the scales which comprise the SAQ using data collected from accounting students. The results provide limited psychometric support in favour of the factorial structure of the SAQ and raise a number of questions regarding the instrument’s robustness and generalisability across disciplines. An alternative model derived from the EFA outperforms the SAQ model with regard to its psychometric properties. Explanations for these findings are proffered and avenues for future research suggested.
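For readers unfamiliar with the exploratory step of this kind of psychometric work, the following minimal Python sketch shows how an EFA of questionnaire items might be run with the factor_analyzer package. It is illustrative only: the file name, the six-factor target and the maximum-likelihood/oblimin settings are assumptions for demonstration, not the authors' actual analysis (the confirmatory stage would normally be carried out in dedicated SEM software).

```python
# Illustrative EFA sketch (not the authors' analysis); the file name and the
# six-factor target are assumptions for demonstration only.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical Likert-scale responses: rows = students, columns = SAQ items.
responses = pd.read_csv("saq_responses.csv")  # placeholder file name

# Check factorability before extraction.
chi2, p = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Bartlett chi2={chi2:.1f}, p={p:.4f}; KMO={kmo_overall:.2f}")

# Extract factors with an oblique rotation (factors are allowed to correlate).
efa = FactorAnalyzer(n_factors=6, rotation="oblimin", method="ml")
efa.fit(responses)
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
print(loadings.round(2))
print("Proportion of variance per factor:", efa.get_factor_variance()[1].round(2))
```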
Abstract:
Background: Rapid Response Systems (RRS) have been implemented nationally and internationally to improve patient safety in hospitals. However, to date the majority of the RRS research evidence has focused on measuring the effectiveness of the intervention on patient outcomes. To evaluate RRS, a multimodal approach has been recommended to address the broad range of process and outcome measures needed to determine the effectiveness of the RRS concept. Aim: The aim of this paper is to evaluate the official RRS programme theory's assumptions about how the programme is meant to work against actual practice, in order to determine what works. Methods: The research design was a multiple case study of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory [1]. This approach used a variety of mixed methods to test the programme theories, including individual and focus group interviews with a purposive sample of 75 nurses and doctors, observation of ward practices and documentary analysis. The findings from the case studies were analysed and compared within and across cases to identify what works, for whom and in what circumstances. Results: The RRS programme theories were critically evaluated and compared with the study findings to develop a mid-range theory explaining what works, for whom and in what circumstances. The findings suggest that clinical experience, established working relationships, flexible implementation of protocols, ongoing experiential learning, empowerment and pre-emptive management are key to the success of RRS implementation. Conclusion: These findings highlight the combination of factors that can improve the implementation of RRS, and in light of this evidence several recommendations are made to provide policymakers with guidance and direction for their success and sustainability. References: 1. Pawson R & Tilley N (1997) Realistic Evaluation. Sage Publications, London. Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University of Belfast.
Abstract:
Realistic Evaluation of EWS and ALERT: factors enabling and constraining implementation. Background: The implementation of EWS and ALERT in practice is essential to the success of Rapid Response Systems but is dependent upon nurses utilising EWS protocols and applying ALERT best practice guidelines. To date there is limited evidence on the effectiveness of EWS or ALERT, as research has primarily focused on measuring patient outcomes (cardiac arrests, ICU admissions) following the implementation of a Rapid Response Team. Complex interventions in healthcare aimed at changing service delivery and the related behaviour of health professionals require a different research approach to evaluate the evidence. To understand how and why EWS and ALERT work, or might not work, research needs to consider the social, cultural and organisational influences that impact on successful implementation in practice. This requires a research approach that considers both the processes and outcomes of complex interventions, such as EWS and ALERT, implemented in practice. Realistic Evaluation is such an approach and was used to explain the factors that enable and constrain the implementation of EWS and ALERT in practice [1]. Aim: The aim of this study was to evaluate the factors that enabled and constrained the implementation and service delivery of early warning systems (EWS) and ALERT in practice, in order to provide direction for enabling their success and sustainability. Methods: The research design was a multiple case study of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory. This approach used a variety of mixed methods to test the programme theories, including individual and focus group interviews, observation and documentary analysis in a two-stage process. A purposive sample of 75 key informants participated in individual and focus group interviews. Observation and documentary analysis of EWS compliance data and ALERT training records provided further evidence to support or refute the interview findings. Data were analysed using NVivo 8 to categorise interview findings and SPSS for ALERT documentary data. These findings were further synthesised through a within- and cross-case comparison to explain the factors enabling and constraining EWS and ALERT. Results: A cross-case analysis highlighted similarities, differences and factors enabling or constraining successful implementation across the case study sites. Findings showed that personal (confidence; clinical judgement; personality), social (ward leadership; communication), organisational (workload and staffing issues; pressure from managers to complete EWS audits and targets), educational (constraints on training; no clinical educator on the ward) and cultural (routine tasks delegated) influences impact on EWS and acute care training outcomes. There were also differences noted between medical and surgical wards across both case sites. Conclusions: Realist Evaluation allows refinement and development of the RRS programme theory to explain the realities of practice. These refined RRS programme theories are capable of informing the planning of future service provision and provide direction for enabling their success and sustainability. References: 1. McGaughey J, Blackwood B, O'Halloran P, Trinder TJ & Porter S (2010) A realistic evaluation of Track and Trigger systems and acute care training for early recognition and management of deteriorating ward-based patients. Journal of Advanced Nursing 66(4), 923-932. Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University of Belfast.
Abstract:
Statement of purpose: The purpose of this concurrent session is to present the main findings and recommendations from a five-year study evaluating the implementation of Early Warning Systems (EWS) and the Acute Life-threatening Events: Recognition and Treatment (ALERT) course in Northern Ireland. The presentation will provide delegates with an understanding of the factors that enable and constrain successful implementation of EWS and ALERT in practice, in order to provide an impetus for change. Methods: The research design was a multiple case study of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory [1]. The stages included identifying the programme theories underpinning EWS and ALERT, generating hypotheses, gathering empirical evidence and refining the programme theories. This approach used a variety of mixed methods, including individual and focus group interviews, observation and documentary analysis of EWS compliance data and ALERT training records. A within- and across-case comparison facilitated the development of mid-range theories from the research evidence. Results: The official RRS theories developed from the realist synthesis were critically evaluated and compared with the study findings to develop a mid-range theory explaining what works, for whom and in what circumstances. The findings suggest that clinical experience, established working relationships, flexible implementation of protocols, ongoing experiential learning, empowerment and pre-emptive management are key to the success of EWS and ALERT implementation. Each concept is presented as ‘context, mechanism and outcome configurations’ to provide an understanding of how the context impacts on individual reasoning or behaviour to produce certain outcomes. Conclusion: These findings highlight the combination of factors that can improve the implementation and sustainability of EWS and ALERT, and in light of this evidence several recommendations are made to provide policymakers with guidance and direction for future policy development. References: 1. Pawson R & Tilley N (1997) Realistic Evaluation. Sage Publications, London. Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University of Belfast.
Abstract:
Deep-seated progressive failures of cuttings in heavily overconsolidated clays have been observed in the field and are well documented, especially for London Clay (Potts, Kovacevic, & Vaughan, 1997; Smethurst, Powrie, & Clarke, 2006; Take, 2003); however, the process of softening and the development of a rupture surface in other clays, including the clay fraction of glacial tills, is still to be established. Recent decades have witnessed extreme weather conditions in Northern Ireland, with dry summers and wet winters. The dynamics of the resulting pore pressure variation can trigger strength reduction and progressive plastic straining, both of which can lead to slope failure. The aim of this research is to evaluate the effect of pore pressure variations on the deformation and long-term stability of large cuttings in glacial tills in Northern Ireland. This paper outlines the overall research programme and presents initial laboratory findings (Carse, 2013).
Abstract:
BACKGROUND: Diabetic retinopathy is an important cause of visual loss. Laser photocoagulation preserves vision in diabetic retinopathy but is currently used at the stage of proliferative diabetic retinopathy (PDR).
OBJECTIVES: The primary aim was to assess the clinical effectiveness and cost-effectiveness of pan-retinal photocoagulation (PRP) given at the non-proliferative stage of diabetic retinopathy (NPDR) compared with waiting until the high-risk PDR (HR-PDR) stage was reached. There have been recent advances in laser photocoagulation techniques, and in the use of laser treatments combined with anti-vascular endothelial growth factor (VEGF) drugs or injected steroids. Our secondary questions were: (1) If PRP were to be used in NPDR, which form of laser treatment should be used? and (2) Is adjuvant therapy with intravitreal drugs clinically effective and cost-effective in PRP?
ELIGIBILITY CRITERIA: Randomised controlled trials (RCTs) for efficacy, but other study designs were also used.
REVIEW METHODS: Systematic review and economic modelling.
RESULTS: The Early Treatment Diabetic Retinopathy Study (ETDRS), published in 1991, was the only trial designed to determine the best time to initiate PRP. It randomised one eye of 3711 patients with mild-to-severe NPDR or early PDR to early photocoagulation, and the other to deferral of PRP until HR-PDR developed. The risk of severe visual loss after 5 years for eyes assigned to PRP for NPDR or early PDR, compared with deferral of PRP, was reduced by 23% (relative risk 0.77, 99% confidence interval 0.56 to 1.06). However, the ETDRS did not provide results separately for NPDR and early PDR. In economic modelling, the base case found that early PRP could be more effective and less costly than deferred PRP. Sensitivity analyses gave similar results, with early PRP continuing to dominate or having a low incremental cost-effectiveness ratio (an illustrative calculation is sketched after this abstract). However, there are substantial uncertainties. For our secondary aims we found 12 trials of lasers in DR, with 982 patients in total; trial sizes ranged from 40 to 150 patients. Most were in PDR, but five included some patients with severe NPDR. Three compared multi-spot pattern lasers against the argon laser. RCTs comparing laser applied in a lighter manner (less intense burns) with conventional methods (more intense burns) reported little difference in efficacy but fewer adverse effects. One RCT suggested that selective laser treatment targeting only ischaemic areas was effective. Observational studies showed that the most important adverse effect of PRP was macular oedema (MO), which can cause visual impairment, usually temporary. Ten trials of laser and anti-VEGF or steroid drug combinations were consistent in reporting a reduction in the risk of PRP-induced MO.
LIMITATION: The current evidence is insufficient to recommend PRP for severe NPDR.
CONCLUSIONS: There is, as yet, no convincing evidence that modern laser systems are more effective than the argon laser used in the ETDRS, but they appear to have fewer adverse effects. We recommend a trial of PRP for severe NPDR and early PDR compared with deferring PRP until the HR-PDR stage. The trial would use modern laser technologies and investigate the value of adjuvant prophylactic anti-VEGF or steroid drugs.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005408.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
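As an aside on the economic result above, the base-case finding that early PRP could be more effective and less costly describes dominance, in which case no incremental cost-effectiveness ratio (ICER) needs to be reported. The Python sketch below illustrates the arithmetic with invented figures; the costs and QALYs are hypothetical and not taken from the report's economic model.

```python
# Hypothetical cost-effectiveness comparison; all figures are invented for
# illustration and are not drawn from the economic model in this report.
def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> str:
    """Return the incremental cost-effectiveness ratio, or note dominance."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:
        return "new strategy dominates (no more costly and at least as effective)"
    if d_qaly == 0:
        return "no QALY gain; ICER undefined"
    return f"ICER = {d_cost / d_qaly:,.0f} per QALY gained"

# Early PRP vs deferred PRP with made-up numbers:
print(icer(cost_new=4800, cost_old=5200, qaly_new=8.1, qaly_old=7.9))
```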
Abstract:
This paper presents a research protocol for a randomised controlled efficacy trial of the ‘Dead Cool’ smoking prevention programme. Dead Cool is a three- to four-hour programme designed to be used by teachers with Year 9 students in Northern Ireland. The main outcome of the programme is to prevent students from starting to smoke. The protocol reports a research design intended to test the efficacy of the programme in 20 post-primary school settings. Selected schools included secondary, grammar, integrated, single-sex and coeducational schools, in both rural and urban areas, drawn from the maintained and controlled state sectors and the independent sector. Outcome measures include self-reported behaviours, monitoring of carbon monoxide (CO) in exhaled breath, and focus groups designed to assess implementation fidelity and opinions on efficacy in intervention schools and to explore the ‘counterfactual’ potential treatments in control schools.
Abstract:
The European “Community Bureau of Reference” (BCR) sequential extraction procedure, the diffusive gradients in thin films (DGT) technique, and a physiologically based extraction test were applied to assess metal bioavailability in sediments of Lake Taihu (n = 13). Findings from the three methods showed that Cd was a significant problem in the western lake, whereas Cu, Zn, and Ni pollution was most severe in the northern lake. Results from the sequential extraction revealed that more than 50 % of the Cu and Zn were highly mobile and defined within the extractable fraction (AS1 + FM2 + OS3) in the majority of the sediments; in contrast, extractable fractions of Ni and Cd were lower than 50 % at most of the sampling sites. Average Cu, Zn, Ni, and Cd bioaccessibilities were <50 % in the gastric phase. Zn and Cd bioaccessibility in the intestinal phase was ∼50 % lower than in the gastric phase, while bioaccessibilities of Cu and Ni were 47–57 % greater than in the gastric phase. Linear regression analysis between DGT and BCR measurements indicated that the extractable fractions (AS1 + FM2 + OS3) in the reducing environment were the main source of DGT uptake, suggesting that DGT is a good in situ evaluation tool for metal bioavailability in sediments.
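The DGT-versus-BCR comparison reported above rests on a simple linear regression of paired measurements. The sketch below, using invented concentrations for 13 hypothetical sites, shows the kind of fit involved; it is not the study's data or code, and the units are assumptions.

```python
# Illustrative regression of DGT-measured labile metal against the summed
# BCR extractable fractions; the concentrations below are invented.
import numpy as np
from scipy.stats import linregress

# Hypothetical paired measurements for one metal across 13 sediment sites.
bcr_extractable = np.array([12.1, 18.4, 25.0, 9.7, 31.2, 22.5, 15.8,
                            27.9, 11.3, 19.6, 24.1, 29.4, 14.2])  # mg/kg
dgt_labile = np.array([0.8, 1.3, 1.9, 0.6, 2.4, 1.6, 1.1,
                       2.1, 0.7, 1.4, 1.8, 2.2, 1.0])             # ug/L

fit = linregress(bcr_extractable, dgt_labile)
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, "
      f"r^2={fit.rvalue**2:.2f}, p={fit.pvalue:.3g}")
```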
Abstract:
Physical inactivity is the fourth leading risk factor for global mortality, with most of these deaths occurring in low and middle-income countries (LMICs) like India. Research from developed countries has consistently demonstrated associations between built environment features and physical activity levels of populations. The development of culturally sensitive and reliable measures of the built environment is a necessary first step for accurate analysis of environmental correlates of physical activity in LMICs. This study systematically adapted the Neighborhood Environment Walkability Scale (NEWS) for India and evaluated aspects of test-retest reliability of the adapted version among Indian adults. Cultural adaptation of the NEWS was conducted by Indian and international experts. Semi-structured interviews were conducted with local residents and key informants in the city of Chennai, India. At baseline, participants (N = 370; female = 47.2%) from Chennai completed the adapted NEWS-India surveys on perceived residential density, land use mix-diversity, land use mix-access, street connectivity, infrastructure and safety for walking and cycling, aesthetics, traffic safety, and safety from crime. NEWS-India was administered a second time to consenting participants (N = 62; female = 53.2%) with a gap of 2–3 weeks between successive administrations. Qualitative findings demonstrated that built environment barriers and constraints to active commuting and physical activity behaviors intersected with social ecological systems. The adapted NEWS subscales had moderate to high test-retest reliability (ICC range 0.48–0.99). The NEWS-India demonstrated acceptable measurement properties among Indian adults and may be a useful tool for evaluation of built environment attributes in India. Further adaptation and evaluation in rural and suburban settings in India are essential to create a version that could be used throughout India.
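Test-retest reliability of the kind reported for the NEWS-India subscales is typically summarised with an intraclass correlation coefficient (ICC) computed over the two administrations. The following Python sketch uses simulated scores and hypothetical column names with the pingouin package; it is illustrative only and is not the authors' analysis.

```python
# Illustrative test-retest ICC for one subscale; the scores are simulated,
# and 'subject'/'administration'/'score' are hypothetical column names.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 62  # participants who completed both administrations
true_score = rng.normal(3.0, 0.6, n)
long_df = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "administration": np.repeat(["time1", "time2"], n),
    "score": np.concatenate([true_score + rng.normal(0, 0.2, n),
                             true_score + rng.normal(0, 0.2, n)]),
})

icc = pg.intraclass_corr(data=long_df, targets="subject",
                         raters="administration", ratings="score")
# ICC2 (two-way random effects, absolute agreement) is a common choice
# for test-retest reliability.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```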
Abstract:
BACKGROUND: Healthcare integration is a priority in many countries, yet there remains little direction on how to systematically evaluate this construct to inform further development. The examination of community-based palliative care networks provides an ideal opportunity for the advancement of integration measures, in consideration of how fundamental provider cohesion is to effective care at end of life.
AIM: This article presents a variable-oriented analysis from a theory-based case study of a palliative care network to help bridge the knowledge gap in integration measurement.
DESIGN: Data from a mixed-methods case study were mapped to a conceptual framework for evaluating integrated palliative care and a visual array depicting the extent of key factors in the represented palliative care network was formulated.
SETTING/PARTICIPANTS: The study included data from 21 palliative care network administrators, 86 healthcare professionals, and 111 family caregivers, all from an established palliative care network in Ontario, Canada.
RESULTS: The framework used to guide this research proved useful in assessing qualities of integration and functioning in the palliative care network. The resulting visual array of elements illustrates that while this network performed relatively well at the multiple levels considered, room for improvement exists, particularly in terms of interventions that could facilitate the sharing of information.
CONCLUSION: This study, along with the other evaluative examples mentioned, represents an important initial attempt at empirically and comprehensively examining network-integrated palliative care and healthcare integration in general.
Abstract:
BACKGROUND: Pre-eclampsia is a leading cause of maternal and perinatal morbidity and mortality. Women with type 1 diabetes are considered a high-risk group for developing pre-eclampsia. Much research has focused on biomarkers as a means of screening for pre-eclampsia in the general maternal population; however, there is a lack of evidence for women with type 1 diabetes.
OBJECTIVES: To undertake a systematic review to identify potential biomarkers for the prediction of pre-eclampsia in women with type 1 diabetes.
SEARCH STRATEGY: We searched Medline, EMBASE, Maternity and Infant Care, Scopus, Web of Science and CINAHL.
SELECTION CRITERIA: Studies were included if they measured biomarkers in blood or urine of women who developed pre-eclampsia and had pre-gestational type 1 diabetes mellitus.
DATA COLLECTION AND ANALYSIS: A narrative synthesis was adopted as a meta-analysis could not be performed, due to high study heterogeneity.
MAIN RESULTS: A total of 72 records were screened, with 21 eligible studies being included in the review. A wide range of biomarkers was investigated and study size varied from 34 to 1258 participants. No single biomarker appeared to be effective in predicting pre-eclampsia; however, glycaemic control was associated with an increased risk while a combination of angiogenic and anti-angiogenic factors seemed to be potentially useful.
CONCLUSIONS: Limited evidence suggests that combinations of biomarkers may be more effective in predicting pre-eclampsia than single biomarkers. Further research is needed to verify the predictive potential of biomarkers that have been measured in the general maternal population, as many studies exclude women with diabetes preceding pregnancy.