879 results for Multi-criteria Evaluation
Abstract:
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for evaluating a patient’s medical condition. Compared with other imaging modalities such as Magnetic Resonance Imaging (MRI), CT offers fast acquisition, higher spatial resolution, and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of independent values in Hounsfield units (HU); higher HU values correspond to denser materials. High-density materials such as metal tend to erroneously increase the HU values around them because of reconstruction software limitations; this corruption of HU values caused by the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of clinically relevant metal objects. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may lead to improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes, but very limited information is available about the effect of artefact correction on dose calculation accuracy. This study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes; two sets of plans, multi-arc and single-arc, using the Volumetric Modulated Arc Therapy (VMAT) technique were designed to avoid or minimize the influence of high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning target volume (PTV) were compared and the homogeneity index (HI) was calculated.
Results: (1) Without the GSI-based MAR application, the percent error between the mean dose and the absolute dose ranged from 3.4% to 5.7% per fraction; with the GSI-based MAR algorithm, the error decreased to 0.09-2.3% per fraction. The difference between plans with and without the GSI-based MAR algorithm ranged from 1.7% to 4.2% per fraction. (2) Differences of 0.1-3.2% were observed for the maximum dose, 1.5-10.4% for the minimum dose, and 1.4-1.7% for the mean dose. Homogeneity indices (HI) of 0.068-0.065 for the dual-energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.
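The abstract does not state the exact HI convention used, so the sketch below assumes a common definition, HI = (Dmax - Dmin) / Dmean, alongside the percent-error comparison described above. All dose values in the example are hypothetical and are not taken from the study.

```python
# Minimal sketch of the dose metrics reported above. The HI convention
# (D_max - D_min) / D_mean is an assumption; the doses are illustrative only.

def percent_error(calculated_dose: float, measured_dose: float) -> float:
    """Percent error of a calculated dose against the measured (absolute) dose."""
    return abs(calculated_dose - measured_dose) / measured_dose * 100.0

def homogeneity_index(d_max: float, d_min: float, d_mean: float) -> float:
    """Homogeneity index under the assumed (D_max - D_min) / D_mean convention;
    a perfectly homogeneous PTV dose gives HI = 0."""
    return (d_max - d_min) / d_mean

if __name__ == "__main__":
    # Hypothetical per-fraction PTV doses in Gy, not taken from the study.
    print(f"error = {percent_error(2.07, 2.00):.1f}%")           # -> 3.5%
    print(f"HI    = {homogeneity_index(2.10, 1.96, 2.02):.3f}")  # -> 0.069
```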
Conclusion: (1) The percent error without the GSI-based MAR algorithm may reach 5.7%, which undermines the goal of radiation therapy to deliver precise treatment. The GSI-based MAR algorithm is therefore desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method nearly reached the ideal value of zero. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning target volume (PTV) for images with metal artefacts than either the GE MAR algorithm or no correction.
Abstract:
This symposium describes a multi-dimensional strategy to examine fidelity of implementation in an authentic school district context. An existing large-district peer mentoring program provides an example. The presentation will address development of a logic model to articulate a theory of change; collaborative creation of a data set aligned with essential concepts and research questions; identification of independent, dependent, and covariate variables; issues related to use of big data that include conditioning and transformation of data prior to analysis; operationalization of a strategy to capture fidelity of implementation data from all stakeholders; and ways in which fidelity indicators might be used.
Abstract:
Advances in molecular biology have resulted in novel therapy for neurofibromatosis 2 (NF2)-related tumours, highlighting the need for robust outcome measures. The disease-focused NF2 impact on quality of life (NFTI-QOL) patient questionnaire was assessed as an outcome measure for treatment in a multi-centre study. NFTI-QOL was related to clinician-rated severity (ClinSev) and genetic severity (GenSev) over repeated visits. Data were evaluated for 288 NF2 patients (n = 464 visits) attending the English national NF2 clinics from 2010 to 2012. The male-to-female ratio was equal and the mean age was 42.2 (SD 17.8) years. The analysis included the NFTI-QOL eight-item score, ClinSev graded as mild, moderate, or severe, and GenSev as a rank ordering of the number of NF2 mutations (graded as mild, moderate, or severe). The mean (SD) NFTI-QOL score of 8.7 (5.4) for the first visit, or 9.2 (5.4) for all visits, was similar to the published norm of 9.4 (5.5), with no significant relationship with age or gender. NFTI-QOL internal reliability was good, with a Cronbach’s alpha of 0.85 and a test-retest reliability of r = 0.84. NFTI-QOL correlated with ClinSev (r = 0.41, p < 0.001; r = 0.46 for all visits), but only weakly with GenSev (r = 0.16, p < 0.05; r = 0.15 for all visits). ClinSev correlated with GenSev (r = 0.41, p < 0.001; r = 0.42 for all visits). NFTI-QOL showed good reliability and the ability to detect significant longitudinal changes in the QOL of individuals. The moderate relationships of NFTI-QOL with clinician- and genetic-rated severity suggest that NFTI-QOL taps into NF2 patient experiences that are not encompassed by the ClinSev rating or genotype.
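As a point of reference for the internal-reliability figure quoted above, the sketch below computes Cronbach's alpha for an eight-item questionnaire using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The simulated responses are purely hypothetical and merely stand in for real NFTI-QOL item data.

```python
# Hedged sketch of Cronbach's alpha for an 8-item scale; responses are simulated,
# not NFTI-QOL data.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of item-level scores."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)       # per-item variance
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 8-item responses on a 0-4 scale for 288 respondents (hypothetical).
    latent = rng.normal(size=(288, 1))
    items = np.clip(np.round(2 + latent + 0.8 * rng.normal(size=(288, 8))), 0, 4)
    print(f"alpha = {cronbach_alpha(items):.2f}")
```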
Abstract:
‘Evaluation for Participation and Sustainability in Planning’ is the title of a new book edited by Johan Woltjer and his colleagues Angela Hull, Ernest Alexander, and Abdul Khakee. The book addresses the evaluation of planning interventions from several perspectives (social, economic, environmental). Specifically, the attention is focused on:
- how evaluation is used in planning practice, including the choice of indicators or the criteria to evaluate participation;
- the introduction of new kinds of information, such as measuring the cumulative effects or bringing criteria on capability and well-being into play;
- alternative ways of collecting and presenting information, through using GIS or focusing on strategic environmental awareness and ‘hotspots’; and
- understanding how strategic planning objectives are implemented in local practice.
Abstract:
The convex hull describes the extent or shape of a set of data and is used ubiquitously in computational geometry. Common algorithms to construct the convex hull of a finite set of n points (x, y) range from O(n log n) to O(n) time. However, a heuristic procedure is often applied first to reduce the original set of n points to a set of s < n points that still contains the hull, thereby accelerating the final hull-finding procedure. We present an algorithm to precondition data before building a 2D convex hull with integer coordinates, with three distinct advantages. First, for all practical purposes it is linear; second, no explicit sorting of the data is required; and third, the reduced set of s points forms an ordered set that can be pipelined directly into an O(n) convex hull algorithm. Under these criteria, a fast (O(n)) preconditioner in principle yields a fast (approximately O(n)) convex hull for an arbitrary set of points. The paper empirically evaluates and quantifies the acceleration achieved by the method against the most common convex hull algorithms. Experiments on a dataset show an additional speed-up of at least four times over existing preconditioning methods.
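The abstract does not describe the authors' preconditioner itself, so the sketch below illustrates the general idea with a different, well-known heuristic, the Akl-Toussaint filter: points strictly inside the quadrilateral spanned by the x- and y-extreme points cannot be hull vertices and can be discarded before the hull is built. Unlike the paper's method, this sketch does not produce an ordered output and is not linear-time by construction; it is only a representative preconditioning step.

```python
# Akl-Toussaint preconditioning sketch (not the paper's algorithm): discard
# points strictly inside the quadrilateral of x/y extreme points.
from math import atan2
from typing import List, Tuple

Point = Tuple[int, int]

def cross(o: Point, a: Point, b: Point) -> int:
    # z-component of (a - o) x (b - o); > 0 when b lies left of the ray o->a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def akl_toussaint_filter(points: List[Point]) -> List[Point]:
    """Return a reduced subset of points guaranteed to contain every hull vertex."""
    if len(points) < 5:
        return points
    # Extreme points in x and y form the discarding quadrilateral.
    quad = [min(points), max(points),
            min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    # Order the quadrilateral counter-clockwise around its centroid.
    cx = sum(p[0] for p in quad) / 4.0
    cy = sum(p[1] for p in quad) / 4.0
    quad.sort(key=lambda p: atan2(p[1] - cy, p[0] - cx))

    def strictly_inside(p: Point) -> bool:
        # Strictly left of every CCW edge means strictly inside the quadrilateral.
        return all(cross(quad[i], quad[(i + 1) % 4], p) > 0 for i in range(4))

    return [p for p in points if not strictly_inside(p)]

if __name__ == "__main__":
    import random
    pts = [(random.randint(0, 1000), random.randint(0, 1000)) for _ in range(10000)]
    print(len(pts), "->", len(akl_toussaint_filter(pts)))
```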
Abstract:
Background: Implementing effective antenatal care models is a key global policy goal. However, the mechanisms of action of these multi-faceted models that would allow widespread implementation are seldom examined and poorly understood. In existing care model analyses there is little distinction between what is done, how it is done, and who does it. A new evidence-informed quality maternal and newborn care (QMNC) framework identifies key characteristics of quality care. This offers the opportunity to identify systematically the characteristics of care delivery that may be generalizable across contexts, thereby enhancing implementation. Our objective was to map the characteristics of antenatal care models tested in Randomised Controlled Trials (RCTs) to a new evidence-based framework for quality maternal and newborn care; thus facilitating the identification of characteristics of effective care.
Methods: A systematic review of RCTs of midwifery-led antenatal care models. Mapping and evaluation of these models’ characteristics to the QMNC framework using data extraction and scoring forms derived from the five framework components. Paired team members independently extracted data and conducted quality assessment using the QMNC framework and standard RCT criteria.
Results: From 13,050 citations initially retrieved we identified 17 RCTs of midwifery-led antenatal care models from Australia (7), the UK (4), China (2), and Sweden, Ireland, Mexico and Canada (1 each). QMNC framework scores ranged from 9 to 25 (possible range 0–32), with most models reporting fewer than half the characteristics associated with quality maternity care. Description of care model characteristics was lacking in many studies, but was better reported for the intervention arms. Organisation of care was the best-described component. Underlying values and philosophy of care were poorly reported.
Conclusions: The QMNC framework facilitates assessment of the characteristics of antenatal care models. It is vital to understand all the characteristics of multi-faceted interventions such as care models: not only what is done but why it is done, by whom, and how it differs from the standard care package. By applying the QMNC framework we have established a foundation for future reports of intervention studies, so that the characteristics of individual models can be evaluated and the impact of any differences appraised.
Abstract:
Background: Potentially inappropriate prescribing (PIP) is common in older people in primary care, as evidenced by a significant body of quantitative research. However, relatively few qualitative studies have investigated the phenomenon of PIP and its underlying processes from the perspective of general practitioners (GPs). The aim of this paper is to qualitatively explore GP perspectives on prescribing and PIP in older primary care patients.
Method: Semi-structured qualitative interviews were conducted with GPs participating in a randomised controlled trial (RCT) of an intervention to decrease PIP in older patients (≥70 years) in Ireland. Interviews with GP participants (both intervention and control) from the OPTI-SCRIPT cluster RCT were carried out between January and July 2013 as part of the trial process evaluation. All interviews were conducted by one interviewer, audio recorded, transcribed verbatim, and analysed thematically.
Results: Seventeen semi-structured interviews were conducted (13 male, 4 female). Three main, inter-related themes emerged: a complex prescribing environment, a paternalistic doctor-patient relationship, and the relevance of the PIP concept. Patient complexity (e.g. polypharmacy, multimorbidity) and prescriber complexity (e.g. multiple prescribers, poor communication, restricted autonomy) were identified as factors contributing to a complex prescribing environment in which PIP could occur, as was a paternalistic doctor-patient relationship. The concept of PIP was perceived to be of variable usefulness to GPs, and the criteria used to measure it may be at odds with the complex processes of prescribing for this patient population.
Conclusions: Several inter-related factors contributing to the occurrence of PIP were identified, some of which may be amenable to intervention. Improvement strategies focused on better management of polypharmacy and multimorbidity, and on communication across primary and secondary care, could result in substantial reductions in PIP.
Abstract:
The goal of this research was to evaluate the needs of the intercity common carrier bus service in Iowa. Within the framework of the overall goal, the objectives were to: (1) Examine the detailed operating cost and revenue data of the intercity carriers in Iowa; (2) Develop a model or models to estimate demand in cities and corridors served by the bus industry; (3) Develop a cost function model for estimating a carrier's operating costs; (4) Establish the criteria to be used in assessing the need for changes in bus service; (5) Outline the procedures for estimating route operating costs and revenues and develop a matrix of community and social factors to be considered in evaluation; and (6) Present a case study to demonstrate the methodology. The results of the research are presented in the following chapters: (1) Introduction; (2) Intercity Bus Research and Development; (3) Operating Characteristics of Intercity Carriers in Iowa; (4) Commuter Carriers; (5) Passenger and Revenue Forecasting Models; (6) Operating Cost Relationships; (7) Social and General Welfare Aspects of Intercity Bus Service; (8) Case Study Analysis; and (9) Additional Service Considerations and Recommendations.
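The report's actual demand and cost-function models are not reproduced in this summary, so the sketch below is a hypothetical illustration of the kind of route evaluation such models support: a linear operating-cost function in bus-miles and bus-hours, a simple ridership-times-fare revenue estimate, and the resulting revenue-to-cost ratio. All coefficients and inputs are placeholders, not Iowa data.

```python
# Hypothetical route-evaluation sketch; the coefficients below are placeholders
# and do not come from the Iowa intercity bus study.

def route_operating_cost(bus_miles: float, bus_hours: float,
                         cost_per_mile: float = 1.20,
                         cost_per_hour: float = 25.0) -> float:
    """Linear cost function in bus-miles and bus-hours (assumed form)."""
    return cost_per_mile * bus_miles + cost_per_hour * bus_hours

def route_revenue(riders_per_day: float, average_fare: float) -> float:
    """Daily revenue as estimated ridership times average fare."""
    return riders_per_day * average_fare

if __name__ == "__main__":
    cost = route_operating_cost(bus_miles=180.0, bus_hours=4.5)     # one daily round trip
    revenue = route_revenue(riders_per_day=35.0, average_fare=6.50)
    print(f"cost ${cost:.2f}, revenue ${revenue:.2f}, ratio {revenue / cost:.2f}")
```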