821 results for Quality evaluation


Relevance:

30.00%

Publisher:

Abstract:

Background: Ototoxicity is a known side effect of combined radiation therapy and cisplatin chemotherapy for the treatment of medulloblastoma. The delivery of an involved field boost by intensity modulated radiation therapy (IMRT) may reduce the dose to the inner ear when compared with conventional radiotherapy. The dose of cisplatin may also affect the risk of ototoxicity. A retrospective study was performed to evaluate the impact of an involved field boost using IMRT and of cisplatin dose on the rate of ototoxicity. Methods: Data from 41 medulloblastoma patients treated with IMRT were collected. Overall and disease-free survival rates were calculated by the Kaplan-Meier method. Hearing function was graded according to the toxicity criteria of the Pediatric Oncology Group (POG). Doses to the inner ear and total cisplatin dose were correlated with hearing function by univariate and multivariate analysis. Results: After a mean follow-up of 44 months (range: 14 to 72 months), 37 patients remained alive, with two recurrences, both in the spine with CSF involvement, resulting in disease-free survival and overall survival rates of 85.2% and 90.2%, respectively. Seven patients (17%) experienced POG Grade 3 or 4 toxicity. Cisplatin dose was a significant factor for hearing loss in univariate analysis (p < 0.03). In multivariate analysis, median dose to the inner ear was significantly associated with hearing loss (p < 0.01). POG Grade 3 and 4 toxicities were uncommon with median doses to the inner ear below 42 Gy (p < 0.05) and a total cisplatin dose of less than 375 mg/m² (p < 0.01). Conclusions: IMRT leads to a low rate of severe ototoxicity. The median radiation dose to the auditory apparatus should be kept below 42 Gy. Cisplatin doses should not exceed 375 mg/m².
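
The Kaplan-Meier survival estimation mentioned in the Methods can be sketched in a few lines; the follow-up times and event flags below are invented for illustration, not the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator sketch: at each event time, multiply the
    running survival probability by (1 - 1/n_at_risk). Censored subjects
    (event flag 0) leave the risk set without changing the estimate."""
    pairs = sorted(zip(times, events))       # order by follow-up time
    n = len(pairs)
    surv = 1.0
    curve = []
    for i, (t, event) in enumerate(pairs):
        if event:                            # death/recurrence, not censoring
            surv *= 1 - 1 / (n - i)          # n - i subjects still at risk
            curve.append((t, surv))
    return curve

# Hypothetical cohort: events at 5 and 15 months, censoring at 10 and 20
kaplan_meier([5, 10, 15, 20], [1, 0, 1, 0])  # [(5, 0.75), (15, 0.375)]
```

This simplified version processes tied event times one at a time, which is adequate for a sketch but not a replacement for a survival-analysis library.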

Relevance:

30.00%

Publisher:

Abstract:

There are a variety of guidelines and methods available for measuring and assessing survey quality. Most are based on qualitative descriptions; in practice they are not easy to implement, and it is very difficult to make comparisons between surveys. Hence there is both a theoretical and a pragmatic demand for a mainly quantitative survey assessment tool. This research aimed to meet that need and to contribute to the evaluation and improvement of survey quality. Acknowledging the critical importance of measurement issues in survey research, the thesis starts with a comprehensive introduction to measurement theory and identifies, through three experiments, the types of measurement error associated with measurement procedures. It then describes the concepts, guidelines and methods available for measuring and assessing survey quality. Combining these with measurement principles leads to the development of a quantitatively based, holistic statistical tool for measuring and assessing survey quality. The criteria, weights and subweights for the assessment tool are determined using Multi-Criteria Decision-Making (MCDM) and a survey questionnaire based on the Delphi method. Finally, the model is applied to a database of surveys constructed to develop methods for the classification, assessment and improvement of survey quality. The model enables survey researchers and/or commissioners to make a holistic assessment of the value of a particular survey or surveys. It is an Excel-based audit that follows all stages of the survey from inception through design, construction, execution, analysis and dissemination. At each stage a set of criteria is applied to assess quality; scores attained against these criteria are weighted by their importance and summed to give an overall assessment of the stage. 
The total score for a survey is obtained by combining the stage scores, weighted again by the importance of each stage. The advantage of this approach is a means of survey assessment that can be used diagnostically to assess and improve survey quality.
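
The stage-and-criterion weighting described above can be sketched as follows; the scores and weights are purely illustrative and not taken from the thesis, where they would come from the MCDM/Delphi exercise:

```python
# Hypothetical sketch of the two-level weighted scoring scheme: criterion
# scores within a stage are combined using criterion weights, and the
# resulting stage scores are combined again using stage weights.

def weighted_score(scores, weights):
    """Weight-normalized sum of scores (works for criteria or stages)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Illustrative stages with made-up criterion scores (0-5) and weights
design = weighted_score([4, 3, 5], [0.5, 0.3, 0.2])      # ~3.9
execution = weighted_score([2, 4], [0.6, 0.4])           # ~2.8
total = weighted_score([design, execution], [0.7, 0.3])  # overall assessment
```

Normalizing by the weight sum keeps every score on the same 0-5 scale regardless of how many criteria a stage has.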

Relevance:

30.00%

Publisher:

Abstract:

Objective: To develop sedation, pain, and agitation quality measures using process control methodology and to evaluate their properties in clinical practice. Design: A Sedation Quality Assessment Tool was developed and validated to capture data for 12-hour periods of nursing care. Domains included pain/discomfort and sedation-agitation behaviors; sedative, analgesic, and neuromuscular blocking drug administration; ventilation status; and conditions potentially justifying deep sedation. Predefined sedation-related adverse events were recorded daily. Using an iterative process, algorithms were developed to describe the proportion of care periods with poor limb relaxation, poor ventilator synchronization, unnecessary deep sedation, agitation, and an overall optimum sedation metric. Proportion charts described processes over time (2-month intervals) for each ICU. The numbers of patients treated between sedation-related adverse events were described with G charts. Automated algorithms generated charts for 12 months of sequential data. Mean values for each process were calculated, and variation within and between ICUs was explored qualitatively. Setting: Eight Scottish ICUs over a 12-month period. Patients: Mechanically ventilated patients. Interventions: None. Measurements and Main Results: The Sedation Quality Assessment Tool agitation-sedation domains correlated with the Richmond Sedation Agitation Scale score (Spearman ρ = 0.75) and were reliable in clinician-clinician (weighted kappa, κ = 0.66) and clinician-researcher (κ = 0.82) comparisons. The limb movement domain had fair correlation with the Behavioral Pain Scale (ρ = 0.24) and was reliable in clinician-clinician (κ = 0.58) and clinician-researcher (κ = 0.45) comparisons. Ventilator synchronization correlated with the Behavioral Pain Scale (ρ = 0.54), and reliability in clinician-clinician (κ = 0.29) and clinician-researcher (κ = 0.42) comparisons was fair to moderate. 
Eight hundred twenty-five patients were enrolled (range, 59-235 across ICUs), providing 12,385 care periods for evaluation (range, 655-3,481 across ICUs). The mean proportion of care periods with each quality metric varied between ICUs: excessive sedation 12-38%; agitation 4-17%; poor relaxation 13-21%; poor ventilator synchronization 8-17%; and overall optimum sedation 45-70%. Mean adverse event intervals ranged from 1.5 to 10.3 patients treated. The quality measures appeared relatively stable during the observation period. Conclusions: Process control methodology can be used to simultaneously monitor multiple aspects of pain-sedation-agitation management within ICUs. Variation within and between ICUs could be used as a trigger to explore practice variation, improve quality, and monitor it over time.
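
A minimal sketch of the proportion-chart arithmetic underlying this kind of process monitoring is shown below; the numbers are illustrative, not taken from the study:

```python
from math import sqrt

def p_chart_limits(p_bar, n):
    """3-sigma control limits for a proportion (p) chart: the centre line
    is the mean proportion p_bar, with limits p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n),
    clipped to the valid range [0, 1]."""
    sigma = sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

# e.g. an ICU whose mean "optimum sedation" proportion is 0.60, with
# 200 care periods per charting interval (made-up numbers)
lcl, ucl = p_chart_limits(0.60, 200)  # roughly (0.496, 0.704)
```

Intervals falling outside these limits would flag special-cause variation worth investigating; a G chart for adverse-event spacing follows the same logic with a geometric rather than binomial model.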

Relevance:

30.00%

Publisher:

Abstract:

Durbin, J. & Urquhart, C. (2003). Qualitative evaluation of KA24 (Knowledge Access 24). Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: Knowledge Access 24 (NHS)

Relevance:

30.00%

Publisher:

Abstract:

To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms have been proposed that allow for the reservation of the needed bandwidth over a Virtual Circuit (VC) established on one of several candidate routes. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we established the inadequacy of this load-balancing practice and proposed the use of load profiling as an alternative. Load profiling techniques distribute the "available" bandwidth across a set of candidate routes so as to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it with routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that, for routing guaranteed-bandwidth flows in VP networks, load balancing is not desirable, as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large, as typically occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs as well as or better than traditional load-balancing VC routing in terms of revenue, under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth connections.
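
The contrast between least-loaded (balancing) and packing-style route choice can be sketched as below. This is a simplified illustration, not the authors' load-profiling algorithm, which additionally shapes the distribution of residual bandwidth to match the distribution of request sizes:

```python
def least_loaded(routes, demand):
    """Load balancing: among candidate routes that can fit the request,
    pick the one with the most free bandwidth, spreading load evenly."""
    feasible = [r for r in routes if r["free"] >= demand]
    return max(feasible, key=lambda r: r["free"]) if feasible else None

def best_fit(routes, demand):
    """Packing: pick the tightest feasible route, keeping large contiguous
    blocks of bandwidth free for future high-bandwidth VC requests."""
    feasible = [r for r in routes if r["free"] >= demand]
    return min(feasible, key=lambda r: r["free"]) if feasible else None

# Hypothetical Virtual Paths with free bandwidth in arbitrary units
routes = [{"id": "VP-A", "free": 10}, {"id": "VP-B", "free": 4}]
least_loaded(routes, 3)  # chooses VP-A, fragmenting its free bandwidth
best_fit(routes, 3)      # chooses VP-B, preserving VP-A for a large request
```

After the least-loaded choice, neither VP could accept a subsequent request of size 8 without VP-A's block having been preserved, which is the fragmentation effect the abstract describes.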

Relevance:

30.00%

Publisher:

Abstract:

Coeliac disease is one of the most common food intolerances worldwide, and at present the gluten-free diet remains the only suitable treatment. A market overview conducted as part of this thesis on the nutritional and sensory quality of commercially available gluten-free breads and pasta showed that improvements are necessary: many products show strong off-flavours, poor mouthfeel and reduced shelf-life. Since life-long avoidance of the cereal protein gluten means a major change to the diet, it is important to also consider the nutritional value of products intended to replace staple foods such as bread or pasta. This thesis addresses this issue by characterising available gluten-free cereal and pseudocereal flours to facilitate a better choice of raw materials. It was observed that quinoa, buckwheat and teff in particular are high in essential nutrients such as protein, minerals and folate. In addition, the potential of functional ingredients such as inulin, β-glucan, HPMC and xanthan to improve loaf quality was evaluated. Results show that these ingredients can increase loaf volume and reduce crumb hardness as well as the rate of staling, but that the effect diverges strongly depending on the bread formulation used. Furthermore, fresh egg pasta formulations based on teff and oat flour were developed, and the resulting products were characterised with regard to sensory and textural properties as well as in vitro digestibility. Scanning electron and confocal laser scanning microscopy were used throughout the thesis to visualise structural changes occurring during baking and pasta making.

Relevance:

30.00%

Publisher:

Abstract:

Dry mixing of binary food powders was conducted in a 2 L lab-scale paddle mixer. Different types of food powders, such as paprika, oregano, black pepper, onion powder and salt, were used for the studies. A novel method based on a digital colour imaging (DCI) system was developed to measure the mixture quality (MQ) of binary food powder mixtures; the salt conductivity method was also used as an alternative measure of MQ. In the first part of the study the DCI method was developed, and it showed potential for assessing the MQ of binary powder mixes provided there was a large colour difference between the powders. In the second and third parts of the study the effects of composition, water content, particle size and bulk density on MQ were studied, and the flowability of powders at various moisture contents was also investigated. Mixing behaviour was assessed using the coefficient of variation. Results showed that water content and composition influence the mixing behaviour of powders. Good mixing was observed up to size ratios of 4.45; at higher ratios MQ deteriorated. Bulk density had a larger influence on MQ. In the final study the MQ evaluation of binary and ternary powder mixtures was compared using two methods: the salt conductivity method and the DCI method. Two binary and two quaternary food powder mixtures with different coloured ingredients were studied. Overall, results showed that the DCI method has potential for industrial use and can analyse powder mixtures whose components differ in colour and are not segregating in nature.
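
The coefficient-of-variation measure of mixture quality mentioned above amounts to the following; the spot-sample values are invented for illustration:

```python
from statistics import mean, stdev

def mixing_cv(samples):
    """Coefficient of variation of the key-ingredient concentration across
    spot samples drawn from the mixer; a lower CV indicates a more
    homogeneous mixture."""
    return stdev(samples) / mean(samples)

# Hypothetical mass fractions of the tracer ingredient in five samples
well_mixed = [0.49, 0.51, 0.50, 0.52, 0.48]
poorly_mixed = [0.20, 0.75, 0.40, 0.65, 0.50]
mixing_cv(well_mixed)    # ~0.03
mixing_cv(poorly_mixed)  # ~0.43
```

In the DCI method the "concentration" per sample would be inferred from colour values rather than measured directly, but the mixing index is computed the same way.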

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Serologic methods have been used widely to test for celiac disease and have gained importance in diagnostic definition and in new epidemiologic findings. However, there is no standardization, and there are no reference protocols and materials. METHODS: The European Working Group on Serological Screening for Celiac Disease has defined robust noncommercial test protocols for immunoglobulin (Ig)G and IgA gliadin antibodies and for IgA autoantibodies against endomysium and tissue transglutaminase. Standard curves were linear in the decisive range, and intra-assay coefficients of variation were less than 5% to 10%. Calibration was performed with a group reference serum, and joint cutoff limits were used. Seven laboratories took part in the final collaborative study on 252 randomized sera classified by histology (103 pediatric and adult patients with active celiac disease, 89 disease control subjects, and 60 blood donors). RESULTS: IgA autoantibodies against endomysium and tissue transglutaminase showed superior sensitivity (90% and 93%, respectively) and specificity (99% and 95%, respectively) over IgA and IgG gliadin antibodies. Tissue transglutaminase antibody testing showed superior receiver operating characteristic performance compared with gliadin antibodies. The κ values for interlaboratory reproducibility showed superiority for IgA endomysium antibodies (0.93) in comparison with tissue transglutaminase antibodies (0.83) and gliadin antibodies (0.82 for IgG, 0.62 for IgA). CONCLUSIONS: Basic criteria of standardization and quality assessment must be fulfilled by any test protocol proposed for serologic investigation of celiac disease. The working group has produced robust test protocols and reference materials, available for standardization, to further improve the reliability of serologic testing for celiac disease.
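
The sensitivity and specificity figures reported above follow from the usual confusion-matrix definitions; the counts below are hypothetical, chosen only to land near figures of the same order, and are not the working group's data:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of diseased patients the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of non-diseased subjects the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts for an IgA anti-tissue-transglutaminase assay
# over 103 celiac patients and 149 controls (illustrative only)
sensitivity(96, 7)    # ~0.93
specificity(142, 7)   # ~0.95
```

Interlaboratory κ values measure a different thing entirely: chance-corrected agreement between labs on the same sera, not accuracy against histology.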

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Outpatient palliative care, an evolving delivery model, seeks to improve continuity of care across settings and to increase access to services in hospice and palliative medicine (HPM). It can provide a critical bridge between inpatient palliative care and hospice, filling the gap in community-based supportive care for patients with advanced life-limiting illness. Low capacity for data collection and quantitative research in HPM has impeded assessment of the impact of outpatient palliative care. APPROACH: In North Carolina, a regional database for community-based palliative care has been created through a unique partnership between an HPM organization and an academic medical center. This database flexibly uses information technology to collect patient data, entered at the point of care (e.g., home, inpatient hospice, assisted living facility, nursing home). HPM physicians and nurse practitioners collect the data, which are transferred to an academic site that assists with analyses and data management. Reports to community-based sites, based on the data they provide, create a better understanding of local care quality. CURRENT STATUS: The data system was developed and implemented over a 2-year period, starting with one community-based HPM site and expanding to four. Data collection methods were collaboratively created and refined, and the database continues to grow. The analyses presented herein examine data from one site, encompassing 2572 visits from 970 new patients, and characterize the population, symptom profiles, and change in symptoms after intervention. CONCLUSION: A collaborative regional approach to HPM data can support the evaluation and improvement of palliative care quality at the local, aggregated, and statewide levels.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as the paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated a methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction has been the most significant source of error by an order of magnitude, and it should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
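
The normalized error rate quoted above is simple arithmetic; the audit counts in the example are invented, chosen only so the result lands near the reported 14.3:

```python
def errors_per_10k(errors, fields_audited):
    """Normalize an audit's discrepancy count to errors per 10,000 fields,
    the customary unit for clinical-trial data quality benchmarks."""
    return errors / fields_audited * 10_000

# e.g. 43 discrepancies found while auditing 30,000 fields (hypothetical)
errors_per_10k(43, 30_000)  # ~14.3 errors per 10,000 fields
```

Expressing rates per 10,000 fields makes audits of very different sizes directly comparable, which is what allows the source-to-database and CRF-to-database literatures to be contrasted at all.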

Relevance:

30.00%

Publisher:

Abstract:

Systemic challenges within child welfare have prompted many states to explore new strategies aimed at protecting children while meeting the needs of families, but doing so within the confines of shrinking budgets. Differential Response has emerged as a promising practice for low- or moderate-risk cases of child maltreatment. This mixed methods evaluation explored various aspects of North Carolina's differential response system, known as the Multiple Response System (MRS), including: child safety, timeliness of response and case decision, frontloading of services, case distribution, implementation of Child and Family Teams, collaboration with community-based service providers, and Shared Parenting. Utilizing Child Protective Services (CPS) administrative data, researchers found that, compared to matched control counties, MRS: had a positive impact on child safety, evidenced by a decline in the rates of substantiations and re-assessments; temporarily disrupted timeliness of response in pilot counties but had no effect on time to case decision; and increased the number of upfront services provided to families during assessment. Qualitative data collected through focus groups with providers and phone interviews with families provided important information on key MRS strategies, highlighting aspects that families and social workers like, as well as identifying areas for improvement. This information is useful for continuous quality improvement efforts, particularly related to the development of training and technical assistance programs at the state and local levels.

Relevance:

30.00%

Publisher:

Abstract:

Background. In clinical practice and in clinical trials, echocardiography and scintigraphy are the modalities most commonly used for the evaluation of global left ventricular ejection fraction (LVEF) and left ventricular (LV) volumes. Currently, poor-quality imaging and geometrical assumptions are the main limitations of LVEF measured by echocardiography. Contrast agents and 3D echocardiography are new methods that may alleviate these limitations. Methods. We therefore sought to examine the accuracy of contrast real-time 3D echocardiography (RT3DE) for the evaluation of LV volumes and LVEF relative to MIBI gated SPECT as an independent reference. In 43 patients referred for chest pain, contrast RT3DE and MIBI gated SPECT were prospectively performed on the same day. The accuracy and the variability of LV volume and LVEF measurements were evaluated. Results. Owing to good endocardial delineation, LV volume and LVEF measurements by contrast RT3DE were feasible in 99% of the patients. The mean LV end-diastolic volume (LVEDV) of the group by scintigraphy was 143 ± 65 mL and was underestimated by triplane contrast RT3DE (128 ± 60 mL; p < 0.001) and, to a lesser extent, by full-volume contrast RT3DE (132 ± 62 mL; p < 0.001). Limits of agreement with scintigraphy were similar for the triplane and full-volume modalities, with the best results for full-volume. Results were similar for calculation of LV end-systolic volume (LVESV). The mean LVEF was 44 ± 16% with scintigraphy and was not significantly different with either triplane contrast RT3DE (45 ± 15%) or full-volume contrast RT3DE (45 ± 15%). There was an excellent correlation between two different observers for LVEDV, LVESV and LVEF measurements, and interobserver agreement was also good for both contrast RT3DE techniques. Conclusion. Contrast RT3DE allows an accurate assessment of LVEF compared with LVEF measured by SPECT, and shows low variability between observers. 
Although RT3DE triplane provides an accurate evaluation of left ventricular function, RT3DE full-volume is superior to the triplane modality in patients with suspected coronary artery disease. © 2009 Cosyns et al; licensee BioMed Central Ltd.
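
The ejection fraction compared across modalities above is derived from the two volumes; the sample values below are illustrative, chosen near the reported group means, and are not individual patient data:

```python
def lvef(edv, esv):
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes: the fraction of EDV ejected per beat."""
    return (edv - esv) / edv * 100

# EDV 143 mL with a hypothetical ESV of 80 mL gives an LVEF near the
# reported group mean of 44%
lvef(143, 80)  # ~44%
```

Because LVEF is a ratio of volumes, a modality that underestimates EDV and ESV proportionally (as triplane RT3DE appears to here) can still agree closely on LVEF, which is what the abstract reports.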

Relevance:

30.00%

Publisher:

Abstract:

Community matrons are a relatively new government initiative aimed at case managing people with long-term conditions to reduce the number of emergency bed days used in hospitals. Although there have been extensive evaluations of similar case management projects, to date there has been little evaluation of the community matron's role and of the perceptions patients have of this new service. One of the main Government agendas for care is to deliver a high-quality service driven by the needs of service users (DH, 2000). In order to drive this agenda, it is important that the views and perceptions of people on the receiving end of services are heard, valued and acted upon appropriately. This two-part evaluative report sets out to explore how people with long-term conditions perceive the impact of community matrons and the difference this new service may have made to their lives. Questionnaires were sent to 100 patients currently being case-managed by a community matron to evaluate the community matron service from the patients' perspective. Part two reports on patients' perceptions of the community matron role and its influence on their health.

Relevance:

30.00%

Publisher:

Abstract:

Digital games constitute a major emerging technology that is expected to enter mainstream educational use within a few years. The highly engaging and motivating character of such games holds great potential to support immersive, meaningful, and situated learning experiences. To seize this potential, meaningful quality and impact measurements are indispensable. Although there is a growing body of evidence on the efficacy of games for learning, evaluation is often poorly designed, incomplete, or biased, if not entirely absent. Well-designed evaluations demonstrating the educational effect as well as the return on investment of serious games may foster broader adoption by educational institutions and training providers, and support the development of the serious game industry. The European project RAGE introduces a comprehensive and multi-perspective framework for serious game evaluation, which is presented in this paper.

Relevance:

30.00%

Publisher:

Abstract:

This document presents the RAGE evaluation methodology. It provides the framework and accompanying guidelines for the evaluation and validation of the quality and effectiveness of the project outputs. Formative and summative evaluations of the different RAGE technologies and their underlying methodologies – the assets, the Ecosystem, and the applied games – will be carried out on the basis of this common framework.