873 results for "Medical care Quality control Statistical methods"
Abstract:
Interest groups advocate centre-specific outcome data as a useful tool for patients in choosing a hospital for their treatment and for decision-making by politicians and the insurance industry. Haematopoietic stem cell transplantation (HSCT) requires significant infrastructure and represents a cost-intensive procedure. It therefore qualifies as a prime target for such a policy. We made use of the comprehensive database of the Swiss Blood Stem Cells Transplant Group (SBST) to evaluate potential use of mortality rates. Nine institutions reported a total of 4717 HSCT - 1427 allogeneic (30.3%), 3290 autologous (69.7%) - in 3808 patients between the years 1997 and 2008. Data were analysed for survival- and transplantation-related mortality (TRM) at day 100 and at 5 years. The data showed marked and significant differences between centres in unadjusted analyses. These differences were absent or marginal when the results were adjusted for disease, year of transplant and the EBMT risk score (a score incorporating patient age, disease stage, time interval between diagnosis and transplantation, and, for allogeneic transplants, donor type and donor-recipient gender combination) in a multivariable analysis. These data indicate comparable quality among centres in Switzerland. They show that comparison of crude centre-specific outcome data without adjustment for the patient mix may be misleading. Mandatory data collection and systematic review of all cases within a comprehensive quality management system might, in contrast, serve as a model to ascertain the quality of other cost-intensive therapies in Switzerland.
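The abstract's central point, that crude centre rates mislead when the patient mix differs, can be illustrated with a minimal indirect-standardization sketch. All numbers below are hypothetical, and the simple SMR (observed/expected deaths against stratum-specific reference rates) stands in for the study's multivariable EBMT-score adjustment:

```python
# Hypothetical illustration: two centres with identical risk-specific mortality
# but different case mixes produce different crude rates; indirect
# standardization (SMR = observed / expected deaths) removes the artefact.

# Risk strata (think EBMT score bands) -> reference mortality rate
reference_rate = {"low": 0.05, "medium": 0.15, "high": 0.40}

def crude_rate(cases):
    """Unadjusted deaths per transplant across all strata."""
    deaths = sum(n * rate for _, n, rate in cases)
    total = sum(n for _, n, _ in cases)
    return deaths / total

def smr(cases):
    """Standardized mortality ratio against the reference rates."""
    observed = sum(n * rate for _, n, rate in cases)
    expected = sum(n * reference_rate[stratum] for stratum, n, _ in cases)
    return observed / expected

# (stratum, number of transplants, stratum-specific mortality at the centre)
centre_a = [("low", 80, 0.05), ("medium", 15, 0.15), ("high", 5, 0.40)]   # easy mix
centre_b = [("low", 10, 0.05), ("medium", 30, 0.15), ("high", 60, 0.40)]  # hard mix

# Crude rates differ markedly, yet both centres perform exactly at reference:
print(crude_rate(centre_a), crude_rate(centre_b))
print(smr(centre_a), smr(centre_b))  # both 1.0
```

Centre B looks far worse on the crude rate purely because it treats more high-risk patients, which is exactly the artefact the adjusted analysis in the study removes.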
Abstract:
Background. Case-control studies are very frequently used by epidemiologists to assess the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analysing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the over-sampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of risk sets for the analysis of case-control data remains to be elucidated, and to be investigated in the case of time-dependent variables. Objective: The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal risk-set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights were assigned to cases and controls so as to reflect the proportions of cases and non-cases in the source population. The properties of the exposure-effect estimators were studied by simulation.
Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analysed with different versions of the Cox model, including the old and new risk-set definitions, as well as with conventional logistic regression, for comparison purposes. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of the different smoking variables obtained with the different methods were compared with each other, and with the simulation results. Results. The simulation results show that the estimates from the proposed new weighted Cox models, especially those of the Weighted Cox model, are much less biased than the estimates from existing Cox models that simply include or exclude the future cases from each risk set. Moreover, the estimates of the Weighted Cox model were slightly, but systematically, less biased than those of logistic regression. The application to the real data shows larger differences between the estimates of logistic regression and of the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new weighted Cox model could be an attractive alternative to logistic regression for estimating the effects of time-dependent exposures in case-control studies.
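The core risk-set weighting idea can be sketched on toy data. Everything below (data, weights, the single binary exposure) is hypothetical and is not the thesis's actual estimator; it only shows how up-weighting controls in each risk set's denominator reflects the case/non-case mix of the source population:

```python
import math

# Minimal sketch of a weighted Cox partial likelihood for case-control data.
# Controls get a large weight because each sampled control stands in for many
# non-cases in the source population; cases keep weight 1.

# (event/sampling time, is_case, binary exposure) -- all values hypothetical
subjects = [
    (2.0, 1, 1.0),  # case, exposed
    (3.0, 1, 0.0),  # case, unexposed
    (4.0, 0, 1.0),  # control, exposed
    (5.0, 0, 0.0),  # control, unexposed
]

def weighted_partial_loglik(beta, data, control_weight):
    """Cox partial log-likelihood with per-subject risk-set weights."""
    ll = 0.0
    for t_i, case_i, x_i in data:
        if not case_i:
            continue  # only cases contribute event terms
        # weighted risk set: everyone still at risk at the case's event time
        denom = 0.0
        for t_j, case_j, x_j in data:
            if t_j >= t_i:
                w = 1.0 if case_j else control_weight
                denom += w * math.exp(beta * x_j)
        ll += beta * x_i - math.log(denom)
    return ll

# Crude grid search for the maximizing log-hazard-ratio estimate
best = max((b / 100.0 for b in range(-300, 301)),
           key=lambda b: weighted_partial_loglik(b, subjects, 10.0))
```

In a real analysis the control weight would be derived from the sampling fractions of cases and non-cases, and a proper optimizer would replace the grid search.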
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared: those that use a model-based approach, those that use a data-driven approach, and combinations of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. The application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT), and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
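As an illustration of the classification idea, here is a minimal Fisher linear discriminant sketch on synthetic "field measurement" vectors. All data, dimensions and class means are made up; the small ridge term added to the scatter matrix is only a nod to the regularization the paper uses against ill-posedness:

```python
import numpy as np

# Fisher's linear discriminant: find the direction w maximizing between-class
# over within-class scatter, then classify by projecting onto w.

rng = np.random.default_rng(0)
# Two synthetic "fault classes" of magnetic-field vectors (10 sensors each)
healthy = rng.normal(0.0, 1.0, size=(50, 10))
faulty = rng.normal(0.5, 1.0, size=(50, 10))

m0, m1 = healthy.mean(axis=0), faulty.mean(axis=0)
# Within-class scatter, slightly regularized (echoing the ill-posedness issue)
Sw = np.cov(healthy.T) + np.cov(faulty.T) + 1e-6 * np.eye(10)
w = np.linalg.solve(Sw, m1 - m0)        # Fisher direction: Sw^-1 (m1 - m0)
threshold = w @ (m0 + m1) / 2.0         # midpoint decision rule

def classify(x):
    """Return 1 for 'faulty', 0 for 'healthy'."""
    return int(w @ x > threshold)
```

Because Sw is positive definite, the projected class means always land on opposite sides of the midpoint threshold, so the two training means themselves are always classified correctly.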
Abstract:
Organic and inorganic contaminants are present in both fossil and biomass fuels and, although they make up only a small part of the final fuel composition, they can have undesirable effects (environmental problems, corrosion processes, storage instability, and others); their presence therefore demands rigorous quality control of these fuels. Considering the rising importance of fuel ethanol in the worldwide panorama, this review reports the successful use of alternative analytical methods for monitoring organic and inorganic contaminants at trace levels, used to determine and quantify these substances in fuel ethanol, and also presents all the official norms for quality control of fuel ethanol employed by ABNT (Brazilian Association of Technical Norms), ASTM (American Society for Testing and Materials), and ECS (European Committee for Standardization).
Multivariate quality control studies applied to Ca(II) and Mg(II) determination by a portable method
Abstract:
A portable or field test method for simultaneous spectrophotometric determination of calcium and magnesium in water using multivariate partial least squares (PLS) calibration methods is proposed. The method is based on the reaction between the analytes and methylthymol blue at pH 11. The spectral information was used as the X-block, and the Ca(II) and Mg(II) concentrations obtained by a reference technique (ICP-AES) were used as the Y-block. Two series of analyses were performed, with a month's difference between them. The first series was used as the calibration set and the second one as the validation set. Multivariate statistical process control (MSPC) techniques, based on statistics from principal component models, were used to study the features and evolution with time of the spectral signals. Signal standardization was used to correct the deviations between series. Method validation was performed by comparing the predictions of the PLS model with the reference Ca(II) and Mg(II) concentrations determined by ICP-AES using the joint interval test for the slope and intercept of the regression line with errors in both axes. (C) 1998 John Wiley & Sons, Ltd.
Abstract:
Background: Food and nutritional care quality must be assessed and scored, so as to improve health institution efficacy. This study aimed to detect and compare actions related to food and nutritional care quality in public and private hospitals. Methods: Investigation of the Hospital Food and Nutrition Service (HFNS) of 37 hospitals by means of structured interviews assessing two quality control corpora, namely nutritional care quality (NCQ) and hospital food service quality (FSQ). HFNS was also evaluated with respect to human resources per hospital bed and per produced meal. Results: Comparison between public and private institutions revealed that there was a statistically significant difference between the number of hospital beds per HFNS staff member (p = 0.02) and per dietitian (p < 0.01). The mean compliance with NCQ criteria in public and private institutions was 51.8% and 41.6%, respectively. The percentage of public and private health institutions in conformity with FSQ criteria was 42.4% and 49.1%, respectively. Most of the actions comprising each corpus, NCQ and FSQ, varied considerably between the two types of institution. NCQ was positively influenced by hospital type (general) and presence of a clinical dietitian. FSQ was affected by institution size: large and medium-sized hospitals were significantly better than small ones. Conclusions: Food and nutritional care in hospital is still incipient, and actions concerning both nutritional care and food service take place on an irregular basis. It is clear that the design of food and nutritional care in hospital indicators is mandatory, and that guidelines for the development of actions as well as qualification and assessment of nutritional care are urgent.
Abstract:
In this thesis, two major topics in medical ultrasound imaging are addressed: deconvolution and segmentation. For the first, a deconvolution algorithm is described that allows statistically consistent maximum a posteriori estimates of the tissue reflectivity to be restored. These estimates are proven to provide a reliable source of information for achieving an accurate characterization of biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in the delineation of myocardial contours and is feasible and accurate even on clinical data.
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few per cent with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, in particular relative photometry for the production of short-term light curves.
In this context I defined and tested a semi-automated pipeline that allows for the pre-reduction of imaging SPSS data and the production of aperture-photometry catalogues ready to be used for further analysis. A series of semi-automated quality-control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
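The thesis's actual quality-control criteria are not listed in the abstract; the sketch below only illustrates the general shape of such semi-automated cuts on an aperture-photometry catalogue. Every field name and threshold is invented for illustration:

```python
# Hypothetical quality-control cuts on an aperture-photometry catalogue:
# reject low-S/N, large-error, edge-clipped, or saturated measurements.

def passes_qc(star, mag_err_max=0.05, min_snr=20.0, edge_margin=25, size=2048):
    """Return True if a catalogue entry survives all quality cuts."""
    ok_err = star["mag_err"] <= mag_err_max
    ok_snr = star["snr"] >= min_snr
    ok_pos = (edge_margin <= star["x"] <= size - edge_margin
              and edge_margin <= star["y"] <= size - edge_margin)
    return ok_err and ok_snr and ok_pos and not star["saturated"]

# Two invented catalogue entries: one clean, one bad on several counts
catalogue = [
    {"mag_err": 0.01, "snr": 150.0, "x": 1000, "y": 900, "saturated": False},
    {"mag_err": 0.20, "snr": 5.0, "x": 10, "y": 900, "saturated": False},
]
clean = [s for s in catalogue if passes_qc(s)]
```

In a "semi-automated" pipeline the entries failing such cuts would typically be flagged for human inspection rather than silently dropped.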
Abstract:
Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control should be initiated. Literature about quality control of SP performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument, inspired by others [5], and used it in our high-stakes exam. Methods SP trainers from all five Swiss medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised with the partners twice, based on experiences during OSCEs. The final instrument contains 14 criteria for acting (i.e. adequate verbal and non-verbal expression) and standardization (i.e. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. Both SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). FAIR_OSCE The list of items to assess the quality of the simulation by SPs was primarily developed and used to provide formative feedback to the SPs in order to help them improve their performance. It was therefore named "Feedback structure for the Assessment of Interactive Role play in Objective Structured Clinical Exams" (FAIR_OSCE). It was also used to assess the quality of patient portrayal during the exam. The results were calculated for each of the five faculties individually. Formative evaluation was given to the five faculties as individual feedback, without revealing the results of the other faculties beyond the overall results. Results High quality of patient portrayal during the exam was documented: more than 90% of SP performances were rated completely correct or sufficient.
An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions in medical tests increased from 88% to 95%. These 95% completely correct reactions, together with 4% sufficient reactions, add up to 99% of reactions meeting the requirements of the exam. SP educators using the instrument reported an improvement in SP performance induced by its use. Disadvantages mentioned were the high concentration needed to observe all criteria explicitly and the cumbersome handling of the paper-based forms. Conclusion We were able to document a very high quality of SP performance in our exam. The data also indicate that our training is effective. We believe that the high concentration needed to use the instrument is well invested, considering the observed improvement in performance. The development of an iPad-based application for the form is planned to address the cumbersome handling of the paper.
Abstract:
In the demanding environment of healthcare reform, reduction of unwanted physician practice variation is promoted, often through evidence-based guidelines. Guidelines represent innovations that direct change in physician practice; however, compliance has been disappointing. Numerous studies have analyzed guideline development and dissemination, while few have evaluated the consequences of guideline adoption. The primary purpose of this study was to explore and analyze the relationship between physician adoption of the glycated hemoglobin test guideline for management of adult patients with diabetes, and the cost of medical care. The study also examined six personal and organizational characteristics of physicians and their association with innovativeness, or adoption of the guideline. Cost was represented by approved charges from a managed care claims database. Total cost, and diabetes and related complications cost, were first compared for all patients of adopter physicians with those of non-adopter physicians. Then, data were analyzed controlling for disease severity based on insulin dependency, and for high-cost cases. There was no statistically significant difference in any of the eight cost categories analyzed. This study covered a twelve-month period and did not reflect the cost associated with future complications known to result from inadequate management of glycemia. Guideline compliance did not increase annual cost, which, combined with the future benefit of glycemic control, lends support to the cost-effectiveness of the guideline in the long term. Physician adoption of the guideline was recommended to reduce the future personal and economic burden of this chronic disease. Only half of the physicians studied had adopted the glycated hemoglobin test guideline for at least 75% of their diabetic patients. No statistically significant relationship was found between any physician characteristic and guideline adoption.
Instead, it was likely that the innovation-decision process and guideline dissemination methods were most influential. A multidisciplinary, multi-faceted approach, including interventions for each stage of the innovation-decision process, was proposed to diffuse practice guidelines more effectively. Further, it was recommended that Organized Delivery Systems expand existing administrative databases to include clinical information, decision support systems, and reminder mechanisms, to promote and support physician compliance with this and other evidence-based guidelines.
Abstract:
Most studies of differential gene expression have been conducted between two given conditions. The two-condition experiment (TCE) approach is simple in that all genes detected display a common differential expression pattern responsive to a common two-condition difference. Consequently, genes that are differentially expressed under conditions other than the given two are undetectable with the TCE approach. To address this problem, we propose a new approach called the multiple-condition experiment (MCE) without replication and develop corresponding statistical methods, including inference of pairs of conditions for genes, new t-statistics, and a generalized multiple-testing method applicable to any multiple-testing procedure via a control parameter C. We applied these statistical methods to analyze our real MCE data from breast cancer cell lines and found that 85 percent of gene-expression variation was caused by genotypic effects and genotype-ANAX1 overexpression interactions, which agrees well with our expected results. We also applied our methods to the adenoma dataset of Notterman et al. and identified 93 differentially expressed genes that could not be found with TCE. The MCE approach is a conceptual breakthrough in many respects: (a) many conditions of interest can be studied simultaneously; (b) studying the association between differential expression of genes and conditions becomes easy; (c) it can provide more precise information for molecular classification and diagnosis of tumors; (d) it can save a lot of experimental resources and time for investigators.
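The generalized procedure with control parameter C is not specified in the abstract; as background, here is a minimal sketch of the standard Benjamini-Hochberg step-up procedure, the kind of multiple-testing procedure such generalizations build on. The p-values are made up:

```python
# Benjamini-Hochberg step-up procedure: sort p-values, find the largest rank k
# with p_(k) <= k * alpha / m, and reject the k hypotheses with smallest p.

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the indices of hypotheses rejected at FDR level alpha."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    m = len(pvalues)
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank * alpha / m:
            k_max = rank  # step-up: keep the largest passing rank
    return sorted(order[:k_max])

# Hypothetical p-values from six gene-level tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
rejected = benjamini_hochberg(pvals, alpha=0.05)  # -> [0, 1]
```

Note the step-up character: a hypothesis can be rejected even if its own p-value exceeds its rank threshold, as long as some larger rank passes.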
Abstract:
This paper defines and compares several models for describing excess influenza and pneumonia mortality in Houston. First, the methodology used by the Center for Disease Control is examined and several variations of this methodology are studied. All of the models examined emphasize the difficulty of omitting epidemic weeks. In an attempt to find a better method of describing expected and epidemic mortality, time series methods are examined. Grouping the data in four-week periods, truncating the series to adjust for epidemic periods, and seasonally adjusting the series y_t [transformation omitted in the source abstract; see DAI] is the best method examined. The resulting series w_t is stationary, and a moving average model MA(1) gives a good fit for forecasting influenza and pneumonia mortality in Houston. Influenza morbidity, other causes of death, sex, race, age, climate variables, environmental factors, and school absenteeism are all examined in terms of their relationship to influenza and pneumonia mortality. Both influenza morbidity and ischemic heart disease mortality show a very strong relationship that remains when seasonal trends are removed from the data. However, when the three series are modelled jointly, it is clear that the simple MA(1) time series model of truncated, seasonally adjusted four-week data gives a better forecast.
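The modelling steps can be sketched on synthetic data. Everything below is illustrative: the paper's actual seasonal-adjustment transformation is omitted from the abstract, so simple per-period mean removal stands in for it, and the MA(1) coefficient is recovered by method of moments from the lag-1 autocorrelation rather than by whatever estimator the paper used:

```python
import math
import random

# Synthetic four-week mortality series: level + seasonal cycle + MA(1) noise
random.seed(0)
n_years, periods = 12, 13                     # 13 four-week periods per year
eps = [random.gauss(0, 1) for _ in range(n_years * periods + 1)]
seasonal = [5 * math.cos(2 * math.pi * p / periods) for p in range(periods)]
y = [100 + seasonal[t % periods] + eps[t + 1] + 0.6 * eps[t]  # true theta = 0.6
     for t in range(n_years * periods)]

# Seasonal adjustment: subtract each four-week period's mean across years
season_mean = [sum(y[t] for t in range(s, len(y), periods)) / n_years
               for s in range(periods)]
w = [y[t] - season_mean[t % periods] for t in range(len(y))]

# Lag-1 autocorrelation of the adjusted series
mean_w = sum(w) / len(w)
c0 = sum((v - mean_w) ** 2 for v in w)
c1 = sum((w[t] - mean_w) * (w[t + 1] - mean_w) for t in range(len(w) - 1))
rho1 = max(-0.49, min(0.49, c1 / c0))  # clamp to the invertible range |rho1| < 0.5

# Method of moments: solve rho1 = theta / (1 + theta^2) for the invertible root
theta = (1 - math.sqrt(1 - 4 * rho1 ** 2)) / (2 * rho1)
```

For an MA(1) process the theoretical lag-1 autocorrelation is theta / (1 + theta^2), so with theta = 0.6 the estimate should land near rho1 = 0.44 and theta near 0.6, up to sampling noise.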