963 results for Six sigma (Quality control standard)
Multicentre evaluation of a new point-of-care test for the determination of NT-proBNP in whole blood
Abstract:
BACKGROUND: The Roche CARDIAC proBNP point-of-care (POC) test is the first test intended for the quantitative determination of N-terminal pro-brain natriuretic peptide (NT-proBNP) in whole blood as an aid in the diagnosis of suspected congestive heart failure, in the monitoring of patients with compensated left-ventricular dysfunction and in the risk stratification of patients with acute coronary syndromes. METHODS: A multicentre evaluation was carried out to assess the analytical performance of the POC NT-proBNP test at seven different sites. RESULTS: The majority of coefficients of variation (CVs) for within-series imprecision using native blood samples were below 10%, both for 52 samples measured ten times and for 674 samples measured in duplicate. Using quality control material, the majority of CV values for day-to-day imprecision were below 14% for the low control level and below 13% for the high control level. In method comparisons of four lots of the POC NT-proBNP test with the laboratory reference method (Elecsys proBNP), the slope ranged from 0.93 to 1.10 and the intercept ranged from 1.8 to 6.9. The bias between venous and arterial blood with the POC NT-proBNP method was ≤5%. All four lots of the POC NT-proBNP test investigated showed excellent agreement, with mean differences of between -5% and +4%. No significant interference was observed with lipaemic blood (triglyceride concentrations up to 6.3 mmol/L), icteric blood (bilirubin concentrations up to 582 µmol/L), haemolytic blood (haemoglobin concentrations up to 62 mg/L), biotin (up to 10 mg/L), rheumatoid factor (up to 42 IU/mL), or with 50 out of 52 standard or cardiological drugs at therapeutic concentrations. With bisoprolol and BNP, a somewhat higher bias was found in the low NT-proBNP concentration range (<175 ng/L). Haematocrit values between 28% and 58% had no influence on the test result. Interference may be caused by human anti-mouse antibodies (HAMA) types 1 and 2.
No significant influence on POC NT-proBNP results was found using sample volumes of 140-165 µL. High NT-proBNP concentrations above the measuring range of the POC NT-proBNP test did not lead to falsely low results due to a potential high-dose hook effect. CONCLUSIONS: The POC NT-proBNP test showed good analytical performance and excellent agreement with the laboratory method. The POC NT-proBNP assay is therefore suitable for the POC setting.
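The within-series imprecision figure quoted in this abstract (CV% from duplicate measurements) follows from a short, standard calculation. A minimal sketch in Python; the duplicate NT-proBNP readings below are illustrative, not data from the study:

```python
import math

def duplicate_cv_percent(pairs):
    """Within-series imprecision (CV%) from duplicate measurements.

    Uses the classic duplicate formula SD = sqrt(sum(d_i^2) / (2n)),
    expressed relative to the grand mean of all measurements.
    """
    n = len(pairs)
    sd = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))
    grand_mean = sum(a + b for a, b in pairs) / (2 * n)
    return 100.0 * sd / grand_mean

# hypothetical duplicate NT-proBNP readings (ng/L)
pairs = [(102, 98), (250, 240), (510, 530), (1250, 1190)]
cv = duplicate_cv_percent(pairs)  # a few percent for these values
```

For these hypothetical pairs the CV comes out below the 10% acceptance threshold mentioned in the abstract.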
Abstract:
BACKGROUND: Since 1999, data from pulmonary hypertension (PH) patients from all PH centres in Switzerland have been prospectively collected. We analysed the epidemiological aspects of these data. METHODS: PH was defined as a mean pulmonary artery pressure of >25 mm Hg at rest or >30 mm Hg during exercise. Patients with pulmonary arterial hypertension (PAH), PH associated with lung diseases, PH due to chronic thrombotic and/or embolic disease (CTEPH), or PH due to miscellaneous disorders were registered. Data from adult patients included between January 1999 and December 2004 were analysed. RESULTS: 250 patients were registered (age 58 +/- 16 years; 104 (41%) males). 152 patients (61%) had PAH, 73 (29%) had CTEPH and 18 (7%) had PH associated with lung disease. Patients <50 years (32%) were more likely to have PAH than patients >50 years (76% vs. 53%, p <0.005). Twenty-four patients (10%) were lost to follow-up, 58 patients (26%) died and 150 (66%) survived without transplantation or thromboendarterectomy. Survivors differed from patients who died in baseline six-minute walking distance (400 m [300-459] vs. 273 m [174-415]), functional impairment (NYHA class III/IV 86% vs. 98%), mixed venous saturation (63% [57-68] vs. 56% [50-61]) and right atrial pressure (7 mm Hg [4-11] vs. 11 mm Hg [4-18]). DISCUSSION: PH is a disease affecting adults of all ages. The management of these patients in specialised centres guarantees a high quality of care. Analysis of the registry data could serve as an instrument for quality control and might help identify weak points in the assessment and treatment of these patients.
Abstract:
BACKGROUND: The prognosis of pulmonary hypertension (PH), especially idiopathic pulmonary arterial hypertension (IPAH), has improved in recent years. The Swiss Registry for PH represents the collaboration of the various centres in Switzerland dealing with PH and serves as an important tool in quality control. The objective of the study was to describe the treatment and clinical course of this orphan disease in Switzerland. METHODS: We analysed data from 222 of 252 adult patients who were included in the registry between January 1999 and December 2004 and suffered from either PAH, PH associated with lung diseases or chronic thromboembolic PH (CTEPH), with respect to the following data: NYHA class, six-minute walking distance (6-MWD), haemodynamics, treatments and survival. RESULTS: Compared with calculated expected figures, the one-, two- and three-year mean survivals in IPAH increased from 67% to 89%, from 55% to 78% and from 46% to 73%, respectively. Most patients (90%) were on oral or inhaled therapy and only 10 patients required lung transplantation. Even though pulmonary endarterectomy (PEA) was performed in only 7 patients during this time, the survival in our CTEPH cohort improved compared with literature data and seems to approach outcomes usually seen after PEA. The 6-MWD increased by a maximum of 52 m and 59 m in IPAH and CTEPH, respectively, but in the long term returned to or fell below baseline values, despite the increasing use of multiple specific drugs (overall in 51% of IPAH and 29% of CTEPH). CONCLUSION: Our national registry data indicate that the overall survival of IPAH and presumably CTEPH seems to have improved in Switzerland. Although the 6-MWD improved transiently, it decreased in the long term despite specific and increasingly combined drug treatment. Our findings underscore the progressive nature of these diseases and the need for further intense research in the field.
Abstract:
Non-uniformity of steps within a flight is a major risk factor for falls. Guidelines and requirements for uniformity of step risers and tread depths assume the measurement system provides precise dimensional values. The state-of-the-art measurement system is a relatively new method, known as the nosing-to-nosing method. It involves measuring the distance between the noses of adjacent steps and the angle formed with the horizontal. From these measurements, the effective riser height and tread depth are calculated. This study was undertaken to evaluate the measurement system and determine how much of the total measurement variability comes from step variations versus repeatability and reproducibility (R&R) associated with the measurers. Using an experimental design that quality control professionals call a measurement system experiment, two measurers measured all steps in six randomly selected flights and repeated the process on a subsequent day. After marking each step in a flight in three lateral places (left, center, and right), the measurers took their measurements. This process yielded 774 values of riser height and 672 values of tread depth. Results of applying the Gage R&R ANOVA procedure in Minitab software indicated that the R&R contribution to riser height variability was 1.42%, and to tread depth 0.50%. All remaining variability was attributed to actual step-to-step differences. These results may be compared with guidelines used in the automobile industry for measurement systems, which consider R&R of less than 1% an acceptable measurement system, and R&R between 1% and 9% acceptable depending on the application, the cost of the measuring device, the cost of repair, or other factors.
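The Gage R&R (ANOVA) partition that the study ran in Minitab can be sketched from first principles. The Python sketch below computes ANOVA variance components for a crossed parts × operators design and reports the %contribution of R&R (repeatability + reproducibility) to total variance; the data are simulated under stated assumptions, not the study's stair measurements:

```python
import random
import statistics

def gage_rr(data, parts, opers, reps):
    """%contribution of R&R for a crossed Gage R&R study via ANOVA.

    data[(part, oper, rep)] -> measurement. Variance components follow
    the standard crossed-design mean-square equations (negative
    estimates are clamped to zero, as Minitab does).
    """
    p, o, r = len(parts), len(opers), len(reps)
    grand = statistics.mean(data.values())
    mean_p = {i: statistics.mean(data[i, j, k] for j in opers for k in reps)
              for i in parts}
    mean_o = {j: statistics.mean(data[i, j, k] for i in parts for k in reps)
              for j in opers}
    mean_po = {(i, j): statistics.mean(data[i, j, k] for k in reps)
               for i in parts for j in opers}

    # sums of squares for parts, operators, interaction, and repeatability
    ss_p = o * r * sum((mean_p[i] - grand) ** 2 for i in parts)
    ss_o = p * r * sum((mean_o[j] - grand) ** 2 for j in opers)
    ss_po = r * sum((mean_po[i, j] - mean_p[i] - mean_o[j] + grand) ** 2
                    for i in parts for j in opers)
    ss_e = sum((data[i, j, k] - mean_po[i, j]) ** 2
               for i in parts for j in opers for k in reps)

    ms_p = ss_p / (p - 1)
    ms_o = ss_o / (o - 1)
    ms_po = ss_po / ((p - 1) * (o - 1))
    ms_e = ss_e / (p * o * (r - 1))

    var_repeat = ms_e
    var_inter = max((ms_po - ms_e) / r, 0.0)
    var_oper = max((ms_o - ms_po) / (p * r), 0.0)
    var_part = max((ms_p - ms_po) / (o * r), 0.0)
    var_rr = var_repeat + var_inter + var_oper
    return 100.0 * var_rr / (var_rr + var_part)

# hypothetical riser-height study: 6 steps x 2 measurers x 2 days
random.seed(1)
parts, opers, reps = range(6), range(2), range(2)
true_step = [180 + random.gauss(0, 5) for _ in parts]   # mm, real step spread
data = {(i, j, k): true_step[i] + random.gauss(0, 0.3)  # small measurement noise
        for i in parts for j in opers for k in reps}
pct_rr = gage_rr(data, parts, opers, reps)
```

With step-to-step variation dominating the measurement noise, the %R&R lands in the low single digits, mirroring the 1.42% and 0.50% contributions reported above.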
Abstract:
Quality of education should remain stable or permanently increase, even as the number of students rises. Quality of education is often related to opportunities for active learning and individual facilitation. This paper addresses the question of how high-quality learning can be enabled within oversized courses, and presents an approach based on e-flashcards that enables active learning and individual facilitation in large-scale university courses.
Abstract:
BACKGROUND: Ethyl glucuronide (EtG) and ethyl sulfate (EtS) are non-oxidative minor metabolites of ethanol. They are detectable in various body fluids shortly after initial consumption of ethanol and have a longer detection time frame than the parent compound. They are regarded as highly sensitive and specific markers of recent alcohol uptake. This study evaluates the determination of EtG and EtS from dried blood spots (DBS), a simple and cost-effective sampling method that would shorten the time gap between offense and blood sampling and better reflect the actual impairment. METHODS: For method validation, EtG and EtS standard and quality control samples were prepared in fresh human heparinized blood and spotted on DBS cards, then extracted and measured by an LC-ESI-MS/MS method. Additionally, 76 heparinized blood samples from traffic offense cases were analyzed for EtG and EtS as whole blood and as DBS specimens. The results from these measurements were then compared by calculating the respective mean values, by a matched-paired t test, by a Wilcoxon test, and by Bland-Altman and Mountain plots. RESULTS AND DISCUSSION: Calibrations for EtG and EtS in DBS were linear over the studied calibration range. The precision and accuracy of the method met the requirements of the validation guidelines that were employed in the study. The stability of the biomarkers stored as DBS was demonstrated under different storage conditions. The t test showed no significant difference between whole blood and DBS in the determination of EtG and EtS. In addition, the Bland-Altman analysis and Mountain plot confirmed that the concentration differences measured in DBS specimens were not relevant.
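The Bland-Altman agreement analysis used to compare the two sampling formats reduces to the mean difference (bias) and its 95% limits of agreement. A minimal Python sketch with hypothetical paired EtG values (illustrative only, not the study's data):

```python
import statistics

def bland_altman(x, y):
    """Bland-Altman agreement statistics for two paired methods
    (e.g. whole blood vs dried blood spot).

    Returns (mean bias, lower limit, upper limit), where the limits
    of agreement are bias +/- 1.96 * SD of the paired differences.
    """
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired EtG concentrations (mg/L): whole blood vs DBS
wb  = [0.52, 1.10, 0.35, 2.40, 0.88, 1.75]
dbs = [0.50, 1.15, 0.33, 2.35, 0.90, 1.70]
bias, lo, hi = bland_altman(wb, dbs)
```

A bias near zero with limits of agreement straddling zero is what "concentration differences were not relevant" corresponds to in this framework.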
Abstract:
Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifact. For these reasons, the assessment of detector uniformity is one of the most common activities associated with a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the determination of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1). This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to the reviewer so that he or she may be better equipped to identify performance degradation prior to its manifestation in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions-of-non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time series floods with induced, progressively worsening defects present within the field-of-view. The sensitivity of conventional, global figures-of-merit for detecting changes in uniformity was evaluated and compared to these new image-space techniques. The image-space algorithms provide a reproducible means of detecting regions-of-non-uniformity prior to any single flood image’s having a NEMA uniformity value in excess of 5%. 
The sensitivity of these image-space algorithms was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures-of-merit demonstrated their sensitivity to shifts in detector uniformity. The image-space algorithms are computationally efficient. Therefore, the image-space algorithms should be used concomitantly with the trending of the global figures-of-merit in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
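The conventional global figure-of-merit referenced above, NEMA integral uniformity, reduces to a one-line formula over a flood-field region. A simplified Python sketch (the full NEMA procedure additionally smooths the flood image and restricts analysis to the useful field-of-view, which is omitted here):

```python
def integral_uniformity(pixels):
    """Simplified NEMA-style integral uniformity (%) of a flood region:
    100 * (max - min) / (max + min) over the pixel counts."""
    hi, lo = max(pixels), min(pixels)
    return 100.0 * (hi - lo) / (hi + lo)

# hypothetical flood counts per pixel in a small region
flood = [1000, 1020, 985, 1005, 990, 1045]
iu = integral_uniformity(flood)  # well under the 5% action threshold
```

The thesis's point is that localized defects can remain below this global 5% threshold while already being detectable by the image-space, multi-resolution methods.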
Abstract:
Heat shock protein 70 (Hsp70) plays a central role in protein homeostasis and quality control in conjunction with other chaperone machines, including Hsp90. The Hsp110 chaperone Sse1 promotes Hsp90 activity in yeast, and functions as a nucleotide exchange factor (NEF) for cytosolic Hsp70, but the precise roles Sse1 plays in client maturation through the Hsp70-Hsp90 chaperone system are not fully understood. We find that upon pharmacological inhibition of Hsp90, a model protein kinase, Ste11DeltaN, is rapidly degraded, whereas heterologously expressed glucocorticoid receptor (GR) remains stable. Hsp70 binding and nucleotide exchange by Sse1 was required for GR maturation and signaling through endogenous Ste11, as well as to promote Ste11DeltaN degradation. Overexpression of another functional NEF partially compensated for loss of Sse1, whereas the paralog Sse2 fully restored GR maturation and Ste11DeltaN degradation. Sse1 was required for ubiquitinylation of Ste11DeltaN upon Hsp90 inhibition, providing a mechanistic explanation for its role in substrate degradation. Sse1/2 copurified with Hsp70 and other proteins comprising the "early-stage" Hsp90 complex, and was absent from "late-stage" Hsp90 complexes characterized by the presence of Sba1/p23. These findings support a model in which Hsp110 chaperones contribute significantly to the decision made by Hsp70 to fold or degrade a client protein.
Abstract:
The performance of high-resolution CZE for the determination of carbohydrate-deficient transferrin (CDT) in human serum, based on internal and external quality data gathered over a 10-year period, is reported. The assay comprises mixing of serum with an Fe(III) ion-containing solution prior to analysis of the iron-saturated mixture in a dynamically double-coated capillary using a commercial buffer at alkaline pH. CDT values obtained with a human serum of a healthy individual and commercial quality control sera are shown to vary by less than 10%. Values of a control from a specific lot were found to slowly decrease as a function of time (less than 10% per year). Furthermore, for unknown reasons, gradual changes in the monitored pattern around pentasialo-transferrin were detected, which limit the use of commercial control sera of the same lot to less than 2 years. Analysis of external quality control sera revealed correct classification of the samples over the entire 10-year period. The data obtained compare well with those of HPLC and CZE assays of other laboratories. The data gathered over the 10-year period demonstrate the robustness of the high-resolution CZE assay. This is the first account of a CZE-based CDT assay with complete internal and external quality assessment over an extended time period.
Abstract:
Introduction To meet the quality standards for high-stakes OSCEs, it is necessary to ensure high-quality standardized performance of the SPs involved.[1] One of the ways this can be assured is through the assessment of the quality of SPs' performance in training and during the assessment. There is some literature concerning validated instruments that have been used to assess SP performance in formative contexts, but very little related to high-stakes contexts.[2],[3],[4] Content and structure During this workshop different approaches to quality control for SPs' performance, developed in medicine, pharmacy and nursing OSCEs, will be introduced. Participants will have the opportunity to use these approaches in simulated interactions. Advantages and disadvantages of these approaches will be discussed. Anticipated outcomes By the end of this session, participants will be able to discuss the rationale for quality control of SPs' performance in high-stakes OSCEs, outline key factors in creating strategies for quality control, identify various strategies for assuring quality control, and reflect on applications to their own practice. Who should attend The workshop is designed for those interested in quality assurance of SP performance in high-stakes OSCEs. Level All levels are welcome. References Adamo G. 2003. Simulated and standardized patients in OSCEs: achievements and challenges: 1992-2003. Med Teach. 25(3), 262-270. Wind LA, Van Dalen J, Muijtjens AM, Rethans JJ. Assessing simulated patients in an educational setting: the MaSP (Maastricht Assessment of Simulated Patients). Med Educ 2004, 38(1):39-44. Bouter S, van Weel-Baumgarten E, Bolhuis S. Construction and validation of the Nijmegen Evaluation of the Simulated Patient (NESP): Assessing Simulated Patients' ability to role-play and provide feedback to students. Acad Med: Journal of the Association of American Medical Colleges 2012.
May W, Fisher D, Souder D: Development of an instrument to measure the quality of standardized/simulated patient verbal feedback. Med Educ 2012, 2(1).
Abstract:
Introduction In our program, simulated patients (SPs) give feedback to medical students in the course of communication skills training. To ensure effective training, quality control of the SPs' feedback should be implemented. At other institutions, medical students evaluate the SPs' feedback for quality control (Bouter et al., 2012). Thinking about implementing quality control for SPs' feedback in our program, we wondered whether evaluation by students would result in the same scores as evaluation by experts. Methods Consultations simulated by 4th-year medical students with SPs were videotaped, including the SP's feedback to the student (n=85). At the end of the training sessions, students rated the SPs' performance using a rating instrument called the Bernese Assessment for Role-play and Feedback (BARF), containing 11 items concerning feedback quality. Additionally, the videos were evaluated by 3 trained experts using the BARF. Results The experts showed a high interrater agreement when rating identical feedback (ICCunjust=0.953). Comparing the ratings of students and experts, high agreement was found with regard to the following items: 1. The SP invited the student to reflect on the consultation first, Amin (= minimal agreement) = 97%. 2. The SP asked the student what he/she liked about the consultation, Amin = 88%. 3. The SP started with positive feedback, Amin = 91%. 4. The SP was comparing the student with other students, Amin = 92%. In contrast, the following items showed differences between the ratings of experts and students: 1. The SP used precise situations for feedback, Amax (= maximal agreement) = 55%. Students rated 67 of the SPs' feedbacks as perfect with regard to this item (highest rating on a 5-point Likert scale), while only 29 feedbacks were rated this way by the experts. 2.
The SP gave precise suggestions for improvement, Amax = 75%. 62 of the SPs' feedbacks obtained the highest rating from students, while only 44 achieved the highest rating in the view of the experts. 3. The SP speaks about his/her role in the third person, Amax = 60%. Students rated 77 feedbacks with the highest score, while experts judged only 43 feedbacks this way. Conclusion Although evaluation by the students was in agreement with that of experts concerning some items, students rated the SPs' feedback with the optimal score more often than experts did. Moreover, it seems difficult for students to notice when SPs talk about the role in the first instead of the third person. Since precision and talking about the role in the third person are important quality criteria of feedback, this result should be taken into account when thinking about students' evaluation of SPs' feedback for quality control. Bouter, S., E. van Weel-Baumgarten, and S. Bolhuis. 2012. Construction and Validation of the Nijmegen Evaluation of the Simulated Patient (NESP): Assessing Simulated Patients' Ability to Role-Play and Provide Feedback to Students. Academic Medicine: Journal of the Association of American Medical Colleges.
Abstract:
Radiocarbon dating by means of accelerator mass spectrometry (AMS) is a well-established method for samples containing carbon in the milligram range. However, the measurement of small samples containing less than 50 µg carbon often fails. It is difficult to graphitise these samples and the preparation is prone to contamination. To avoid graphitisation, a solution can be the direct measurement of carbon dioxide. The MICADAS, the smallest accelerator for radiocarbon dating in Zurich, is equipped with a hybrid Cs sputter ion source. It allows the measurement of both graphite targets and gaseous CO2 samples without any rebuilding. This work presents experience with small samples containing 1-40 µg carbon. 500 unknown samples from different fields of environmental research have been measured so far. Most of the samples were measured with the gas ion source. These data are compared with earlier measurements of small graphite samples. The performance of the two different techniques is discussed and the main contributions to the blank are determined. An analysis of blank and standard data measured over several years allowed a quantification of the contamination, which was found to be of the order of 55 ng and 750 ng carbon (50 pMC) for the gaseous and the graphite samples, respectively. For quality control, a number of certified standards were measured using the gas ion source to demonstrate the reliability of the data.
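The contamination figure quoted here rests on a constant-contamination model: the measured material is treated as the sample plus a fixed mass of contaminant carbon at a fixed activity. A hypothetical mass-balance correction in Python, assuming the 55 ng blank at 50 pMC (F14C = 0.5) estimated for the gas ion source; all numbers are illustrative:

```python
def blank_corrected_f14c(f_meas, m_sample_ug, f_blank=0.5, m_blank_ug=0.055):
    """Constant-contamination blank correction for small AMS samples.

    Mass balance: f_meas * (m_sample + m_blank) =
                  f_sample * m_sample + f_blank * m_blank,
    solved for the true sample fraction modern f_sample.
    Masses in micrograms of carbon; f values are F14C (1.0 = modern).
    """
    m_total = m_sample_ug + m_blank_ug
    return (f_meas * m_total - f_blank * m_blank_ug) / m_sample_ug

# a hypothetical 10 ug sample measured at F14C = 1.000
f_true = blank_corrected_f14c(1.000, 10.0)
```

The correction is tiny for a 10 µg sample but grows rapidly as the sample mass approaches the blank mass, which is why measurements below a few µg become unreliable.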
Abstract:
The long-lived radionuclide 129I (T1/2 = 15.7 My) occurs in nature in very low concentrations. Since the middle of the 20th century, environmental levels of 129I have changed dramatically as a consequence of the civil and military use of nuclear fission. Its investigation in environmental materials is of interest for environmental surveillance, retrospective dosimetry, and for use as a natural and man-made tracer of environmental processes. We compare two analytical methods presently capable of determining 129I in environmental materials, namely radiochemical neutron activation analysis (RNAA) and accelerator mass spectrometry (AMS). Emphasis is laid on quality control and detection capabilities for the analysis of 129I in environmental materials. Some applications are discussed.
Abstract:
BACKGROUND Long-term hormone therapy has been the standard of care for advanced prostate cancer since the 1940s. STAMPEDE is a randomised controlled trial using a multiarm, multistage platform design. It recruits men with high-risk, locally advanced, metastatic or recurrent prostate cancer who are starting first-line long-term hormone therapy. We report primary survival results for three research comparisons testing the addition of zoledronic acid, docetaxel, or their combination to standard of care versus standard of care alone. METHODS Standard of care was hormone therapy for at least 2 years; radiotherapy was encouraged for men with N0M0 disease to November, 2011, then mandated; radiotherapy was optional for men with node-positive non-metastatic (N+M0) disease. Stratified randomisation (via minimisation) allocated men 2:1:1:1 to standard of care only (SOC-only; control), standard of care plus zoledronic acid (SOC + ZA), standard of care plus docetaxel (SOC + Doc), or standard of care with both zoledronic acid and docetaxel (SOC + ZA + Doc). Zoledronic acid (4 mg) was given for six 3-weekly cycles, then 4-weekly until 2 years, and docetaxel (75 mg/m(2)) for six 3-weekly cycles with prednisolone 10 mg daily. There was no blinding to treatment allocation. The primary outcome measure was overall survival. Pairwise comparisons of research versus control had 90% power at 2·5% one-sided α for hazard ratio (HR) 0·75, requiring roughly 400 control arm deaths. Statistical analyses were undertaken with standard log-rank-type methods for time-to-event data, with hazard ratios (HRs) and 95% CIs derived from adjusted Cox models. This trial is registered at ClinicalTrials.gov (NCT00268476) and ControlledTrials.com (ISRCTN78818544). FINDINGS 2962 men were randomly assigned to four groups between Oct 5, 2005, and March 31, 2013. Median age was 65 years (IQR 60-71). 1817 (61%) men had M+ disease, 448 (15%) had N+/X M0, and 697 (24%) had N0M0. 
165 (6%) men were previously treated with local therapy, and median prostate-specific antigen was 65 ng/mL (IQR 23-184). Median follow-up was 43 months (IQR 30-60). There were 415 deaths in the control group (347 [84%] prostate cancer). Median overall survival was 71 months (IQR 32 to not reached) for SOC-only, not reached (32 to not reached) for SOC + ZA (HR 0·94, 95% CI 0·79-1·11; p=0·450), 81 months (41 to not reached) for SOC + Doc (0·78, 0·66-0·93; p=0·006), and 76 months (39 to not reached) for SOC + ZA + Doc (0·82, 0·69-0·97; p=0·022). There was no evidence of heterogeneity in treatment effect (for any of the treatments) across prespecified subsets. Grade 3-5 adverse events were reported for 399 (32%) patients receiving SOC, 197 (32%) receiving SOC + ZA, 288 (52%) receiving SOC + Doc, and 269 (52%) receiving SOC + ZA + Doc. INTERPRETATION Zoledronic acid showed no evidence of survival improvement and should not be part of standard of care for this population. Docetaxel chemotherapy, given at the time of long-term hormone therapy initiation, showed evidence of improved survival accompanied by an increase in adverse events. Docetaxel treatment should become part of standard of care for adequately fit men commencing long-term hormone therapy. FUNDING Cancer Research UK, Medical Research Council, Novartis, Sanofi-Aventis, Pfizer, Janssen, Astellas, NIHR Clinical Research Network, Swiss Group for Clinical Cancer Research.
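As a rough sanity check on a reported hazard ratio, a crude (unadjusted) HR can be computed from event counts and person-time under a constant-hazard assumption. The trial's own estimates came from adjusted Cox models with log-rank-type methods, so this is only a back-of-envelope approximation, and the counts below are illustrative, not the trial's data:

```python
def crude_hazard_ratio(events_trt, persontime_trt,
                       events_ctrl, persontime_ctrl):
    """Crude hazard ratio = ratio of event rates per unit person-time.

    Assumes constant hazards in both arms; not a substitute for the
    adjusted Cox model used in the trial.
    """
    rate_trt = events_trt / persontime_trt
    rate_ctrl = events_ctrl / persontime_ctrl
    return rate_trt / rate_ctrl

# hypothetical counts loosely echoing a docetaxel-type comparison
hr = crude_hazard_ratio(events_trt=165, persontime_trt=2600,
                        events_ctrl=415, persontime_ctrl=5100)
```

A crude HR below 1 indicates fewer events per unit follow-up in the treatment arm; the trial's adjusted HR of 0·78 for SOC + Doc is of this kind, with its 95% CI and p value carrying the inferential weight.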
Abstract:
Five test runs were performed to assess possible bias when performing the loss on ignition (LOI) method to estimate organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO3 at 950 °C, whereas LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40-70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposure of up to 64 h, which was likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test-run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples. At 950 °C this pattern was still apparent but the differences became negligible. Again, LOI was dependent on sample size. An analytical LOI quality control experiment including ten different laboratories was carried out using each laboratory's own LOI procedure as well as a standardised LOI procedure to analyse three different sediments. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when using the standard method. This was similar for 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparisons of the results of the individual and the standardised method suggest that there is a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. 
Factors such as sample size, exposure time, position of samples in the furnace and the measuring laboratory affected the LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We therefore recommend that analysts be consistent in the LOI method used, in terms of ignition temperatures, exposure times and sample size, and that they include information on these three parameters when referring to the method.
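The LOI quantities discussed in this abstract follow directly from the successive weighings. A minimal Python sketch of the standard two-step calculation, where LOI at 550 °C estimates organic matter and the additional loss at 950 °C reflects carbonate-bound CO2 (sample masses are illustrative):

```python
def loi_percent(dry_mass, mass_after_550, mass_after_950=None):
    """Loss on ignition as percent of dry mass.

    LOI550 = weight loss during the 550 C burn (organic matter proxy);
    LOI950 = additional loss during the 950 C burn (carbonate CO2 proxy).
    Masses in grams. Returns LOI550 alone if no 950 C weighing is given.
    """
    loi550 = 100.0 * (dry_mass - mass_after_550) / dry_mass
    if mass_after_950 is None:
        return loi550
    loi950 = 100.0 * (mass_after_550 - mass_after_950) / dry_mass
    return loi550, loi950

# hypothetical lake-sediment sample
loi550, loi950 = loi_percent(dry_mass=2.000,
                             mass_after_550=1.700,
                             mass_after_950=1.540)
```

The paper's point is that these simple percentages are only comparable across laboratories when temperature, exposure time and sample size are held constant and reported.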