973 results for Quality control measurement
Abstract:
The measurement of 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxo-dG) is an increasingly popular marker of in vivo oxidative damage to DNA. A random-sequence 21-mer oligonucleotide, 5'-TCA GXC GTA CGT GAT CTC AGT-3', in which X was 8-oxo-guanine (8-oxo-G), was purified, and accurate determination of the oxidised base was confirmed by a 32P end-labelling strategy. The lyophilised material was analysed for its absolute content of 8-oxo-dG by several major laboratories in Europe and one in Japan. Most laboratories using HPLC-ECD underestimated the level of the lesion, while those using GC-MS-SIM overestimated it; nevertheless, HPLC-ECD measured the target value with the greatest accuracy. The results also suggest that none of the procedures can accurately quantify 8-oxo-(d)G in DNA at levels of 1 in 10^6.
Abstract:
Recommendation for Oxygen Measurements from Argo Floats: Implementation of In-Air-Measurement Routine to Assure Highest Long-term Accuracy. As Argo enters its second decade and chemical/biological sensor technology improves steadily, the marine biogeochemistry community is starting to embrace the successful Argo float program. An augmentation of the global float observatory, however, has to satisfy rather stringent constraints on sensor characteristics as well as on data processing and quality control routines. Owing to the fairly advanced state of oxygen sensor technology and the high scientific value of oceanic oxygen measurements (Gruber et al., 2010), an expansion of the Argo core mission to routine oxygen measurements is perhaps the most mature and promising candidate (Freeland et al., 2010). In this context, SCOR Working Group 142, "Quality Control Procedures for Oxygen and Other Biogeochemical Sensors on Floats and Gliders" (www.scor-int.org/SCOR_WGs_WG142.htm), set out in 2014 to assess the current status of biogeochemical sensor technology with particular emphasis on float-readiness, to develop pre- and post-deployment quality control metrics and procedures for oxygen sensors, and to disseminate these procedures widely to ensure rapid adoption in the community.
Abstract:
Introduction: Paper and thin-layer chromatography methods are frequently used in classical nuclear medicine to determine the radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the preparation. Several methods exist for the RCP measurement, based on equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners, and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen as the basis for this study. For the determination of the RCP of 99mTc-tetrofosmin, we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of the RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After development of the solvent front, the strips were allowed to dry and were then imaged on the gamma camera (256 x 256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed into an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-complex = [(99mTc-complex) / (total amount of 99mTc-labeled species)] x 100. Statistical analysis was done using a test of hypotheses for the difference between means in independent samples. Results: The gamma-camera-based method showed higher operator dependency (especially concerning the drawing of the ROIs), and the measurements obtained with the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter gave concordant results and showed the highest precision. Conclusions: Methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and least operator-dependent.
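To make the quoted RCP formula concrete, here is a minimal Python sketch; the counter readings are hypothetical, and in the single-strip tetrofosmin system the lower piece is taken to approximate the complex fraction:

```python
def radiochemical_purity(complex_counts: float, impurity_counts: float) -> float:
    """Percent RCP = counts from the 99mTc complex divided by the total
    counts of all 99mTc-labeled species, times 100 (the formula above)."""
    total = complex_counts + impurity_counts
    if total <= 0:
        raise ValueError("total counts must be positive")
    return 100.0 * complex_counts / total

# Hypothetical 1-minute well-counter readings for the two pieces of a
# 99mTc-tetrofosmin strip cut at Rf 0.8 (values are illustrative only):
lower_piece_cpm = 94_500  # 99mTc-tetrofosmin (Rf 0.55); also collects 99mTc-RH (Rf 0.0)
upper_piece_cpm = 3_200   # free pertechnetate, 99mTcO4- (Rf 1.0)
print(f"% 99mTc-complex = {radiochemical_purity(lower_piece_cpm, upper_piece_cpm):.1f}")
```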
Abstract:
Asphalt pavements suffer various failures due to insufficient quality within their design lives. The American Association of State Highway and Transportation Officials (AASHTO) Mechanistic-Empirical Pavement Design Guide (MEPDG) has been proposed to improve pavement quality through quantitative performance prediction. Evaluation of the actual performance (quality) of pavements requires in situ nondestructive testing (NDT) techniques that can accurately measure the most critical, objective, and sensitive properties of pavement systems. The purpose of this study is to assess existing as well as promising new NDT technologies for quality control/quality assurance (QC/QA) of asphalt mixtures. Specifically, this study examined field measurements of density via the PaveTracker electromagnetic gage, shear-wave velocity via surface-wave testing methods, and dynamic stiffness via the Humboldt GeoGauge for five representative paving projects covering a range of mixes and traffic loads. The in situ tests were compared against laboratory measurements of core density and dynamic modulus. The in situ PaveTracker density had a low correlation with laboratory density and was not sensitive to variations in temperature or asphalt mix type. The in situ shear-wave velocity measured by surface-wave methods was most sensitive to variations in temperature and asphalt mix type. The in situ density and in situ shear-wave velocity were combined to calculate an in situ dynamic modulus, which is a performance-based quality measurement. The in situ GeoGauge stiffness measured on hot asphalt mixtures several hours after paving had a high correlation with the in situ dynamic modulus and the laboratory density, whereas the stiffness of asphalt mixtures cooled with dry ice or measured at ambient temperature one or more days after paving had a very low correlation with the other measurements. To transform the in situ moduli from surface-wave testing into quantitative quality measurements, a QC/QA procedure was developed to first correct the in situ moduli measured at different field temperatures to moduli at a common reference temperature, based on master curves from laboratory dynamic modulus tests. The corrected in situ moduli can then be compared against the design moduli for an assessment of actual pavement performance. A preliminary study of microelectromechanical systems (MEMS)-based sensors for QC/QA and health monitoring of asphalt pavements was also performed.
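As an illustration of the two calculations the abstract describes, the hedged sketch below derives a dynamic modulus from density and shear-wave velocity via the standard elastic relation E = 2*rho*Vs^2*(1 + nu), and shifts a field modulus to a reference temperature through an Arrhenius shift factor and a sigmoidal master curve. The Poisson ratio, activation energy, and curve coefficients are assumed placeholders, not values from the study:

```python
import math

R_GAS = 8.314  # J/(mol*K)

def dynamic_modulus_from_vs(density: float, vs: float, poisson: float = 0.35) -> float:
    """Young's modulus (Pa) from density (kg/m^3) and shear-wave velocity
    (m/s) via linear elasticity: G = rho*Vs^2, E = 2G(1 + nu).  The Poisson
    ratio is an assumed value, not one reported by the study."""
    return 2.0 * density * vs ** 2 * (1.0 + poisson)

def log_shift_factor(temp_c: float, ref_c: float, ea: float = 2.0e5) -> float:
    """log10 time-temperature shift factor, Arrhenius form; the activation
    energy ea (J/mol) is a typical asphalt value, assumed here."""
    t, tr = temp_c + 273.15, ref_c + 273.15
    return ea / (2.303 * R_GAS) * (1.0 / t - 1.0 / tr)

def log_master_curve(log_fr: float, delta=1.0, alpha=3.5, beta=-1.0, gamma=-0.5) -> float:
    """Sigmoidal master curve log10|E*| (MPa); coefficients are placeholders
    that would be fitted to laboratory dynamic modulus tests."""
    return delta + alpha / (1.0 + math.exp(beta + gamma * log_fr))

def correct_to_reference(e_field: float, freq: float, temp_c: float, ref_c: float) -> float:
    """Scale a field modulus measured at temp_c to the common reference
    temperature by the ratio the master curve predicts at the same frequency."""
    lf_field = math.log10(freq) + log_shift_factor(temp_c, ref_c)
    lf_ref = math.log10(freq)  # zero shift at the reference temperature
    return e_field * 10 ** (log_master_curve(lf_ref) - log_master_curve(lf_field))
```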
Abstract:
The application of computer-vision-based quality control has been slowly but steadily gaining importance, mainly due to its speed in achieving results and also to its non-destructive nature of testing. Besides, in food applications it does not contribute to contamination. However, computer vision in quality control requires appropriate software for image analysis. Even though computer-vision-based quality control has several advantages, its application has limitations as to the type of work to be done, particularly in the food industries. Selective applications, however, can be highly advantageous and very accurate. Computer-vision-based image analysis could be used in morphometric measurements of fish with the same accuracy as the existing conventional method. The method is non-destructive and non-contaminating, thus providing an advantage in seafood processing. The images can be stored in archives and retrieved at any time for biologists to carry out morphometric studies. Computer vision and subsequent image analysis could also be used in measurements of various food products to assess uniformity of size. One product, namely cutlet, and product ingredients, namely coating materials such as bread crumbs and rava, were selected for the study. Computer-vision-based image analysis was used to measure the length, width, and area of cutlets, as well as the width of coating materials such as bread crumbs. Computer imaging and subsequent image analysis can be used very effectively in quality evaluation of product ingredients in food processing: measurement of the width of coating materials could establish uniformity of particles or the lack of it. Image analysis was also applied to bacteriological work.
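A minimal sketch of the kind of contour-based morphometric measurement described above, assuming OpenCV is available; the Otsu thresholding, speck filter, and pixel-to-millimetre scale are illustrative choices, not the study's calibration:

```python
import cv2

def measure_objects(image_path: str, mm_per_pixel: float):
    """Return (length, width, area) in millimetre units for each object
    found by thresholding and external-contour analysis."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        if cv2.contourArea(c) < 100:  # ignore small specks (assumed cutoff)
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(c)  # rotated bounding box
        length = max(w, h) * mm_per_pixel
        width = min(w, h) * mm_per_pixel
        area = cv2.contourArea(c) * mm_per_pixel ** 2
        results.append((length, width, area))
    return results
```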
Abstract:
BACKGROUND Retinal optical coherence tomography (OCT) permits quantification of retinal layer atrophy relevant to the assessment of neurodegeneration in multiple sclerosis (MS). Measurement artefacts may limit the use of OCT in MS research. OBJECTIVE An expert task force convened to provide guidance on the use of validated quality control (QC) criteria for OCT in MS research and clinical trials. METHODS A prospective multi-centre (n = 13) study. Peripapillary ring-scan QC rating of an OCT training set (n = 50) was followed by a test set (n = 50). Inter-rater agreement was calculated using kappa statistics. Results were discussed at a round table after the assessment had taken place. RESULTS The inter-rater QC agreement was substantial (kappa = 0.7). Disagreement was highest for judging signal strength (kappa = 0.40). Future steps to resolve these issues were discussed. CONCLUSION Substantial agreement for QC assessment was achieved with the aid of the OSCAR-IB criteria. The task force has developed a website for free online training and QC certification. The criteria may prove useful for future research and trials in MS using OCT as a secondary outcome measure in a multi-centre setting.
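For reference, a small sketch of the two-rater Cohen's kappa underlying agreement figures like those quoted above; the multi-centre statistic actually used may be a generalization such as Fleiss' kappa:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items; labels can be any
    hashable category (e.g. 'accept'/'reject' for a ring-scan QC rating)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / n ** 2
    return (observed - expected) / (1.0 - expected)

# e.g. cohens_kappa(['ok', 'ok', 'fail', 'ok'], ['ok', 'fail', 'fail', 'ok'])
```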
Abstract:
The characterization of blood pressure in treatment trials assessing the benefits of blood-pressure-lowering regimens is a critical factor for the appropriate interpretation of study results. With numerous operators involved in measuring blood pressure in the many thousands of patients screened for entry into clinical trials, it is essential that operators follow pre-defined measurement protocols involving multiple measurements and standardized techniques. Blood pressure measurement protocols have been developed by international societies and emphasize the importance of an appropriate choice of cuff size, correct identification of Korotkoff sounds, and avoidance of digit preference. Training of operators and auditing of blood pressure measurement may assist in reducing operator-related measurement errors. This paper describes the quality control activities adopted for the screening stage of the 2nd Australian National Blood Pressure Study (ANBP2). ANBP2 is a cardiovascular outcome trial of the treatment of hypertension in the elderly that was conducted entirely in general practices in Australia. A total of 54 288 subjects were screened; 3688 previously untreated subjects were identified as having blood pressure >140/90 mmHg at the initial screening visit, and 898 (24%) were found ineligible for study entry after two further visits because the elevated reading was not sustained. For both systolic and diastolic blood pressure recordings, the observed digit preference fell within 7 percentage points of the expected frequency. Protocol adherence, in terms of the required minimum blood pressure difference between the last two successive recordings, was 99.8%. These data suggest that adherence to blood pressure recording protocols and elimination of digit preference can be achieved through appropriate training programs and quality control activities in large multi-centre community-based trials in general practice. Repeated blood pressure measurement prior to initial diagnosis and study entry is essential to appropriately characterize hypertension in these elderly patients.
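A terminal-digit check of the kind reported above can be scripted directly; this sketch tallies the last digit of each reading against the 10% expected under no digit preference (the 7-percentage-point figure is the abstract's own benchmark, quoted here only in a comment):

```python
from collections import Counter

def terminal_digit_deviation(readings_mmhg):
    """Percent of readings ending in each digit 0-9, plus the maximum
    absolute deviation (in percentage points) from the 10% expected when
    no digit preference is present.  A QC audit might flag an observer
    whose maximum deviation exceeds a preset bound (e.g. 7 points)."""
    digits = [int(r) % 10 for r in readings_mmhg]
    n = len(digits)
    counts = Counter(digits)
    pct = {d: 100.0 * counts.get(d, 0) / n for d in range(10)}
    max_dev = max(abs(p - 10.0) for p in pct.values())
    return pct, max_dev

# e.g. terminal_digit_deviation([142, 140, 156, 138, 140, 144, 150, 137])
```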
Abstract:
Background: The present work applies decision theory to radiological image quality control (QC) in the diagnostic routine. The main problem addressed in the framework of decision theory is whether to accept or reject a film lot from a radiology service. The probability of each decision, for a determined set of variables, was obtained from the selected films. Methods: Based on the routine of a radiology service, a decision probability function was determined for each considered group of combined characteristics related to film quality control. These parameters were framed in a set of 8 possibilities, resulting in 2^8 = 256 possible decision rules. In order to determine a general utility function for assessing the decision risk, we used a single parameter, called r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), giving 2^3 = 8 possible combinations. Results: Depending on the value of r, more or less risk is associated with the decision-making. The utility function was evaluated in order to determine the probability of a decision. The decision was made using the opinions of patients or administrators from a radiology service center. Conclusion: The model is a formal quantitative approach to decisions about medical imaging quality, providing an instrument to discriminate what is really necessary in order to accept or reject a film or a film lot. The method presented herein can help to assess the risk of an incorrect radiological diagnosis decision.
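The combinatorics in the abstract (2^3 = 8 payoff states, 2^8 = 256 accept/reject rules) can be enumerated directly; the utility form below is only an illustrative placeholder for the paper's single-parameter function of r, whose exact form the abstract does not give:

```python
from itertools import product

# The eight payoff states: diagnostic result x cost x patient satisfaction.
states = list(product(('correct', 'incorrect'),
                      ('low', 'high'),
                      ('yes', 'no')))
assert len(states) == 8

# A decision rule assigns accept (1) or reject (0) to every state.
rules = list(product((0, 1), repeat=len(states)))
assert len(rules) == 256

def utility(state, r=0.5):
    """Illustrative single-parameter utility: r in [0, 1] trades off a
    balanced payoff score against diagnostic correctness alone.  This is
    an assumed form, not the study's calibrated function."""
    diag, cost, satisfied = state
    score = (diag == 'correct') + (cost == 'low') + (satisfied == 'yes')
    return r * score / 3.0 + (1.0 - r) * (diag == 'correct')
```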
Abstract:
This paper presents a novel adaptive control scheme, with improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Finally, by providing the desired order-versus-engine-speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary-path model and then experimentally validated with the aid of a vehicle mockup excited with engine noise. The engine excitation is provided by a real-time sound-quality-equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (> 15 s) thanks to the improved convergence rate. This margin, however, becomes narrower for shorter run-ups (<= 10 s). (c) 2010 Elsevier Ltd. All rights reserved.
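For orientation, here is a textbook single-channel filtered-X LMS loop, the baseline the paper modifies; it is not the paper's accelerated equalizing variant, and the secondary-path estimate is reused as the true path purely for simulation:

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=1e-3):
    """Single-channel filtered-x LMS.  x: reference signal, d: disturbance
    at the error microphone, s_hat: FIR estimate of the secondary path
    (also standing in for the true path when simulating the error)."""
    w = np.zeros(n_taps)          # adaptive control filter
    x_hist = np.zeros(n_taps)     # reference history for the control filter
    fx_hist = np.zeros(n_taps)    # filtered-reference history for the update
    x_sec = np.zeros(len(s_hat))  # reference history through s_hat
    y_sec = np.zeros(len(s_hat))  # control-output history through the path
    e = np.empty(len(x))
    for n in range(len(x)):
        x_hist = np.roll(x_hist, 1); x_hist[0] = x[n]
        y = w @ x_hist                           # anti-noise sample
        y_sec = np.roll(y_sec, 1); y_sec[0] = y
        e[n] = d[n] + s_hat @ y_sec              # residual at the error mic
        x_sec = np.roll(x_sec, 1); x_sec[0] = x[n]
        fx_hist = np.roll(fx_hist, 1); fx_hist[0] = s_hat @ x_sec
        w -= mu * e[n] * fx_hist                 # LMS weight update
    return w, e

# e.g. cancelling a single 120 Hz engine order at fs = 8 kHz:
# fs = 8000; t = np.arange(16000) / fs
# x = np.sin(2 * np.pi * 120 * t); d = 0.8 * np.sin(2 * np.pi * 120 * t + 0.6)
# w, e = fxlms(x, d, s_hat=np.array([0.0, 0.5, 0.3]))
```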
Abstract:
Active control solutions appear to be a feasible approach to cope with the steadily increasing requirements for noise reduction in the transportation industry. Active controllers tend to be designed with a target on sound pressure level reduction. However, the perceived control efficiency for the occupants can be assessed more accurately if psychoacoustic metrics are taken into account. Therefore, this paper evaluates, numerically and experimentally, the effect of a feedback controller on the sound quality of a vehicle mockup excited with engine noise. The proposed simulation scheme is described and experimentally validated. The engine excitation is provided by a sound-quality-equivalent engine simulator, running on a real-time platform that delivers harmonic excitation as a function of the driving condition. The controller performance is evaluated in terms of specific loudness and roughness. It is shown that the use of a fairly simple control strategy, such as velocity feedback, can result in a satisfactory loudness reduction with slightly spread roughness, improving the overall perception of the engine sound. (C) 2008 Elsevier Ltd. All rights reserved.
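A minimal sketch of why velocity feedback works: on a single-degree-of-freedom structure, a feedback force proportional to negative velocity simply adds damping. The parameter values are illustrative only, not the mockup's identified properties:

```python
import math

# Single-degree-of-freedom structure: m*a + c*v + k*x = f_ext + f_ctrl,
# with velocity feedback f_ctrl = -g * v, so the closed loop behaves as if
# the damping coefficient were (c + g).
m, c, k, g = 1.0, 0.5, 1.0e4, 8.0  # mass, damping, stiffness, feedback gain

zeta_open = c / (2.0 * math.sqrt(k * m))
zeta_closed = (c + g) / (2.0 * math.sqrt(k * m))
print(f"damping ratio: {zeta_open:.4f} (open loop) -> {zeta_closed:.4f} (closed loop)")
```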
Abstract:
A method was optimized for the analysis of omeprazole (OMZ) by ultra-high-speed LC with diode-array detection using a monolithic Chromolith Fast Gradient RP 18 endcapped column (50 x 2.0 mm i.d.). The analyses were performed at 30°C using a mobile phase consisting of 0.15% (v/v) trifluoroacetic acid (TFA) in water (solvent A) and 0.15% (v/v) TFA in acetonitrile (solvent B) under a linear gradient of 5 to 90% B in 1 min at a flow rate of 1.0 mL/min, with detection at 220 nm. Under these conditions, the OMZ retention time was approximately 0.74 min. Validation parameters, such as selectivity, linearity, precision, accuracy, and robustness, showed results within the acceptance criteria. The method developed was successfully applied to OMZ enteric-coated pellets, showing that this assay can be used in the pharmaceutical industry for routine QC analysis. Moreover, the analytical conditions established allow for the simultaneous analysis of the OMZ metabolites 5-hydroxyomeprazole and omeprazole sulfone in the same run, showing that this method can be extended to other matrices with adequate sample preparation procedures.
Abstract:
Background: The cerebrospinal fluid (CSF) biomarkers amyloid beta (Aβ)-42, total tau (T-tau), and phosphorylated tau (P-tau) demonstrate good diagnostic accuracy for Alzheimer's disease (AD). However, there are large variations in biomarker measurements between studies, and between and within laboratories. The Alzheimer's Association has initiated a global quality control program to estimate and monitor the variability of measurements, quantify batch-to-batch assay variations, and identify sources of variability. In this article, we present the results from the first two rounds of the program. Methods: The program is open to laboratories using commercially available kits for Aβ, T-tau, or P-tau. CSF samples (aliquots of pooled CSF) are sent for analysis several times a year from the Clinical Neurochemistry Laboratory at the Mölndal campus of the University of Gothenburg, Sweden. Each round consists of three quality control samples. Results: Forty laboratories participated. Twenty-six used INNOTEST enzyme-linked immunosorbent assay kits, 14 used Luminex xMAP with the INNO-BIA AlzBio3 kit (both measure Aβ(1-42), P-tau(181P), and T-tau), and 5 used Meso Scale Discovery with the Aβ triplex (Aβ N-42, Aβ N-40, and Aβ N-38) or T-tau kits. The total coefficients of variation between the laboratories were 13% to 36%. Five laboratories analyzed the samples six times on different occasions. Within-laboratory precision differed considerably between biomarkers within individual laboratories. Conclusions: Measurements of CSF AD biomarkers show large between-laboratory variability, likely caused by factors related to analytical procedures and the analytical kits. Standardization of laboratory procedures and efforts by kit vendors to increase kit performance might lower variability and will likely increase the usefulness of CSF AD biomarkers. (C) 2011 The Alzheimer's Association. All rights reserved.
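The between-laboratory variability quoted above is a coefficient of variation, which is straightforward to compute; the example values below are hypothetical, not program data:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%) = sample SD / mean x 100 -- the
    between-laboratory variability metric reported above."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# e.g. hypothetical Abeta-42 results (pg/mL) for one QC sample across labs:
print(f"between-lab CV = {cv_percent([510, 450, 610, 480, 540]):.1f} %")  # ~12%
```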
Abstract:
Toxoplasma gondii causes severe disease in both humans and livestock, and its detection in meat after slaughter requires PCR or biological tests. Meat packages contain retained exudate whose blood content could be used for serology. Similar studies have reported false-negative results in such tests. We standardized an anti-T. gondii IgG ELISA in muscle juices from experimentally infected rabbits, with the blood content determined by cyanhemoglobin spectrophotometry. IgG titers and immunoblotting profiles were similar in blood, serum, and meat juice after correction for blood content. These assays remained adequate regardless of storage times of up to 120 days or freeze-thaw cycles, without false-negative results. Using this assay, we also found one positive sample (1/74; 1.35%) among commercial Brazilian rabbit meat cuts. The blood content determination shows that ELISA of meat juice may be a useful quality control tool for toxoplasmosis monitoring. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
BACKGROUND: Previous publications have documented the damage caused to red blood cells (RBCs) irradiated with X-rays produced by a linear accelerator and with gamma rays from a Cs-137 source. The biologic effects on RBCs of gamma rays from a Co-60 source, however, have not been characterized. STUDY DESIGN AND METHODS: This study investigated the effect of 3000 and 4000 cGy on the in vitro properties of RBCs preserved in preservative solution and irradiated with a cobalt teletherapy unit. A thermal device equipped with a data acquisition system was used to maintain and monitor the blood temperature during irradiation. The device was rotated at 2 r.p.m. in the irradiation beam by an automated system. The spatial distribution of the absorbed dose over the irradiated volume was obtained with a phantom and thermoluminescent dosimeters (TLDs). Levels of Hb, K+, and Cl- were assessed by spectrophotometric techniques over a period of 45 days. The change in the topology of the RBC membrane was investigated by flow cytometry. RESULTS: Irradiation caused significant changes in the extracellular levels of K+ and Hb and in the organizational structure of the phospholipid bilayer of the RBC membrane. Blood temperature ranged from 2 to 4 degrees C during irradiation. Rotation at 2 r.p.m. distributed the dose homogeneously (92%-104%) and did not damage the RBCs. CONCLUSIONS: The method used to store the blood bags during irradiation guaranteed that all damage caused to the cells was due exclusively to the action of radiation at the doses applied. It was demonstrated that prolonged storage of Co-60-irradiated RBCs results in a loss of membrane phospholipid asymmetry, exposing phosphatidylserine (PS) on the cells' surface in a time- and dose-dependent manner, which can reduce the in vivo recovery of these cells. A time- and dose-dependent effect on the extracellular K+ and plasma free Hb levels was also observed. The magnitude of all these effects, however, does not seem to be clinically important, and the results support the storage of irradiated RBC units for at least 28 days.