921 results for "Data control quality"


Relevance: 100.00%

Abstract:

Longitudinal joint quality control/assurance is essential to the successful performance of asphalt pavements, and it has received a considerable amount of attention in recent years. The purpose of this study is to evaluate the level of compaction at the longitudinal joint and to determine the effect of segregation on longitudinal joint performance. Five paving projects were selected, covering four joint construction techniques: the traditional butt joint, an infrared joint heater, edge restraint by milling, and a modified butt joint with hot pinch. For each project, field density and permeability tests were performed, and pavement cores were taken for laboratory permeability, air void, and indirect tensile strength testing. Asphalt content and gradations were also obtained to assess joint segregation. Overall, this study finds that the minimum required joint density should be around 90.0% of the theoretical maximum density based on the AASHTO T166 method. The edge restraint by milling and the butt joint with infrared heat treatment methods both produced joint densities above this 90.0% limit, whereas the traditional butt joint exhibited lower density and higher permeability than the criterion. In addition, all of the projects showed segregation at the longitudinal joint except for the edge restraint by milling method.
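The 90.0% density criterion described above is a simple ratio check of core density against theoretical maximum density. A minimal sketch, with hypothetical function and variable names not taken from the study:

```python
# Hypothetical QC check: bulk core density (e.g. per AASHTO T166) as a
# percentage of the theoretical maximum density (TMD), against the ~90%
# joint-density criterion suggested by the study.
def joint_density_ratio(core_density: float, tmd: float) -> float:
    """Core density as a percent of theoretical maximum density."""
    return 100.0 * core_density / tmd

def passes_joint_criterion(core_density: float, tmd: float,
                           min_percent: float = 90.0) -> bool:
    """True if the joint core meets the minimum density criterion."""
    return joint_density_ratio(core_density, tmd) >= min_percent
```

For example, a joint core at 2.30 g/cm³ against a TMD of 2.50 g/cm³ gives 92.0% and passes, while 2.20 g/cm³ gives 88.0% and fails.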

Relevance: 100.00%

Abstract:

Asphalt pavements suffer various failures due to insufficient quality within their design lives. The American Association of State Highway and Transportation Officials (AASHTO) Mechanistic-Empirical Pavement Design Guide (MEPDG) has been proposed to improve pavement quality through quantitative performance prediction. Evaluation of the actual performance (quality) of pavements requires in situ nondestructive testing (NDT) techniques that can accurately measure the most critical, objective, and sensitive properties of pavement systems. The purpose of this study is to assess existing as well as promising new NDT technologies for quality control/quality assurance (QC/QA) of asphalt mixtures. Specifically, this study examined field measurements of density via the PaveTracker electromagnetic gage, shear-wave velocity via surface-wave testing methods, and dynamic stiffness via the Humboldt GeoGauge for five representative paving projects covering a range of mixes and traffic loads. The in situ tests were compared against laboratory measurements of core density and dynamic modulus. The in situ PaveTracker density had a low correlation with laboratory density and was not sensitive to variations in temperature or asphalt mix type. The in situ shear-wave velocity measured by surface-wave methods was most sensitive to variations in temperature and asphalt mix type. The in situ density and in situ shear-wave velocity were combined to calculate an in situ dynamic modulus, which is a performance-based quality measurement. The in situ GeoGauge stiffness measured on hot asphalt mixtures several hours after paving had a high correlation with the in situ dynamic modulus and the laboratory density, whereas the stiffness measurement of asphalt mixtures cooled with dry ice or at ambient temperature one or more days after paving had a very low correlation with the other measurements. 
To transform the in situ moduli from surface-wave testing into quantitative quality measurements, a QC/QA procedure was developed to first correct the in situ moduli measured at different field temperatures to moduli at a common reference temperature, based on master curves from laboratory dynamic modulus tests. The corrected in situ moduli can then be compared against the design moduli for an assessment of actual pavement performance. A preliminary study of microelectromechanical systems (MEMS)-based sensors for QC/QA and health monitoring of asphalt pavements was also performed.
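The temperature-correction step can be sketched as follows. The sigmoidal master-curve form and the time-temperature shift are standard for asphalt dynamic modulus, but every coefficient below is an illustrative placeholder, not a value fitted in the study:

```python
import math

def log_shift(temp_c: float, ref_c: float = 21.0, c: float = -0.11) -> float:
    # log10 of the time-temperature shift factor a_T; the slope c is an
    # illustrative placeholder for a lab-fitted value
    return c * (temp_c - ref_c)

def master_curve_modulus(temp_c: float, log_time: float = -1.0,
                         ref_c: float = 21.0, delta: float = 1.0,
                         alpha: float = 3.5, beta: float = -1.0,
                         gamma: float = 0.5) -> float:
    # sigmoidal master curve, log10|E*| = delta + alpha/(1 + exp(beta + gamma*log_tr)),
    # evaluated at reduced time log_tr = log_time - log10(a_T); all
    # coefficients are placeholders for laboratory-fitted values
    log_tr = log_time - log_shift(temp_c, ref_c)
    return 10.0 ** (delta + alpha / (1.0 + math.exp(beta + gamma * log_tr)))

def correct_to_reference(e_field_mpa: float, field_temp_c: float,
                         ref_c: float = 21.0) -> float:
    # scale the measured in situ modulus by the master-curve ratio
    # between the reference and field temperatures
    ratio = (master_curve_modulus(ref_c, ref_c=ref_c)
             / master_curve_modulus(field_temp_c, ref_c=ref_c))
    return e_field_mpa * ratio
```

With these placeholder coefficients, a modulus measured on hot pavement (e.g. 40 °C) is corrected upward to its stiffer equivalent at the 21 °C reference, which is the direction the physics requires.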

Relevance: 100.00%

Abstract:

AIM: To study prospectively patients after heart transplantation with respect to quality of life, mortality, morbidity, and clinical parameters before and up to 10 years after the operation. METHODS: Sixty patients (47.9 +/- 10.9 years; 57 men, 3 women) transplanted at the University of Vienna Hospital, Department for Heart and Thorax Surgery, were included in this study. They were assessed when placed on the waiting list and then at 1, 5, and 10 years after transplantation. The variables evaluated included physical and emotional complaints, well-being, mortality, and morbidity. In the sample of patients who survived 10 years (n = 23), morbidity (infections, malignancies, graft arteriosclerosis, and rejection episodes) as well as quality of life were evaluated. RESULTS: Actuarial survival rates were 83.3%, 66.7%, and 48.3% at 1, 5, and 10 years after transplantation, respectively. During the first year, infections were the most important cause of premature death. Malignancies were the main cause of mortality between years 1 and 5, and graft arteriosclerosis between years 5 and 10. Physical complaints diminished significantly after the operation but increased significantly during the period from 5 to 10 years (p < 0.001). However, trembling (p < 0.05) and paraesthesias (p < 0.01) diminished continuously. Emotional complaints such as depression and dysphoria (both p < 0.05) increased until the tenth year after their nadir at year 1. In long-term survivors, 3 malignancies (lung, skin, thyroid) were diagnosed 6 to 9 years postoperatively. Three patients (13%) had signs of graft arteriosclerosis at year 10; 9 patients (40%) suffered from rejection episodes during the course of 10 years. There were no serious rejection episodes requiring immediate therapy. Quality of life at 10 years is good in these patients. CONCLUSIONS: Heart transplantation is a successful therapy for patients with terminal heart disease.
Long-term survivors feel well after 10 years and report a good quality of life.

Relevance: 100.00%

Abstract:

Monitoring the impact of sea storms on coastal areas is fundamental to the study of beach evolution and the vulnerability of low-lying coasts to erosion and flooding. Modelling wave runup on a beach is possible, but it requires accurate topographic data and model tuning, which can be done by comparing observed and modelled runup. In this study we collected aerial photos using an Unmanned Aerial Vehicle (UAV) after two different swells over the same study area. We merged the point cloud obtained by photogrammetry with multibeam data to obtain a complete beach topography. Then, on each set of rectified and georeferenced UAV orthophotos, we identified the maximum wave runup for both events by recognizing the wet area left by the waves. We then used our topography and numerical models to simulate the wave runup and compared the model results with the values observed during the two events. Our results highlight the potential of the presented methodology, which integrates UAV platforms, photogrammetry, and Geographic Information Systems to provide faster and cheaper information on beach topography and geomorphology than traditional techniques, without loss of accuracy. We used the results obtained from this technique as a topographic base for a model that calculates runup for the two swells. The observed and modelled runups are consistent and open new directions for future research.
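The abstract does not name the runup model that was tuned against the UAV observations. As one illustration of the modelled-versus-observed comparison, a minimal sketch of the widely cited Stockdon et al. (2006) empirical runup parameterization, taking deep-water significant wave height H0 (m), peak period T (s), and foreshore slope beta_f as inputs:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deepwater_wavelength(period_s: float) -> float:
    """Deep-water wavelength L0 = g*T^2 / (2*pi), in metres."""
    return G * period_s ** 2 / (2.0 * math.pi)

def stockdon_r2(h0: float, period_s: float, beta_f: float) -> float:
    """2% exceedance runup (m), Stockdon et al. (2006) parameterization:
    R2 = 1.1 * (setup + swash/2) with empirical coefficients."""
    l0 = deepwater_wavelength(period_s)
    setup = 0.35 * beta_f * math.sqrt(h0 * l0)
    swash = math.sqrt(h0 * l0 * (0.563 * beta_f ** 2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)
```

Comparing such modelled values against the runup limits mapped from the orthophoto wet areas is the tuning step the abstract describes; the choice of parameterization here is an assumption, not the authors' stated model.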

Relevance: 100.00%

Abstract:

Includes bibliography.

Relevance: 100.00%

Abstract:

Genomics is expanding the horizons of epidemiology, providing a new dimension for classical epidemiological studies and inspiring the development of large-scale multicenter studies with the statistical power necessary for the assessment of gene-gene and gene-environment interactions in cancer etiology and prognosis. This paper describes the methodology of the Clinical Genome of Cancer Project in São Paulo, Brazil (CGCP), which includes patients with nine types of tumors and controls. Three major epidemiological designs were used to reach specific objectives: cross-sectional studies to examine gene expression, case-control studies to evaluate etiological factors, and follow-up studies to analyze genetic profiles in prognosis. The clinical groups entered patients' data into the electronic database via the Internet. Two approaches were used for data quality control: continuous data evaluation and data entry consistency checks. A total of 1749 cases and 1509 controls were entered into the CGCP database from the first trimester of 2002 to the end of 2004. Continuous evaluation showed that, for all tumors taken together, only 0.5% of the general form fields still included potential inconsistencies by the end of 2004. Regarding data entry consistency, the highest percentage of errors (11.8%) was observed for the follow-up form, followed by 6.7% for the clinical form, 4.0% for the general form, and only 1.1% for the pathology form. Good data quality is required for its transformation into useful information for clinical application and for preventive measures. The use of the Internet for communication among researchers and for data entry is perhaps the most innovative feature of the CGCP. The monitoring of patients' data guaranteed their quality.
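A data-entry consistency check of the kind reported above, i.e. the percentage of fields that disagree between two entry passes over the same records, can be sketched as follows; the record fields in the usage example are hypothetical:

```python
# Sketch of a double-entry consistency check: re-enter a sample of
# records and count the fields that disagree between the two passes.
def entry_error_rate(first_pass: list, second_pass: list) -> float:
    """Percent of fields that differ between two data-entry passes."""
    total = mismatches = 0
    for a, b in zip(first_pass, second_pass):
        for key in a:
            total += 1
            if a[key] != b.get(key):
                mismatches += 1
    return 100.0 * mismatches / total if total else 0.0
```

For example, two passes over two records of two fields each, with one disagreement, give an error rate of 25.0%; per-form rates like the 11.8% and 1.1% figures above come from applying the same count form by form.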

Relevance: 100.00%

Abstract:

Objectives - To review available guidance for quality assurance (QA) in mammography and discuss its contribution to harmonising practices worldwide. Methods - A literature search was performed across different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers, and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared in terms of type of guidance (clinical/technical), technology, and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment. Results - Fourteen protocols (targeted at conventional and digital mammography) were reviewed. All included recommendations for testing the acquisition, processing, and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and of testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies, focusing on assessment of low-contrast detection, spatial resolution, and noise. QC of image display is recommended following the American Association of Physicists in Medicine guidelines. Conclusions - The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite variations in detail and methodology. Studies reporting QA data should provide detail on the experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA programme may select/prioritise the tests depending on available technology and resources.

Relevance: 100.00%

Abstract:

Objective: We aimed to critically evaluate the importance of quality control (QC) and quality assurance (QA) strategies in the routine work of uterine cervix cytology. Study Design: We reviewed all the main principles of QC and QA that are already implemented worldwide and then discussed their positive aspects and limitations, proposing alternatives where pertinent. Results: A literature review is presented after highlighting the main historical developments, followed by a critical evaluation of the principal innovations in screening programmes, with recommendations being postulated. Conclusions: Based on the analysed data, QC and QA are two essential arms that support the quality of a screening programme.

Relevance: 100.00%

Abstract:

MRSI grids frequently show spectra of poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor-quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task but is also affected by human subjectivity. Consequently, automatic, fast, and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, on the subset of spectra containing only the cases that were classified the same way every time by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency-domain skewness and kurtosis, as well as time-domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method can assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between a classifier trained on just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine.
The method presented in this article was implemented in jMRUI's SpectrIm plugin. Copyright © 2016 John Wiley & Sons, Ltd.
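The most important features named above (skewness, kurtosis, and windowed SNR) are straightforward to compute. A minimal plain-Python sketch of the feature extraction only; the time-windowing details and the random forest classifier itself (e.g. scikit-learn's RandomForestClassifier) are left out, and all names are illustrative:

```python
import math

def _moments(xs):
    """Mean and (population) variance of a sequence of samples."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

def skewness(xs):
    """Third standardized moment of the spectrum samples."""
    mean, var = _moments(xs)
    sd = math.sqrt(var)
    return sum(((x - mean) / sd) ** 3 for x in xs) / len(xs)

def kurtosis(xs):
    """Fourth standardized moment (non-excess) of the samples."""
    mean, var = _moments(xs)
    return sum((x - mean) ** 4 for x in xs) / (len(xs) * var ** 2)

def snr(signal_window, noise_window):
    """Peak amplitude in a signal window over the noise standard
    deviation estimated from a signal-free window."""
    peak = max(abs(x) for x in signal_window)
    _, noise_var = _moments(noise_window)
    return peak / math.sqrt(noise_var)
```

One feature vector of this kind per spectrum, computed in the frequency domain (skewness, kurtosis) and on time-domain windows (SNR), would then be fed to the trained classifier.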

Relevance: 100.00%

Abstract:

Background and purpose: Survey data quality is a combination of the representativeness of the sample, the accuracy and precision of measurements, and data processing and management, with several subcomponents in each. The purpose of this paper is to show how, in the final risk factor surveys of the WHO MONICA Project, information on data quality was obtained, quantified, and used in the analysis. Methods and results: In the WHO MONICA (Multinational MONItoring of trends and determinants in CArdiovascular disease) Project, the information about the data quality components was documented in retrospective quality assessment reports. On the basis of the documented information and the survey data, the quality of each data component was assessed and summarized using quality scores. The quality scores were used in sensitivity testing of the results, both by excluding populations with low quality scores and by weighting the data by their quality scores. Conclusions: Detailed documentation of all survey procedures, with standardized protocols, training, and quality control, are steps towards optimizing data quality. Quantifying data quality is a further step. The methods used in the WHO MONICA Project could be adopted to improve quality in other health surveys.
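The two sensitivity-testing strategies described above (excluding low-quality populations versus weighting by quality score) can be sketched as follows; the function names and the score scale are hypothetical, not the MONICA scoring scheme itself:

```python
# Sketch of quality-score sensitivity testing: combine per-population
# estimates either by weighting each by its data-quality score, or by
# dropping populations whose score falls below a cutoff.
def quality_weighted_mean(values, scores):
    """Mean of per-population values weighted by quality scores."""
    total = sum(scores)
    return sum(v * s for v, s in zip(values, scores)) / total

def mean_excluding_low_quality(values, scores, cutoff):
    """Plain mean over populations whose quality score >= cutoff."""
    kept = [v for v, s in zip(values, scores) if s >= cutoff]
    return sum(kept) / len(kept)
```

Running both and comparing the results against the unweighted estimate shows how sensitive a survey finding is to the inclusion of lower-quality populations.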

Relevance: 100.00%

Abstract:

This study identifies predictors and normative data for quality of life (QOL) in a sample of Portuguese adults from the general population. A cross-sectional correlational study was undertaken with two hundred and fifty-five (N = 255) individuals from the Portuguese general population (mean age 43 years, range 25–84 years; 148 females, 107 males). Participants completed the European Portuguese version of the World Health Organization Quality of Life short-form instrument and the European Portuguese version of the Center for Epidemiologic Studies Depression Scale. Demographic information was also collected. Portuguese adults reported their QOL as good. The physical, psychological, and environmental domains predicted 44% of the variance of QOL. The strongest predictor was the physical domain and the weakest was social relationships. Age, educational level, socioeconomic status, and emotional status were significantly correlated with QOL and explained 25% of the variance of QOL. The strongest predictor of QOL was emotional status, followed by education and age. QOL differed significantly according to marital status, living place (mainland or islands), type of cohabitants, occupation, and health. The sample of adults from the general Portuguese population reported high levels of QOL. The life domain that best explained QOL was the physical domain. Among other variables, emotional status best predicted QOL. Further variables influenced overall QOL. These findings inform our understanding of the QOL of adults from the Portuguese general population and can be helpful for researchers and practitioners using this assessment tool to compare their results with normative data.

Relevance: 100.00%

Abstract:

Quality control (QuaCo) in urology is mandatory to standardize or even increase the level of care. While QuaCo is undertaken at every step of the clinical pathway, it should focus on the patient's comorbidities and on the urologist and his or her complication rate. As a result of political and economic pressures, comparisons of QuaCo and outcomes between urologists and institutions are nowadays often performed. However, careful interpretation of these comparisons is mandatory to avoid potential discrimination. Indeed, the reader has to make sure that the patient groups and surgical techniques are comparable, that the definitions of complications are similar, that the classification of complications is standardized, and finally that the methodology used in collecting the data is irreproachable.

Relevance: 100.00%

Abstract:

Background. Through a national policy agreement, over 167 million euros will be invested in the Swedish National Quality Registries (NQRs) between 2012 and 2016. One of the policy agreement's intentions is to increase the use of NQR data for quality improvement (QI). However, the evidence is fragmented as to how the use of medical registries and the like leads to quality improvement, and little is known about non-clinical use. The aim was therefore to investigate the perspectives of Swedish politicians and administrators on quality improvement based on national registry data. Methods. Politicians and administrators from four county councils were interviewed. A qualitative content analysis guided by the Consolidated Framework for Implementation Research (CFIR) was performed. Results. The politicians' and administrators' perspectives on the use of NQR data for quality improvement were mainly assigned to three of the five CFIR domains. In the domain of intervention characteristics, data reliability and access within a reasonable time were not considered entirely satisfactory, making it difficult for the politico-administrative leadership to initiate, monitor, and support timely QI efforts. Still, politicians and administrators trusted the idea of using the NQRs as a base for quality improvement. In the domain of inner setting, the organizational structures were not sufficiently developed to utilize the advantages of the NQRs, and readiness for implementation appeared to be inadequate for two reasons. Firstly, the resources for data analysis and quality improvement were not considered sufficient at the politico-administrative or clinical level. Secondly, deficiencies in leadership engagement at multiple levels were described, and there was a lack of consensus on the politicians' role and level of involvement. Regarding the domain of outer setting, there was a lack of communication and cooperation between the county councils and the national NQR organizations. Conclusions.
The Swedish experiences show that a government-supported national system of well-funded, well-managed, and reputable national quality registries needs favorable local politico-administrative conditions to be used for quality improvement; such conditions are not yet in place according to local politicians and administrators.

Relevance: 100.00%

Abstract:

Optical remote sensing techniques have obvious advantages for monitoring gas and aerosol emissions, since they enable operation over large distances, far from hostile environments, and fast processing of the measured signal. In this study two remote sensing devices, namely a Lidar (Light Detection and Ranging) for monitoring the vertical profile of backscattered light intensity, and a Sodar (Acoustic Radar, Sound Detection and Ranging) for monitoring the vertical profile of the wind vector, were operated during specific periods. The acquired data were processed and compared with air quality data obtained from ground-level monitoring stations, in order to verify the possibility of using remote sensing techniques to monitor industrial emissions. The campaigns were carried out in the area of the Environmental Research Center (Cepema) of the University of São Paulo, in the city of Cubatão, Brazil, a large industrial site where numerous different industries are located, including an oil refinery, a steel plant, and fertilizer, cement, and chemical/petrochemical plants. The local environmental problems caused by the industrial activities are aggravated by the climate and topography of the site, which are unfavorable to pollutant dispersion. Results of one campaign are presented for a 24-hour period, showing data from a Lidar, an air quality monitoring station, and a Sodar. © 2011 SPIE.