Abstract:
X-ray computed tomography (CT) imaging is one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; this fact, coupled with its popularity, makes CT the single largest source of medical radiation exposure to the U.S. population. There is therefore a critical need to optimize CT examinations such that dose is minimized while the quality of the CT images is not degraded. This optimization is difficult because of the coupling between dose and image quality: all else being equal, reducing the dose degrades image quality and can diminish the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies allow scanning at reduced dose while maintaining image quality at an acceptable level. There is therefore a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Doing so requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that implement these dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the "task-based" definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an "observer" to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance on the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced dose or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality across phantom sizes.
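The task-based detectability index described above is commonly computed in the spatial-frequency domain from a task function, the system's task transfer function (TTF), and its noise power spectrum (NPS). The sketch below, with entirely illustrative curves standing in for measured data, shows the non-prewhitening form of that calculation and the expected inverse-square-root dependence of noise (and hence d') on dose:

```python
import numpy as np

def dprime_npw(task_w, ttf, nps, df):
    """Frequency-domain non-prewhitening (NPW) detectability index:
    d'^2 = [sum(W^2 TTF^2) df^2]^2 / [sum(W^2 TTF^2 NPS) df^2]."""
    s2 = (task_w * ttf) ** 2
    num = (np.sum(s2) * df * df) ** 2
    den = np.sum(s2 * nps) * df * df
    return float(np.sqrt(num / den))

# Illustrative 2D spatial-frequency grid (cycles/mm); all curves below are
# made-up stand-ins for a measured task function, TTF, and NPS.
n, df = 128, 0.02
f = np.fft.fftfreq(n, d=1.0 / (n * df))
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)

task_w = 10.0 * np.exp(-(fr / 0.3) ** 2)   # low-frequency "designer nodule" task
ttf = np.exp(-(fr / 0.8) ** 2)             # task transfer function (resolution)
nps = 50.0 * fr * np.exp(-fr / 0.5)        # ramp-like CT noise power spectrum

d_ref = dprime_npw(task_w, ttf, nps, df)
# Quantum-limited NPS scales inversely with dose, so 4x dose -> NPS / 4 -> d' doubles
d_4x_dose = dprime_npw(task_w, ttf, nps / 4.0, df)
```

For a linear system such as FBP this dose scaling holds well; part of the dissertation's point is that it breaks down for iterative reconstruction, where noise and resolution become nonlinear in dose and contrast.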
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
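To make the simplest of these observer models concrete, the following sketch implements an NPW matched filter on simulated ROIs. The Gaussian template, white noise, and all numbers are illustrative assumptions (real CT noise is correlated, which is precisely where channelized and prewhitening models diverge from the NPW observer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian lesion template on a 32x32 ROI grid (illustrative)
n, sig_px, amp, noise_sd = 32, 4.0, 0.5, 1.0
y, x = np.mgrid[:n, :n]
template = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * sig_px ** 2))

def npw_statistic(roi):
    """NPW matched filter: correlate the ROI with the expected signal,
    with no noise whitening step."""
    return float(np.sum(roi * template))

# Simulated signal-present and signal-absent ROIs in white noise (assumption)
present = [amp * template + rng.normal(0, noise_sd, (n, n)) for _ in range(200)]
absent = [rng.normal(0, noise_sd, (n, n)) for _ in range(200)]
t_p = np.array([npw_statistic(r) for r in present])
t_a = np.array([npw_statistic(r) for r in absent])

# Detectability index from the two test-statistic distributions
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
```

The channelized Hotelling observer replaces the single template with a small bank of frequency-selective channels and a covariance-weighted (Hotelling) combination of channel outputs; CNR, by contrast, ignores the spatial correlation of the noise entirely, which is consistent with its poor correlation with human performance across reconstruction algorithms.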
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, given their simplicity and the complexity of iterative reconstruction algorithms, such phantoms may not be fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP the noise was independent of the background (textured vs. uniform). For SAFIRE, however, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact of background texture on image quality when iterative reconstruction algorithms are used.
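The image subtraction technique mentioned above can be sketched in a few lines: subtracting two repeated scans of the same object cancels the shared background (uniform or textured) and leaves only the independent noise realizations, whose standard deviation is recovered by dividing by sqrt(2). All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_from_subtraction(img_a, img_b):
    """Estimate quantum noise from two repeated scans of the same object:
    the difference image has variance 2*sigma^2, so divide its std by sqrt(2)."""
    return float(np.std(img_a - img_b) / np.sqrt(2.0))

# Hypothetical textured background shared by both repeats (illustrative numbers)
shape = (256, 256)
texture = rng.normal(0.0, 4.0, shape).cumsum(axis=0)  # smooth-ish stand-in texture
scan_a = texture + rng.normal(0.0, 10.0, shape)       # repeat 1: texture + noise
scan_b = texture + rng.normal(0.0, 10.0, shape)       # repeat 2: same texture, fresh noise

sigma_hat = noise_from_subtraction(scan_a, scan_b)    # recovers ~10 regardless of texture
```

Note that the subtraction isolates the noise in the images as reconstructed; it does not assume the reconstruction is linear, which is why the same measurement can reveal the background dependence of SAFIRE noise reported above.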
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., the NPS) was also highly dependent on the background texture for SAFIRE. It was therefore concluded that quantum noise properties measured in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
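For reference, the conventional square-ROI ensemble NPS estimator that the irregular-ROI method generalizes can be sketched as follows; the white-noise sanity check and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def nps_2d(noise_rois, pixel_mm):
    """Ensemble 2D noise power spectrum from square ROIs:
    NPS(u, v) = (pixel area / N_pixels) * mean over ROIs of |DFT{roi - mean}|^2."""
    n = noise_rois[0].shape[0]
    acc = np.zeros((n, n))
    for roi in noise_rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    return acc / len(noise_rois) * (pixel_mm ** 2) / (n * n)

# Sanity check on synthetic white noise: the NPS should be flat with mean
# level sigma^2 * pixel_area (illustrative numbers)
sigma, px, n = 12.0, 0.5, 64
rois = [rng.normal(0.0, sigma, (n, n)) for _ in range(100)]
nps = nps_2d(rois, px)
```

The dissertation's contribution is extending this kind of estimator to irregularly shaped ROIs, which is what makes local NPS measurement near edges and within anatomical textures practical.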
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized with a genetic algorithm to match the texture in the liver regions of actual patient CT images. The so-called "Clustered Lumpy Background" texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in the uniform phantom than in the textured phantoms.
The final phase of this project aimed to develop methods for mathematically modeling lesions, as a means of assessing image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion's morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called "hybrid" images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion's morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled from images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images alongside the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve of 0.55). This study provided evidence that the proposed mathematical lesion modeling framework can produce reasonably realistic lesion images.
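A minimal sketch of such an analytical lesion model is given below. The parameterization (size, contrast, edge width) follows the text; the logistic edge profile, the specific parameter names, and the zero background are illustrative assumptions, not necessarily the dissertation's exact model:

```python
import numpy as np

def lesion_model(shape, center_vox, radius_mm, contrast_hu, edge_mm, voxel_mm):
    """Analytical lesion: spherical core of given contrast with a logistic
    (sigmoid) edge profile, voxelized onto the image grid."""
    zz, yy, xx = np.indices(shape)
    r_mm = voxel_mm * np.sqrt((zz - center_vox[0]) ** 2
                              + (yy - center_vox[1]) ** 2
                              + (xx - center_vox[2]) ** 2)
    return contrast_hu / (1.0 + np.exp((r_mm - radius_mm) / edge_mm))

# Create a "hybrid" image: add the voxelized lesion to a (stand-in) patient volume
patient = np.zeros((32, 64, 64))  # placeholder for a real CT volume
lesion = lesion_model(patient.shape, (16, 32, 32),
                      radius_mm=5.0, contrast_hu=-15.0, edge_mm=1.0, voxel_mm=0.7)
hybrid = patient + lesion  # ground-truth morphology and location known exactly
```

Because the lesion is defined by an equation, the inserted ground truth (center, radius, contrast, edge sharpness) is known exactly, which is what enables detectability and estimability studies on the hybrid images.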
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Computed tomography (CT) is a valuable technology to the healthcare enterprise as evidenced by the more than 70 million CT exams performed every year. As a result, CT has become the largest contributor to population doses amongst all medical imaging modalities that utilize man-made ionizing radiation. Acknowledging the fact that ionizing radiation poses a health risk, there exists the need to strike a balance between diagnostic benefit and radiation dose. Thus, to ensure that CT scanners are optimally used in the clinic, an understanding and characterization of image quality and radiation dose are essential.
The state of the art in both image quality characterization and radiation dose estimation in CT depends on phantom-based measurements reflective of systems and protocols. For image quality characterization, measurements are performed on inserts embedded in static phantoms and the results are ascribed to clinical CT images. However, the key objective for image quality assessment should be its quantification in clinical images; that is the only characterization of image quality that clinically matters, as it is most directly related to the actual quality of clinical images. Moreover, for dose estimation, phantom-based dose metrics, such as the CT dose index (CTDI) and size-specific dose estimates (SSDE), are measured by the scanner and referenced as indicators of radiation exposure. However, CTDI and SSDE are surrogates for dose, rather than dose per se.
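The surrogate nature of these metrics is easy to illustrate: SSDE rescales the scanner-reported CTDIvol by a patient-size-dependent conversion factor, so the same CTDIvol implies very different patient doses for different body sizes. The sketch below uses the exponential-fit form from AAPM Report 204 for the 32 cm reference phantom; the coefficients are quoted from that report and should be verified against it before any real use:

```python
import math

def ssde_mgy(ctdi_vol_mgy, water_equiv_diameter_cm):
    """SSDE = f(Dw) * CTDIvol, with f(Dw) = a * exp(-b * Dw) the size-dependent
    conversion factor (AAPM Report 204 fit, 32 cm reference phantom)."""
    f = 3.704369 * math.exp(-0.03671937 * water_equiv_diameter_cm)
    return f * ctdi_vol_mgy

# Same reported CTDIvol, very different estimated patient dose:
small = ssde_mgy(10.0, 20.0)  # small patient: dose well above the reported CTDIvol
large = ssde_mgy(10.0, 40.0)  # large patient: dose below the reported CTDIvol
```

Even SSDE, however, remains a phantom-derived estimate of dose to the scanned region rather than the organ dose to a specific patient, which motivates the thesis's third aim.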
Currently there are several software packages that track the CTDI and SSDE associated with individual CT examinations. This is primarily driven by two factors. The first is pressure from regulators and governments on clinics and hospitals to monitor the radiation exposure of individuals. The second is the personal concern of patients who are curious about the health risks associated with the ionizing radiation they receive as a result of their diagnostic procedures.
An idea that resonates with clinical imaging physicists is that patients come to the clinic to acquire quality images so they can receive a proper diagnosis, not to be exposed to ionizing radiation. Thus, while it is important to monitor the dose to patients undergoing CT examinations, it is equally, if not more important to monitor the image quality of the clinical images generated by the CT scanners throughout the hospital.
The purposes of the work presented in this thesis are threefold: (1) to develop and validate a fully automated technique to measure spatial resolution in clinical CT images, (2) to develop and validate a fully automated technique to measure image contrast in clinical CT images, and (3) to develop a fully automated technique to estimate radiation dose (not surrogates for dose) from a variety of clinical CT protocols.
Abstract:
Large efforts are on-going within the EU to prepare the Marine Strategy Framework Directive’s (MSFD) assessment of the environmental status of the European seas. This assessment will only be as good as the indicators chosen to monitor the eleven descriptors of good environmental status (GEnS). An objective and transparent framework to determine whether chosen indicators actually support the aims of this policy is, however, not yet in place. Such frameworks are needed to ensure that the limited resources available to this assessment optimize the likelihood of achieving GEnS within collaborating states. Here, we developed a hypothesis-based protocol to evaluate whether candidate indicators meet quality criteria explicit to the MSFD, which the assessment community aspires to. Eight quality criteria are distilled from existing initiatives, and a testing and scoring protocol for each of them is presented. We exemplify its application in three worked examples, covering indicators for three GEnS descriptors (1, 5 and 6), various habitat components (seaweeds, seagrasses, benthic macrofauna and plankton), and assessment regions (Danish, Lithuanian and UK waters). We argue that this framework provides a necessary, transparent and standardized structure to support the comparison of candidate indicators, and the decision-making process leading to indicator selection. Its application could help identify potential limitations in currently available candidate metrics and, in such cases, help focus the development of more adequate indicators. Use of such standardized approaches will facilitate the sharing of knowledge gained across the MSFD parties despite context-specificity across assessment regions, and support the evidence-based management of European seas.
Abstract:
BACKGROUND AND OBJECTIVE: Molecular analysis by PCR of monoclonally rearranged immunoglobulin (Ig) genes can be used for diagnosis in B-cell lymphoproliferative disorders (LPD), as well as for monitoring minimal residual disease (MRD) after treatment. This technique carries the risk of false-positive results due to "background" amplification of similar rearrangements derived from polyclonal B-cells. This problem can be resolved in advance by additional analyses that discriminate between polyclonal and monoclonal PCR products, such as heteroduplex analysis. A second problem is that PCR frequently fails to amplify the junction regions, mainly due to the somatic mutations frequently present in mature (post-follicular) B-cell lymphoproliferations. The use of additional targets (e.g. Ig light chain genes) can avoid this problem. DESIGN AND METHODS: We studied the specificity of heteroduplex PCR analysis of several Ig junction regions to detect monoclonal products in samples from 84 MM patients and 24 patients with polyclonal B-cell disorders. RESULTS: Using two distinct VH consensus primers (FR3 and FR2) in combination with one JH primer, 79% of the MM cases displayed monoclonal products. The percentage of positive cases was increased by amplification of the Vlambda-Jlambda junction regions or kappa deleting element (Kde) rearrangements, using two or five pairs of consensus primers, respectively. After including these targets in the heteroduplex PCR analysis, 93% of MM cases displayed monoclonal products. None of the polyclonal samples analyzed resulted in monoclonal products. Dilution experiments showed that monoclonal rearrangements could be detected with a sensitivity of at least 10^-2 in a background of >30% polyclonal B-cells, the sensitivity increasing up to 10^-3 when the polyclonal background was
Abstract:
An objective structured long examination record (OSLER) is a modification of the long-case clinical examination and is mainly used in medical education. This study aims to obtain nursing students' views of the OSLER compared with the objective structured clinical examination (OSCE), which is used to assess discrete clinical skills. A sample of third-year undergraduate nursing students (n=21) volunteered to participate from a cohort of 230 students. Participants undertook the OSLER under examination conditions. Pre- and post-test questionnaires gathered the students' views on the assessments, and these were analysed from a mainly qualitative perspective. Teachers' and simulated patients' views were also used for data triangulation. The findings indicate that the OSLER ensures a more holistic assessment of a student's clinical skills, particularly essential skills such as communication, and that the OSLER, together with the OSCE, should be used to supplement the assessment of clinical competence in nursing education.
Abstract:
BACKGROUND: The recently developed Context Assessment for Community Health (COACH) tool aims to measure aspects of the local healthcare context perceived to influence knowledge translation in low- and middle-income countries. The tool measures eight dimensions (organizational resources, community engagement, monitoring services for action, sources of knowledge, commitment to work, work culture, leadership, and informal payment) through 49 items. OBJECTIVE: The study aimed to explore the understanding and stability of the COACH tool among health providers in Vietnam. DESIGN: To investigate the response process, think-aloud interviews were undertaken with five community health workers, six nurses and midwives, and five physicians. Identified problems were classified according to Conrad and Blair's taxonomy and grouped according to an estimate of the magnitude of each problem's effect on the response data. Further, the stability of the tool was examined using a test-retest survey among 77 respondents. Reliability was analyzed for items (intraclass correlation coefficient (ICC) and percent agreement) and dimensions (ICC and Bland-Altman plots). RESULTS: In general, the think-aloud interviews revealed that the COACH tool was perceived as clear, well organized, and easy to answer. Most items were understood as intended. However, seven prominent problems in the items were identified, and the content of three dimensions was perceived to be of a sensitive nature. In the test-retest survey, two-thirds of the items and seven of the eight dimensions were found to have an ICC agreement ranging from moderate to substantial (0.5-0.7), demonstrating that the instrument has an acceptable level of stability. CONCLUSIONS: This study provides evidence that the Vietnamese translation of the COACH tool is generally perceived to be clear and easy to understand and has acceptable stability.
There is, however, a need to rephrase and add generic examples to clarify some items and to further review items with low ICC.
Abstract:
Pre-slaughter procedures directly influence meat quality by modulating the physiological state of pigs; increases in body temperature, elevated blood lactate levels, and depletion of glycogen reserves, among other factors, account for the majority of quality losses. The objective of this thesis was to validate tools that indicate stress in pigs, for use on farms and at slaughterhouses. These tools would be applied to animal welfare monitoring and to predicting variation in pork quality at the commercial level. First, the results of the thesis showed that one of the tools developed (a portable lactate analyzer) measures the variation in blood lactate level associated with the physiological state of pigs in the peri-mortem phase and helps explain variation in pork quality at the slaughterhouse, particularly in the ham muscles. Second, the results of animal welfare audits applied from farm to slaughterhouse demonstrated that the quality of the rearing system on the farm of origin and the skills of the truck driver are important criteria affecting the behavioural response of pigs to pre-slaughter handling. These results also demonstrated that housing conditions on the farm (low density and pen enrichment), pig behaviour during the pre-slaughter period (slipping), and handler interventions (use of the electric prod) in the stunning area of the slaughterhouse negatively affect variation in meat quality.
The application of audit protocols in the pork industry also demonstrated that compliance with the animal welfare criteria set by a verification tool is essential: it makes it possible to control pig welfare at each stage of the pre-slaughter period, to produce meat of superior quality, and to reduce losses. Animal welfare audits are therefore a tool that yields highly relevant results for helping to avoid variation in pork quality. Third, infrared thermography proved to be a promising technique for assessing the variation in an animal's body temperature during and after physical stress, particularly when the measurement is taken behind the ears. In conclusion, the tools validated in this thesis represent non-invasive methodologies, potentially complementary to other approaches for assessing physiological state and animal welfare in relation to stress, that can help reduce meat quality losses (for example, when used jointly with blood lactate level and behavioural stress indicators, among others).
Abstract:
The overall objective of the work contained in this paper is to identify background information on the use of load-transfer devices in highway pavement joints and to provide a preliminary assessment of the market potential for use of alternative materials in that capacity. The intent of the authors is to provide a concise compilation of information upon which HITEC personnel may judge whether or not the use of alternative materials for concrete highway pavement joints is worth a more thorough and rigorous evaluation.
Abstract:
Objective: Caffeine has been shown to affect certain areas of cognition, but in executive functioning the research is limited and inconsistent. One reason could be the need for a more sensitive measure to detect the effects of caffeine on executive function. This study used a new non-immersive virtual reality assessment of executive functions known as JEF© (the Jansari Assessment of Executive Function) alongside the 'classic' Stroop Colour-Word task to assess the effects of a normal dose of caffeinated coffee on executive function. Method: Using a double-blind, counterbalanced, within-participants procedure, 43 participants were administered either caffeinated or decaffeinated coffee and completed the JEF© and Stroop tasks, as well as a subjective mood scale and blood pressure measurements, pre- and post-condition on two separate occasions a week apart. JEF© yields measures for eight separate aspects of executive function, in addition to a total average score. Results: Performance was significantly improved on planning, creative thinking, event-, time- and action-based prospective memory, as well as total JEF© score, following caffeinated coffee relative to decaffeinated coffee. The caffeinated beverage significantly decreased reaction times on the Stroop task, but there was no effect on Stroop interference. Conclusion: The results provide further support for the effects of a caffeinated beverage on cognitive functioning. In particular, the study demonstrated the ability of JEF© to detect the effects of caffeine across a number of executive functioning constructs that were not shown by the Stroop task, suggesting that executive functioning improvements resulting from a 'typical' dose of caffeine may only be detected by more real-world, ecologically valid tasks.
Abstract:
In recent years, vibration-based structural damage identification has been the subject of significant research in structural engineering. The basic idea of vibration-based methods is that damage induces changes in mechanical properties that cause anomalies in the dynamic response of the structure; measuring these anomalies allows damage to be localized and its extent estimated. Measured vibration data, such as frequencies and mode shapes, can be used in finite element model updating to adjust structural parameters sensitive to damage (e.g., Young's modulus). The novel aspect of this thesis is the introduction into the objective function of accurate measurements of strain mode shapes, evaluated through FBG sensors. After a review of the relevant literature, the case study, an irregular prestressed concrete beam used for the roofing of industrial structures, is presented. The mathematical model was built through FE models, studying the static and dynamic behaviour of the element. A further analytical model, based on the Ritz method, was developed to investigate the possible interaction between the RC beam and the steel supporting table used for testing. Experimental data, recorded through the simultaneous use of different measurement techniques (optical fibers, accelerometers, LVDTs), were compared with theoretical data, allowing the best model to be identified; for this model, the settings for the updating procedure have been outlined.
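The model updating idea can be sketched with a deliberately tiny stand-in: a 2-DOF shear model replaces the thesis's FE model of the beam, and only natural frequencies enter the objective function (the thesis additionally includes FBG-measured strain mode shapes). All parameter values are illustrative:

```python
import numpy as np

def natural_frequencies_hz(k):
    """Eigenfrequencies of a 2-DOF chain with unit masses and stiffness k
    (toy stand-in for an FE model of the beam)."""
    K = np.array([[2.0 * k, -k], [-k, k]])
    omega2 = np.linalg.eigvalsh(K)  # mass matrix = identity
    return np.sqrt(omega2) / (2.0 * np.pi)

k_true = 0.8e6                               # "damaged" stiffness to recover
f_measured = natural_frequencies_hz(k_true)  # stand-in for experimental data

def objective(k):
    """Sum of squared relative frequency residuals between model and measurement;
    a real updating objective would add mode-shape (and here, strain) terms."""
    return float(np.sum(((natural_frequencies_hz(k) - f_measured) / f_measured) ** 2))

# Update the stiffness parameter by brute-force search over a physical range
grid = np.linspace(0.5e6, 1.2e6, 701)
k_hat = float(grid[np.argmin([objective(k) for k in grid])])
```

In practice a gradient-based or sensitivity-based optimizer replaces the grid search, and the recovered drop in stiffness localizes and quantifies the damage.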
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Introduction - Milk is considered a complete food from a nutritional point of view. Milk can, however, be exposed to various types of contamination, such as mycotoxins: naturally occurring toxic compounds produced by fungi. Several studies of milk samples have reported the presence of aflatoxins B1 (AFB1) and M1 (AFM1); this is of concern given their high incidence in samples intended for human consumption, the proven carcinogenicity of AFB1, and the resistance of these contaminants to digestion, which leaves them available for intestinal absorption. Considering these aspects, the objective of this study was to evaluate the genotoxicity of milk samples contaminated with AFB1 and AFM1, before and after the action of lactic acid bacteria, using Caco-2 human intestinal cells.
Abstract:
Poor hospital indoor air quality (IAQ) may lead to hospital-acquired infections, sick hospital syndrome and various occupational hazards. Air-control measures are crucial for reducing the dissemination of airborne biological particles in hospitals. The objective of this study was to survey bioaerosol quality at different sites in a Portuguese hospital, namely the operating theater (OT), the emergency service (ES) and the surgical ward (SW). Aerobic mesophilic bacterial counts (BCs) and fungal load (FL) were assessed by impaction directly onto tryptic soy agar and malt extract agar supplemented with the antibiotic chloramphenicol (0.05%), respectively, using a MAS-100 air sampler. The ES showed the highest airborne microbial concentrations (BC range 240-736 CFU/m³; FL range 27-933 CFU/m³), exceeding, at several sampling sites, the conformity criteria defined in national legislation [6]. Bacterial concentrations in the SW (BC range 99-495 CFU/m³) and the OT (BC range 12-170 CFU/m³) were under the recommended criteria. While fungal levels were below 1 CFU/m³ in the OT, in the SW (range 1-32 CFU/m³) there was a site with indoor fungal concentrations higher than those detected outdoors. Airborne Gram-positive cocci were the most frequent phenotype (88%) detected in the measured bacterial population in all indoor environments. Staphylococcus (51%) and Micrococcus (37%) were dominant among the bacterial genera identified. Concerning indoor fungal characterization, the prevalent genera were Penicillium (41%) and Aspergillus (24%). Regular monitoring is essential for assessing air-control efficiency and for detecting the irregular introduction of airborne particles via the clothing of visitors and medical staff or carriage by personal and medical materials. Furthermore, microbiological survey data should be used to define specific air quality guidelines for controlled environments in hospital settings.