976 results for computed tomograph (CT)
Abstract:
Purpose: Accurate delineation of the rectum is of high importance in off-line adaptive radiation therapy, since it is a major dose-limiting organ in prostate cancer radiotherapy. Intensity-based deformable image registration (DIR) methods cannot create a correct spatial transformation if there is no correspondence between the template and the target images. Variation in rectal filling, gas, or feces creates a non-correspondence in image intensities that becomes a great obstacle for intensity-based DIR. Methods: In this study the authors designed and implemented a semiautomatic method to create a rectum mask in pelvic computed tomography (CT) images. The method, which includes a DIR based on the demons algorithm, was tested on 13 prostate cancer cases, each comprising two CT scans, for a total of 26 CT scans. Results: The use of the manual segmentation in the planning image and the proposed rectum mask method (RMM) in the daily image leads to an improvement in DIR performance in pelvic CT images, obtaining a mean overlap volume index of 0.89, close to the values obtained using the manual segmentations in both images. Conclusions: Applying the RMM in the daily image and the manual segmentation in the planning image during prostate cancer treatments increases registration performance in the presence of rectal filling, obtaining very good agreement with a physician's manual contours.
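The overlap volume index reported above measures the agreement between the deformed and reference rectum contours; the abstract does not spell out its exact formula, so the sketch below assumes a Dice-style definition over voxel sets (function name and toy masks are illustrative):

```python
def overlap_volume_index(voxels_a, voxels_b):
    """Dice-style overlap between two binary masks given as sets of
    (x, y, z) voxel coordinates: 2|A ∩ B| / (|A| + |B|).
    1.0 means perfect agreement, 0.0 means no overlap (assumed form)."""
    inter = len(voxels_a & voxels_b)
    return 2.0 * inter / (len(voxels_a) + len(voxels_b))

# Toy example: two 2x2x2 cube masks shifted by one voxel along x
a = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
b = {(x + 1, y, z) for x in range(2) for y in range(2) for z in range(2)}
print(overlap_volume_index(a, b))  # 4 shared voxels -> 0.5
```

In a study like this one, the index would be computed between the physician's manual contour and the contour propagated through the deformable registration.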
Abstract:
Accurate detection of liver lesions is of great importance in hepatic surgery planning. Recent studies have shown that the detection rate of liver lesions is significantly higher in gadoxetic acid-enhanced magnetic resonance imaging (Gd–EOB–DTPA-enhanced MRI) than in contrast-enhanced portal-phase computed tomography (CT); however, the latter remains essential because of its high specificity, good performance in estimating liver volumes and better vessel visibility. To characterize liver lesions using both the above image modalities, we propose a multimodal nonrigid registration framework using organ-focused mutual information (OF-MI). This proposal tries to improve mutual information (MI) based registration by adding spatial information, benefiting from the availability of expert liver segmentation in clinical protocols. The incorporation of an additional information channel containing liver segmentation information was studied. A dataset of real clinical images and simulated images was used in the validation process. A Gd–EOB–DTPA-enhanced MRI simulation framework is presented. To evaluate results, warping index errors were calculated for the simulated data, and landmark-based and surface-based errors were calculated for the real data. An improvement of the registration accuracy for OF-MI as compared with MI was found for both simulated and real datasets. Statistical significance of the difference was tested and confirmed in the simulated dataset (p < 0.01).
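Mutual-information registration of this kind maximizes the statistical dependence between the intensities of the two images; OF-MI extends it with an extra channel carrying the liver segmentation. A minimal sketch of plain MI from a joint histogram (binned intensities as flat lists; names are illustrative, not the authors' implementation):

```python
from collections import Counter
from math import log2

def mutual_information(img_a, img_b):
    """I(A;B) = sum_{a,b} p(a,b) * log2( p(a,b) / (p(a) * p(b)) ),
    estimated from the joint histogram of two aligned, binned images."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa = Counter(img_a)
    pb = Counter(img_b)
    # p(a,b)/(p(a)p(b)) simplifies to c*n / (count_a * count_b)
    return sum((c / n) * log2(c * n / (pa[a] * pb[b]))
               for (a, b), c in joint.items())

print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # identical images: 1.0 bit
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # independent images: 0.0
```

During registration one would evaluate this quantity (or OF-MI, with the segmentation as an additional variable) at each candidate transformation and keep the transformation that maximizes it.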
Abstract:
Important physical and biological processes in soil-plant-microbial systems are dominated by the geometry of soil pore space, and a correct model of this geometry is critical for understanding them. We analyze the geometry of soil pore space using X-ray computed tomography (CT) of intact soil columns. We present here some preliminary results of our investigation of Minkowski functionals of parallel sets to characterize soil structure. We also show how the evolution of Minkowski morphological measurements of parallel sets may help to characterize the influence of conventional tillage and a permanent cover crop of resident vegetation on soil structure in a Spanish Mediterranean vineyard.
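For a voxelised solid, the first two Minkowski functionals reduce to simple counts, and a discrete parallel set can be built by repeated dilation. A minimal sketch under those assumptions (6-connectivity; names are illustrative):

```python
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def minkowski_volume_surface(voxels):
    """Volume = number of voxels; surface = number of voxel faces
    exposed to the complement (6-neighbourhood)."""
    volume = len(voxels)
    surface = sum(1 for (x, y, z) in voxels for (dx, dy, dz) in NEIGHBOURS
                  if (x + dx, y + dy, z + dz) not in voxels)
    return volume, surface

def parallel_set(voxels, r):
    """Discrete parallel set: dilate r times with the 6-neighbourhood."""
    s = set(voxels)
    for _ in range(r):
        s = s | {(x + dx, y + dy, z + dz)
                 for (x, y, z) in s for (dx, dy, dz) in NEIGHBOURS}
    return s

cube = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
print(minkowski_volume_surface(cube))  # (8, 24)
print(minkowski_volume_surface(parallel_set(cube, 1)))
```

Tracking how volume, surface, and connectivity evolve as the dilation distance grows is the essence of the parallel-set analysis used to discriminate soil structures.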
Abstract:
Soil structure plays an important role in flow and transport phenomena, and a quantitative characterization of the spatial heterogeneity of the pore space geometry is beneficial for prediction of soil physical properties. Morphological features such as pore-size distribution, pore space volume, or the pore-solid surface can be altered by different soil management practices. Irregularity of these features and their changes can be described using fractal geometry. In this study, we focus primarily on the characterization of soil pore space as a 3D geometrical shape by fractal analysis and on the ability of fractal dimensions to differentiate between two a priori different soil structures. We analyze X-ray computed tomography (CT) images of soil samples from two nearby areas with contrasting management practices. Within these two different soil systems, samples were collected from three depths. Fractal dimensions of the pore-size distributions differed depending on soil use, and averaged values also differed at each depth. Fractal dimensions of the volume and surface of the pore space were lower in the tilled soil than in the natural soil, but their standard deviations were higher in the former as compared to the latter. Also, it was observed that soil use was a factor that had a statistically significant effect on fractal parameters. Fractal parameters provide useful complementary information about changes in soil structure due to changes in soil management.
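The fractal dimensions mentioned above are typically estimated by box counting: cover the set with boxes of shrinking size s and fit the slope of log N(s) versus log(1/s). A minimal 2D sketch of that estimator (pure Python, illustrative names; the study works on 3D CT data):

```python
from math import log

def box_counting_dimension(points, sizes):
    """Least-squares slope of log N(s) against log(1/s), where N(s)
    is the number of s-sized boxes occupied by at least one point."""
    xs, ys = [], []
    for s in sizes:
        occupied = {(int(x // s), int(y // s)) for (x, y) in points}
        xs.append(log(1.0 / s))
        ys.append(log(len(occupied)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A filled square sampled on a grid should give dimension ~2
square = [(i / 32.0, j / 32.0) for i in range(32) for j in range(32)]
print(box_counting_dimension(square, [0.5, 0.25, 0.125]))  # ~2.0
```

Applied to a segmented pore space, a lower slope indicates a simpler, less space-filling pore geometry, which is how tilled and natural soils can be compared.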
Abstract:
Recent advances in non-destructive imaging techniques, such as X-ray computed tomography (CT), make it possible to analyse pore space features from the direct visualisation of soil structures. A quantitative characterisation of the three-dimensional solid-pore architecture is important for understanding soil mechanics, as it relates to the control of biological, chemical, and physical processes across scales. This analysis technique therefore offers an opportunity to better interpret soil strata, as new and relevant information can be obtained. In this work, we propose an approach to automatically identify the pore structure of a set of 200 2D images that represent slices of an original 3D CT image of a soil sample, which can be accomplished through non-linear enhancement of the pixel grey levels and an image segmentation based on a PFCM (Possibilistic Fuzzy C-Means) algorithm. Once the solids and pore spaces have been identified, the set of 200 2D images is then used to reconstruct an approximation of the soil sample by projecting only the pore spaces. This reconstruction shows the structure of the soil and its pores, which become more bounded, less bounded, or unbounded with changes in depth. If the soil sample image quality is sufficiently favourable in terms of contrast, noise and sharpness, the pore identification is less complicated, and the PFCM clustering algorithm can be used without additional processing; otherwise, images require pre-processing before using this algorithm. Promising results were obtained with four soil samples: the first was used to show the validity of the algorithm and the other three were used to demonstrate the robustness of our proposal. The methodology we present here can better detect the solid and pore spaces in CT images, enabling the generation of better 2D-3D representations of pore structures from segmented 2D images.
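PFCM combines fuzzy and possibilistic memberships; as a simplified, hedged illustration of the underlying clustering idea, here is classical fuzzy c-means on scalar grey levels (not the possibilistic variant used in the paper; names and parameters are illustrative):

```python
def fuzzy_c_means(values, c=2, m=2.0, iters=50):
    """Classical fuzzy c-means on scalar grey levels.
    Returns cluster centres and the membership matrix u[i][k]."""
    centres = [min(values), max(values)] if c == 2 else list(values[:c])
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # update memberships from distances to the current centres
        for i, v in enumerate(values):
            for k in range(c):
                d_k = abs(v - centres[k]) or 1e-12
                u[i][k] = 1.0 / sum(
                    (d_k / (abs(v - centres[j]) or 1e-12)) ** (2.0 / (m - 1.0))
                    for j in range(c))
        # update centres as membership-weighted means
        for k in range(c):
            num = sum((u[i][k] ** m) * v for i, v in enumerate(values))
            den = sum(u[i][k] ** m for i in range(len(values)))
            centres[k] = num / den
    return centres, u

# Grey levels near 20 (pores) and 200 (solids)
centres, u = fuzzy_c_means([18, 20, 22, 198, 200, 202])
print(round(centres[0]), round(centres[1]))  # 20 200
```

In the segmentation described above, each pixel would be assigned to the cluster (pore or solid) with the highest membership.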
Abstract:
The study of soil structure is of vital importance in different fields of science and technology. 
Soil structure controls important physical and biological processes in soil-plant-microbial systems. Those processes are dominated by the geometry of soil pore structure, and a quantitative characterization of the spatial heterogeneity of the pore space geometry is beneficial for prediction of soil physical properties. The technology of X-ray computed tomography (CT) allows us to obtain three-dimensional digital images of the inside of a soil sample providing information on soil pore geometry and enabling the study of the pores without disturbing the samples. Fractal geometry and mathematical morphological techniques have been proposed as powerful tools to analyze and quantify geometrical features. Fractal dimensions of pore space, pore-solid interface and pore size distribution are indicators of soil structure complexity. Minkowski functionals and morphological functions provide means to measure fundamental geometrical features of three-dimensional geometrical objects, that is, volume, boundary surface, mean boundary surface curvature, and connectivity. Soil features such as pore-size distribution, pore space volume or pore-solid surface can be altered by different soil management practices. In this work we analyze CT images of soil samples from two nearby areas with contrasting management practices. We performed a set of geometrical measures, some of them from mathematical morphology, to assess and quantify any possible difference that tillage may have caused on the soil.
Abstract:
Computed tomography (CT) is the reference imaging modality for the study of lung diseases and pulmonary vasculature. Lung vessel segmentation has been widely explored by the biomedical image processing community; however, differentiation of arterial from venous irrigations is still an open problem. Indeed, automatic separation of the arterial and venous trees has been considered in recent years one of the main future challenges in the field. Artery-vein (AV) segmentation would be useful in different medical scenarios and multiple pulmonary diseases or pathological states, allowing the study of the arterial and venous irrigations separately. Features such as the density, geometry, topology, and size of blood vessels could be analyzed in diseases that involve remodeling of the pulmonary vasculature, even making possible the discovery of new specific biomarkers that remain hidden today. Differentiation between arteries and veins could also improve methods for processing the various pulmonary structures. Nevertheless, despite its evident usefulness, the study of the effect of disease on the arterial and venous trees has been unfeasible until now. The extreme complexity of the pulmonary vascular trees makes a manual separation of both structures unfeasible in a realistic time, further motivating the design of automatic or semiautomatic tools for the task. However, the lack of correctly segmented and labeled cases severely limits the development of AV separation systems, in which reference images are necessary both to train and to validate the algorithms. 
For that reason, the design of synthetic CT images of the lung could overcome these difficulties by providing a database of pseudorealistic cases in a constrained and controlled scenario where each part of the image (including arteries and veins) is unequivocally differentiated. In this Ph.D. thesis we address both of these interrelated problems. First, the design of a complete framework to automatically generate computational CT phantoms of the human lung is described. Starting from biological and image-based knowledge about the topology and relationships between structures, the system is able to generate synthetic pulmonary arteries, veins, and airways using iterative growth methods, which are then merged into a final simulated lung with realistic features. These synthetic cases, together with labeled real CT datasets, have been used as references for the development of a fully automatic pulmonary AV segmentation/separation method. The approach comprises a vessel extraction stage using scale-space particles and a posterior artery-vein classification of those particles using Graph-Cuts (GC), based on arterial/venous similarity scores obtained with a machine learning (ML) pre-classification step and on particle connectivity information. Validation of the pulmonary phantoms through visual examination and quantitative measurements of intensity distributions, dispersion of structures, and relationships between the pulmonary air and blood flow systems shows good correspondence between real and synthetic lungs. The evaluation of the AV segmentation algorithm, based on different strategies to assess the accuracy of vessel particle classification, reveals accurate differentiation between arteries and veins in both real and synthetic cases, opening a wide range of possibilities in the clinical study of cardiopulmonary diseases and in the development of methodological approaches and new algorithms for the analysis of pulmonary images.
Abstract:
Nuclear medicine is one of the main medical imaging modalities used in healthcare centers today; its great advantage is the ability to analyze the patient's metabolic behavior, translating into earlier diagnoses. However, quantification in nuclear medicine is hindered by several factors, among them attenuation correction, scatter, reconstruction algorithms, and the models assumed. In this context, the main objective of this project was to improve the accuracy and precision of PET/CT image analysis via realistic and well-controlled processes. To this end, a modular framework was proposed, composed of a set of consecutively linked steps: simulation of 3D anthropomorphic phantoms; generation of realistic PET/CT projections using the GATE platform (with Monte Carlo simulation); 3D image reconstruction; filtering (using the Anscombe/Wiener filter to reduce the Poisson noise characteristic of this type of image); and segmentation (based on fuzzy connectedness theory). Once the region of interest (ROI) was defined, the input and resulting activity curves required for the compartmental dynamics analysis were produced, from which the quantification of the metabolism of the organ or structure under study was obtained. Finally, real PET/CT images provided by the Heart Institute (InCor) of the Hospital das Clínicas of the Faculty of Medicine of the University of São Paulo (HC-FMUSP) were analyzed in a similar manner. It was concluded that the three-dimensional filtering step using the Anscombe/Wiener filter was relevant and had a high impact on the metabolic quantification process and on other important steps of the project as a whole.
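The Anscombe step named above stabilises Poisson noise so that a Gaussian denoiser such as the Wiener filter can be applied; a minimal sketch of the forward and simple algebraic inverse transforms (the unbiased inverse used in practice adds a small correction):

```python
from math import sqrt

def anscombe(x):
    """Forward Anscombe transform: Poisson counts -> approximately
    Gaussian data with unit variance (accurate for counts above ~10)."""
    return 2.0 * sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Plain algebraic inverse of the forward transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Pipeline sketch: stabilise, denoise in the transformed domain, invert
counts = [4.0, 9.0, 25.0]
stabilised = [anscombe(c) for c in counts]   # Wiener filtering would go here
restored = [inverse_anscombe(v) for v in stabilised]
print(restored)  # round-trips back to the original counts
```

In the project described above, the Wiener filter operates between the forward and inverse transforms, where the noise is approximately Gaussian with known variance.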
Abstract:
The aim of analogue model experiments in geology is to simulate structures in nature under specific imposed boundary conditions using materials whose rheological properties are similar to those of rocks in nature. In the late 1980s, X-ray computed tomography (CT) was first applied to the analysis of such models. In early studies only a limited number of cross-sectional slices could be recorded because of the time involved in CT data acquisition, the long cooling periods for the X-ray source and computational capacity. Technological improvements presently allow an almost unlimited number of closely spaced serial cross-sections to be acquired and calculated. Computer visualization software allows a full 3D analysis of every recorded stage. Such analyses are especially valuable when trying to understand complex geological structures, commonly with lateral changes in 3D geometry. Periodic acquisition of volumetric data sets in the course of the experiment makes it possible to carry out a 4D analysis of the model, i.e. 3D analysis through time. Examples are shown of 4D analysis of analogue models that tested the influence of lateral rheological changes on the structures obtained in contractional and extensional settings.
Abstract:
Acute epiploic appendagitis is an uncommon cause of abdominal pain. It is caused by torsion of an epiploic appendage or spontaneous venous thrombosis of a draining appendageal vein. The diagnosis of this condition relies primarily on cross-sectional imaging and is made most often after computed tomography (CT). Clinically, it is most often mistaken for acute diverticulitis. Approximately 7.1% of patients investigated to exclude sigmoid diverticulitis have imaging findings of primary epiploic appendagitis.
Abstract:
Objective. To critically evaluate the current literature in an effort to establish the current role of radiologic imaging, advances in computed tomography (CT), and standard film radiography in the diagnosis and characterization of urinary tract calculi. Conclusion. CT has a valuable role when utilized prudently during surveillance of patients following endourological therapy. In this paper, we outline the basic principles relating to the effects of exposure to ionizing radiation as a result of CT scanning. We discuss current developments in low-dose CT technology, which have resulted in significant reductions in CT radiation doses (to approximately one-third of what they were a decade ago) while preserving image quality. Finally, we discuss an important recent development now commercially available on the latest generation of CT scanners, namely dual-energy imaging, which is showing promise in urinary tract imaging as a means of characterizing the composition of urinary tract calculi.
Abstract:
Patient awareness and concern regarding the potential health risks from ionizing radiation have peaked recently (Coakley et al., 2011) following widespread press and media coverage of the projected cancer risks from the increasing use of computed tomography (CT) (Berrington et al., 2007). The typical young and educated patient with inflammatory bowel disease (IBD) may in particular be conscious of his/her exposure to ionizing radiation as a result of diagnostic imaging. Cumulative effective doses (CEDs) in patients with IBD have been reported as being high and are rising, primarily due to the more widespread and repeated use of CT (Desmond et al., 2008). Radiologists, technologists, and referring physicians have a responsibility firstly to counsel their patients accurately regarding the actual risks of ionizing radiation exposure; secondly to limit the use of imaging modalities that involve ionizing radiation to clinical situations where they are likely to change management; and thirdly to ensure that a diagnostic-quality imaging examination is acquired with the lowest possible radiation exposure. In this paper, we summarize the available evidence related to radiation exposure and risk, report advances in low-dose CT technology, and examine the role of alternative imaging modalities such as ultrasonography or magnetic resonance imaging, which avoid radiation exposure.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
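Component (3) above is often summarised by sensitivity and specificity over a set of detection trials; a minimal sketch of that bookkeeping (illustrative names; boolean truth/decision vectors):

```python
def sensitivity_specificity(truth, decisions):
    """truth[i]: lesion actually present in trial i;
    decisions[i]: observer reported a lesion in trial i.
    Returns (sensitivity, specificity)."""
    tp = sum(1 for t, d in zip(truth, decisions) if t and d)
    tn = sum(1 for t, d in zip(truth, decisions) if not t and not d)
    fn = sum(1 for t, d in zip(truth, decisions) if t and not d)
    fp = sum(1 for t, d in zip(truth, decisions) if not t and d)
    return tp / (tp + fn), tn / (tn + fp)

truth     = [True, True, True, False, False, False]
decisions = [True, True, False, False, False, True]
print(sensitivity_specificity(truth, decisions))  # (2/3, 2/3)
```

The same tallies apply whether the observer is a radiologist reading images or a mathematical detection algorithm.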
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
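For reference, the CNR baseline used in that comparison is typically computed as below (ROIs as flat pixel lists; function and variable names are illustrative):

```python
from statistics import mean, pstdev

def contrast_to_noise_ratio(lesion_roi, background_roi):
    """CNR = |mean(lesion) - mean(background)| / std(background).
    Simple to measure, but (per the study) it correlates poorly with
    human detection performance across reconstruction algorithms."""
    return abs(mean(lesion_roi) - mean(background_roi)) / pstdev(background_roi)

lesion = [120.0, 121.0, 119.0, 120.0]
background = [100.0, 102.0, 98.0, 100.0]
print(contrast_to_noise_ratio(lesion, background))  # 20 / sqrt(2) ≈ 14.14
```

Unlike CNR, the matched-filter and channelized Hotelling observers weight the signal and noise by their spatial frequency content, which is why they track human performance more closely.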
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
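The image-subtraction technique mentioned above isolates quantum noise because the phantom structure is identical in repeated scans; a minimal sketch (flat pixel lists; assumes independent, equal-variance noise in the two acquisitions):

```python
from statistics import pstdev

def subtraction_noise(scan1, scan2):
    """Noise standard deviation of a single image estimated from two
    repeated scans: the fixed structure cancels in scan1 - scan2, and
    std(diff) = sqrt(2) * sigma for independent, equal-variance noise."""
    diff = [a - b for a, b in zip(scan1, scan2)]
    return pstdev(diff) / 2 ** 0.5

# Toy repeated scans of the same structure with opposite noise patterns
scan1 = [10.0, 12.0, 10.0, 12.0]
scan2 = [12.0, 10.0, 12.0, 10.0]
print(subtraction_noise(scan1, scan2))  # sqrt(2) ≈ 1.414
```

Comparing this estimate between FBP and SAFIRE images of the same phantom gives the background-dependent noise reductions reported above.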
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise ranged from 20% higher to 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
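For reference, the standard rectangular-ROI NPS estimate, which the dissertation's irregular-ROI method generalizes, can be sketched as below (illustrative only; normalization conventions vary):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size_mm):
    """Rectangular-ROI noise power spectrum estimate.

    noise_rois: stack of noise-only ROIs, shape (n_rois, ny, nx)
    Returns the ensemble-averaged 2D NPS in units of value^2 * mm^2,
    normalized so that sum(NPS) * dfx * dfy recovers the noise variance.
    """
    rois = np.array(noise_rois, dtype=float)  # copy so we can detrend in place
    n, ny, nx = rois.shape
    rois -= rois.mean(axis=(1, 2), keepdims=True)  # remove each ROI's mean
    dft = np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))
    return (pixel_size_mm ** 2 / (nx * ny)) * (np.abs(dft) ** 2).mean(axis=0)
```

A useful check is Parseval's theorem: integrating the NPS over frequency should return the noise variance.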
To move beyond assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures were optimized with a genetic algorithm to match the texture in the liver regions of actual patient CT images, using the so-called "Clustered Lumpy Background" texture synthesis framework to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in the uniform phantom than in the textured phantoms.
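The channelized Hotelling observer projects each ROI through a small set of channels and applies the Hotelling (linear) discriminant in channel space. A minimal single-slice sketch, assuming channels are supplied externally (e.g., Gabor or Laguerre-Gauss profiles; not the dissertation's multi-slice implementation):

```python
import numpy as np

def cho_detectability(present_rois, absent_rois, channels):
    """Channelized Hotelling observer detectability index.

    present_rois, absent_rois: stacks of signal-present / signal-absent ROIs
    channels: stack of channel templates, shape (n_channels, ny, nx)
    Returns d' = sqrt(dv.T K^-1 dv) in channel space.
    """
    u = channels.reshape(channels.shape[0], -1)             # (n_ch, n_pix)
    v1 = present_rois.reshape(len(present_rois), -1) @ u.T  # channel responses
    v0 = absent_rois.reshape(len(absent_rois), -1) @ u.T
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    k = 0.5 * (np.cov(v1.T) + np.cov(v0.T))                 # intra-class covariance
    return float(np.sqrt(dv @ np.linalg.solve(k, dv)))
```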
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
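One illustrative analytical form for such a lesion model (an ellipsoid with a sigmoid edge profile; the parameterization here is an assumption, chosen only to show how size, shape, contrast, and edge profile enter a single equation) can be voxelized as:

```python
import numpy as np

def lesion_model(shape, center, radii, contrast_hu, edge_width):
    """Voxelize an analytical lesion: ellipsoid with a sigmoid edge.

    shape:       output volume dimensions in voxels
    center:      lesion center in voxel coordinates (size/location)
    radii:       per-axis semi-axes in voxels (size and shape)
    contrast_hu: lesion contrast in HU
    edge_width:  edge sharpness (smaller -> sharper boundary)
    """
    grids = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    # Normalized radial distance: equals 1.0 on the ellipsoid surface
    r = np.sqrt(sum(((g - c) / a) ** 2 for g, c, a in zip(grids, center, radii)))
    # Sigmoid edge: full contrast inside, rolling off smoothly at the boundary
    return contrast_hu / (1.0 + np.exp((r - 1.0) / edge_width))
```

Adding such a voxelized lesion to a patient volume yields the "hybrid" image, with ground-truth morphology and location known exactly.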
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Resumo:
Purpose
The objective of our study was to test a new approach to approximating organ dose by using the effective energy of the combined 80 kV/140 kV beam used in fast-kV-switch dual-energy (DE) computed tomography (CT). The two primary aims of the study were, first, to validate experimentally the dose equivalency between MOSFET detectors and an ion chamber (as a gold standard) in a fast-kV-switch DE environment, and, second, to estimate the effective dose (ED) of DECT scans using MOSFET detectors and an anthropomorphic phantom.
Materials and Methods
A GE Discovery 750 CT scanner was employed using a fast-kV switch abdomen/pelvis protocol alternating between 80 kV and 140 kV. The specific aims of our study were to (1) Characterize the effective energy of the dual energy environment; (2) Estimate the f-factor for soft tissue; (3) Calibrate the MOSFET detectors using a beam with effective energy equal to the combined DE environment; (4) Validate our calibration by using MOSFET detectors and ion chamber to measure dose at the center of a CTDI body phantom; (5) Measure ED for an abdomen/pelvis scan using an anthropomorphic phantom and applying ICRP 103 tissue weighting factors; and (6) Estimate ED using AAPM Dose Length Product (DLP) method. The effective energy of the combined beam was calculated by measuring dose with an ion chamber under varying thicknesses of aluminum to determine half-value layer (HVL).
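The HVL step in (1) can be sketched as follows: fit the dose-vs-aluminum-thickness measurements in the log domain and interpolate the thickness at which dose falls to half its unfiltered value (a minimal illustrative sketch; the effective energy is then read off from published attenuation coefficients of aluminum, a lookup omitted here):

```python
import numpy as np

def half_value_layer(thickness_mm, dose):
    """HVL from dose readings under increasing aluminum filtration.

    Log-linear interpolation to the Al thickness where dose drops to
    half of the unfiltered (zero-thickness) reading.
    """
    dose = np.asarray(dose, dtype=float)
    log_d = np.log(dose / dose[0])  # 0 at zero thickness, decreasing
    # np.interp needs increasing x-coordinates, so reverse both arrays
    return float(np.interp(np.log(0.5), log_d[::-1],
                           np.asarray(thickness_mm, dtype=float)[::-1]))
```

For a purely exponential beam (attenuation coefficient mu), this recovers HVL = ln(2)/mu exactly.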
Results
The effective energy of the combined dual-energy beam was found to be 42.8 keV. After calibration, tissue dose at the center of the CTDI body phantom was measured as 1.71 ± 0.01 cGy using the ion chamber, and as 1.73 ± 0.04 cGy and 1.69 ± 0.09 cGy using two separate MOSFET detectors, corresponding to differences of -0.93% and 1.40%, respectively, between ion chamber and MOSFET. ED from the dual-energy scan was calculated as 16.49 ± 0.04 mSv by the MOSFET method and 14.62 mSv by the DLP method.