233 results for PHANTOMS


Relevance: 10.00%

Abstract:

OBJECTIVES The aim of this phantom study was to minimize the radiation dose by finding the best combination of low tube current and low voltage that would result in accurate volume measurements when compared to standard CT imaging, without significantly decreasing the sensitivity of detecting lung nodules both with and without the assistance of CAD. METHODS An anthropomorphic chest phantom containing artificial solid and ground-glass nodules (GGNs, 5-12 mm) was examined with a 64-row multi-detector CT scanner using three tube currents of 100, 50 and 25 mAs in combination with three tube voltages of 120, 100 and 80 kVp. This resulted in eight different protocols that were then compared to the standard CT protocol (100 mAs/120 kVp). For each protocol, at least 127 different nodules were scanned in 21-25 phantoms. The nodules were analyzed in two separate sessions by three independent, blinded radiologists and computer-aided detection (CAD) software. RESULTS The mean sensitivity of the radiologists for identifying solid lung nodules on standard CT was 89.7% ± 4.9%. The sensitivity was not significantly impaired when the tube current and voltage were lowered at the same time, except at the lowest exposure level of 25 mAs/80 kVp [80.6% ± 4.3% (p = 0.031)]. Compared to the standard CT, the sensitivity for detecting GGNs was significantly lower at all dose levels when the voltage was 80 kVp; this result was independent of the tube current. CAD significantly increased the radiologists' sensitivity for detecting solid nodules at all dose levels (by 5-11%). No significant volume measurement errors (VMEs) were documented for the radiologists or the CAD software at any dose level. CONCLUSIONS Our results suggest that a CT protocol with 25 mAs and 100 kVp is optimal for detecting solid and ground-glass nodules in lung cancer screening. The use of CAD software is highly recommended at all dose levels.
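As an illustration only (not part of the study), the nine tube current/voltage combinations and a rough relative-dose estimate can be enumerated as follows; the assumption that dose scales roughly with mAs·kVp² is a common rule of thumb, not a value reported by the authors:

```python
from itertools import product

currents = [100, 50, 25]   # tube currents (mAs) used in the study
voltages = [120, 100, 80]  # tube voltages (kVp) used in the study
reference = (100, 120)     # standard protocol: 100 mAs / 120 kVp

for mas, kvp in product(currents, voltages):
    # Rule-of-thumb assumption: dose ~ mAs * kVp^2 (not taken from the study).
    rel_dose = (mas * kvp**2) / (reference[0] * reference[1]**2)
    tag = "standard" if (mas, kvp) == reference else "reduced-dose"
    print(f"{mas:>3} mAs / {kvp:>3} kVp -> ~{rel_dose:.0%} of standard dose ({tag})")
```

The eight non-standard combinations are the protocols compared against the 100 mAs/120 kVp reference.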

Relevance: 10.00%

Abstract:

OBJECTIVE The purpose of this study was to investigate the feasibility of microdose CT using a comparable dose as for conventional chest radiographs in two planes including dual-energy subtraction for lung nodule assessment. MATERIALS AND METHODS We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with a pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. RESULTS A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1% ± 2.2% versus 85.6% ± 5.6% (p = 0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7% ± 8.1% (with bone suppression, 46.1% ± 8%; p = 0.94); for microdose CT, nodule sensitivity was 83.6% ± 9% without MIP (with additional MIP, 92.5% ± 6%; p < 10⁻³). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p < 10⁻⁶). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. CONCLUSION Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities. These preliminary results indicate that microdose CT has the potential to replace conventional chest radiography for lung nodule detection.

Relevance: 10.00%

Abstract:

Purpose: Traditionally, the proximal isovelocity surface area (PISA) method is based on the assumption of a single hemisphere (hemispheric PISA), but this technique has not been validated for the quantification of mitral regurgitation (MR) with multiple jets. Methods: A left heart simulator was actuated by a pulsatile pump at various stroke amplitudes. The regurgitant volume (Rvol) passing through mitral valve phantoms with single and double regurgitant orifices of varying size and interspace was quantified by a flowmeter as the reference technique. Color Doppler 3-D full volumes were obtained, and Rvol was derived from 2-D PISA surfaces on the basis of a hemispheric assumption and of hemicylindric assumptions with one base (partial hemicylindric PISA) or two bases (total hemicylindric PISA). Results: 72 regurgitant volumes (Rvol range: 8 to 76 ml/beat) were obtained. Hemispheric PISA Rvol correlated well with the reference Rvol for a single orifice (R² = 0.97; bias -2.7 ± 3.2 ml), but less well when more than one orifice was present (R² = 0.89). When two PISAs fused, adding two hemispheric PISAs overestimated Rvol (bias 9.1 ± 12.2 ml, Fig. 1), whereas a single hemispheric PISA underestimated Rvol (bias -12.4 ± 4.9 ml). With an integrated approach (hemispheric for a single orifice, total hemicylindric for two non-fused PISAs, and partial hemicylindric for two fused PISAs), the correlation was R² = 0.95 with a bias of -1.6 ± 5.6 ml (Fig. 2). In the ROC analysis, the cutoff to detect at least moderate-to-severe Rvol (≥ 45 ml) was 42 ml (AUC 0.99, sensitivity 100%, specificity 93%). Conclusions: In MR with two regurgitant jets, the 2-D hemicylindric assumption of the PISA offers better quantification of Rvol than the hemispheric assumption. Quantification of MR using 2-D PISA requires an integrated approach that considers the number of regurgitant orifices and the fusion of the PISAs.
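For reference, under the hemispheric assumption the instantaneous regurgitant flow rate is the PISA surface area multiplied by the aliasing velocity. One plausible reading of the hemicylindric surfaces used here (a half-cylinder of radius r and length w spanning the two orifices, with one or two half-disc bases) gives, as a hedged sketch rather than the authors' exact definition:

\[
Q = S \, v_{\text{alias}}, \qquad
S_{\text{hemisphere}} = 2\pi r^{2}, \qquad
S_{\text{hemicyl., partial}} = \pi r w + \tfrac{1}{2}\pi r^{2}, \qquad
S_{\text{hemicyl., total}} = \pi r w + \pi r^{2},
\]

with the regurgitant volume obtained by integrating \(Q\) over the regurgitant phase.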

Relevance: 10.00%

Abstract:

Aberrations of the acoustic wave front, caused by spatial variations of the speed of sound, are a main limiting factor for the diagnostic power of medical ultrasound imaging. If not accounted for, aberrations result in low resolution and an increased side lobe level, overall reducing contrast in deep tissue imaging. Various techniques have been proposed for quantifying aberrations by analysing the arrival time of coherent echoes from so-called guide stars or beacons. In situations where a guide star is missing, aperture-based techniques may give ambiguous results. Moreover, they are conceptually focused on aberrators that can be approximated as a phase screen in front of the probe. We propose a novel technique in which the effect of aberration is detected in the reconstructed image as opposed to the aperture data. The variation of the local echo phase when changing the transmit beam steering angle directly reflects the varying arrival time of the transmit wave front. This allows the angle-dependent aberration delay to be sensed in a spatially resolved way, and thus aberration correction for a spatially distributed volume aberrator. In phantoms containing a cylindrical aberrator, we achieved location-independent diffraction-limited resolution as well as accurate display of echo location, based on a spatially resolved reconstruction of the speed of sound. First successful volunteer results confirm the clinical potential of the proposed technique.
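The underlying relation (a standard identity stated here for clarity, not quoted from the paper) between the local echo-phase change and the aberration delay of the steered transmit wave front is

\[
\Delta\varphi(\theta,\mathbf{x}) = 2\pi f_{c}\,\Delta\tau(\theta,\mathbf{x}),
\]

so tracking how the phase of the reconstructed echo at position \(\mathbf{x}\) varies with the transmit steering angle \(\theta\) yields the angle-dependent arrival-time error \(\Delta\tau\) that drives the aberration correction and the spatially resolved speed-of-sound estimate (here \(f_{c}\) denotes the transmit center frequency).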

Relevance: 10.00%

Abstract:

OBJECTIVE The aim of this study was to directly compare the metal artifact reduction (MAR) of virtual monoenergetic extrapolations (VMEs) from dual-energy computed tomography (CT) with iterative MAR (iMAR) from single-energy CT in pelvic CT with hip prostheses. MATERIALS AND METHODS A human pelvis phantom with unilateral or bilateral metal inserts of different materials (steel and titanium) was scanned with third-generation dual-source CT using single-energy (120 kVp) and dual-energy (100/150 kVp) acquisitions at similar radiation dose (CT dose index, 7.15 mGy). Three image series were reconstructed for each phantom configuration: uncorrected, VME, and iMAR. Two independent, blinded radiologists assessed image quality quantitatively (noise and attenuation) and subjectively (5-point Likert scale). Intraclass correlation coefficients (ICCs) and Cohen κ were calculated to evaluate interreader agreement. Repeated measures analysis of variance and the Friedman test were used to compare quantitative and qualitative image quality. Post hoc testing was performed using a Bonferroni-corrected P < 0.017. RESULTS Agreement between readers was high for noise (all, ICC ≥ 0.975) and attenuation (all, ICC ≥ 0.986); agreement for the qualitative assessment was good to perfect (all, κ ≥ 0.678). Compared with uncorrected images, VME showed significant noise reduction only in the phantom with titanium (P < 0.017), whereas iMAR showed significantly lower noise in all regions and phantom configurations (all, P < 0.017). In all phantom configurations, deviations of attenuation were smallest in images reconstructed with iMAR. For VME, there was a tendency toward higher subjective image quality in phantoms with titanium compared with uncorrected images, however without reaching statistical significance (P > 0.017). Subjective image quality was rated significantly higher for images reconstructed with iMAR than for uncorrected images in all phantom configurations (all, P < 0.017). CONCLUSIONS Iterative MAR showed better MAR capabilities than VME in settings with bilateral hip prostheses or a unilateral steel prosthesis. In settings with a unilateral titanium hip prosthesis, VME and iMAR performed similarly well.
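The threshold quoted above follows directly from a Bonferroni correction over the three pairwise comparisons between the reconstruction types (uncorrected, VME, iMAR):

\[
\alpha_{\text{corrected}} = \frac{\alpha}{m} = \frac{0.05}{3} \approx 0.017 .
\]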

Relevance: 10.00%

Abstract:

It has recently been reported in this journal that local fat depots produce a sizable frequency-dependent signal attenuation in magnetic resonance spectroscopy (MRS) of the brain. If of a general nature, this effect would call into question the use of internal reference signals for the quantification of MRS and the quantitative use of MRS as a whole. Here, we attempted to verify this effect and to pinpoint its potential causes by acquiring data with various acquisition settings, including two field strengths, two MR scanners from different vendors, different water suppression sequences, RF coils, localization sequences, echo times, and lipid/metabolite phantoms. With none of the settings tested could the reported effect be reproduced, and it is concluded that water referencing and quantitative MRS per se remain valid tools under common acquisition conditions.

Relevance: 10.00%

Abstract:

PURPOSE To reliably determine the amplitude of the transmit radiofrequency (B1+) field in moving organs like the liver and heart, where most current techniques are usually not feasible. METHODS A B1+ field measurement based on the Bloch-Siegert shift induced by a pair of Fermi pulses in a double-triggered modified Point RESolved Spectroscopy (PRESS) sequence with motion-compensated crusher gradients was developed. The performance of the sequence was tested in moving phantoms and in the muscle, liver, and heart of six healthy volunteers each, using different arrangements of transmit/receive coils. RESULTS B1+ determination in a moving phantom was almost independent of the type and amplitude of the motion and agreed well with theory. In vivo, repeated measurements led to very small coefficients of variance (CV) if the amplitude of the Fermi pulses was chosen above an appropriate level (CV in muscle 0.6%, liver 1.6%, and heart 2.3% with moderate Fermi pulse amplitude, and 1.2% with stronger Fermi pulses). CONCLUSION The proposed sequence provides a very robust determination of B1+ in a single voxel even under challenging conditions (transmission with a surface coil or measurements in the heart without breath-hold).
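For context, the Bloch-Siegert method (stated here in its commonly used form, not copied from this paper) maps B1+ from the extra phase accrued during an off-resonant pulse:

\[
\omega_{\text{BS}}(t) \approx \frac{\bigl(\gamma B_{1}^{+}(t)\bigr)^{2}}{2\,\Delta\omega_{\text{RF}}}, \qquad
\varphi_{\text{BS}} = \int \omega_{\text{BS}}(t)\,dt = K_{\text{BS}}\,\bigl(B_{1,\text{peak}}^{+}\bigr)^{2},
\]

so that, with the Fermi pulses applied alternately at \(\pm\Delta\omega_{\text{RF}}\), the peak amplitude follows from the phase difference between the two acquisitions, \(B_{1,\text{peak}}^{+} = \sqrt{\Delta\varphi / (2K_{\text{BS}})}\), where \(K_{\text{BS}}\) depends only on the known pulse shape and offset frequency.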

Relevance: 10.00%

Abstract:

Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work focuses on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique is used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed in different processing units. In this way, both multicore and multiple many-core processing units can be exploited efficiently. Tests have been conducted with probability models that take into account noncollinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: The new technique provides superior image quality in terms of signal-to-noise ratio compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control over the trade-off between speed and image quality during the reconstruction.
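For reference, the list-mode OSEM update that such an algorithm parallelizes, written here in its generic textbook form rather than as the authors' exact implementation, is, for the events \(e\) of subset \(S_{n}\),

\[
\lambda_{j}^{(n+1)} = \frac{\lambda_{j}^{(n)}}{s_{j}} \sum_{e \in S_{n}} \frac{a_{ej}}{\sum_{k} a_{ek}\,\lambda_{k}^{(n)}},
\]

where \(a_{ej}\) is the system response probability linking voxel \(j\) to the event's LOR (evaluated on the fly only for voxels inside the region of response) and \(s_{j}\) is the sensitivity of voxel \(j\).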

Relevance: 10.00%

Abstract:

Contact Spatially Resolved Spectroscopy (SRS) measurements with a fiber-optic probe were employed for nondestructive assessment and monitoring of Braeburn apples during shelf-life storage. The SRS measurements and the estimation of optical properties were calibrated and validated by means of liquid optical phantoms with known optical properties and a metamodeling method. The acquired optical properties (absorption and reduced scattering coefficients) of the apples during shelf-life storage were found to provide useful information for the nondestructive evaluation of apple quality attributes (firmness and soluble solids content, SSC) and for monitoring changes in their microstructure and chemical composition. On-line SRS measurement was achieved by mounting the SRS probe over a conveyor system.
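For context, the two estimated quantities are linked to light propagation in turbid media through the effective attenuation coefficient of diffusion theory (a standard relation, not specific to this study):

\[
\mu_{\text{eff}} = \sqrt{3\,\mu_{a}\,(\mu_{a} + \mu_{s}')},
\]

where \(\mu_{a}\) is the absorption coefficient and \(\mu_{s}'\) the reduced scattering coefficient recovered from the spatially resolved reflectance profiles.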

Relevance: 10.00%

Abstract:

For a long time there has been considerable interest in automating all kinds of tasks in which human intervention is essential for successful completion. This interest is even greater when the tasks are perfectly reproducible and either require extensive training or are very time consuming. This project is aimed at finding methods to automate the annotation of medical images. Specifically, it focuses on the delineation of regions of interest (ROIs) in PET images, which are frequently used together with CT images in oncology to delineate volumes affected by cancer. The goal is to help hospitals organize and structure their patients' images and to relate them to the clinical notes; we call these the image annotation process and the integration with clinical-note annotation, respectively. This document describes the initial objectives, the steps taken to achieve them, and the difficulties encountered along the way. Of all the techniques available in the literature, four segmentation techniques were chosen: two of them have been tested on real patients and, according to the literature, the other two only on phantoms. In our case, the tests were performed on PET images of six real patients diagnosed with cancer. The results have been analyzed and are presented here.

Relevance: 10.00%

Abstract:

The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue, allowing the pathways of fiber bundles connecting the cortical regions across the brain to be tracked. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework to compare the existing methodologies for correcting these artifacts, including realistic whole-brain phantoms. Additionally, we design and implement an image segmentation and registration method that avoids performing the correction task and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably well, and it is difficult to determine which method is more advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.

Relevance: 10.00%

Abstract:

Computed tomography (CT) is the reference imaging modality for the study of lung diseases and the pulmonary vasculature. General lung vessel segmentation has been widely explored by the biomedical image processing community over recent years; however, differentiating the arterial from the venous irrigation is still an open problem. Indeed, the automatic separation of the arterial and venous trees has been considered one of the main future challenges in the field. Artery-vein (AV) segmentation would be useful in different medical scenarios and in multiple pulmonary diseases or pathological states, allowing the arterial and venous irrigations to be studied separately. Features such as the density, geometry, topology, and size of the blood vessels could be analyzed in diseases that involve remodeling of the pulmonary vasculature, possibly even allowing the discovery of new specific biomarkers that remain hidden today. Differentiation between arteries and veins could also help improve methods that process other pulmonary structures. Nevertheless, despite its evident usefulness, studying the effect of disease on the arterial and venous trees has been unfeasible until now. The extreme complexity of the pulmonary vascular trees makes a manual separation of both structures intractable in a realistic time, further motivating the design of automatic or semiautomatic tools for the task. However, the lack of properly segmented and labeled cases severely limits the development of AV separation systems, which require reference images both for training and for validating the algorithms. For that reason, the design of synthetic lung CT images could overcome these difficulties by providing a database of pseudo-realistic cases in a constrained and controlled scenario in which every part of the image (including arteries and veins) is unequivocally differentiated.
In this Ph.D. thesis we address both of these strongly interrelated problems. First, we describe a strategy to automatically generate computational CT phantoms of the human lung. Starting from a priori biological and CT image-based knowledge about the topology of, and relationships between, the different pulmonary structures, the system generates synthetic airways, pulmonary arteries, and veins using iterative growth methods, which are then merged into a simulated lung with realistic features. These synthetic cases, together with labeled real non-contrast CT images, have been used to develop a fully automatic pulmonary AV segmentation/separation method. The approach comprises an initial generic extraction of the pulmonary vessels using scale-space particles, followed by an AV classification of those particles using Graph-Cuts (GC) based on arterial/venous similarity scores (obtained with machine learning algorithms) and connectivity information between particles.
The pulmonary phantoms were validated by visual inspection and by quantitative measurements of the intensity distributions, the dispersion of structures, and the relationship between arteries and airways, which show good correspondence between real and synthetically generated lungs. The evaluation of the AV segmentation algorithm, based on different strategies for assessing the accuracy of vessel-particle classification, reveals an accurate differentiation between arteries and veins in both real and synthetic cases, opening a wide range of possibilities for the clinical study of cardiopulmonary diseases and for the development of methodologies and new algorithms for the analysis of pulmonary images.
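The AV classification step combines per-particle arterial/venous similarity scores with inter-particle connectivity. A generic sketch of the binary labeling energy typically minimized with graph cuts (the exact terms used in the thesis are not given in the abstract) is

\[
E(L) = \sum_{p \in \mathcal{P}} D_{p}(L_{p}) \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}\,\bigl[L_{p} \neq L_{q}\bigr],
\]

where \(L_{p} \in \{\text{artery}, \text{vein}\}\) labels particle \(p\), the data term \(D_{p}\) would be derived from the machine learning similarity scores, and the pairwise term \(V_{pq}\) would penalize assigning different labels to connected particles.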

Relevance: 10.00%

Abstract:

Since its development in the 1970s, computed tomography (CT) has undergone major technological changes and has become an important diagnostic tool in medicine. Consequently, the role of CT in diagnostic imaging has expanded rapidly, mainly owing to improvements in image quality and acquisition time. The radiation dose received by patients in such procedures has been attracting attention, leading the scientific community and manufacturers to work together towards dose determination and optimization. In recent decades, many methodologies for patient dosimetry have been proposed, based especially on Monte Carlo calculations or on experimental measurements with phantoms and dosimeters. The possibility of in vivo measurements is also being investigated. Currently, the main dose optimization techniques include reduction and/or modulation of the tube current. The present work proposes an experimental methodology for estimating the doses absorbed by the lungs in clinical CT protocols, using an adult anthropomorphic phantom and lithium fluoride (LiF) thermoluminescent dosimeters. Seven different clinical protocols were selected on the basis of their relevance with respect to dose optimization and their frequency in the clinical routine of two large hospitals: the Instituto de Radiologia do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (InRad) and the Instituto do Câncer do Estado de São Paulo Octávio Frias de Oliveira (ICESP). Four dose-optimization protocols were analyzed: Auto mA, Auto + Smart mA, Baixa Dose (BD, low dose), and Ultra Baixa Dose (UBD, ultra-low dose). The first two seek dose reduction through modulation of the tube current, whereas the BD and UBD protocols reduce the tube current value and keep it constant. The BD and UBD protocols provided dose reductions of 72.7(8)% and 91(1)%, respectively; dose reductions of 16.8(1.3)% and 35.0(1.2)% were obtained with the Auto mA and Auto + Smart mA protocols, respectively. The dose estimates for the protocols analyzed in this study are compatible with similar studies published in the literature, demonstrating the efficiency of the methodology for calculating absorbed lung doses. Its applicability can be extended to different organs, different CT protocols, and different types of anthropomorphic phantoms (pediatric ones, for example). Finally, the comparison between the estimated lung doses and size-specific dose estimates (SSDE) showed a linear dependence between the two quantities. Results of similar studies have shown a similar behavior for rectal doses, suggesting that absorbed organ doses may depend linearly on SSDE values, with organ-specific linear coefficients. Further investigation of organ doses is needed to evaluate this hypothesis.
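As an illustration (not taken from the thesis itself), the quoted percentage dose reductions and the reported linear dependence on SSDE correspond to relations of the form

\[
\Delta D\,[\%] = 100 \cdot \frac{D_{\text{ref}} - D_{\text{protocol}}}{D_{\text{ref}}}, \qquad
D_{\text{lung}} \approx a \cdot \text{SSDE} + b,
\]

where \(D_{\text{ref}}\) is the lung dose of the reference protocol and \(a, b\) are organ-specific coefficients (symbols introduced here for illustration only).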

Relevance: 10.00%

Abstract:

This work presents a new methodology for virtual elastography in simulated ultrasound images using numerical methods and computer vision methods. The goal is to estimate the elastic modulus of different tissues from two images of the same cross-section acquired at different instants and under different applied pressures. The methodology consists of computing a displacement field between the images with an optical flow method and applying an iterative method to estimate the elastic moduli (inverse analysis) using numerical methods. For the displacement computation, two optical flow formulations are used: Lucas-Kanade and Brox. The inverse analysis is performed with two distinct numerical techniques, the Finite Element Method (FEM) and the Boundary Element Method (BEM), both implemented on general-purpose graphics processing units (GPGPUs). To handle an arbitrary number of materials to be determined, the Boundary Element Method implementation employs the sub-region technique to couple the matrices of the different structures identified in the image. The optimization process used to determine the elastic constants is carried out semi-analytically using complex-variable calculus. The methodology is tested in three distinct stages: simulations without noise, simulations with added white Gaussian noise, and mathematical phantoms using speckle-noise tracking. The simulation results indicate that the FEM is more accurate but computationally more expensive, whereas the BEM yields tolerable errors and faster processing times.
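As an illustrative sketch only (the thesis does not specify its implementation), the displacement-field step based on pyramidal Lucas-Kanade tracking can be prototyped with OpenCV; the synthetic frames and all parameters below are placeholders:

```python
import cv2
import numpy as np

# Synthetic stand-ins for two frames of the same cross-section acquired at
# different applied pressures: a speckle-like image and a copy shifted by a
# known amount.
rng = np.random.default_rng(0)
ref = cv2.GaussianBlur((rng.random((256, 256)) * 255).astype(np.uint8), (5, 5), 0)
mov = np.roll(ref, shift=(3, 1), axis=(0, 1))

# Select trackable speckle features in the reference frame.
pts = cv2.goodFeaturesToTrack(ref, maxCorners=500, qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade: estimate where each feature moved in the deformed frame.
new_pts, status, err = cv2.calcOpticalFlowPyrLK(ref, mov, pts, None,
                                                winSize=(21, 21), maxLevel=3)

# Sparse displacement field (x, y) of successfully tracked points; this is the
# kind of field fed to the inverse analysis that estimates the elastic moduli.
ok = status.ravel() == 1
displacements = (new_pts - pts)[ok].reshape(-1, 2)
print("median displacement (x, y):", np.median(displacements, axis=0))
```

The Brox formulation used as the second option in the work is a dense variational method and is not covered by this sketch.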

Relevance: 10.00%

Abstract:

Nuclear medicine is today one of the main medical imaging modalities used in health centers, its great advantage being the ability to analyze the metabolic behavior of the patient, which translates into early diagnoses. However, quantification in nuclear medicine is known to be hampered by several factors, among them attenuation correction, scatter, reconstruction algorithms, and the models assumed. In this context, the main objective of this project was to improve the accuracy and precision of PET/CT image analysis through realistic and well-controlled processes. To this end, a modular framework was proposed, composed of a set of consecutively linked steps: simulation of 3D anthropomorphic phantoms; generation of realistic PET/CT projections using the GATE platform (Monte Carlo simulation); 3D image reconstruction; filtering (with the Anscombe/Wiener filter, to reduce the Poisson noise characteristic of this type of image); and segmentation (based on fuzzy connectedness theory). Once the region of interest (ROI) was defined, the input and resulting activity curves required for the compartmental analysis were produced, from which the metabolic quantification of the organ or structure under study was obtained. Finally, real PET/CT images provided by the Instituto do Coração (InCor) of the Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (HC-FMUSP) were analyzed in a similar way. It was concluded that the three-dimensional filtering step using the Anscombe/Wiener filter was relevant and had a high impact on the metabolic quantification process and on other important stages of the project as a whole.
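A minimal sketch of the Anscombe/Wiener idea (not the project's actual implementation): the Anscombe transform approximately Gaussianizes and variance-stabilizes Poisson noise, a Wiener filter is applied in the transformed domain, and a simple algebraic inverse is taken (an exact unbiased inverse exists but is more involved):

```python
import numpy as np
from scipy.signal import wiener

def anscombe(x):
    # Variance-stabilizing transform for Poisson-distributed counts.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse of the Anscombe transform.
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_poisson(volume, window=5):
    # Wiener filtering applied slice by slice in the Anscombe domain.
    stabilized = anscombe(volume.astype(np.float64))
    filtered = np.stack([wiener(s, mysize=window) for s in stabilized])
    return inverse_anscombe(filtered)

# Example on a synthetic Poisson-noise "phantom" volume.
clean = np.full((8, 64, 64), 20.0)
noisy = np.random.poisson(clean).astype(np.float64)
denoised = denoise_poisson(noisy)
print("std before/after filtering:", noisy.std(), denoised.std())
```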