949 results for "Three-dimensional computed tomography"


Relevance:

100.00%

Publisher:

Abstract:

PRINCIPLES: Computed tomography (CT) is inferior to FibroScan and laboratory testing in the noninvasive diagnosis of liver fibrosis. On the other hand, CT is a frequently used diagnostic tool in modern medicine, and the incidental finding of clinically occult liver fibrosis on CT scans could lead to an earlier diagnosis. The aim of this study was to analyse quantifiable direct signs of liver remodelling in CT scans to detect liver fibrosis in a precirrhotic stage. METHODS: Retrospective review of 148 abdominal CT scans (80 liver cirrhosis, 35 precirrhotic fibrosis and 33 control patients). Fibrosis and cirrhosis were histologically proven. The diameters of the three main hepatic veins were measured 1-2 cm before their opening into the inferior vena cava. The widths of the caudate and right hepatic lobes were measured horizontally at the level of the first bifurcation of the right portal vein in axial planes, and their quotient taken (caudate-right-lobe ratio). The sum of the liver vein diameters divided by the caudate-right-lobe ratio was defined as the ld/crl ratio (ld/crl-r). These metrics were analysed for the detection of liver fibrosis and cirrhosis. RESULTS: An ld/crl-r < 24 showed a sensitivity of 83% and a specificity of 76% for precirrhotic liver fibrosis. Liver cirrhosis could be detected with a sensitivity of 88% and a specificity of 82% if ld/crl-r < 20. CONCLUSION: An ld/crl-r < 24 justifies laboratory testing and a FibroScan. This could bring the diagnosis forward, and patients would benefit from early treatment in a potentially reversible stage of the disease.
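
As a rough sketch (with hypothetical function names and illustrative measurements), the ld/crl-r metric and the diagnostic thresholds described above could be computed as follows:

```python
def ld_crl_ratio(vein_diameters_mm, caudate_width_mm, right_lobe_width_mm):
    # Sum of the three main hepatic vein diameters divided by the
    # caudate-right-lobe ratio (caudate width / right-lobe width).
    crl_ratio = caudate_width_mm / right_lobe_width_mm
    return sum(vein_diameters_mm) / crl_ratio

def classify(ld_crl_r):
    # Study thresholds: < 20 suggests cirrhosis, < 24 precirrhotic fibrosis.
    if ld_crl_r < 20:
        return "suspect cirrhosis"
    if ld_crl_r < 24:
        return "suspect precirrhotic fibrosis"
    return "no fibrosis suspected"
```

For example, vein diameters of 8, 9 and 10 mm with a caudate-right-lobe ratio of 0.5 give an ld/crl-r of 54, well above both thresholds.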


BACKGROUND The accuracy of CT pulmonary angiography (CTPA) in detecting or excluding pulmonary embolism has not yet been assessed in patients with high body weight (BW). METHODS This retrospective study involved CTPAs of 114 patients weighing 75-99 kg and those of 123 consecutive patients weighing 100-150 kg. Three independent blinded radiologists analyzed all examinations in randomized order. Readers' data on pulmonary emboli were compared with a composite reference standard comprising clinical probability, the reference CTPA result, additional imaging when performed, and 90-day follow-up. Results in both BW groups and in two body mass index (BMI) groups (BMI < 30 kg/m² and BMI ≥ 30 kg/m², i.e., non-obese and obese patients) were compared. RESULTS The prevalence of pulmonary embolism was not significantly different between the BW groups (P=1.0). The reference CTPA result was positive in 23 of 114 patients in the 75-99 kg group and in 25 of 123 patients in the ≥ 100 kg group (odds ratio, 0.991; 95% confidence interval, 0.501 to 1.957; P=1.0). No pulmonary embolism-related death or venous thromboembolism occurred during follow-up. The mean accuracy of the three readers was 91.5% in the 75-99 kg group and 89.9% in the ≥ 100 kg group (odds ratio, 1.207; 95% confidence interval, 0.451 to 3.255; P=0.495), and 89.9% in non-obese patients and 91.2% in obese patients (odds ratio, 0.853; 95% confidence interval, 0.317 to 2.319; P=0.816). CONCLUSION The diagnostic accuracy of CTPA in patients weighing 75-99 kg and in those weighing 100-150 kg did not differ significantly.
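
The reported odds ratio can be reproduced from the counts given in the abstract. The sketch below uses a Wald approximation for the confidence interval, so it will not exactly match the paper's (likely exact) interval of 0.501 to 1.957:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Odds ratio for the 2x2 table [[a, b], [c, d]] with a Wald 95% CI.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Positive reference CTPA: 23 of 114 (75-99 kg) vs 25 of 123 (>= 100 kg).
or_, lo, hi = odds_ratio_ci(23, 114 - 23, 25, 123 - 25)
```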


Descending cerebellar tonsillar herniation is a serious and common complication of intracranial mass lesions. We documented three cases of fatal blunt head injury using post-mortem multi-slice computed tomography (MSCT) and magnetic resonance imaging (MRI). The images showed massive bone and soft-tissue injuries of the head and signs of high intracranial pressure with herniation of the cerebellar tonsils. The diagnosis of tonsillar herniation by post-mortem radiological examination was made prior to autopsy. This paper describes a detailed retrospective evaluation of the position of the cerebellar tonsils in post-mortem imaging in comparison to clinical studies.


The aim of this study was to evaluate the diagnostic criteria and to identify the radiological signs (derived from known radiological signs) for the detection of aortic dissection using postmortem computed tomography (PMCT). Thirty-three aortic dissection cases were retrospectively evaluated; all underwent PMCT and autopsy. The images were initially evaluated independently by two readers and subsequently in consensus. Known radiological signs, such as dislocated calcification and an intimomedial flap, were identified. The prevalence of a double sedimentation level in the true and false lumina of the dissected aorta was assessed and defined as a characteristic postmortem sign of aortic dissection. Dislocated calcification was detected in 85% of the cases with aortic calcification, and the intimomedial flap could be recognized in 54% of the non-calcified aortas. Double sedimentation was identified in 16 of 33 cases. Overall, in 76% (25/33) of the study cases, the described signs, which are indicative of aortic dissection, could be identified. In this study, three diagnostic criteria of aortic dissection were identified using non-enhanced PMCT images of autopsy-confirmed dissection cases.


OBJECTIVE The aim of this study was to directly compare metal artifact reduction (MAR) of virtual monoenergetic extrapolations (VMEs) from dual-energy computed tomography (CT) with iterative MAR (iMAR) from single energy in pelvic CT with hip prostheses. MATERIALS AND METHODS A human pelvis phantom with unilateral or bilateral metal inserts of different material (steel and titanium) was scanned with third-generation dual-source CT using single (120 kVp) and dual-energy (100/150 kVp) at similar radiation dose (CT dose index, 7.15 mGy). Three image series for each phantom configuration were reconstructed: uncorrected, VME, and iMAR. Two independent, blinded radiologists assessed image quality quantitatively (noise and attenuation) and subjectively (5-point Likert scale). Intraclass correlation coefficients (ICCs) and Cohen κ were calculated to evaluate interreader agreements. Repeated measures analysis of variance and Friedman test were used to compare quantitative and qualitative image quality. Post hoc testing was performed using a corrected (Bonferroni) P < 0.017. RESULTS Agreements between readers were high for noise (all, ICC ≥ 0.975) and attenuation (all, ICC ≥ 0.986); agreements for qualitative assessment were good to perfect (all, κ ≥ 0.678). Compared with uncorrected images, VME showed significant noise reduction in the phantom with titanium only (P < 0.017), and iMAR showed significantly lower noise in all regions and phantom configurations (all, P < 0.017). In all phantom configurations, deviations of attenuation were smallest in images reconstructed with iMAR. For VME, there was a tendency toward higher subjective image quality in phantoms with titanium compared with uncorrected images, however, without reaching statistical significance (P > 0.017). Subjective image quality was rated significantly higher for images reconstructed with iMAR than for uncorrected images in all phantom configurations (all, P < 0.017). 
CONCLUSIONS Iterative MAR showed better MAR capabilities than VME in settings with bilateral hip prostheses or a unilateral steel prosthesis. In settings with a unilateral titanium hip prosthesis, VME and iMAR performed similarly well.


Endolithic bioerosion is difficult to analyse and to describe, and its study usually requires damage to the sample material. Sponge erosion (Entobia) may be one of the most difficult forms to evaluate, as it is simultaneously macroscopically inhomogeneous and microstructurally intricate. We studied the bioerosion traces of the two Australian sponges Cliona celata Grant, 1826 (sensu Schönberg 2000) and Cliona orientalis Thiele, 1900 with a newly available radiographic technology: high-resolution X-ray micro-computed tomography (MCT). MCT allows non-destructive visualisation of live and dead structures in three dimensions and was compared to traditional microscopic methods. MCT and microscopy showed that C. celata bioerosion was more intense in the centre and branched out in the periphery. In contrast, C. orientalis produced a dense, even trace meshwork and caused an overall more intense erosion pattern than C. celata. Extended pioneering filaments were not usually found at the margins of the studied sponge erosion; rather, branches ended abruptly or tapered to points. Results obtained with MCT were similar in quality to observations of transparent optical spar under the dissecting microscope. Microstructures could not be resolved as well as with, e.g., scanning electron microscopy (SEM). Even though sponge scars and sponge chips were easily recognisable on maximum-magnification MCT images, they lacked the detail that is available from SEM. Other drawbacks of MCT include high costs and presently limited access. Even though MCT cannot presently replace traditional techniques such as corrosion casts viewed by SEM, we obtained valuable information. Especially because of the possibility of measuring endolithic pore volumes, we regard MCT as a very promising tool that will continue to be optimised. A combination of different methods will produce the best results in the study of Entobia.
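
As an illustration of the pore-volume measurements mentioned above, a minimal (and hypothetical) thresholding approach on a micro-CT voxel grid might look like this; the threshold and voxel size are assumptions:

```python
import numpy as np

def pore_volume_mm3(ct_volume, threshold, voxel_size_mm):
    # Voxels whose attenuation falls below `threshold` count as bored-out
    # (pore) space; multiply the voxel count by the voxel volume.
    pores = ct_volume < threshold
    return float(pores.sum()) * voxel_size_mm ** 3

# Toy 10x10x10 "substrate" block containing a 4x4x4 low-attenuation cavity.
vol = np.full((10, 10, 10), 200.0)
vol[3:7, 3:7, 3:7] = 10.0
v = pore_volume_mm3(vol, threshold=100.0, voxel_size_mm=0.05)
```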


Amundsenisen is an ice field, 80 km² in area, located in southern Spitsbergen, Svalbard. Radio-echo sounding measurements at 20 MHz show high-intensity returns from a nearly flat basal reflector in four zones, all of them with ice thickness larger than 500 m. These reflections suggest possible subglacial lakes. To determine whether basal liquid water is compatible with current pressure and temperature conditions, we aim to apply a thermomechanical model with a free boundary at the bed, defined as the solution of a Stefan problem for the ice-subglacial lake interface. The complexity of the problem suggests the use of a two-dimensional model, but this requires well-defined flowlines across the zones with suspected subglacial lakes. We define these flowlines from the solution of a three-dimensional dynamical model, and this is the main goal of the present contribution. We apply a three-dimensional full-Stokes model of glacier dynamics to the Amundsenisen icefield. We are mostly interested in the plateau zone of the icefield, so we introduce artificial vertical boundaries at the heads of the main outlet glaciers draining Amundsenisen, where we set velocity boundary conditions. Velocities near the centres of the heads of the outlets are known from experimental measurements. The velocities at depth are calculated according to a shallow-ice-approximation (SIA) velocity-depth profile, and those across the rest of the transverse section are computed following Nye's (1952) model. We select as the southeastern boundary of the model domain an ice divide, where we set boundary conditions of zero horizontal velocity and zero vertical shear stress. The upper boundary is a traction-free boundary.
For the basal boundary conditions, on the zones of suspected subglacial lakes we set free-slip conditions, while for the rest of the basal boundary we use a friction law linking the sliding velocity to the basal shear stress, in such a way that, contrary to the shallow ice approximation, the basal shear stress is not equal to the basal driving stress but is rather part of the solution.
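
The specific friction law is not given in the abstract; a minimal sketch of one common choice, a Weertman-type power law linking basal shear stress to sliding velocity, would be:

```python
def basal_shear_stress(u_b, C=1.0, m=3.0):
    # Weertman-type sliding law: tau_b = C * u_b**(1/m).
    # C and m here are illustrative, not the values used in the study.
    return C * u_b ** (1.0 / m)
```

With m = 3, a sliding velocity of 8 (in consistent units) yields a basal shear stress of 2C; in the full-Stokes setting this stress enters the basal boundary condition rather than being prescribed by the driving stress.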


The volumetric rearrangement of both chromosomes and immunolabeled upstream binding factor in entire well-preserved mitotic cells was studied by confocal microscopy. By using high-quality three-dimensional visualization and tomography, it was possible to investigate interactively the volumetric organization of chromosome sets and to focus on their internal characteristics. More particularly, this study demonstrates the nonrandom positioning of metaphase chromosomes bearing nucleolar organizer regions, as revealed by their positive upstream binding factor immunolabeling. During the complex morphogenesis of the progeny nuclei from anaphase to late telophase, the equal partitioning of the nucleolar organizer regions is demonstrated by quantification, and their typical nonrandom central positioning within the chromosome sets is revealed.


This paper studies the fracturing process in low-porosity rocks during uniaxial compression tests, considering both the original defects and the new mechanical cracks in the material. For this purpose, five different kinds of rocks with carbonate mineralogy and low porosity (lower than 2%) were chosen. The characterization of fracture damage was carried out using three different techniques: ultrasound, mercury porosimetry and X-ray computed tomography. The proposed methodology allows quantification of the evolution of the porous system as well as study of the location of new cracks in the rock samples. Intercrystalline porosity (the smallest pores, with pore radius < 1 μm) shows limited development during loading, disappearing rapidly from the porosimetry curves, and is directly related to the initial plastic behaviour in the stress-strain patterns. However, the biggest pores (corresponding to the cracks) undergo continuous enlargement until the unstable propagation of fractures. The measured crack initiation stress varies between 0.25 σp and 0.50 σp for marbles and between 0.50 σp and 0.85 σp for micrite limestones. The unstable propagation of cracks is assumed to occur very close to the peak strength. Crack propagation through the sample is completely independent of pre-existing defects (porous bands, stylolites, fractures and veins). The ultrasonic response in the time domain is less sensitive to fracture damage than that in the frequency domain. P-wave velocity increases during the loading test until the beginning of unstable crack propagation. This increase is higher for marbles (between 15% and 30% of initial vp values) and lower for micrite limestones (between 5% and 10%). When the mechanical cracks propagate unstably, the velocity stops increasing, and it decreases only when rock damage is very high. Frequency analysis of the ultrasonic signals shows clear changes during the loading process. The spectra of the processed waveforms show two main frequency peaks, centred at low (~20 kHz) and high (~35 kHz) values. When new fractures appear and grow, the amplitude of the high-frequency peak decreases while that of the low-frequency peak increases. In addition, a slight shift towards higher frequencies is observed.
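
A minimal sketch of the kind of frequency-domain analysis described above is peak-picking on the amplitude spectrum of a waveform; the sampling rate and component amplitudes below are assumptions, not the study's acquisition parameters:

```python
import numpy as np

def dominant_frequency_hz(signal, fs):
    # Frequency of the largest peak in the amplitude spectrum, as one would
    # use to track the ~20 kHz and ~35 kHz peaks described above.
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[0] = 0.0  # ignore the DC component
    return freqs[np.argmax(spec)]

fs = 1_000_000  # assumed 1 MHz digitizer rate
t = np.arange(4096) / fs
# Synthetic waveform: strong 35 kHz peak plus a weaker 20 kHz peak.
wave = 1.0 * np.sin(2 * np.pi * 35_000 * t) + 0.4 * np.sin(2 * np.pi * 20_000 * t)
```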


Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. By using the Correspondence Principle of the theory of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains epsilon_xx(r, t), epsilon_yy(r, t) and epsilon_zz(r, t), and the bulk strain theta(r, t) at an arbitrary point (x, y, z) along the X, Y and Z axes, produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatial-temporal variation of the bulk strain produced on the ground by such a spherical rheological inclusion, interesting results were obtained, suggesting that the bulk strain produced by a hard inclusion changes with time through three stages (alpha, beta, gamma) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
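
For illustration, the creep response of a standard linear (Zener) solid, the three-parameter family the abstract's "standard linear rheologic model" refers to, can be sketched as below; this particular spring-dashpot parameterization and the values are illustrative, not taken from the paper:

```python
import math

def creep_strain(t, sigma0, E1, E2, eta):
    # Standard linear solid as a spring E1 in series with a Kelvin-Voigt
    # element (spring E2 parallel to dashpot eta), loaded by constant
    # stress sigma0: instantaneous elastic strain plus delayed creep.
    tau = eta / E2  # retardation time
    return sigma0 * (1.0 / E1 + (1.0 - math.exp(-t / tau)) / E2)
```

At t = 0 the strain is the purely elastic sigma0/E1; as t grows it approaches sigma0*(1/E1 + 1/E2), the kind of staged time evolution the inclusion model exploits.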


While it is well known that exposure to radiation can result in cataract formation, questions remain about the presence of a dose threshold in radiation cataractogenesis. Since the exposure history from diagnostic CT exams is well documented in a patient's medical record, patients chronically exposed to radiation from head CT exams may be an interesting population for further research in this area. However, there are challenges in estimating lens dose from head CT exams: an accurate lens dosimetry model must account for differences in imaging protocols, differences in head size, and the use of any dose reduction methods.

The overall objective of this dissertation was to develop a comprehensive method to estimate radiation dose to the lens of the eye for patients receiving CT scans of the head. This research is comprised of a physics component, in which a lens dosimetry model was derived for head CT, and a clinical component, which involved the application of that dosimetry model to patient data.

The physics component includes experiments related to the physical measurement of the radiation dose to the lens by various types of dosimeters placed within anthropomorphic phantoms. These dosimeters include high-sensitivity MOSFETs, TLDs, and radiochromic film. The six anthropomorphic phantoms used in these experiments range in age from newborn to adult.

First, the lens dose from five clinically relevant head CT protocols was measured in the anthropomorphic phantoms with MOSFET dosimeters on two state-of-the-art CT scanners. The volume CT dose index (CTDIvol), which is a standard CT output index, was compared to the measured lens doses. Phantom age-specific CTDIvol-to-lens dose conversion factors were derived using linear regression analysis. Since head size can vary among individuals of the same age, a method was derived to estimate the CTDIvol-to-lens dose conversion factor using the effective head diameter. These conversion factors were derived for each scanner individually, but also were derived with the combined data from the two scanners as a means to investigate the feasibility of a scanner-independent method. Using the scanner-independent method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter, most of the fitted lens dose values fell within 10-15% of the measured values from the phantom study, suggesting that this is a fairly accurate method of estimating lens dose from the CTDIvol with knowledge of the patient’s head size.
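
A minimal sketch of the size-specific approach described above fits the CTDIvol-to-lens-dose conversion factor against effective head diameter by linear regression; the calibration numbers below are purely illustrative, not the study's measured values:

```python
import numpy as np

# Hypothetical calibration data: effective head diameter (cm) vs measured
# CTDIvol-to-lens-dose conversion factor from phantom experiments.
diameter_cm = np.array([13.0, 15.0, 17.0, 19.0, 21.0])
conversion = np.array([1.30, 1.22, 1.15, 1.08, 1.00])

# Degree-1 least-squares fit: conversion factor as a line in head diameter.
slope, intercept = np.polyfit(diameter_cm, conversion, 1)

def lens_dose_mgy(ctdi_vol_mgy, head_diameter_cm):
    # Lens dose = CTDIvol (from the scanner report) times the size-specific
    # conversion factor predicted by the regression.
    return ctdi_vol_mgy * (slope * head_diameter_cm + intercept)
```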

Second, the dose reduction potential of organ-based tube current modulation (OB-TCM) and its effect on the CTDIvol-to-lens dose estimation method was investigated. The lens dose was measured with MOSFET dosimeters placed within the same six anthropomorphic phantoms. The phantoms were scanned with the five clinical head CT protocols with OB-TCM enabled on the one scanner model at our institution equipped with this software. The average decrease in lens dose with OB-TCM ranged from 13.5 to 26.0%. Using the size-specific method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter for protocols with OB-TCM, the majority of the fitted lens dose values fell within 15-18% of the measured values from the phantom study.

Third, the effect of gantry angulation on lens dose was investigated by measuring the lens dose with TLDs placed within the six anthropomorphic phantoms. The 2-dimensional spatial distribution of dose within the areas of the phantoms containing the orbit was measured with radiochromic film. A method was derived to determine the CTDIvol-to-lens dose conversion factor based upon distance from the primary beam scan range to the lens. The average dose to the lens region decreased substantially for almost all the phantoms (ranging from 67 to 92%) when the orbit was exposed to scattered radiation compared to the primary beam. The effectiveness of this method to reduce lens dose is highly dependent upon the shape and size of the head, which influences whether or not the angled scan range coverage can include the entire brain volume and still avoid the orbit.

The clinical component of this dissertation involved performing retrospective patient studies in the pediatric and adult populations, and reconstructing the lens doses from head CT examinations with the methods derived in the physics component. The cumulative lens doses in the patients selected for the retrospective study ranged from 40 to 1020 mGy in the pediatric group, and 53 to 2900 mGy in the adult group.

This dissertation represents a comprehensive approach to lens of the eye dosimetry in CT imaging of the head. The collected data and derived formulas can be used in future studies on radiation-induced cataracts from repeated CT imaging of the head. Additionally, it can be used in the areas of personalized patient dose management, and protocol optimization and clinician training.


X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the "task-based" definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an "observer" to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection – FBP vs. Advanced Modeled Iterative Reconstruction – ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models ranged from simple metrics of image quality, such as the contrast-to-noise ratio (CNR), to more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
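
For illustration, the two ends of this spectrum can be sketched on synthetic white-noise images: a simple CNR and a non-prewhitening matched-filter d′. This is a toy setup with assumed signal and noise values; in real CT the noise is correlated and the numbers differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3x3 uniform "lesion" signal embedded in a 5x5 field.
signal = np.zeros((5, 5))
signal[1:4, 1:4] = 2.0

# White-noise background images (sigma = 1); real CT noise is correlated.
noise_images = rng.normal(0.0, 1.0, size=(4000, 5, 5))

# Naive scalar metric: contrast-to-noise ratio.
cnr = signal.max() / noise_images.std()

def npw_dprime(sig, noise):
    # Non-prewhitening matched filter: the template w equals the expected
    # signal; d' = <w, s> divided by the std of the template's noise response.
    w = sig.ravel()
    responses = noise.reshape(len(noise), -1) @ w
    return (w @ w) / responses.std()

d = npw_dprime(signal, noise_images)
```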

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
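
The image subtraction technique mentioned above can be sketched as follows: with two repeated scans, the deterministic background cancels in the difference and the per-image noise is the standard deviation of the difference divided by √2. The texture and noise values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# A fixed "texture" identical in both scans, plus independent noise (sigma=5).
yy, xx = np.indices((256, 256))
texture = 50.0 * np.sin(xx / 9.0) * np.cos(yy / 13.0)
scan1 = texture + rng.normal(0.0, 5.0, texture.shape)
scan2 = texture + rng.normal(0.0, 5.0, texture.shape)

def quantum_noise(img1, img2):
    # The deterministic background cancels in the difference; each image's
    # noise standard deviation is std(img1 - img2) / sqrt(2).
    return (img1 - img2).std() / np.sqrt(2.0)
```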

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had on average 60% less noise in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
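
For context, the standard square-ROI ensemble NPS estimate can be sketched as below (the study's contribution was extending this to irregularly shaped ROIs, which is not shown here); the noise level and pixel size are synthetic:

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    # Ensemble NPS estimate: NPS = dx*dy / (Nx*Ny) * <|DFT(ROI - mean)|^2>.
    rois = np.asarray(rois, dtype=float)
    centered = rois - rois.mean(axis=(1, 2), keepdims=True)
    dft2 = np.abs(np.fft.fft2(centered)) ** 2
    n_rois, ny, nx = rois.shape
    return pixel_mm ** 2 / (nx * ny) * dft2.mean(axis=0)

# Synthetic white-noise "ROIs" (sigma = 10 HU, 0.5 mm pixels), 50 repeats.
rng = np.random.default_rng(3)
nps = nps_2d(rng.normal(0.0, 10.0, size=(50, 64, 64)), pixel_mm=0.5)

# Sanity check (Parseval): integrating the NPS over spatial frequency
# recovers the pixel variance, here ~100 HU^2.
variance = nps.sum() / (64 * 0.5) ** 2
```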

To move beyond just assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called "Clustered Lumpy Background" texture-synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
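
A minimal example of the kind of analytical lesion model described, a disk with a given size, contrast, and blurred edge profile, is sketched below; this exact parameterization is illustrative, not the dissertation's:

```python
import numpy as np

def lesion_model(shape, center, radius, contrast_hu, edge_sigma):
    # Radially symmetric lesion: contrast_hu inside `radius`, rolling off to
    # zero through a sigmoid edge of width ~edge_sigma. The returned array
    # can be added ("inserted") into a patient image to form a hybrid image.
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return contrast_hu / (1.0 + np.exp((r - radius) / edge_sigma))

# A subtle -15 HU lesion of 10-pixel radius at the center of a 64x64 patch.
les = lesion_model((64, 64), (32, 32), radius=10, contrast_hu=-15.0, edge_sigma=1.5)
```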

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared with FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.
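The non-prewhitening matched-filter observer mentioned above can be sketched in a generic Fourier-domain form, d'² = (Σ|ΔS|²·Δf)² / Σ(|ΔS|²·NPS)·Δf, where ΔS is the task function (signal difference spectrum) and NPS is the noise-power spectrum. Details of the study's implementation (task transfer function, any eye filter, grid conventions) are not given in the abstract, so this is an assumption-laden sketch rather than the study's code.

```python
import numpy as np

def npw_dprime(delta_s, nps, freq_bin_area):
    """Non-prewhitening matched-filter detectability index d'.

    delta_s: 2-D Fourier-domain task function |W_task| (signal difference).
    nps: noise-power spectrum sampled on the same frequency grid.
    freq_bin_area: area of one frequency bin (converts sums to integrals).
    """
    num = (np.sum(np.abs(delta_s) ** 2) * freq_bin_area) ** 2
    den = np.sum(np.abs(delta_s) ** 2 * nps) * freq_bin_area
    return np.sqrt(num / den)
```

As a sanity check on the formula, doubling the NPS everywhere should reduce d' by a factor of √2, consistent with detectability scaling with the noise level.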

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Abstract:

Purpose: Computed tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison with other imaging modalities such as magnetic resonance imaging (MRI), CT offers fast acquisition together with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a grey scale of independent values in Hounsfield units (HU), where higher HU values correspond to denser materials. High-density materials, such as metal, tend to erroneously increase the HU values around them because of limitations of the reconstruction software. This corruption of HU values in the presence of metal is referred to as metal artefact. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of clinically relevant metal objects. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may lead to improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models from General Electric (GE), while the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation of CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, using the volumetric modulated arc therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning target volume (PTV) were compared, and the homogeneity index (HI) was calculated.

Results: (1) Without the GSI-based MAR application, the percent error between the mean dose and the absolute dose ranged from 3.4-5.7% per fraction. In contrast, the error decreased to 0.09-2.3% per fraction with the GSI-based MAR algorithm, giving a percent difference of 1.7-4.2% per fraction between the two conditions. (2) A difference of 0.1-3.2% was observed for the maximum dose values, 1.5-10.4% for the minimum doses, and 1.4-1.7% for the mean doses. Homogeneity indices (HI) of 0.065-0.068 for the Dual-Energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.
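As an illustration of the metrics reported above, a percent-error and homogeneity-index calculation might look like the following. Several HI definitions exist in the literature and the study's exact formula is not stated, so the (Dmax − Dmin)/Dprescribed form used here is an assumption.

```python
def percent_error(calculated, measured):
    """Percent error of a calculated dose against a measured reference."""
    return abs(calculated - measured) / measured * 100.0

def homogeneity_index(d_max, d_min, d_prescribed):
    """One common homogeneity-index definition: HI = (Dmax - Dmin) / DRx.

    Several HI definitions exist; the study's specific formula is not
    stated in the abstract, so this particular form is an assumption.
    An ideal, perfectly homogeneous dose distribution yields HI = 0.
    """
    return (d_max - d_min) / d_prescribed
```

Under this definition, the near-zero HI values reported for the Dual-Energy method correspond to a PTV dose that is almost perfectly uniform.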

Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%. An error of this magnitude undermines the goal of radiation therapy to deliver precise treatment; the GSI-based MAR algorithm is therefore desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the Dual-Energy method came close to the ideal value of zero. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning target volume (PTV) for images with metal artefacts than the GE MAR algorithm or no correction at all.

Abstract:

This work is an investigation into collimator designs for a deuterium-deuterium (DD) neutron generator for an inexpensive and compact neutron imaging system that can be implemented in a hospital. The envisioned application is for a spectroscopic imaging technique called neutron stimulated emission computed tomography (NSECT).

Previous NSECT studies have been performed using a Van de Graaff accelerator at the Triangle Universities Nuclear Laboratory (TUNL) at Duke University. This facility has provided invaluable research into the development of NSECT. To transition the current imaging method into a clinically feasible system, there is a need for a high-intensity fast-neutron source that can produce collimated beams. The DD neutron generator from Adelphi Technologies Inc. is being explored as a possible candidate to provide the uncollimated neutrons. This DD generator is a compact source that produces 2.5 MeV fast neutrons with intensities of 10¹² n/s (into 4π). The neutron energy is sufficient to excite most isotopes of interest in the body, with the exception of carbon and oxygen. However, a special collimator is needed to shape the 4π neutron emission into a narrow beam. This work describes the development and evaluation of a series of collimator designs to collimate the DD generator into narrow beams suitable for NSECT imaging.

A neutron collimator made of high-density polyethylene (HDPE) and lead was modeled and simulated using the GEANT4 toolkit. The collimator was designed as a 52 × 52 × 52 cm³ HDPE block coupled with 1 cm of lead shielding. Non-tapering (cylindrical) and tapering (conical) opening designs were modeled into the collimator to permit passage of neutrons. The shape, size, and geometry of the opening were varied to assess the effects on the collimated neutron beam. The parameters varied were: inlet diameter (1-5 cm), outlet diameter (1-5 cm), aperture diameter (0.5-1.5 cm), and aperture placement (13-39 cm). For each combination of collimator parameters, the spatial and energy distributions of neutrons and gammas were tracked and analyzed to determine three performance parameters: neutron beam-width, primary neutron flux, and output quality. To evaluate these parameters, the simulated neutron beams were then regenerated for an NSECT breast scan, which involved a realistic breast lesion implanted in an anthropomorphic female phantom.
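The parameter sweep described above can be sketched as a simple enumeration of collimator configurations, each of which would drive one GEANT4 simulation run. The abstract gives only the parameter ranges, so the specific step values below are assumptions.

```python
import itertools

# Hypothetical grid over the collimator geometry parameters described
# above (all values in cm); the step sizes are assumptions.
INLET_DIAMETERS = [1, 2, 3, 4, 5]
OUTLET_DIAMETERS = [1, 2, 3, 4, 5]
APERTURE_DIAMETERS = [0.5, 1.0, 1.5]
APERTURE_PLACEMENTS = [13, 26, 39]

def collimator_configs():
    """Yield every collimator configuration in the sweep, one dict each."""
    keys = ("inlet_cm", "outlet_cm", "aperture_cm", "placement_cm")
    for combo in itertools.product(INLET_DIAMETERS, OUTLET_DIAMETERS,
                                   APERTURE_DIAMETERS, APERTURE_PLACEMENTS):
        yield dict(zip(keys, combo))
```

Each yielded configuration would then be scored on the three performance parameters (beam-width, primary neutron flux, and output quality) extracted from the corresponding simulation.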

This work indicates the potential for collimating and shielding a DD neutron generator for use in a clinical NSECT system. The proposed collimator designs produced a well-collimated neutron beam that can be used for NSECT breast imaging. The aperture diameter showed a strong correlation with the beam-width, with the collimated neutron beam-width about 10% larger than the physical aperture diameter. In addition, a collimator opening consisting of a tapering inlet and a cylindrical outlet allowed greater neutron throughput than a simple cylindrical opening. The tapering inlet design can allow additional neutron throughput when the neck is placed farther from the source. On the other hand, the tapering designs also decrease output quality (i.e., they increase stray neutrons outside the primary collimated beam). All collimators are catalogued by beam-width, neutron flux, and output quality; for a particular NSECT application, an optimal choice should be based on the collimator specifications listed in this work.

Abstract:

The gravitationally confined detonation (GCD) model has been proposed as a possible explosion mechanism for Type Ia supernovae in the single-degenerate evolution channel. It starts with the ignition of a deflagration in a single off-centre bubble in a near-Chandrasekhar-mass white dwarf. Driven by buoyancy, the deflagration flame rises in a narrow cone towards the surface. For the most part, the main component of the flow of the expanding ashes remains radial, but upon reaching the outer, low-pressure layers of the white dwarf, an additional lateral component develops. This causes the deflagration ashes to converge again at the opposite side, where the compression heats fuel and a detonation may be launched. We first performed five three-dimensional hydrodynamic simulations of the deflagration phase in 1.4 M⊙ carbon/oxygen white dwarfs at intermediate resolution (256³ computational zones). We confirm that the closer to the centre the initial deflagration is ignited, the slower the buoyant rise and the longer the deflagration ashes take to break out and close in on the opposite pole to collide. To test the GCD explosion model, we then performed a high-resolution (512³ computational zones) simulation for a model with an ignition spot offset near the upper limit of what is still justifiable, 200 km. This high-resolution simulation met our deliberately optimistic detonation criteria, and we initiated a detonation. The detonation burned through the white dwarf and led to its complete disruption. For this model, we determined detailed nucleosynthetic yields by post-processing 10⁶ tracer particles with a 384-nuclide reaction network, and we present multi-band light curves and time-dependent optical spectra. We find that our synthetic observables show a prominent viewing-angle sensitivity in the ultraviolet and blue wavelength bands, which is in conflict with observations of SNe Ia. This strong viewing-angle dependence is caused by the asymmetric distribution of the deflagration ashes in the outer ejecta layers. Finally, we compared our model to SN 1991T. The overall flux level of the model is slightly too low, and the model predicts pre-maximum-light spectral features due to Ca, S, and Si that are too strong. Furthermore, the model's chemical abundance stratification qualitatively disagrees with recent abundance tomography results in two key areas: our model lacks low-velocity stable Fe and instead has copious amounts of high-velocity ⁵⁶Ni and stable Fe. We therefore do not find good agreement of the model with SN 1991T.