5 results for Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E)

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

Long-term, high-quality estimates of burned area are needed for improving both prognostic and diagnostic fire emissions models and for assessing feedbacks between fire and the climate system. We developed global, monthly burned area estimates aggregated to 0.5° spatial resolution for the time period July 1996 through mid-2009 using four satellite data sets. From 2001-2009, our primary data source was 500-m burned area maps produced using Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance imagery; more than 90% of the global area burned during this time period was mapped in this fashion. During times when the 500-m MODIS data were not available, we used a combination of local regression and regional regression trees developed over periods when burned area and Terra MODIS active fire data were available to indirectly estimate burned area. Cross-calibration with fire observations from the Tropical Rainfall Measuring Mission (TRMM) Visible and Infrared Scanner (VIRS) and the Along-Track Scanning Radiometer (ATSR) allowed the data set to be extended prior to the MODIS era. With our data set we estimated that the global annual area burned for the years 1997-2008 varied between 330 and 431 Mha, with the maximum occurring in 1998. We compared our data set to the recent GFED2, L3JRC, GLOBCARBON, and MODIS MCD45A1 global burned area products and found substantial differences in many regions. Lastly, we assessed the interannual variability and long-term trends in global burned area over the past 13 years. This burned area time series serves as the basis for the third version of the Global Fire Emissions Database (GFED3) estimates of trace gas and aerosol emissions.
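The indirect estimation step described in the abstract can be sketched as a per-cell regression between active fire counts and mapped burned area. The snippet below is a deliberately minimal, hypothetical illustration (a single least-squares fit through the origin, with invented numbers), not the authors' local/regional regression-tree implementation:

```python
import numpy as np

# Hypothetical overlap period: monthly active fire counts and mapped
# burned area (Mha) for one 0.5-degree grid cell. All values invented.
fire_counts = np.array([12, 30, 7, 45, 22, 3, 18], dtype=float)
burned_area = np.array([0.8, 2.1, 0.5, 3.2, 1.5, 0.2, 1.3])

# Local regression (here, least squares through the origin):
# burned area per active fire detection for this cell.
slope = (fire_counts @ burned_area) / (fire_counts @ fire_counts)

# Apply the fitted relationship to a month with fire counts but no
# 500-m burned area map (e.g., pre-MODIS counts from TRMM VIRS/ATSR).
estimated_area = slope * 25.0  # 25 hypothetical fire detections
print(round(slope, 3), round(estimated_area, 2))
```

In the actual data set this relationship was fit locally and with regression trees per region, but the core idea, calibrating fire detections against mapped burned area during the overlap period and extrapolating backward, is the same.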

Relevance:

100.00%

Publisher:

Abstract:

New burned area datasets and top-down constraints from atmospheric concentration measurements of pyrogenic gases have decreased the large uncertainty in fire emissions estimates. However, significant gaps remain in our understanding of the contribution of deforestation, savanna, forest, agricultural waste, and peat fires to total global fire emissions. Here we used a revised version of the Carnegie-Ames-Stanford-Approach (CASA) biogeochemical model and improved satellite-derived estimates of area burned, fire activity, and plant productivity to calculate fire emissions for the 1997-2009 period on a 0.5° spatial resolution with a monthly time step. For November 2000 onwards, estimates were based on burned area, active fire detections, and plant productivity from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. For the partitioning we focused on the MODIS era. Prior to MODIS (1997-2000), we used maps of burned area derived from Tropical Rainfall Measuring Mission (TRMM) Visible and Infrared Scanner (VIRS) and Along-Track Scanning Radiometer (ATSR) active fire data, and estimates of plant productivity derived from Advanced Very High Resolution Radiometer (AVHRR) observations during the same period. Average global fire carbon emissions according to this version 3 of the Global Fire Emissions Database (GFED3) were 2.0 Pg C year-1 with significant interannual variability during 1997-2001 (2.8 Pg C year-1 in 1998 and 1.6 Pg C year-1 in 2001). Globally, emissions during 2002-2007 were relatively constant (around 2.1 Pg C year-1) before declining in 2008 (1.7 Pg C year-1) and 2009 (1.5 Pg C year-1), partly due to lower deforestation fire emissions in South America and tropical Asia. On a regional basis, emissions were highly variable during 2002-2007 (e.g., boreal Asia, South America, and Indonesia), but these regional differences canceled out at a global level.
During the MODIS era (2001-2009), most carbon emissions were from fires in grasslands and savannas (44%) with smaller contributions from tropical deforestation and degradation fires (20%), woodland fires (mostly confined to the tropics, 16%), forest fires (mostly in the extratropics, 15%), agricultural waste burning (3%), and tropical peat fires (3%). The contribution from agricultural waste fires was likely a lower bound because our approach for measuring burned area could not detect all of these relatively small fires. Total carbon emissions were on average 13% lower than in our previous (GFED2) work. For reduced trace gases such as CO and CH4, deforestation, degradation, and peat fires were more important contributors because of higher emissions of reduced trace gases per unit carbon combusted compared to savanna fires. Carbon emissions from tropical deforestation, degradation, and peatland fires were on average 0.5 Pg C year-1. The carbon emissions from these fires may not be balanced by regrowth following fire. Our results provide the first global assessment of the contribution of different sources to total global fire emissions for the past decade, and supply the community with an improved 13-year fire emissions time series. © 2010 Author(s).
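The bottom-up logic behind such estimates follows the classic Seiler-Crutzen relation: emissions equal area burned times available fuel load times combustion completeness. The numbers below are illustrative round figures chosen to land near the reported global magnitude, not GFED3 inputs:

```python
# Schematic bottom-up fire carbon emissions (Seiler-Crutzen form).
# All numbers are illustrative, not actual GFED3/CASA values.
burned_area_ha = 350e6          # ~350 Mha burned globally in a year
fuel_load_kgC_per_ha = 7000.0   # assumed mean carbon density of fuel
combustion_completeness = 0.8   # assumed fraction of fuel combusted

emissions_kgC = burned_area_ha * fuel_load_kgC_per_ha * combustion_completeness
emissions_PgC = emissions_kgC / 1e12  # 1 Pg = 1e12 kg
print(round(emissions_PgC, 2))  # 1.96, i.e., ~2 Pg C year-1 in magnitude
```

In GFED3 the fuel load and combustion completeness come from the CASA biogeochemical model per grid cell and fire type, rather than global constants as in this sketch.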

Relevance:

40.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; coupled with its popularity, this makes CT currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
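A minimal sketch of this three-component definition: an invented detection task (a known low-contrast disk in noise), a matched-filter “observer”, and a detectability index d' as the performance measure. This is illustrative only; the dissertation's observer models and tasks are more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Task: detect a known low-contrast disk signal in noise.
n = 32
y, x = np.mgrid[:n, :n]
signal = 5.0 * ((x - n / 2) ** 2 + (y - n / 2) ** 2 < 5 ** 2)  # 5-HU disk

# (2) Observer: a matched filter (template = the known signal).
def observer_score(img):
    return float((img * signal).sum())

# (3) Performance: separation of observer scores for signal-present vs
# signal-absent images, summarized as a detectability index d'.
noise_sigma = 20.0
absent = [observer_score(rng.normal(0, noise_sigma, (n, n))) for _ in range(200)]
present = [observer_score(signal + rng.normal(0, noise_sigma, (n, n)))
           for _ in range(200)]
d_prime = (np.mean(present) - np.mean(absent)) / np.sqrt(
    0.5 * (np.var(present) + np.var(absent)))
print(d_prime > 1.0)  # the disk is detectable above chance in this setup
```

Swapping the task (different lesion, dose, or reconstruction), the observer (e.g., a channelized Hotelling model or a human reader), or the performance metric (e.g., AUC) changes each component independently, which is exactly what makes the task-based framework flexible.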

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
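The gap between CNR and model observers can be illustrated with a toy calculation: CNR uses only pixel statistics, while even the simplest non-prewhitening matched-filter figure of merit also depends on lesion size (for a uniform disk in uncorrelated noise, d' scales with the square root of the lesion area). All values below are invented:

```python
import numpy as np

# Toy ROI statistics for a lesion and its background (invented values).
lesion_mean, bg_mean, bg_std = 80.0, 100.0, 10.0

# Naive metric: contrast-to-noise ratio ignores lesion size and the
# spatial correlation (texture) of the noise entirely.
cnr = abs(lesion_mean - bg_mean) / bg_std  # = 2.0 regardless of size

# Non-prewhitening matched-filter detectability for a uniform disk
# lesion in *uncorrelated* noise: d'^2 = sum(template^2) / sigma^2,
# so d' = contrast * sqrt(lesion area in pixels) / sigma.
contrast = abs(lesion_mean - bg_mean)
for n_pixels in (10, 40, 160):  # three lesion sizes in pixels
    d_prime = contrast * np.sqrt(n_pixels) / bg_std
    print(n_pixels, round(float(d_prime), 1))
```

This also hints at why CNR failed to track human performance across reconstruction algorithms: iterative reconstruction changes the noise correlation, which CNR cannot see but prewhitening and channelized observers can.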

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
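The image subtraction technique for isolating quantum noise can be sketched as follows. Subtracting two repeated scans cancels the (identical) background structure, leaving a difference image whose standard deviation is sqrt(2) times the per-image noise. The arrays below are simulated stand-ins for real repeated scans:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two repeated scans of the same phantom: identical background
# structure, independent quantum noise (sigma assumed 12 HU here).
texture = rng.normal(50, 30, (256, 256))        # fixed background texture
scan1 = texture + rng.normal(0, 12, (256, 256))
scan2 = texture + rng.normal(0, 12, (256, 256))

# Subtraction cancels the background; the difference image contains
# only noise, with variance 2 * sigma^2.
diff = scan1 - scan2
noise_estimate = float(diff.std()) / np.sqrt(2)
print(round(noise_estimate))  # recovers ~12 HU despite the texture
```

The appeal of this approach is that a simple ROI standard deviation would conflate texture with noise, whereas the subtraction estimate does not.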

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
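A common ensemble estimator of the NPS, shown here on square ROIs with white noise for simplicity (the dissertation's novel contribution extends NPS estimation to irregularly shaped ROIs), looks like:

```python
import numpy as np

rng = np.random.default_rng(2)
n, px = 64, 0.5  # ROI size (pixels) and pixel spacing (mm), assumed values

# Ensemble of noise-only ROIs (in practice: mean-subtracted ROIs from
# the 50 repeated scans). White noise here; real CT noise is correlated.
rois = rng.normal(0, 10, (50, n, n))

# 2D noise power spectrum, ensemble-averaged:
# NPS(fx, fy) = (px^2 / N^2) * <|DFT(roi - mean)|^2>
dfts = np.fft.fft2(rois - rois.mean(axis=(1, 2), keepdims=True))
nps = (px ** 2 / n ** 2) * np.mean(np.abs(dfts) ** 2, axis=0)

# Sanity check (Parseval): integrating the NPS over spatial frequency
# returns the pixel variance (here sigma^2 = 100).
variance = float(nps.sum()) * (1.0 / (n * px)) ** 2
print(round(variance))
```

For white noise the NPS is flat; for reconstructed CT images its shape encodes the noise texture, which is exactly the quantity that was found to depend on background texture under SAFIRE.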

To move beyond just assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
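One plausible form of such an analytical lesion model is a spherical core with a sigmoid edge profile, voxelized and added to a patient volume to form a hybrid image. The function and all parameters below are illustrative assumptions, not the dissertation's fitted models, and the "patient" volume is simulated:

```python
import numpy as np

def lesion_model(shape, center, radius, contrast, edge_width):
    """Analytical lesion: spherical core of given contrast (HU) with a
    smooth sigmoid edge profile; returns a voxelized 3D array."""
    z, y, x = np.indices(shape, dtype=float)
    r = np.sqrt((x - center[2]) ** 2 + (y - center[1]) ** 2
                + (z - center[0]) ** 2)
    return contrast / (1.0 + np.exp((r - radius) / edge_width))

# Voxelize a subtle -15 HU, 4-voxel-radius lesion and insert it into a
# (simulated) patient volume to create a "hybrid" image.
patient = np.random.default_rng(3).normal(60, 15, (32, 32, 32))
lesion = lesion_model((32, 32, 32), center=(16, 16, 16),
                      radius=4.0, contrast=-15.0, edge_width=0.8)
hybrid = patient + lesion
print(round(float(lesion[16, 16, 16]), 1))  # near the full -15 HU contrast
```

The key property the abstract highlights carries over directly: because the lesion is generated from an equation, its size, contrast, edge profile, and location are known exactly, giving ground truth for detectability and estimability studies.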

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

40.00%

Publisher:

Abstract:

Optical coherence tomography (OCT) is a noninvasive three-dimensional interferometric imaging technique capable of achieving micrometer scale resolution. It is now a standard of care in ophthalmology, where it is used to improve the accuracy of early diagnosis, to better understand the source of pathophysiology, and to monitor disease progression and response to therapy. In particular, retinal imaging has been the most prevalent clinical application of OCT, but researchers and companies alike are developing OCT systems for cardiology, dermatology, dentistry, and many other medical and industrial applications.

Adaptive optics (AO) is a technique used to reduce monochromatic aberrations in optical instruments. It is used in astronomical telescopes, laser communications, high-power lasers, retinal imaging, optical fabrication and microscopy to improve system performance. Scanning laser ophthalmoscopy (SLO) is a noninvasive confocal imaging technique that produces high contrast two-dimensional retinal images. AO is combined with SLO (AOSLO) to compensate for the wavefront distortions caused by the optics of the eye, providing the ability to visualize the living retina with cellular resolution. AOSLO has shown great promise to advance the understanding of the etiology of retinal diseases on a cellular level.

Broadly, we endeavor to enhance the vision outcome of ophthalmic patients through improved diagnostics and personalized therapy. Toward this end, the objective of the work presented herein was the development of advanced techniques for increasing the imaging speed, reducing the form factor, and broadening the versatility of OCT and AOSLO. Despite our focus on applications in ophthalmology, the techniques developed could be applied to other medical and industrial applications. In this dissertation, a technique to quadruple the imaging speed of OCT was developed. This technique was demonstrated by imaging the retinas of healthy human subjects. A handheld, dual depth OCT system was developed. This system enabled sequential imaging of the anterior segment and retina of human eyes. Finally, handheld SLO/OCT systems were developed, culminating in the design of a handheld AOSLO system. This system has the potential to provide cellular level imaging of the human retina, resolving even the most densely packed foveal cones.

Relevance:

40.00%

Publisher:

Abstract:

As complex radiotherapy techniques become more readily practiced, comprehensive 3D dosimetry is a growing necessity for advanced quality assurance. However, clinical implementation has been impeded by a wide variety of factors, including the expense of dedicated optical dosimeter readout tools, high operational costs, and the overall difficulty of use. To address these issues, a novel dry-tank optical CT scanner was designed for PRESAGE 3D dosimeter readout, relying on 3D printed components and omitting costly parts from preceding optical scanners. This work details the design, prototyping, and basic commissioning of the Duke Integrated-lens Optical Scanner (DIOS).

The convex scanning geometry was designed in ScanSim, an in-house Monte Carlo optical ray-tracing simulation. ScanSim parameters were used to build a 3D rendering of a convex ‘solid tank’ for optical-CT, which is capable of collimating a point light source into telecentric geometry without significant quantities of refractive-index matched fluid. The model was 3D printed, processed, and converted into a negative mold via rubber casting to produce a transparent polyurethane scanning tank. The DIOS was assembled with the solid tank, a 3W red LED light source, a computer-controlled rotation stage, and a 12-bit CCD camera. Initial optical phantom studies show negligible spatial inaccuracies in 2D projection images and 3D tomographic reconstructions. A PRESAGE 3D dose measurement for a 4-field box treatment plan from Eclipse shows 95% of voxels passing gamma analysis at 3%/3mm criteria. Gamma analysis between tomographic images of the same dosimeter in the DIOS and DLOS systems shows 93.1% agreement at 5%/1mm criteria. From this initial study, the DIOS has demonstrated promise as an economically viable optical-CT scanner. However, further improvements will be necessary to fully develop this system into an accurate and reliable tool for advanced QA.
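Gamma analysis combines a dose-difference criterion with a distance-to-agreement criterion: a point passes if some nearby reference point is close in both dose and position. A simplified 1D version of the global 3%/3mm test might look like the following (the profiles are invented; real gamma analysis is run on 2D/3D dose grids):

```python
import numpy as np

def gamma_pass_rate(measured, reference, spacing_mm,
                    dose_tol=0.03, dist_mm=3.0):
    """1D gamma analysis sketch (3%/3mm, global normalization): each
    measured point passes if some reference point is within the
    combined dose-difference / distance-to-agreement ellipse."""
    positions = np.arange(len(reference)) * spacing_mm
    d_max = reference.max()  # global dose normalization
    passed = 0
    for i, d_m in enumerate(measured):
        dose_term = ((reference - d_m) / (dose_tol * d_max)) ** 2
        dist_term = ((positions - positions[i]) / dist_mm) ** 2
        gamma = np.sqrt(dose_term + dist_term).min()
        passed += gamma <= 1.0
    return passed / len(measured)

# Reference dose profile vs a measurement with a small (1%) dose offset:
ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 300.0) * 100.0
meas = ref * 1.01
print(gamma_pass_rate(meas, ref, spacing_mm=1.0))  # all points pass
```

Tightening the criteria (e.g., the 5%/1mm used for the DIOS-vs-DLOS comparison trades a looser dose tolerance for a much stricter spatial one) changes which disagreements the test is sensitive to.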

Pre-clinical animal studies are used as a conventional means of translational research, as a midpoint between in-vitro cell studies and clinical implementation. However, modern small animal radiotherapy platforms are primitive in comparison with conventional linear accelerators. This work also investigates a series of 3D printed tools to expand the treatment capabilities of the X-RAD 225Cx orthovoltage irradiator, and applies them to a feasibility study of hippocampal avoidance in rodent whole-brain radiotherapy.

As an alternative material to lead, a novel 3D-printable tungsten-composite ABS plastic, GMASS, was tested to create precisely shaped blocks. Film studies show virtually all primary radiation at 225 kVp can be attenuated by GMASS blocks of 0.5 cm thickness. A state-of-the-art software tool, BlockGen, was used to create custom hippocampus-shaped blocks from medical image data for any possible axial treatment field arrangement. A custom 3D printed bite block was developed to immobilize and position a supine rat for optimal hippocampal conformity. An immobilized rat CT with digitally-inserted blocks was imported into the SmART-Plan Monte Carlo simulation software to determine the optimal beam arrangement. Protocols with 4 and 7 equally spaced fields were considered as viable treatment options, featuring improved hippocampal conformity and whole-brain coverage when compared to prior lateral-opposed protocols. Custom rodent-morphic PRESAGE dosimeters were developed to accurately reflect these treatment scenarios, and a 3D dosimetry study was performed to confirm the SmART-Plan simulations. Measured doses indicate significant hippocampal sparing and moderate whole-brain coverage.