3 results for Methodologies to measure market risk

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

Habitat loss, fragmentation, and degradation threaten the World’s ecosystems and species. These, and other threats, will likely be exacerbated by climate change. Due to a limited budget for conservation, we are forced to prioritize a few areas over others. These places are selected based on their uniqueness and vulnerability. One of the most famous examples is the biodiversity hotspots: areas where large quantities of endemic species meet alarming rates of habitat loss. Most of these places are in the tropics, where species have smaller ranges, diversity is higher, and ecosystems are most threatened.

Species distributions are useful to understand ecological theory and evaluate extinction risk. Small-ranged species, or those endemic to one place, are more vulnerable to extinction than widely distributed species. However, current range maps often overestimate the distribution of species, including areas that are not within the suitable elevation or habitat for a species. Consequently, assessment of extinction risk using these maps could underestimate vulnerability.

In order to be effective in our quest to conserve the World’s most important places we must: 1) Translate global and national priorities into practical local actions, 2) Find synergies between biodiversity conservation and human welfare, 3) Evaluate the different dimensions of threats, in order to design effective conservation measures and prepare for future threats, and 4) Improve the methods used to evaluate species’ extinction risk and prioritize areas for conservation. The purpose of this dissertation is to address these points in Colombia and other global biodiversity hotspots.

In Chapter 2, I identified the global, strategic conservation priorities and then downscaled to practical local actions within the selected priorities in Colombia. I used existing range maps of 171 bird species to identify priority conservation areas that would protect the greatest number of species at risk in Colombia (endemic and small-ranged species). The Western Andes had the highest concentrations of such species—100 in total—but the lowest densities of national parks. I then adjusted the priorities for this region by refining these species' ranges, selecting only areas of suitable elevation and remaining habitat. The estimated ranges of these species shrank by 18–100% after accounting for habitat and suitable elevation. Setting conservation priorities on the basis of currently available range maps excluded priority areas in the Western Andes and, by extension, likely elsewhere and for other taxa. By incorporating detailed maps of remaining natural habitats, I made practical recommendations for conservation actions. One recommendation was to restore forest connections to a patch of cloud forest about to become isolated from the main Andes.
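The range-refinement step can be sketched with a few raster masks. The arrays, grid size, and elevational limits below are illustrative assumptions, not the dissertation's data:

```python
import numpy as np

# Hypothetical 100x100 grid; all arrays are stand-ins for real,
# co-registered raster layers of the same extent and resolution.
rng = np.random.default_rng(0)
range_map = rng.random((100, 100)) < 0.5          # rasterized published range
elevation = rng.uniform(0, 4000, (100, 100))      # digital elevation model (m)
forest = rng.random((100, 100)) < 0.6             # remaining natural habitat

# Refine the range: keep only cells inside the published range that
# fall within the species' elevational limits AND retain habitat.
lo, hi = 1500, 2800                               # assumed elevational limits (m)
refined = range_map & (elevation >= lo) & (elevation <= hi) & forest

shrinkage = 1 - refined.sum() / range_map.sum()
print(f"range shrank by {shrinkage:.0%}")
```

Because each mask only removes cells, the refined range is always a subset of the published one, which is why refinement can only shrink estimated ranges.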

For Chapter 3, I identified areas where bird conservation met ecosystem service protection in the Central Andes of Colombia. Prompted by the landslide of November 11, 2011 near Manizales, and by the poor results to date of Article 111 of Colombia's Law 99 of 1993 as a conservation measure, I set out to prioritize conservation and restoration areas where landslide prevention would complement bird conservation in the Central Andes. This area is one of the most biodiverse places on Earth, but also one of the most threatened. Using the case of the Rio Blanco Reserve, near Manizales, I identified areas for conservation where endemic and small-range bird diversity was high, and where landslide risk was also high. I further prioritized restoration areas by overlapping these conservation priorities with a forest cover map. Restoring forests in bare areas of high landslide risk and important bird diversity yields benefits for both biodiversity and people. I developed a simple landslide susceptibility model using slope, forest cover, aspect, and stream proximity. Using publicly available bird range maps, refined by elevation, I mapped concentrations of endemic and small-range bird species. I identified 1.54 km2 of potential restoration areas in the Rio Blanco Reserve, and 886 km2 in the Central Andes region. By prioritizing these areas, I facilitate the application of Article 111, which requires local and regional governments to invest in land purchases for the conservation of watersheds.
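A simple susceptibility model of this kind can be sketched as a weighted overlay of the four factors. The weights, normalizations, and threshold below are assumptions for illustration, not the model actually used:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative input layers (stand-ins for real co-registered rasters).
slope = rng.uniform(0, 60, (100, 100))         # degrees
forest = rng.random((100, 100)) < 0.6          # remaining forest cover
stream_dist = rng.uniform(0, 500, (100, 100))  # distance to nearest stream (m)
aspect = rng.uniform(0, 360, (100, 100))       # degrees from north

# Normalize each factor to [0, 1] so that steeper, barer, and
# stream-adjacent cells score higher; the weights are assumptions.
score = (0.4 * (slope / 60)
         + 0.2 * (~forest)
         + 0.2 * (1 - stream_dist / 500)
         + 0.2 * (np.cos(np.radians(aspect - 180)) + 1) / 2)

high_risk = score > 0.6  # assumed threshold for "high susceptibility"
print(f"{high_risk.mean():.0%} of cells flagged high risk")
```

Intersecting `high_risk` with a bird-diversity raster and a bare-ground mask would then yield candidate restoration cells in the same spirit as the chapter's analysis.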

Chapter 4 dealt with elevational ranges of montane birds and the impact of lowland deforestation on their ranges in the Western Andes of Colombia, an important biodiversity hotspot. Using point counts and mist-nets, I surveyed six altitudinal transects spanning 2200 to 2800m. Three transects were forested from 2200 to 2800m, and three were partially deforested with forest cover only above 2400m. I compared abundance-weighted mean elevation, minimum elevation, and elevational range width. In addition to analyzing the effect of deforestation on 134 species, I tested its impact within trophic guilds and habitat preference groups. Abundance-weighted mean and minimum elevations were not significantly different between forested and partially deforested transects. Range width was marginally different: as expected, ranges were larger in forested transects. Species in different trophic guilds and habitat preference categories showed different trends. These results suggest that deforestation may affect species’ elevational ranges, even within the forest that remains. Climate change will likely exacerbate harmful impacts of deforestation on species’ elevational distributions. Future conservation strategies need to account for this by protecting connected forest tracts across a wide range of elevations.

In Chapter 5, I refine the ranges of 726 species from six biodiversity hotspots by suitable elevation and habitat. This set of 172 bird species for the Atlantic Forest, 138 for Central America, 100 for the Western Andes of Colombia, 57 for Madagascar, 102 for Sumatra, and 157 for Southeast Asia met the criteria for range size, endemism, threat, and forest use. Of these species, the Red List deems 108 to be threatened: 15 critically endangered, 29 endangered, and 64 vulnerable. When ranges are refined by elevational limits and remaining forest cover, 10 of those critically endangered species have ranges < 100 km2, but so do two endangered, seven vulnerable, and eight non-threatened species. Similarly, 4 critically endangered, 20 endangered, and 12 vulnerable species have refined ranges < 5000 km2, but so do 66 non-threatened species. Strikingly, 89% of the species I classified into higher threat categories have less than 50% of their refined ranges inside protected areas. I find that for 43% of the species I assessed, refined range sizes fall within thresholds that typically correspond to higher threat categories than their current assignments. I recommend these species for closer inspection by those who assess risk. These assessments are not only important on a species-by-species basis: by combining distributions of threatened species, I create maps of conservation priorities. They differ significantly from those created from unrefined ranges.

Relevance:

100.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; this fact, coupled with its popularity, makes CT currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would yield image quality metrics that best correlated with human detection performance. The models included naïve metrics of image quality, such as contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
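The gap between CNR and a matched-filter observer can be illustrated in a few lines. This is a simplified white-noise form of the non-prewhitening observer, with an assumed Gaussian lesion template and noise level:

```python
import numpy as np

# Hypothetical 2D lesion template: a small Gaussian blob.
x = np.arange(64) - 32
X, Y = np.meshgrid(x, x)
signal = 20.0 * np.exp(-(X**2 + Y**2) / (2 * 4.0**2))  # peak contrast 20 HU

sigma = 15.0  # assumed image noise standard deviation (HU)

# Naive metric: CNR uses only the peak contrast and the noise level.
cnr = signal.max() / sigma

# Non-prewhitening matched filter, white-noise case: the detectability
# index d' integrates the whole template, not just its peak.
d_prime = np.sqrt((signal**2).sum()) / sigma

print(f"CNR = {cnr:.2f}, NPW d' = {d_prime:.2f}")
```

For correlated CT noise, the template is weighted by the noise covariance (or noise power spectrum), which is one reason observer models can track human performance across reconstruction algorithms while CNR, blind to noise texture and signal shape, cannot.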

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. This result made clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
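The image subtraction technique can be sketched as follows, with synthetic data standing in for two repeated phantom scans (identical structure, independent quantum noise):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two repeated scans of the same (hypothetical) phantom: identical
# background texture, independent noise realizations.
background = rng.uniform(-50, 50, (128, 128))  # stand-in for phantom texture
sigma_true = 10.0                              # assumed per-scan noise (HU)
scan1 = background + rng.normal(0, sigma_true, background.shape)
scan2 = background + rng.normal(0, sigma_true, background.shape)

# Subtracting removes the deterministic background; the difference
# contains noise from both scans, so divide its std by sqrt(2).
noise = (scan1 - scan2).std() / np.sqrt(2)
print(f"estimated noise: {noise:.1f} HU (true: {sigma_true} HU)")
```

The subtraction is what lets quantum noise be isolated even in the textured phantoms, where the background itself would otherwise dominate a simple standard-deviation measurement.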

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise ranged from 20% higher to 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
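For rectangular ROIs, the standard ensemble NPS estimate (of which the irregular-ROI method is a generalization) can be sketched as follows; the pixel size, noise level, and ensemble size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
pixel = 0.5  # assumed pixel size (mm)

# Ensemble of noise-only ROIs (e.g., repeated scans after subtracting
# the ensemble mean); white Gaussian noise stands in for CT noise here.
rois = rng.normal(0, 8.0, (50, 64, 64))

# Standard rectangular-ROI NPS: scaled, ensemble-averaged squared DFT.
dft = np.fft.fft2(rois - rois.mean(axis=(1, 2), keepdims=True))
nps = (np.abs(dft)**2).mean(axis=0) * pixel * pixel / (64 * 64)

# Sanity check: integrating the NPS over frequency recovers the
# pixel variance (here ~64 HU^2 for sigma = 8 HU).
variance = nps.sum() / (64 * pixel * 64 * pixel)
print(f"variance from NPS: {variance:.1f}")
```

The shape of `nps` over spatial frequency is what distinguishes FBP's noise texture from SAFIRE's, even when the total variance is similar.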

To move beyond assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
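A minimal sketch of such an analytical lesion model, assuming a spherical shape with a sigmoid edge profile (the dissertation's actual model family is richer, covering shape and texture as well):

```python
import numpy as np

def lesion_model(shape, center, radius, contrast, edge_width):
    """Voxelize a spherical lesion whose contrast falls off at the
    boundary via a sigmoid, parameterized by size, contrast, and
    edge sharpness. All parameters here are illustrative."""
    grid = np.indices(shape).astype(float)
    r = np.sqrt(sum((g - c)**2 for g, c in zip(grid, center)))
    return contrast / (1 + np.exp((r - radius) / edge_width))

lesion = lesion_model((32, 32, 32), (16, 16, 16),
                      radius=6, contrast=-15, edge_width=1.0)

# Insert into a (hypothetical) patient volume to create a hybrid image;
# a zero array stands in for a real CT volume here.
patient = np.zeros((32, 32, 32))
hybrid = patient + lesion
print(hybrid[16, 16, 16])  # close to -15 HU at the lesion core
```

Because the lesion is defined by an equation, its exact size, contrast, edge profile, and location are known, which is what makes hybrid images useful as ground truth for detectability and estimability studies.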

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: It is unclear whether diagnostic protocols based on cardiac markers to identify low-risk chest pain patients suitable for early release from the emergency department can be applied to patients older than 65 years or with traditional cardiac risk factors. METHODS AND RESULTS: In a single-center retrospective study of 231 consecutive patients with high risk-factor burden, in which a first cardiac troponin (cTn) level was measured in the emergency department and a second cTn sample was drawn 4 to 14 hours later, we compared the performance of a modified 2-Hour Accelerated Diagnostic Protocol to Assess Patients with Chest Pain Using Contemporary Troponins as the Only Biomarker (ADAPT) rule to a new risk classification scheme that identifies patients as low risk if they have no known coronary artery disease, a nonischemic electrocardiogram, and 2 cTn levels below the assay's limit of detection. Demographic and outcome data were abstracted through chart review. The median age of our population was 64 years, and 75% had a Thrombolysis In Myocardial Infarction (TIMI) risk score ≥2. Using our risk classification rule, 53 (23%) patients were low risk, with a negative predictive value for 30-day cardiac events of 98%. Applying a modified ADAPT rule to our cohort, 18 (8%) patients were identified as low risk, with a negative predictive value of 100%. In a sensitivity analysis, the negative predictive value of our risk algorithm did not change when we relied only on an undetectable baseline cTn and eliminated the second cTn assessment. CONCLUSIONS: If confirmed in prospective studies, this less-restrictive risk classification strategy could be used to safely identify chest pain patients with more traditional cardiac risk factors for early emergency department release.
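The study's low-risk classification rule can be sketched as a simple predicate; the function name, field names, and example troponin values are illustrative, not from the paper:

```python
def is_low_risk(known_cad, ischemic_ecg, ctn_levels, limit_of_detection):
    """Low risk per the study's rule: no known coronary artery disease,
    a nonischemic ECG, and two cardiac troponin (cTn) levels below the
    assay's limit of detection."""
    return (not known_cad
            and not ischemic_ecg
            and len(ctn_levels) >= 2
            and all(c < limit_of_detection for c in ctn_levels))

# Hypothetical patient: no CAD, nonischemic ECG, two undetectable cTn draws.
print(is_low_risk(False, False, [0.005, 0.004], 0.010))  # True
```

Note the sensitivity analysis in the abstract suggests the second cTn draw may be dispensable; the predicate above encodes the full two-sample rule as stated.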