946 results for Multiple Quantum Well Lasers
Abstract:
Why do Argentines continue to support democracy despite distrusting political institutions and politicians? Support for democracy is high even though the regime's performance is poor. Given Argentina's history, one would suspect that poor economic and political performance would open the door to military intervention. What changed? What explains variance across the multiple dimensions of political trust, such as trust in the regime, trust in political institutions, and trust in politicians? This dissertation is a case study of political culture through public opinion, exploring the multiple dimensions of political trust in Argentina during the 1990s.

Variance across the different dimensions of political trust may be an indicator of the rise of a new type of citizen: the "critical citizen." Critical citizens criticize the regime to obtain democratic reforms but support the ideals of democracy. In established democracies, the rise of critical citizens is explained by a shift in individuals' value priorities towards postmaterialism. Postmaterialism is a cultural change towards values that emphasize self-realization and individual well-being, and it influences various social and political attitudes.

Because Argentina is experiencing a cultural change and a rise of critical citizens similar to more advanced societies, the theory of postmaterialism generated the main hypothesis to explain the multiple dimensions of political trust. This dissertation also tested an alternative explanation: that the multiple dimensions of political trust responded instead to citizens' evaluations of performance. Ultimately, postmaterialism explained trust in the political regime and trust in political institutions. Contrary to expectations, postmaterialism did not explain trust in political elites or politicians; trust in politicians was better explained by the alternative hypothesis, performance.

The main method of research was the statistical method, supplemented with the comparative method when data were available. Two main databases were used: the World Values Surveys and the Latinobarometer.
Abstract:
Novel predator introductions are thought to have a high impact on native prey, especially in freshwater systems. Prey may fail to recognize predators as a threat, or may show inappropriate or ineffective responses. The ability of prey to recognize and respond appropriately to novel predators may depend on the prey's use of general or specific cues to detect predation threats. We used laboratory experiments to examine the ability of three native Everglades prey species (Eastern mosquitofish, flagfish and riverine grass shrimp) to respond to the presence, as well as to the chemical and visual cues, of a native predator (warmouth) and a recently introduced non-native predator (African jewelfish). We used prey from populations that had not previously encountered jewelfish. Despite this novelty, the native warmouth and non-native jewelfish had overall similar predatory effects, except on mosquitofish, which suffered higher warmouth predation. When predators were present, the three prey taxa showed consistent and strong responses to the non-native jewelfish, similar in magnitude to the responses exhibited to the native warmouth. When cues were presented, fish prey responded largely to chemical cues, while shrimp showed no response to either chemical or visual cues. Overall, responses by mosquitofish and flagfish to chemical cues indicated low differentiation among cue types, with similar responses to general and specific cues. The fact that antipredator behaviours were similar toward native and non-native predators suggests that the susceptibility to a novel fish predator may be similar to that of native fishes, and that prey may overcome predator novelty, at least when predators are confamilial to other common and longer-established non-native threats.
Abstract:
The 1996 welfare reform, for the first time in U.S. history, set a five-year residence requirement for immigrants to be eligible for federal welfare benefits. This dissertation assessed the impact of the 1996 welfare reform, specifically the immigrant provisions, on the economic well-being of low-income immigrants. It also explored the roles that migration selection theory and social capital theory play in the economic well-being of low-income immigrants.

This dissertation was based on an analysis of the March 1995, March 2002, and March 2006 Annual Demographic Supplement Files of the Current Population Survey (CPS). Both logistic regression and multiple regression were used to analyze economic well-being, comparing low-income immigrants with low-income citizens. Economic well-being was measured in the current survey year and the year before on the following variables: employment status, full-time status (35 or more hours per week), the number of weeks worked, and the total annual wage or salary.

The major finding reported in this dissertation was that low-income immigrants had advantages over low-income citizens in the labor market. This may be due to immigrants' stronger motivation to obtain success, consistent with migration selection theory. This research also suggested that the immigrant provisions had not ameliorated employment outcomes of low-income immigrants as policymakers may have expected.

The study also confirmed the role of social capital in advancing the economic well-being of qualified immigrants. Ultimately, this dissertation contributed to our understanding of low-income immigrants in the U.S. The study questioned the claim that immigrants are attracted to the U.S. by welfare benefits and suggested that immigrants come to the U.S., to a large extent, to pursue the goal of upward mobility. Consequently, immigrants may employ greater initiative and work harder than native-born Americans.
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are in turn correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; (2) to validate the models by using internal and external cross-validation techniques; and (3) to quantify the model uncertainties through Taylor and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: (1) Linear or Multi-linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a very critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied include five chemical classes of chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro or cyano group) of DBP chemicals to three types of organisms (i.e., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature.
The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS, or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models, but Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate strongly with NCl for several classes of DBPs; and (4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models stems mostly from NCl for all DBP classes.
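The Leave-Many-Out/K-fold procedure named in result (2) can be sketched in a few lines; the data below are synthetic stand-ins (the `ncl` and `elumo` arrays are hypothetical, not the thesis's datasets):

```python
import numpy as np

def kfold_rmse(X, y, k=5, seed=0):
    """K-fold (Leave-Many-Out) cross-validated RMSE of an MLR model."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    sq_errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Ordinary least squares fit with an intercept on the training folds.
        A = np.column_stack([np.ones(train.size), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(test.size), X[test]]) @ coef
        sq_errs.append((y[test] - pred) ** 2)
    return float(np.sqrt(np.mean(np.concatenate(sq_errs))))

# Synthetic stand-in data: a property decreasing linearly with chlorine number.
rng = np.random.default_rng(1)
ncl = rng.integers(1, 6, size=40).astype(float)      # hypothetical N_Cl values
elumo = 0.5 - 0.3 * ncl + rng.normal(0, 0.05, 40)    # hypothetical E_LUMO values
rmse = kfold_rmse(ncl.reshape(-1, 1), elumo)
```

Each observation is held out exactly once, so the pooled RMSE reflects predictive rather than fitting performance, which is the point of the validation step described above.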
Abstract:
Arsenic trioxide (ATO) has been tested in relapsed/refractory multiple myeloma with limited success. In order to better understand drug mechanism and resistance pathways in myeloma, we generated an ATO-resistant cell line, 8226/S-ATOR05, with an IC50 that is 2–3-fold higher than control cell lines and significantly higher than clinically achievable concentrations. Interestingly, we found two parallel pathways governing resistance to ATO in 8226/S-ATOR05, and the relevance of these pathways appears to be linked to the concentration of ATO used. We found changes in the expression of the Bcl-2 family proteins Bfl-1 and Noxa as well as an increase in cellular glutathione (GSH) levels. At low, clinically achievable concentrations, resistance was primarily associated with an increase in expression of the anti-apoptotic protein Bfl-1 and a decrease in expression of the pro-apoptotic protein Noxa. However, as the concentration of ATO increased, elevated levels of intracellular GSH in 8226/S-ATOR05 became the primary mechanism of ATO resistance. Removal of arsenic selection resulted in a loss of the resistance phenotype, with cells becoming sensitive to high concentrations of ATO within 7 days of drug removal, indicating that the changes associated with high-level resistance (elevated GSH) are dependent upon the presence of arsenic. Conversely, not until 50 days without arsenic did cells once again become sensitive to clinically relevant doses of ATO, coinciding with a decrease in the expression of Bfl-1. In addition, we found cross-resistance to melphalan and doxorubicin in 8226/S-ATOR05, suggesting that ATO-resistance pathways may also be involved in resistance to other chemotherapeutic agents used in the treatment of multiple myeloma.
Abstract:
Construction projects are complex endeavors that require the involvement of different professional disciplines in order to meet various project objectives that are often conflicting. The level of complexity and the multi-objective nature of construction projects lend themselves to collaborative design and construction such as integrated project delivery (IPD), in which relevant disciplines work together during project conception, design and construction. Traditionally, the main objectives of construction projects have been to build in the least amount of time with the lowest cost possible, thus the inherent and well-established relationship between cost and time has been the focus of many studies. The importance of being able to effectively model relationships among multiple objectives in building construction has been emphasized in a wide range of research. In general, the trade-off relationship between time and cost is well understood and there is ample research on the subject. However, despite the growing interest in sustainable building design, the relationships between time and environmental impact, as well as cost and environmental impact, have not been fully investigated. The objectives of this research were mainly to analyze and identify relationships of time, cost, and environmental impact, in terms of CO2 emissions, at different levels of a building: material level, component level, and building level, at the pre-use phase, including manufacturing and construction, and the relationships of life cycle cost and life cycle CO2 emissions at the usage phase. Additionally, this research aimed to develop a robust simulation-based multi-objective decision-support tool, called SimulEICon, which took construction data uncertainty into account, and was capable of incorporating life cycle assessment information into the decision-making process. The findings of this research supported the trade-off relationship between time and cost at different building levels. Moreover, the relationship between time and CO2 emissions presented trade-off behavior at the pre-use phase. Interestingly, the relationship between cost and CO2 emissions was proportional at the pre-use phase, and the same pattern persisted from construction into the usage phase. Understanding the relationships between these objectives is key to successfully planning and designing environmentally sustainable construction projects.
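A simulation-based multi-objective decision-support tool of this kind ultimately reduces to extracting the non-dominated (Pareto-optimal) designs from a set of simulated alternatives. A minimal sketch with hypothetical (time, cost, CO2) triples, not SimulEICon's actual data or algorithm:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows, assuming every objective is minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective and < in one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical (time, cost, CO2-emissions) triples for four candidate designs.
designs = [(10, 120, 50), (12, 100, 55), (9, 150, 48), (11, 130, 60)]
front = pareto_front(designs)
```

In this toy set the last design is dominated (the first is better on all three objectives), so only three designs survive; a trade-off between two objectives shows up as multiple surviving designs, while a proportional relationship collapses them.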
Abstract:
In this thesis, we consider N quantum particles coupled to collective thermal quantum environments. The coupling is energy conserving and scaled in the mean-field way. There is no direct interaction between the particles; they interact only via the common reservoir. It is well known that an initially disentangled state of the N particles will remain disentangled for all times in the limit N → ∞. In this thesis, we evaluate the η-body reduced density matrix (tracing over the reservoirs and the N − η remaining particles). We identify the main disentangled part of the reduced density matrix and obtain the first-order correction term in 1/N. We show that this correction term is entangled. We also estimate the speed of convergence of the reduced density matrix as N → ∞. Our model is exactly solvable; it is not based on numerical approximation.
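The partial-trace operation used to form a reduced density matrix can be illustrated in a few lines of numpy; the two-qubit example below is a generic illustration of the operation, not the N-particle model of the thesis:

```python
import numpy as np

def reduced_density_matrix(rho, dims, keep):
    """Partial trace: trace out every subsystem whose index is not in `keep`."""
    n = len(dims)
    rho = np.asarray(rho).reshape(*dims, *dims)   # one axis pair per subsystem
    for count, i in enumerate(sorted(j for j in range(n) if j not in keep)):
        half = rho.ndim // 2                      # split: row axes vs column axes
        rho = np.trace(rho, axis1=i - count, axis2=i - count + half)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

# Two-qubit check: reducing a maximally entangled Bell state gives I/2.
bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
rho_bell = np.outer(bell, bell)
rho_one_body = reduced_density_matrix(rho_bell, dims=[2, 2], keep=[0])
```

The maximally mixed one-body reduction of the Bell state is the standard signature of entanglement in the traced-out pair, the same quantity (for η bodies out of N) that the thesis analyzes.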
Abstract:
We demonstrate an ultra-compact, room-temperature, continuous-wave, broadly tunable dual-wavelength InAs/GaAs quantum-dot external-cavity diode laser in the spectral region between 1150 nm and 1301 nm with a maximum output power of 280 mW. This laser source, generating two modes with a tunable difference frequency (300 GHz–30 THz), has great potential to replace the bulky lasers commonly used for THz generation in photomixer devices.
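The quoted upper end of the difference-frequency range follows directly from the tuning range: converting the two endpoint wavelengths to optical frequencies and subtracting gives roughly 30 THz. A quick check:

```python
# Convert the two endpoint wavelengths to optical frequencies and subtract.
c = 299_792_458.0  # speed of light in vacuum, m/s

def difference_frequency_thz(lam1_nm, lam2_nm):
    """Difference frequency (THz) between two vacuum wavelengths given in nm."""
    return abs(c / (lam1_nm * 1e-9) - c / (lam2_nm * 1e-9)) / 1e12

span = difference_frequency_thz(1150.0, 1301.0)  # full tuning range, ~30 THz
```

Closely spaced mode pairs within the same range give the 300 GHz lower end.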
Abstract:
We present a theoretical description of the generation of ultra-short, high-energy pulses in two laser cavities driven by periodic spectral filtering or dispersion management. Critical in driving the intra-cavity dynamics is the nontrivial phase profiles generated and their periodic modification from either spectral filtering or dispersion management. For laser cavities with a spectral filter, the theory gives a simple geometrical description of the intra-cavity dynamics and provides a simple and efficient method for optimizing the laser cavity performance. In the dispersion managed cavity, analysis shows the generated self-similar behavior to be governed by the porous media equation with a rapidly-varying, mean-zero diffusion coefficient whose solution is the well-known Barenblatt similarity solution with parabolic profile. © 2010 Copyright SPIE - The International Society for Optical Engineering.
Abstract:
We present a theoretical description of the generation of ultra-short, high-energy pulses in two laser cavities driven by periodic spectral filtering or dispersion management. Critical in driving the intra-cavity dynamics is the nontrivial phase profiles generated and their periodic modification from either spectral filtering or dispersion management. For laser cavities with a spectral filter, the theory gives a simple geometrical description of the intra-cavity dynamics and provides a simple and efficient method for optimizing the laser cavity performance. In the dispersion managed cavity, analysis shows the generated self-similar behavior to be governed by the porous media equation with a rapidly-varying, mean-zero diffusion coefficient whose solution is the well-known Barenblatt similarity solution with parabolic profile. © 2010 American Institute of Physics.
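The Barenblatt similarity solution referenced above has a standard closed form; the one-dimensional statement below uses a common textbook normalization of the constants, which is an assumption here rather than the authors' own notation:

```latex
u_t = \left(u^m\right)_{xx}, \qquad
u(x,t) = t^{-k}\left(C - \frac{k(m-1)}{2m}\,\frac{x^2}{t^{2k}}\right)_{+}^{1/(m-1)},
\qquad k = \frac{1}{m+1}.
```

For m = 2 the exponent 1/(m-1) equals 1, so the self-similar profile is the parabola noted in the abstract; the constant C is fixed by the conserved mass.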
Abstract:
Although people frequently pursue multiple goals simultaneously, these goals often conflict with each other. For instance, consumers may have both a healthy eating goal and a goal to have an enjoyable eating experience. In this dissertation, I focus on two sources of enjoyment in eating experiences that may conflict with healthy eating: consuming tasty food (Essay 1) and affiliating with indulging dining companions (Essay 2). In both essays, I examine solutions and strategies that decrease the conflict between healthy eating and these aspects of enjoyment in the eating experience, thereby enabling consumers to resolve such goal conflicts.
Essay 1 focuses on the well-established conflict between having healthy food and having tasty food and introduces a novel product offering (“vice-virtue bundles”) that can help consumers simultaneously address both health and taste goals. Through several experiments, I demonstrate that consumers often choose vice-virtue bundles with small proportions (¼) of vice and that they view such bundles as healthier than but equally tasty as bundles with larger vice proportions, indicating that “healthier” does not always have to equal “less tasty.”
Essay 2 focuses on a conflict between healthy eating and affiliation with indulging dining companions. The first set of experiments provides evidence of this conflict and examines why it arises (Studies 1 to 3). Based on this conflict's origins, the second set of experiments tests strategies that consumers can use to decrease the conflict between healthy eating and affiliation with an indulging dining companion (Studies 4 and 5), such that they can make healthy food choices while still being liked by an indulging dining companion. Thus, Essay 2 broadens the existing picture of goals that conflict with the healthy eating goal and, together with Essay 1, identifies solutions to such goal conflicts.
Abstract:
This dissertation studies coding strategies in computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond those of conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
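Compressive sensing reconstructions of the kind invoked above typically solve a sparsity-regularized inverse problem. A minimal sketch using iterative soft-thresholding (ISTA) on synthetic data; the random sensing matrix and sparse signal are hypothetical illustrations, not the acoustic-metamaterial measurement model of this work:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=1000):
    """Recover a sparse x from y = A @ x via iterative soft-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L       # gradient step on 0.5*||Ax - y||^2
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Hypothetical setup: 40 random measurements of a 3-sparse, 100-dimensional signal.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = ista(A, y)
```

Fewer measurements than unknowns still yield an accurate reconstruction because the signal is sparse, the same principle that lets a single coded acoustic sensor separate a handful of active sources.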
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the "task-based" definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an "observer" to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
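The non-prewhitening matched filter observer mentioned above has a standard closed-form detectability index, SNR = (s·s)/sqrt(s·K·s) for an expected signal s and noise covariance K. A small numeric sketch; the Gaussian signal profile is illustrative, not data from this study:

```python
import numpy as np

def npw_dprime(signal, noise_cov):
    """Non-prewhitening matched-filter detectability: (s.s) / sqrt(s.K.s)."""
    s = np.ravel(signal)
    return float((s @ s) / np.sqrt(s @ noise_cov @ s))

# Illustrative 1-D Gaussian signal profile in white noise of standard deviation sigma.
sigma = 2.0
s = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
d = npw_dprime(s, sigma**2 * np.eye(s.size))
# For white noise the index reduces to ||s|| / sigma.
```

Unlike CNR, this index weights the noise by the signal's own spatial profile, which is one reason such observer models track human performance across reconstruction algorithms while CNR does not.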
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
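The ensemble NPS estimate from repeated scans is conventionally the averaged squared magnitude of the Fourier transform of mean-subtracted ROIs. A sketch for square ROIs only; the irregular-ROI method of this work is more involved, and the normalization below is the standard textbook one, assumed here:

```python
import numpy as np

def nps_2d(rois, pixel_mm=1.0):
    """Ensemble NPS: averaged |FFT|^2 of mean-subtracted ROIs (standard norm)."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    noise = rois - rois.mean(axis=0)     # remove the deterministic background
    spectra = np.abs(np.fft.fft2(noise)) ** 2
    return spectra.mean(axis=0) * pixel_mm**2 / (ny * nx)

# Synthetic check: white noise of std 3; the NPS should integrate to ~variance.
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 3.0, size=(50, 64, 64))
nps = nps_2d(rois)
variance_estimate = nps.sum() / 64**2    # integral over frequency, 1 mm pixels
```

By Parseval's theorem the NPS integrates to the pixel variance, which is a convenient sanity check on the normalization.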
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called "Clustered Lumpy Background" texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
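An analytical lesion model of the kind described, with size, contrast, and a smooth edge profile expressed as a closed-form radial function, can be sketched as follows. The logistic edge profile and all parameter values are illustrative assumptions, not the dissertation's actual models:

```python
import numpy as np

def lesion_model(shape, center, radius, contrast_hu, edge_sigma=1.5):
    """Radially symmetric lesion: flat plateau with a smooth logistic edge."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    # ~contrast_hu inside r < radius, ~0 outside; edge width ~edge_sigma pixels.
    return contrast_hu / (1.0 + np.exp((r - radius) / edge_sigma))

# Hypothetical subtle lesion (-15 HU) inserted into a uniform 80 HU background.
background = np.full((64, 64), 80.0)
hybrid = background + lesion_model((64, 64), (32, 32), radius=8, contrast_hu=-15.0)
```

Because the model is analytic, the ground-truth size, contrast, and edge profile of the inserted lesion are known exactly, which is what makes hybrid images useful for detectability and estimability studies.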
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging, specifically as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner (Siemens Healthcare). Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans, and the projections were then reconstructed with FBP and Sinogram-Affirmed Iterative Reconstruction (SAFIRE, strength 5); lesion-free images were also reconstructed. Noise, contrast, contrast-to-noise ratio (CNR), and the detectability index of a model observer (non-prewhitening matched filter) were assessed. Compared with FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a two-alternative forced-choice (2AFC) human perception experiment estimated the dose reduction potential of SAFIRE at 22% relative to the standard-of-care dose.
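The figures of merit used in this study can be sketched in a few lines. The versions below are simplified: CNR is computed from ROI means and background noise, and the non-prewhitening (NPW) matched-filter detectability index is written for stationary noise, once for uncorrelated (white) noise and once against an arbitrary noise power spectrum (NPS). The discrete normalization (no pixel-size scaling) is a simplifying assumption.

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: mean HU difference over background noise."""
    contrast = lesion_roi.mean() - background_roi.mean()
    noise = background_roi.std()
    return contrast / noise

def npw_dprime(signal, noise_std):
    """NPW matched-filter detectability for uncorrelated noise:
    d' = ||s|| / sigma, where s is the noiseless lesion template."""
    return np.sqrt(np.sum(signal**2)) / noise_std

def npw_dprime_nps(signal, nps):
    """NPW detectability against a noise power spectrum (Fourier domain):
    d'^2 = (sum |S(f)|^2)^2 / sum(|S(f)|^2 * NPS(f)).
    Normalization constants are omitted for brevity."""
    S2 = np.abs(np.fft.fftn(signal))**2
    return np.sqrt(S2.sum()**2 / (S2 * nps).sum())
```

Because iterative reconstruction changes both the noise texture (NPS) and the lesion template (through contrast and edge blur), d' captures trade-offs that noise or contrast alone would miss.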
In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools for rigorously assessing image quality on modern CT systems. In particular, methods to properly evaluate iterative reconstruction have been developed and are expected to aid the safe clinical implementation of dose reduction technologies.
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modeling has been applied extensively to predict the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, the acute and chronic toxicities of DBPs have been widely used in health risk assessment. These toxicities correlate with molecular properties, which in turn correlate with molecular descriptors. The primary goals of this thesis are: 1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modeling and analysis; 2) to validate the models using internal and external cross-validation techniques; and 3) to quantify the model uncertainties through Taylor series expansion and Monte Carlo simulation. QSAR analysis is one of the most important ways to predict molecular properties such as ELUMO. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. Three approaches are typically used in QSAR model development: 1) linear or Multi-Linear Regression (MLR); 2) Partial Least Squares (PLS); and 3) Principal Component Regression (PCR). A critical step in QSAR analysis is model validation, performed after the QSAR models are established and before they are applied to toxicity prediction. The DBPs studied span five chemical classes, including chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups of DBP chemicals (i.e., chloro-alkanes and aromatic compounds with a nitro or cyano group) to three types of organisms (fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature.
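Of the three model-development approaches listed, MLR is the simplest to illustrate. The sketch below fits ELUMO from the descriptors NCl, NC, and EHOMO by ordinary least squares; the descriptor values are invented for illustration only, not measured data from the thesis.

```python
import numpy as np

# Hypothetical descriptor data for a handful of chlorinated DBPs
# (illustrative values only, not experimental measurements).
NCl   = np.array([1, 2, 3, 1, 4], dtype=float)   # number of chlorine atoms
NC    = np.array([1, 1, 2, 2, 2], dtype=float)   # number of carbon atoms
EHOMO = np.array([-11.2, -11.5, -11.8, -11.0, -12.1])  # eV
ELUMO = np.array([0.9, 0.4, -0.1, 1.1, -0.6])          # eV (response)

# Multi-linear regression: ELUMO = b0 + b1*NCl + b2*NC + b3*EHOMO
X = np.column_stack([np.ones_like(NCl), NCl, NC, EHOMO])
coef, *_ = np.linalg.lstsq(X, ELUMO, rcond=None)

# Goodness of fit on the training data (R^2)
pred = X @ coef
r2 = 1 - np.sum((ELUMO - pred)**2) / np.sum((ELUMO - ELUMO.mean())**2)
```

PLS and PCR follow the same pattern but first project the descriptors onto latent components, which matters when descriptors such as NCl and EHOMO are strongly intercorrelated.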
The results show that: 1) QSAR models built by MLR, PLS, or PCR to predict molecular properties can be used either to select valid data points or to eliminate outliers; 2) the Leave-One-Out cross-validation procedure by itself does not give a reliable representation of the predictive ability of the QSAR models, but Leave-Many-Out/k-fold cross-validation and external validation can be applied together to achieve more reliable results; 3) ELUMO correlates strongly with NCl for several classes of DBPs; and 4) according to the Taylor-method uncertainty analysis, most of the uncertainty in the QSAR models is contributed by NCl for all DBP classes.
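Two of the techniques named in these conclusions can be sketched briefly: Leave-One-Out cross-validation (the internal check found insufficient on its own) and first-order Taylor uncertainty propagation (used to attribute model uncertainty to individual descriptors). Both functions below are minimal sketches for a linear model, not the thesis's actual implementation.

```python
import numpy as np

def loo_q2(X, y):
    """Leave-One-Out cross-validated Q^2 for a linear least-squares model.

    Each sample is held out in turn, the model is refit on the rest,
    and the held-out prediction error accumulates into PRESS.
    """
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ coef) ** 2
    return 1 - press / np.sum((y - y.mean()) ** 2)

def taylor_uncertainty(coef, var_desc):
    """First-order Taylor propagation for a linear QSAR model:
    Var(y) ~= sum_i (dy/dx_i)^2 * Var(x_i) = sum_i coef_i^2 * var_i.
    Returns each descriptor's fractional contribution to Var(y),
    assuming independent descriptors (a simplification)."""
    contrib = coef**2 * var_desc
    return contrib / contrib.sum()
```

A high Q^2 from LOO alone can still be optimistic, which is why pairing it with k-fold and external validation, as the thesis recommends, gives a sounder picture of predictive ability.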