856 results for penalty-based genetic algorithm
Abstract:
This article presents a laser tracker position optimization code based on the tracker uncertainty model developed by the National Physical Laboratory (NPL). The code can find the optimal tracker positions for generic measurements involving one tracker or a network of many trackers and an arbitrary set of targets. The optimization is performed using pattern search or, optionally, a genetic algorithm (GA) or particle swarm optimization (PSO). Different objective function weightings can be defined for the uncertainties of individual points, the distance uncertainties between point pairs, and the angular uncertainties between three points. Constraints on tracker position limits and minimum measurement distances have also been implemented. Furthermore, position optimization taking lines-of-sight (LOS) into account within complex CAD geometry has also been demonstrated. The code is simple to use and can be a valuable measurement planning tool.
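As an illustration of the penalty-based formulation described above, the following is a minimal Python sketch: a made-up range-dependent uncertainty surrogate stands in for the NPL tracker model, a quadratic penalty enforces the minimum measurement distance, and Nelder-Mead direct search stands in for the pattern search. All coefficients, targets, and weights are illustrative assumptions.

```python
# Minimal sketch of a penalty-based objective for laser tracker placement.
# point_uncertainty() is a hypothetical surrogate, NOT the NPL model.
import numpy as np
from scipy.optimize import minimize

targets = np.array([[0.0, 0.0, 0.0], [2.0, 1.0, 0.5], [4.0, -1.0, 1.0]])
weights = np.ones(len(targets))     # per-point uncertainty weights
d_min = 1.5                         # minimum measurement distance (m)
penalty_k = 1e3                     # penalty stiffness for constraint violations

def point_uncertainty(tracker, target):
    """Surrogate: uncertainty grows linearly with range (illustrative only)."""
    r = np.linalg.norm(tracker - target)
    return 0.5e-6 * r + 5e-6        # metres; invented coefficients

def objective(x):
    tracker = x.reshape(3)
    u = sum(w * point_uncertainty(tracker, t) for w, t in zip(weights, targets))
    # Quadratic penalty pushes the tracker outside each minimum-distance sphere.
    violation = sum(max(0.0, d_min - np.linalg.norm(tracker - t)) for t in targets)
    return u + penalty_k * violation ** 2

res = minimize(objective, x0=np.array([1.0, 1.0, 1.0]), method="Nelder-Mead")
print("optimal tracker position:", res.x)
```

Because the constraints enter as soft penalty terms, the same objective can be handed unchanged to a pattern search, GA, or PSO driver.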
Abstract:
Fluorescence-enhanced optical imaging is an emerging non-invasive and non-ionizing modality for breast cancer diagnosis. Various optical imaging systems are currently available, although most are limited by bulky instrumentation or by their inability to flexibly image different tissue volumes and shapes. Hand-held optical imaging systems are a recent development offering improved portability, but they are currently limited to surface mapping. Herein, a novel optical imager, consisting primarily of a hand-held probe and a gain-modulated intensified charge coupled device (ICCD) detector, is developed for both surface and tomographic breast imaging. The unique features of this hand-held probe based optical imager are its ability to: (i) image large tissue areas (5×10 sq. cm) in a single scan, (ii) reduce overall imaging time using a unique measurement geometry, and (iii) perform tomographic imaging for tumor three-dimensional (3-D) localization. Frequency-domain experimental phantom studies have been performed on slab geometries (650 ml) under different target depths (1-2.5 cm), target volumes (0.45, 0.23 and 0.10 cc), fluorescence absorption contrast ratios (1:0, 1000:1 to 5:1), and numbers of targets (up to 3), using Indocyanine Green (ICG) as the fluorescence contrast agent. An approximate extended Kalman filter based inverse algorithm has been adapted for 3-D tomographic reconstructions. A single fluorescence target was reconstructed when located: (i) up to 2.5 cm deep (at 1:0 contrast ratio) and 1.5 cm deep (up to 10:1 contrast ratio) for the 0.45 cc target; and (ii) 1.5 cm deep for a target as small as 0.10 cc at 1:0 contrast ratio. In the case of multiple targets, two targets as close as 0.7 cm were tomographically resolved when located 1.5 cm deep. It was observed that performing multi-projection (here dual) tomographic imaging, using a priori target information from surface images, improved target depth recovery over single-projection imaging. From a total of 98 experimental phantom studies, the sensitivity and specificity of the imager were estimated as 81-86% and 43-50%, respectively. With 3-D tomographic imaging successfully demonstrated for the first time using a hand-held optical imager, the clinical translation of this technology is promising upon further experimental validation in in-vitro and in-vivo studies.
Abstract:
The profitability of momentum portfolios in the equity markets is derived from the continuation of stock returns over medium time horizons. The empirical evidence of momentum, however, differs significantly across markets around the world. The purpose of this dissertation is to: (1) help global investors determine the optimal selection and holding periods for momentum portfolios, (2) evaluate the profitability of the optimized momentum portfolios in different time periods and market states, (3) assess the investment strategy profits after considering transaction costs, and (4) interpret momentum returns within the framework of prior studies on investors’ behavior. Improving on the traditional practice of selecting arbitrary selection and holding periods, a genetic algorithm (GA) is employed. The GA performs a thorough and structured search to capture the return continuation and reversal patterns of momentum portfolios. Three portfolio formation methods are used, price momentum, earnings momentum, and combined earnings and price momentum, each paired with the non-linear optimization procedure (GA). The focus is on common equity of the U.S. and a select number of countries, including Australia, France, Germany, Japan, the Netherlands, Sweden, Switzerland and the United Kingdom. The findings suggest that the evolutionary algorithm increases the annualized profits of the U.S. momentum portfolios. However, the difference in mean returns is statistically significant only in certain cases. In addition, after considering transaction costs, neither the price nor the earnings and price momentum portfolios appear to generate abnormal returns. Positive risk-adjusted returns net of trading costs are documented solely during “up” markets for a portfolio long in prior winners only. The results on the international momentum effects indicate that the GA improves the momentum returns by 2 to 5% on an annual basis. In addition, the relation between momentum returns and exchange rate appreciation/depreciation is examined. Currency appreciation does not appear to significantly influence momentum profits. Further, the influence of the market state on momentum returns is not uniform across the countries considered. The implications of the above findings are discussed with a focus on the practical aspects of momentum investing, both in the U.S. and globally.
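A bare-bones sketch of a GA search over selection and holding periods, of the kind the dissertation describes, is given below. The backtest runs on synthetic returns, and the ranking rule, crossover, and mutation operators are invented for illustration; they are not the study's data or operators.

```python
# Toy GA over (selection, holding) month pairs for a momentum strategy.
import numpy as np
rng = np.random.default_rng(0)
returns = rng.normal(0.01, 0.05, size=(240, 50))   # 240 months x 50 stocks (synthetic)

def fitness(sel, hold):
    """Mean return of a winners-minus-losers portfolio (toy backtest)."""
    profits = []
    for t in range(sel, returns.shape[0] - hold):
        past = returns[t - sel:t].sum(axis=0)        # formation-period returns
        rank = np.argsort(past)
        future = returns[t:t + hold].sum(axis=0)     # holding-period returns
        profits.append(future[rank[-5:]].mean() - future[rank[:5]].mean())
    return np.mean(profits)

pop = [(int(rng.integers(1, 13)), int(rng.integers(1, 13))) for _ in range(20)]
for _ in range(30):                                  # generations
    pop.sort(key=lambda g: fitness(*g), reverse=True)
    parents = pop[:10]                               # elitist selection
    children = [(a[0], b[1]) for a, b in zip(parents, reversed(parents))]
    pop = parents + [(max(1, s + int(rng.integers(-2, 3))),   # mutate
                      max(1, h + int(rng.integers(-2, 3)))) for s, h in children]
pop.sort(key=lambda g: fitness(*g), reverse=True)
print("best (selection, holding) periods in months:", pop[0])
```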
Abstract:
This dissertation presents a system-wide approach, based on genetic algorithms, for the optimization of transfer times for an entire bus transit system. Optimization of transfer times in a transit system is a complicated problem because of the large set of binary and discrete values involved. The combinatorial nature of the problem imposes a computational burden and makes it difficult to solve by classical mathematical programming methods. The genetic algorithm proposed in this research attempts to find an optimal solution for the transfer time optimization problem by searching for a combination of adjustments to the timetables of all the routes in the system. It makes use of existing scheduled timetables and ridership demand at all transfer locations, and takes into consideration the randomness of bus arrivals. Data from Broward County Transit are used to compute total transfer times. The proposed genetic algorithm-based approach proves capable of producing substantial time savings compared to the existing transfer times, in a reasonable amount of computation time. The dissertation also addresses issues related to spatial and temporal modeling, variability in bus arrival and departure times, walking time, and the integration of scheduling and ridership data.
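A toy sketch of the fitness computation such a GA would drive is shown below: shift each route's timetable by an offset and sum the demand-weighted waits at transfer points. The headways and demand matrix are invented, and the randomness of bus arrivals is omitted; with only three routes the offsets can be enumerated outright rather than evolved.

```python
# Toy transfer-time objective for timetable offset optimization.
import numpy as np

headway = np.array([15, 20, 30])        # minutes; routes A, B, C (invented)
demand = np.array([[0, 40, 10],         # transfers/hour from route i to route j
                   [25, 0, 5],
                   [8, 12, 0]])

def wait(offset_i, offset_j, h_j):
    """Wait for the next route-j departure after arriving on route i."""
    return (offset_j - offset_i) % h_j

def total_transfer_time(offsets):
    return sum(demand[i, j] * wait(offsets[i], offsets[j], headway[j])
               for i in range(3) for j in range(3) if i != j)

# A GA would evolve integer offset vectors; three routes are small enough to enumerate.
best = min(((a, b, c) for a in range(15) for b in range(20) for c in range(30)),
           key=total_transfer_time)
print("best offsets:", best, "total weighted wait:", total_transfer_time(best))
```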
Abstract:
This research studies a hybrid flow shop problem with parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted FF|batch(1), s_j|Cmax in the standard three-field scheduling notation. The problem is formulated as a mixed-integer linear program, and the commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution even for small problems; a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational time encountered when using the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic. It decomposes the entire problem into three sub-problems and schedules the sub-problems one by one. The proposed BFD heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems, and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD, which is further evaluated against a set of common heuristics including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all of these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature. A meta-search approach based on the genetic algorithm concept is developed to evaluate the significance of further improving the solution obtained from the proposed BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average within negligible time when the problem size is less than 50 jobs.
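For comparison, a textbook-style lower bound for a single stage (the maximum of a machine-load bound and a job-path bound) can be sketched as follows; the bound actually developed in the dissertation is reported to be tighter than generic bounds of this kind.

```python
# Generic single-stage makespan lower bound (textbook-style, illustrative).
def stage_lower_bound(proc, machines, release):
    """proc[j]: processing time of job j at this stage;
    release[j]: earliest time job j can reach the stage."""
    load_bound = min(release) + sum(proc) / machines        # total work / capacity
    path_bound = max(r + p for r, p in zip(release, proc))  # longest single job
    return max(load_bound, path_bound)

print(stage_lower_bound(proc=[4, 6, 3, 5], machines=2, release=[0, 1, 2, 0]))
```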
Abstract:
The purpose of this thesis was to identify the optimal design parameters for a jet nozzle that obtains a local maximum shear stress while maximizing the average shear stress on the floor of a fluid-filled system. This research examined how geometric parameters of a jet nozzle, such as the nozzle's angle, height, and orifice, influence the shear stress created on the bottom surface of a tank. Simulations were run using a Computational Fluid Dynamics (CFD) software package to determine shear stress values for a parameterized geometric domain including the jet nozzle. A response surface was created based on the shear stress values obtained from 112 simulated designs. Multi-objective optimization software utilized the response surface to generate designs with the best combination of parameters to achieve maximum shear stress and maximum average shear stress. The optimal configuration of parameters achieved larger shear stress values than a commercially available design.
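A minimal sketch of the response-surface step follows: fit a quadratic model to shear-stress samples and then optimize over the fitted surface. The two design variables, their bounds, and the stand-in "CFD" function are invented for illustration; only the count of 112 sampled designs is taken from the abstract.

```python
# Quadratic response surface fit + optimization over it (illustrative).
import numpy as np
from scipy.optimize import minimize
rng = np.random.default_rng(4)

def cfd_shear(angle, height):          # placeholder for a CFD evaluation
    return -(angle - 30)**2 / 50 - (height - 5)**2 / 2 + 100 + rng.normal(0, 0.5)

X = rng.uniform([10, 2], [60, 10], size=(112, 2))   # 112 sampled designs
y = np.array([cfd_shear(a, h) for a, h in X])

# Response surface y ~ b0 + b1*a + b2*h + b3*a^2 + b4*h^2 + b5*a*h, least squares.
A = np.column_stack([np.ones(len(X)), X, X**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def neg_surface(x):
    a, h = x
    return -(coef @ np.array([1.0, a, h, a * a, h * h, a * h]))  # negate to maximize

best = minimize(neg_surface, x0=[30, 5], bounds=[(10, 60), (2, 10)])
print("predicted optimal (angle, height):", best.x)
```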
Abstract:
Significant advances have emerged in research on Classifier Committees. The models that receive the most attention in the literature are those of a static nature, also known as ensembles. Among the algorithms in this class, we highlight methods that use resampling of the training data: Bagging, Boosting and Multiboosting. The choice of the architecture and base components to be recruited is not a trivial task and has motivated new proposals that attempt to build such models automatically, many of them based on optimization methods. Many of these contributions have not shown satisfactory results when applied to more complex problems of a different nature. In contrast, the thesis presented here proposes three new hybrid approaches for the automatic construction of ensembles: diversity increment, adaptive fitness function, and meta-learning for the automatic configuration of ensemble parameters. The first approach combines different diversity techniques in a single conceptual framework, in an attempt to achieve higher levels of diversity in ensembles and, with it, better performance of such systems. The second approach uses a genetic algorithm for the automatic design of ensembles; its contribution is to combine filter and wrapper techniques adaptively to evolve a better distribution of the feature space to present to the ensemble components. Finally, the third approach proposes new techniques for recommending ensemble architectures and base components, via traditional meta-learning and multi-label meta-learning. In general, the results are encouraging and corroborate the thesis that hybrid tools are a powerful solution for building effective ensembles for pattern classification problems.
Abstract:
This thesis presents a hybrid technique for the design of frequency selective surfaces (FSS) on an isotropic dielectric layer, considering various geometries for the elements of the unit cell. Specifically, the hybrid technique uses the equivalent circuit method in conjunction with a genetic algorithm, aiming at the synthesis of structures with single-band and dual-band responses. The equivalent circuit method models the structure as a circuit, with different circuits obtained for different geometries; from the parameters of these circuits, the transmission and reflection characteristics of the patterned structures can be derived. For the optimization of the patterned structures according to the desired frequency response, the Matlab™ optimization tool optimtool proved easy to use and allowed important optimization results to be explored. In this thesis, numerical and experimental results are presented for the different characteristics of the analyzed geometries. To this end, a technique based on genetic algorithms and differential geometry was developed to obtain rational algebraic models that determine more accurate values of the parameter N, facilitating new FSS designs with these geometries. The optimal values of N are grouped according to the occupancy factor of the cell and the thickness of the dielectric, for modeling the structures by means of rational algebraic equations. Furthermore, a fitness function was developed for the proposed hybrid model to calculate the error in the definition of FSS bandwidths for single-band and dual-band transmission responses. This thesis covers the construction of FSS prototypes with frequency settings and bandwidths obtained with the use of this function. The FSS were initially analyzed through simulations performed with the commercial software Ansoft Designer™, followed by simulations with the equivalent circuit method to obtain a value of N that converges the resonance frequency and bandwidth of the analyzed FSS; the results were then compared. The methodology is validated with the construction and measurement of prototypes with different cell geometries for the FSS arrays.
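As a sketch of the equivalent-circuit idea, a band-stop (dipole/patch-type) FSS can be modeled as a series LC branch in shunt across a transmission line; in practice the element values are fitted to the geometry. The L and C values below are illustrative assumptions, not values from the thesis.

```python
# Equivalent-circuit transmission of a band-stop FSS: series LC branch in shunt.
import numpy as np

Z0 = 377.0                       # free-space wave impedance (ohms)
L, C = 2.0e-9, 0.05e-12          # equivalent inductance (H) and capacitance (F), invented

f = np.linspace(1e9, 30e9, 1000)
w = 2 * np.pi * f
Zp = 1j * w * L + 1 / (1j * w * C)     # shunt branch impedance
S21 = 2 * Zp / (2 * Zp + Z0)           # transmission past a shunt load

f0 = f[np.argmin(np.abs(S21))]
print(f"band-stop resonance near {f0 / 1e9:.2f} GHz "
      f"(ideal LC resonance: {1 / (2 * np.pi * np.sqrt(L * C)) / 1e9:.2f} GHz)")
```

At resonance the series LC branch short-circuits the line (S21 approaches 0), which is what produces the stop band of the FSS.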
Abstract:
Water-alternating-gas (WAG) is an enhanced oil recovery method combining the improved macroscopic sweep of water flooding with the improved microscopic displacement of gas injection. The optimal design of the WAG parameters is usually based on numerical reservoir simulation via trial and error, limited by the reservoir engineer’s availability. Employing optimisation techniques can guide the simulation runs and reduce the number of function evaluations. In this study, robust evolutionary algorithms are utilised to optimise hydrocarbon WAG performance in the E-segment of the Norne field. The first objective function is selected to be the net present value (NPV), and two global semi-random search strategies, a genetic algorithm (GA) and particle swarm optimisation (PSO), are tested on different case studies with different numbers of controlling variables which are sampled from the set of water and gas injection rates, bottom-hole pressures of the oil production wells, cycle ratio, cycle time, the composition of the injected hydrocarbon gas (miscible/immiscible WAG) and the total WAG period. In progressive experiments, the number of decision-making variables is increased, increasing the problem complexity while potentially improving the efficacy of the WAG process. The second objective function is selected to be the incremental recovery factor (IRF) within a fixed total WAG simulation time, and it is optimised using the same optimisation algorithms. The results from the two optimisation techniques are analysed and their performance, convergence speed and the quality of the optimal solutions found by the algorithms in multiple trials are compared for each experiment. The distinctions between the optimal WAG parameters resulting from NPV and oil recovery optimisation are also examined. This is the first known work optimising over this complete set of WAG variables. The first use of PSO to optimise a WAG project at the field scale is also illustrated. Compared to the reference cases, the best overall values of the objective functions found by GA and PSO were 13.8% and 14.2% higher, respectively, if NPV is optimised over all the above variables, and 14.2% and 16.2% higher, respectively, if IRF is optimised.
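A bare-bones PSO loop of the kind used to drive such a study is sketched below, with a placeholder objective standing in for the reservoir-simulator NPV evaluation. The number of variables, inertia, and acceleration coefficients are generic assumptions, not tuned values from this work.

```python
# Minimal particle swarm optimisation over normalized WAG control variables.
import numpy as np
rng = np.random.default_rng(1)

def npv(x):                       # placeholder for a reservoir-simulation NPV call
    return -np.sum((x - 0.6) ** 2)

dim, n = 6, 20                    # e.g. rates, BHPs, cycle ratio/time, gas composition
x = rng.random((n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), np.array([npv(p) for p in x])
gbest = pbest[pval.argmax()]

for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)          # keep within normalized bounds
    val = np.array([npv(p) for p in x])
    improved = val > pval
    pbest[improved], pval[improved] = x[improved], val[improved]
    gbest = pbest[pval.argmax()]          # personal bests only improve, so this is monotone
print("best normalized WAG controls:", gbest)
```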
Abstract:
The authors would like to express their gratitude to organizations and people that supported this research. Piotr Omenzetter’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research. Ben Ryder of Aurecon and Graeme Cummings of HEB Construction assisted in obtaining access to the bridge and information for modelling. Luke Williams and Graham Bougen, undergraduate research students, assisted with testing.
Abstract:
Piotr Omenzetter and Simon Hoell’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm), low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
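For concreteness, a minimal channelized Hotelling observer computation is sketched below: channelize signal-present and signal-absent images, then compute d'^2 = dv' S^{-1} dv, where dv is the mean channel-output difference and S the average intra-class channel covariance. The difference-of-Gaussians channels, toy lesion, and white-noise backgrounds are illustrative assumptions; they are not the channel sets or images used in the dissertation.

```python
# Minimal channelized Hotelling observer (CHO) detectability sketch.
import numpy as np
rng = np.random.default_rng(2)

def make_channels(size, sigmas=(1, 2, 4, 8)):
    """Difference-of-Gaussians radial channels (one common, simple choice)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    r = np.hypot(x, y)
    return np.stack([np.exp(-r**2 / (2 * s**2)) - np.exp(-r**2 / (2 * (0.7 * s)**2))
                     for s in sigmas]).reshape(len(sigmas), -1)

size, n_img = 32, 200
U = make_channels(size)                              # (channels, pixels)
signal = np.zeros((size, size)); signal[12:20, 12:20] = 2.0   # toy lesion
absent = rng.normal(0, 1, (n_img, size * size))      # white-noise backgrounds
present = absent + signal.ravel()

va, vp = absent @ U.T, present @ U.T                 # channel outputs
dv = vp.mean(axis=0) - va.mean(axis=0)
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))              # average class covariance
d2 = dv @ np.linalg.solve(S, dv)
print("CHO detectability d' =", np.sqrt(d2))
```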
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
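A minimal sketch of ensemble NPS estimation from repeated scans is given below, using square ROIs for simplicity (the irregular-ROI estimator developed here is more involved): subtract the ensemble mean to isolate the noise, then average the scaled periodograms. The pixel size and stand-in noise data are assumptions.

```python
# Ensemble 2-D noise power spectrum (NPS) estimate from repeated ROIs.
import numpy as np
rng = np.random.default_rng(3)

px = 0.5                                      # pixel size (mm), assumed
rois = rng.normal(0, 10, size=(50, 64, 64))   # stand-in for 50 repeated-scan ROIs

noise = rois - rois.mean(axis=0)              # remove ensemble mean (the signal)
dft2 = np.abs(np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))) ** 2
nps = dft2.mean(axis=0) * (px * px) / (64 * 64)   # NPS in HU^2 mm^2

# Sanity check: the NPS should integrate back to the pixel variance.
var_from_nps = nps.sum() * (1 / (64 * px)) ** 2   # frequency bin width = 1/(N*px)
print("pixel variance:", noise.var(), "vs NPS integral:", var_from_nps)
```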
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized, using a genetic algorithm, to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in the uniform phantom than in the textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
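A toy version of such an analytical lesion model is sketched below: a sphere with a sigmoid edge profile, voxelized so it can be added to a patient volume to form a “hybrid” image. The parameterization and all values are illustrative, not the dissertation's fitted models.

```python
# Toy analytical lesion model: sphere with a sigmoid edge, voxelized for insertion.
import numpy as np

def lesion_model(shape, center, radius, contrast_hu, edge_width):
    """Voxelized lesion: contrast_hu at the center, sigmoid falloff at the edge."""
    z, y, x = np.indices(shape)
    r = np.sqrt((x - center[2])**2 + (y - center[1])**2 + (z - center[0])**2)
    return contrast_hu / (1 + np.exp((r - radius) / edge_width))

lesion = lesion_model((32, 64, 64), center=(16, 32, 32),
                      radius=6.0, contrast_hu=-15.0, edge_width=1.0)
# "Hybrid" image: add the model to a patient volume (noise stands in here).
hybrid = np.random.default_rng(5).normal(50, 10, (32, 64, 64)) + lesion
print("lesion HU offset range:", lesion.min(), "to", lesion.max())
```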
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.
In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.