950 results for Quality Function Deployment malli


Relevance: 30.00%

Publisher:

Abstract:

This article demonstrates the use of embedded fibre Bragg gratings (FBGs) as a vector bending sensor to monitor two-dimensional shape deformation of a shape memory polymer plate. The plate was made from a thermally responsive, epoxy-based shape memory polymer, and two FBG sensors were embedded orthogonally, one in the top and the other in the bottom layer of the plate, to measure the strain distribution in the longitudinal and transverse directions separately and to provide a temperature reference. When the plate was bent at different angles, the Bragg wavelengths of the embedded FBGs showed a red-shift of 50 pm/° caused by the bend-induced tensile strain on the plate surface. The finite element method was used to analyse the stress distribution over the whole shape recovery process. The strain transfer rate between the shape memory polymer and the optical fibre, calculated by the finite element method and confirmed by experimental results, was around 0.25. During the experiment, the embedded FBGs showed very high temperature sensitivity due to the high thermal expansion coefficient of the shape memory polymer: around 108.24 pm/°C below the glass transition temperature (Tg) and 47.29 pm/°C above Tg. The orthogonal arrangement of the two FBG sensors could therefore provide temperature compensation, as one grating measures only the temperature while the other is subjected to the directional deformation. © The Author(s) 2013.
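The compensation scheme above reduces to simple arithmetic: the reference grating sees only the thermal shift, so subtracting its shift from that of the strained grating and dividing by the bend sensitivity recovers the angle. A minimal sketch using the sensitivities quoted in the abstract (the function layout and names are illustrative assumptions, not the authors' code):

```python
# Temperature compensation for the orthogonal FBG pair described above.
# Sensitivities are the values quoted in the abstract; everything else
# is an illustrative assumption.

K_BEND = 50.0    # pm per degree of bend (red-shift on the tensile surface)
K_TEMP = 108.24  # pm/degC below the glass transition temperature Tg

def bend_angle(d_lambda_bend_pm, d_lambda_ref_pm):
    """Recover the bend angle (degrees) from two FBG wavelength shifts (pm).

    d_lambda_bend_pm: shift of the grating subjected to the deformation
    d_lambda_ref_pm:  shift of the orthogonal grating (temperature only)
    """
    # Both gratings share the same temperature change, so the reference
    # shift is exactly the thermal contribution to subtract.
    return (d_lambda_bend_pm - d_lambda_ref_pm) / K_BEND

# Example: 5 degC of warming plus a 30 degree bend
thermal = 5 * K_TEMP
print(round(bend_angle(thermal + 30 * K_BEND, thermal), 6))  # -> 30.0
```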

Abstract:

This research focuses on optimising resource utilisation in wireless mobile networks while taking into account users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated: video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work to evaluate the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It has been shown that Pause Intensity correlates closely with subjective quality measured by the Mean Opinion Score, and that this correlation is content-independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach addresses both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness.
The 3GPP Long Term Evolution (LTE) system is used as the main application environment in which the proposed research framework is examined, and the results are compared with existing scheduling methods in terms of achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritization of users according to the perceived quality of the services they receive, while a trade-off between fairness and efficiency is maintained through online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions, and the shape of the QoE distribution amongst users for different scheduling policies, have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism for user data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The derived scheduling policy is used to regulate the performance of resource allocation across the fairness-efficiency spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and of each WiFi access point involved. The performance of non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
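The abstract defines Pause Intensity only qualitatively, as a joint measure of pause duration and pause frequency. The sketch below illustrates that idea for a single playback session; the specific combination used (fraction of time paused, weighted by the pause rate) is an assumption for demonstration, not the thesis's actual formula:

```python
# Illustrative sketch only: a toy continuity metric combining pause
# duration and pause frequency, in the spirit of Pause Intensity.
# The exact formula here is an assumption, not taken from the thesis.

def pause_intensity(pause_durations_s, session_length_s):
    """Toy PI for one playback session: 0 means perfectly smooth."""
    total_paused = sum(pause_durations_s)
    pause_rate = len(pause_durations_s) / session_length_s
    return (total_paused / session_length_s) * pause_rate

smooth = pause_intensity([], 60.0)          # no stalls
choppy = pause_intensity([2.0] * 10, 60.0)  # ten 2-second stalls
print(smooth, choppy)                       # the choppy session scores worse
```

A scheduler could rank users by such a score to decide who most needs additional resource blocks.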

Abstract:

Local air quality was one of the main stimulants for low-carbon vehicle development during the 1990s. Issues of national fuel security and global air quality (climate change) have added pressure for their development, stimulating schemes to facilitate their deployment in the UK. In this case study, Coventry City Council aimed to adopt an in-house fleet of electric and hybrid-electric vehicles to replace business mileage paid for in employees' private vehicles. The study compared the proposed vehicle technologies, in terms of costs and air quality, over projected scenarios of typical use. It found that under 2009 conditions the electric and hybrid fleet could not compete on cost with the current business model, owing to untested assumptions, but that certain emissions were reduced significantly, by more than 50%. Climate change gas emissions were most drastically reduced where electric vehicles were adopted, because the electricity supply was generated from renewable energy sources. The study identified the key cost barriers and benefits to adoption of low-emission vehicles under current conditions in the Coventry fleet. Low-emission vehicles achieved significant reductions in air pollution-associated health costs and atmospheric emissions per vehicle, and widespread adoption in cities could deliver significant change. © The Author 2011. Published by Oxford University Press. All rights reserved.

Abstract:

Purpose: To examine visual outcomes following bilateral implantation of the FineVision trifocal intraocular lens (IOL; PhysIOL, Liège, Belgium). Methods: 26 patients undergoing routine cataract surgery were implanted bilaterally with the FineVision trifocal IOL and followed up post-operatively for 3 months. The FineVision optic features a combination of 2 diffractive structures, resulting in distance, intermediate (+1.75 D add) and near vision (+3.50 D add) zones. Apodization of the optic surface increases far-vision dominance with pupil aperture. Data collected at the 3-month visit included uncorrected and corrected distance (CDVA) and near vision; subjective refraction; defocus curve testing (photopic and mesopic); contrast sensitivity (CSV-1000); halometry glare testing; and a questionnaire (NAVQ) to gauge near-vision function and patient satisfaction. Results: The cohort comprised 15 males and 11 females, aged 52.5–82.4 years (mean 70.6 ± 8.2 years). Mean post-operative UDVA was 0.22 ± 0.14 logMAR, with a mean spherical equivalent refraction of +0.02 ± 0.35 D. Mean CDVA was 0.13 ± 0.10 logMAR monocularly and 0.09 ± 0.07 logMAR binocularly. Defocus curve testing showed an extensive range of clear vision in both photopic and mesopic conditions. Patients showed high levels of satisfaction with their near vision (mean score 0.9 ± 0.6, where 0 = completely satisfied and 4 = completely unsatisfied) and demonstrated good spectacle independence. Conclusion: The FineVision IOL can be considered for patients seeking spectacle independence following cataract surgery, and provides good patient satisfaction with uncorrected vision.

Abstract:

Due to their low cost and easy deployment, multi-hop wireless networks have become a very attractive communication paradigm. However, the IEEE 802.11 medium access control (MAC) protocol widely used in wireless LANs was not designed for multi-hop wireless networks. Although it can support some kinds of ad hoc network architecture, it does not function efficiently in wireless networks with multi-hop connectivity. Our research therefore focuses on medium access control in multi-hop wireless networks. The objective is to design practical MAC-layer protocols for multi-hop wireless networks: in particular, to prolong network lifetime without degrading performance on small battery-powered devices, and to improve system throughput over poor-quality channels. In this dissertation, we design two MAC protocols. The first aims to minimize energy consumption without deteriorating communication activities, providing energy efficiency, latency guarantees, adaptability and scalability in one type of multi-hop wireless network (the wireless sensor network). Methodologically, inspired by phase transition phenomena in distributed networks, we define a wake-up probability that is maintained by each node. Using this probability, we can control the number of active wireless connections within a local area; more specifically, we can adaptively adjust the wake-up probability based on local network conditions to reduce energy consumption without increasing transmission latency. The second is a cooperative MAC-layer protocol for multi-hop wireless networks, which leverages multi-rate capability through cooperative transmission among multiple neighbouring nodes. Moreover, for bidirectional traffic, network throughput can be further increased using the network coding technique. It is a very useful complement to current rate-adaptive MAC protocols under poor direct-link channel conditions.
Finally, we give an analytical model to analyse the impact of cooperative nodes on system throughput.
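The wake-up-probability idea above can be sketched as a simple feedback loop: each node nudges its probability toward a target number of awake neighbours. The update rule, step size, target, and floor below are illustrative assumptions, not the protocol from the dissertation:

```python
# Minimal sketch of adaptive wake-up probability control. All constants
# (step, target, floor) are illustrative assumptions.

import random

def adjust_wakeup_prob(p, awake_neighbours, target, step=0.05):
    """Lower p when too many neighbours are awake (save energy),
    raise it when too few are (preserve latency)."""
    if awake_neighbours > target:
        p -= step
    elif awake_neighbours < target:
        p += step
    return min(1.0, max(0.05, p))  # floor so a node never sleeps forever

random.seed(1)
p = 0.5                            # initial wake-up probability
for _ in range(100):
    # Each of 20 neighbours wakes up independently with probability p
    awake = sum(random.random() < p for _ in range(20))
    p = adjust_wakeup_prob(p, awake, target=4)
print(round(p, 2))                 # settles near target / neighbours = 0.2
```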

Abstract:

Altered freshwater inflows have affected circulation, salinity, and water quality patterns in Florida Bay, in turn altering the structure and function of this estuary. Changes in water quality and salinity, and the associated loss of dense turtle grass and other submerged aquatic vegetation (SAV), have created a condition in the bay where sediments and nutrients are regularly disturbed, frequently causing large and dense phytoplankton blooms. These algal and cyanobacterial blooms in turn often cause further loss of more recently established SAV, exacerbating the conditions that cause the blooms. Chlorophyll a (CHLA) was selected as an indicator of water quality because it reflects phytoplankton biomass, with concentrations integrating the effects of many of the water quality factors that may be altered by restoration activities. Overall, we assessed the CHLA indicator as being (1) relevant, reflecting the state of the Florida Bay ecosystem; (2) sensitive to ecosystem drivers (stressors, especially nutrient loading); (3) feasible to monitor; and (4) scientifically defensible. Distinct zones within the bay were defined using statistical analysis and expert consensus. Threshold levels of CHLA for each zone were defined using historical data and scientific consensus. A presentation template showing the condition of the bay against these thresholds is given as an example of an outreach product.
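The zone-specific threshold scheme reduces to a lookup against per-zone limits. The zone names and CHLA values below are placeholders, since the real zones and thresholds come from the historical-data and consensus process described above:

```python
# Sketch of zone-specific CHLA condition reporting. Zone names and
# threshold values are illustrative placeholders, not the paper's.

CHLA_THRESHOLDS_UG_L = {"north-central": 2.2, "eastern": 0.8, "western": 2.8}

def chla_condition(zone, chla_ug_l):
    """Flag a zone as 'above threshold' or 'within threshold'."""
    limit = CHLA_THRESHOLDS_UG_L[zone]
    return "above threshold" if chla_ug_l > limit else "within threshold"

# The same concentration can be acceptable in one zone and not another
print(chla_condition("eastern", 1.5))   # low-nutrient zone, tight limit
print(chla_condition("western", 1.5))   # bloom-prone zone, looser limit
```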

Abstract:

From 8/95 to 2/01, we investigated the ecological effects of intra- and inter-annual variability in freshwater flow through Taylor Creek in southeastern Everglades National Park. Continuous monitoring and intensive sampling studies overlapped with an array of pulsed weather events that impacted physical, chemical, and biological attributes of this region. We quantified the effects of three events representing a range of characteristics (duration, amount of precipitation, storm intensity, wind direction) on the hydraulic connectivity, nutrient and sediment dynamics, and vegetation structure of the SE Everglades estuarine ecotone. These events included a strong winter storm in November 1996, Tropical Storm Harvey in September 1999, and Hurricane Irene in October 1999. Continuous hydrologic and daily water sample data were used to examine the effects of these events on the physical forcing and quality of water in Taylor Creek. A high resolution, flow-through sampling and mapping approach was used to characterize water quality in the adjacent bay. To understand the effects of these events on vegetation communities, we measured mangrove litter production and estimated seagrass cover in the bay at monthly intervals. We also quantified sediment deposition associated with Hurricane Irene's flood surge along the Buttonwood Ridge. These three events resulted in dramatic changes in surface water movement and chemistry in Taylor Creek and adjacent regions of Florida Bay as well as increased mangrove litterfall and flood surge scouring of seagrass beds. Up to 5 cm of bay-derived mud was deposited along the ridge adjacent to the creek in this single pulsed event. These short-term events can account for a substantial proportion of the annual flux of freshwater and materials between the mangrove zone and Florida Bay. 
Our findings shed light on the capacity of these storm events, especially when they occur in succession, to have far-reaching and long-lasting effects on coastal ecosystems such as the estuarine ecotone of the SE Everglades.

Abstract:

Extensive data sets on water quality and seagrass distributions in Florida Bay have been assembled under complementary, but independent, monitoring programs. This paper presents the landscape-scale results from these monitoring programs and outlines a method for exploring the relationships between two such data sets. Seagrass species occurrence and abundance data were used to define eight benthic habitat classes from 677 sampling locations in Florida Bay. Water quality data from 28 monitoring stations spread across the Bay were used to construct a discriminant function model that assigned a probability of a given benthic habitat class occurring for a given combination of water quality variables. Mean salinity, salinity variability, the amount of light reaching the benthos, sediment depth, and mean nutrient concentrations were important predictor variables in the discriminant function model. Using a cross-validated classification scheme, this discriminant function identified the most likely benthic habitat type as the actual habitat type in most cases. The model predicted that the distribution of benthic habitat types in Florida Bay would likely change if water quality and water delivery were changed by human engineering of freshwater discharge from the Everglades. Specifically, an increase in the seasonal delivery of freshwater to Florida Bay should cause an expansion of seagrass beds dominated by Ruppia maritima and Halodule wrightii at the expense of the Thalassia testudinum-dominated community that now occurs in northeast Florida Bay. These statistical techniques should prove useful for predicting landscape-scale changes in community composition in diverse systems where communities are in quasi-equilibrium with environmental drivers.
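To illustrate the prediction step described above, the sketch below uses a nearest-centroid classifier as a stand-in for the paper's discriminant function model; the two habitat classes and all feature values are invented for demonstration:

```python
# Stand-in for the discriminant function model: assign the habitat
# class whose centroid (in water-quality feature space) is nearest.
# Classes and numbers are illustrative, not from the paper.

import math

# (mean salinity, fraction of light reaching the benthos) per class
centroids = {
    "Thalassia-dominated": (35.0, 0.4),
    "Halodule/Ruppia":     (20.0, 0.6),
}

def predict_habitat(salinity, light_fraction):
    """Return the class whose centroid is closest to the observation."""
    obs = (salinity, light_fraction)
    return min(centroids, key=lambda c: math.dist(centroids[c], obs))

# Fresher water (as under increased freshwater delivery) favours the
# Halodule/Ruppia class, mirroring the paper's prediction.
print(predict_habitat(18.0, 0.65))
```

A real discriminant analysis would also weight and decorrelate the predictors; the nearest-centroid rule is the simplest member of that family.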

Abstract:

Date of Acceptance: 02/03/2015

Abstract:

Date of Acceptance: 02/03/2015

Abstract:

© The European Society of Cardiology 2015. Funding: The project was funded by the Sir Halley Stewart Trust. MINAP is funded by the Healthcare Quality Improvement Partnership (HQIP).

Abstract:

The objective of this study was to verify the association between selected mobility items of the International Classification of Functioning, Disability and Health (ICF), the Gross Motor Function Measure (GMFM-88) and the 1-minute walk test (1MWT), and to determine whether motor impairment influences quality of life in children with cerebral palsy (CP), using the Paediatric Quality of Life Inventory (PedsQL 4.0, child and parent versions). The study included 22 children with spastic cerebral palsy, classified at levels I, II and III of the Gross Motor Function Classification System (GMFCS), with a mean age of 9.9 years. Seven participants were level I, eight were level II and seven were level III. All children and adolescents were assessed using the ICF checklist (mobility items), the GMFM-88, the 1MWT and the PedsQL 4.0 questionnaires for children and parents. A strong correlation was observed between the GMFM-88 and the ICF checklist (mobility items), but only a moderate correlation between the GMFM-88 and the 1MWT. The correlation between the walk test and the ICF checklist (mobility items) was also moderate. The correlation between the child and parent PedsQL 4.0 questionnaires was weak, as was the correlation of both with the GMFM-88, the ICF (mobility items) and the walk test. The lack of interrelation between physical function tests and quality of life indicates that, regardless of the severity of motor impairment and difficulty with mobility, children and adolescents with spastic CP at GMFCS levels I, II and III, and their parents, hold varied opinions regarding perceived well-being and life satisfaction.

Abstract:

Aims: The purpose of this study was to examine the effect of a pelvic floor muscle (PFM) rehabilitation program on incontinence symptoms, PFM function, and morphology in older women with stress urinary incontinence (SUI). Methods: Women 60 years old and older with at least weekly episodes of SUI were recruited. Participants were evaluated before and after a 12-week group PFM rehabilitation intervention. The evaluations included 3-day bladder diaries; symptom and quality of life questionnaires; PFM function testing with dynamometry (force) and electromyography (activation) during seven tasks (rest, PFM maximum voluntary contraction (MVC), straining, rapid repeated PFM contractions, a 60-s sustained PFM contraction, a single cough, and three repeated coughs); and sagittal MRI recorded at rest, during PFM MVCs, and during straining to assess PFM morphology. Results: Seventeen women (68.9 ± 5.5 years) participated. Following the intervention, the frequency of urine leakage decreased and disease-specific quality of life improved significantly. PFM function improved significantly: the participants were able to perform more rapid repeated PFM contractions, activated their PFMs sooner when coughing, and were better able to maintain a PFM contraction between repeated coughs. Pelvic organ support improved significantly: the anorectal angle decreased and the urethrovesical junction was higher at rest, during contraction, and while straining. Conclusions: This study indicated that improvements in urine leakage were accompanied by improvements in PFM coordination (demonstrated by the increased number of rapid PFM contractions and earlier PFM activation when coughing), motor control, and pelvic organ support.

Abstract:

With the cell therapy industry continuing to grow, the ability to preserve clinical-grade cells, including mesenchymal stem cells (MSCs), whilst retaining cell viability and function remains critical for the generation of off-the-shelf therapies. Cryopreservation of MSCs using slow freezing is an established process at lab scale. However, the cytotoxicity of cryoprotectants such as Me2SO raises questions about the impact of prolonged cell exposure to cryoprotectant at temperatures >0 °C during processing of large cell batches for allogeneic therapies, prior to rapid cooling in a controlled-rate freezer or in the clinic prior to administration. Here we show that exposure of human bone marrow-derived MSCs to Me2SO for ≥1 h before freezing, or after thawing, degrades membrane integrity and short-term cell attachment efficiency and alters cell immunophenotype. After 2 h of exposure to Me2SO at 37 °C post-thaw, membrane integrity dropped to ∼70% and only ∼50% of cells retained the ability to adhere to tissue culture plastic. Furthermore, only 70% of the recovered MSCs retained an immunophenotype consistent with the ISCT minimal criteria after exposure. We also saw a similar loss of membrane integrity and attachment efficiency after exposing osteoblast (HOS TE85) cells to Me2SO before, and after, cryopreservation. Overall, these results show that freezing-medium exposure is a critical determinant of product quality as process scale increases. Defining and reporting cell sensitivity to freezing-medium exposure, both before and after cryopreservation, enables a fair judgement of how scalable a particular cryopreservation process can be, and consequently whether the therapy has commercial feasibility.

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
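For concreteness, here is a minimal non-prewhitening (NPW) matched-filter observer, the simplest member of one model family named above: it correlates each image with the expected signal template, and the detectability index d' is the SNR of the resulting decision variable. The 1-D synthetic data, template, and noise level are illustrative, not from the study:

```python
# Minimal NPW matched-filter observer on synthetic 1-D "images".
# Template, noise level, and trial counts are illustrative assumptions.

import random, statistics as stats

random.seed(0)
template = [0.0, 0.5, 1.0, 0.5, 0.0]   # expected lesion profile

def score(image):
    """NPW decision variable: correlate the image with the template."""
    return sum(t * x for t, x in zip(template, image))

def make_image(with_signal, noise=0.5):
    """Signal (template) plus white Gaussian noise, or noise alone."""
    return [(t if with_signal else 0.0) + random.gauss(0, noise)
            for t in template]

sig = [score(make_image(True)) for _ in range(2000)]
bkg = [score(make_image(False)) for _ in range(2000)]
d_prime = (stats.mean(sig) - stats.mean(bkg)) / (
    (0.5 * (stats.variance(sig) + stats.variance(bkg))) ** 0.5)
print(round(d_prime, 1))   # detectability index of the NPW observer
```

A channelized Hotelling observer additionally projects images onto a small set of channels and prewhitens with the channel covariance; CNR, by contrast, ignores the signal shape and noise correlations entirely, which is consistent with its weaker correlation with human performance.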

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
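The subtraction technique rests on a standard identity: the fixed phantom cancels in the difference of two repeated scans, leaving a difference image whose standard deviation is √2 times the per-image quantum noise. A sketch on synthetic stand-in data:

```python
# Quantum noise via image subtraction: std(scan1 - scan2) / sqrt(2).
# The "scans" here are synthetic stand-ins for repeated CT images.

import random, statistics as stats

random.seed(42)
SIGMA = 10.0                          # true per-image noise (HU)
phantom = [100.0] * 10000             # fixed background structure (HU)

scan1 = [p + random.gauss(0, SIGMA) for p in phantom]
scan2 = [p + random.gauss(0, SIGMA) for p in phantom]

# The phantom cancels; only the two independent noise realisations remain
diff = [a - b for a, b in zip(scan1, scan2)]
noise = stats.stdev(diff) / 2 ** 0.5  # recovers ~SIGMA regardless of texture
print(round(noise, 1))
```

This is why the measurement works equally well in uniform and textured phantoms: any fixed texture subtracts out.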

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
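As background for the NPS measurement above, here is a 1-D pure-Python sketch under the usual DFT convention (ensemble-averaged |DFT|² of mean-subtracted noise realisations, scaled by pixel size over sample count); the dissertation's method additionally handles 2-D, irregularly shaped ROIs, and the noise realisations here are synthetic:

```python
# 1-D NPS estimation sketch: NPS_k = <|DFT(roi - mean)|^2> * dx / N.
# Synthetic white-noise ROIs stand in for measured noise realisations.

import cmath, random, statistics as stats

random.seed(0)
N, PX = 64, 0.5                       # samples per ROI, pixel size (mm)

def nps_1d(rois):
    spectra = []
    for roi in rois:
        m = stats.mean(roi)
        d = [x - m for x in roi]      # remove the DC (mean) component
        dft = [sum(d[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n in range(N)) for k in range(N)]
        spectra.append([abs(c) ** 2 * PX / N for c in dft])
    # ensemble average over ROIs, frequency bin by frequency bin
    return [stats.mean(col) for col in zip(*spectra)]

rois = [[random.gauss(0, 5.0) for _ in range(N)] for _ in range(50)]
nps = nps_1d(rois)

# Sanity check (Parseval): the integral of the NPS equals the variance
variance = sum(nps) / (N * PX)
print(round(variance, 1))             # close to 5.0**2 = 25
```

For white noise the NPS is flat; the point of the measurement is that iterative reconstruction makes it non-flat and background-dependent.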

To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
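The lesion-modeling idea above can be sketched as an analytical radial profile, parameterised by size, contrast, and edge width, that is voxelised and added to a patient image to form a hybrid image. The sigmoid edge profile and all parameter values below are illustrative assumptions, not the dissertation's actual models:

```python
# Sketch of an analytical lesion model and voxelised insertion.
# Profile shape and parameters are illustrative assumptions.

import math

def lesion_value(r, radius=3.0, contrast=-15.0, edge_width=0.5):
    """Lesion contrast (HU) at distance r (mm) from the lesion centre:
    flat plateau of `contrast` inside `radius`, sigmoid roll-off at the edge."""
    return contrast / (1.0 + math.exp((r - radius) / edge_width))

def insert_lesion(image, cx, cy, px_mm=1.0, **kw):
    """Voxelise the lesion and add it to a 2-D image (list of lists, HU)."""
    for y, row in enumerate(image):
        for x in range(len(row)):
            r = math.hypot((x - cx) * px_mm, (y - cy) * px_mm)
            row[x] += lesion_value(r, **kw)
    return image

liver = [[60.0] * 21 for _ in range(21)]   # uniform 60 HU background
hybrid = insert_lesion(liver, cx=10, cy=10)
print(round(hybrid[10][10], 1))            # centre: ~60 - 15 = 45.0
```

Because the inserted lesion is defined by an equation, its true size, contrast, and location are known exactly, which is what makes hybrid images usable as ground truth for detection and estimation studies.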

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.