Abstract:
This article analyses a range of different meanings attached to images of erotic dance, with a particular focus on the 'impression management' (Goffman 1959) enacted by dancers. It presents a visual analysis of the work of a female erotic performer in a lesbian erotic dance venue in the UK. Still photographs, along with observational data and interviews, convey the complexity and skill of an erotic dancer's diverse gendered and sexualised performances. The visual data highlights the extensive 'aesthetic labour' (Nickson et al. 2001) and 'emotional labour' (Hochschild 1983) the dancer must put into constructing her work 'self'. However, a more ambitious use of the visual is identified: the dancer's own use of images of her work. This use of the visual by dancers themselves highlights a more complex 'impression management' strategy undertaken by a dancer and brings into question the separation of 'real' and 'work' 'selves' in erotic dance. © Sociological Research Online, 1996-2012.
Abstract:
The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for verification of numerical software, supporting a substantially more expressive language for specifications than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm and its source code is publicly available. In this paper we report on experiments using PolyPaver that indicate that the additional expressivity does not come at a performance cost when comparing with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
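The abstract above names inclusions between interval arithmetic expressions as one of the two expressivity constructs; purely as an illustration of that idea (and not of PolyPaver's actual syntax or algorithm), a minimal interval-arithmetic inclusion check might look like this:

```python
# Minimal sketch of interval arithmetic and an inclusion check, only to
# illustrate the kind of specification construct described above; it does
# not reflect PolyPaver's actual syntax or verification algorithm.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, other):
        """True if `other` is included in `self` (set inclusion)."""
        return self.lo <= other.lo and other.hi <= self.hi

# Example: check that x*x + x stays within [-0.25, 2] when x ranges over [0, 1].
x = Interval(0.0, 1.0)
spec_bound = Interval(-0.25, 2.0)
print(spec_bound.contains(x * x + x))  # True: the computed enclosure is [0, 2]
```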
Abstract:
From the accusation of plagiarism in The Da Vinci Code, to the infamous hoaxer in the Yorkshire Ripper case, the use of linguistic evidence in court and the number of linguists called to act as expert witnesses in court trials has increased rapidly in the past fifteen years. An Introduction to Forensic Linguistics: Language in Evidence provides a timely and accessible introduction to this rapidly expanding subject. Using knowledge and experience gained in legal settings – Malcolm Coulthard in his work as an expert witness and Alison Johnson in her work as a West Midlands police officer – the two authors combine an array of perspectives into a distinctly unified textbook, focusing throughout on evidence from real and often high profile cases including serial killer Harold Shipman, the Bridgewater Four and the Birmingham Six. Divided into two sections, 'The Language of the Legal Process' and 'Language as Evidence', the book covers the key topics of the field. The first section looks at legal language, the structures of legal genres and the collection and testing of evidence from the initial police interview through to examination and cross-examination in the courtroom. The second section focuses on the role of the forensic linguist, the forensic phonetician and the document examiner, as well as examining in detail the linguistic investigation of authorship and plagiarism. With research tasks, suggested reading and website references provided at the end of each chapter, An Introduction to Forensic Linguistics: Language in Evidence is the essential textbook for courses in forensic linguistics and language of the law.
Abstract:
Riemann’s memoir is devoted to the function π(x), defined as the number of primes less than or equal to the positive real number x. This is indeed the case, but the “main role” in it is played by the zeta-function already mentioned.
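For concreteness, π(x) can be tabulated directly by sieving, which is unrelated to Riemann's analytic treatment but makes the definition explicit:

```python
# Minimal sketch: compute pi(x), the number of primes <= x, by a simple sieve.
# Riemann's memoir studies this function analytically via the zeta-function;
# this brute-force count is only for illustration.
def prime_counting(x: int) -> int:
    if x < 2:
        return 0
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, x + 1, n):
                is_prime[multiple] = False
    return sum(is_prime)

print(prime_counting(100))  # 25 primes are less than or equal to 100
```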
Abstract:
Augmented reality is one of the latest information technologies in the modern electronics industry. Its essence is the addition of advanced computer graphics to real and/or digitized images. This paper gives a brief analysis of the concept and of the approaches to implementing augmented reality for an expanded presentation of a digitized object of national cultural and/or scientific heritage. ACM Computing Classification System (1998): H.5.1, H.5.3, I.3.7.
Abstract:
It is unlikely that the newly elected government of Dilma Rousseff will make any fundamental changes to the major imperatives that underlie Brazilian policy: that is, macroeconomic stability and poverty alleviation. These policy imperatives have set the country on the road to good governance and have provided former presidents a chance to claim continuity. While President Rousseff of the Workers' Party (PT) may have a distinct style, personality, and set of leadership skills compared to her predecessors, she is expected to maintain the core macroeconomic stability and social policies that are currently in place. Many who expected Rousseff to be former president Luiz Inácio “Lula” da Silva’s carbon copy are discovering that from day one she has showcased a different governing style than her mentor. She has emphasized her commanding authority and has brought fresh approaches to delicate matters involving domestic economic issues and foreign policy. For example, her administration has aggressively applied a set of macro-prudential measures to counter inflationary pressures on the Brazilian currency (Real). And in foreign policy, she has steadfastly recalibrated Itamaraty’s stance on controversial issues, such as Iran, and now appears to have refocused its short-term efforts on cementing Brazil’s leadership role in the region’s Southern Cone.
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
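As a rough illustration of the time-dilation idea attributed to SVEET (a hypothetical sketch; none of the names below come from the actual system), a real host's wall-clock time can be scaled by a dilation factor so that a slower discrete-event simulator appears to keep pace with real hosts:

```python
# Hypothetical sketch of time dilation: wall-clock time on a real host is
# scaled by a time-dilation factor (TDF) so that, from the application's
# point of view, a slower simulator appears to run in real time.
# This illustrates the concept only; it is not SVEET's actual code.
import time

class DilatedClock:
    def __init__(self, tdf: float):
        self.tdf = tdf                  # e.g. tdf = 10 means 10 real seconds = 1 virtual second
        self.start = time.monotonic()   # real-time reference point

    def virtual_now(self) -> float:
        """Virtual (dilated) seconds elapsed since the clock was created."""
        return (time.monotonic() - self.start) / self.tdf

clock = DilatedClock(tdf=10.0)
time.sleep(1.0)
print(f"virtual elapsed: {clock.virtual_now():.2f} s")  # about 0.10 virtual seconds
```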
Abstract:
Italianità on Tour is a cultural history of Italian consciousness in Italy and Southeast Florida from 1896 to 1939. This dissertation examines literary works, folktales, folksongs, artworks, buildings and urban planning as imprints and cultural constructions of Italianità on both sides of the Atlantic, with a special emphasis on the transformations experienced on that journey. The real and/or imagined geo-cultural similarities between the Mediterranean and the Caribbean encouraged pioneers in Southeast Florida to conjure in their new setting an idea of Italianità, regardless of the presence of Italians in the area. Assessing Italianità therefore constitutes an important feature in understanding cultural constructions of identities in Miami and neighboring areas. This study seeks to add Southeast Florida's Caribbean-Italian identity to the existing scholarship on Italian diaspora representations, whether from a cultural ethnic perspective or from a sense of national belonging. More generally, it will show that there was no quintessential Italian national culture, but only representations of it that élites in Italy and South Florida manufactured, on the one hand, and that immigrants imagined and performed upon arrival in America, on the other.
Abstract:
In this work the degradation of real and synthetic wastewater was studied using electrochemical processes such as oxidation via hydroxyl radicals, mediated oxidation via active chlorine, and electrocoagulation. The real effluent was collected from the decanter tank of the Effluent Treatment Plant of the Federal University of Rio Grande do Norte (ETE-UFRN), and the synthetic one, a textile effluent containing the dye Acid Blue 113 (AB 113), was prepared in the laboratory. In the electrochemical process, the effects of anode material, current density, the presence and concentration of chloride, and the active chlorine species generated in situ were evaluated. Electrodes of different compositions, Ti/Pt, Ti/Ru0.3Ti0.7O2, BDD, Pb/PbO2 and Ti/TiO2-nanotubes/PbO2, were used as anodes. These electrodes were subjected to electroanalytical analysis in order to examine how the anodic and cathodic processes occur at the NaCl concentrations and supporting electrolyte used. The oxygen evolution reaction potentials were also determined. The effect of the active chlorine species formed on process efficiency was evaluated through the removal of organic matter from the ETE-UFRN effluent. The treatment of the ETE-UFRN wastewater using Ti/Pt, BDD and Ti/Ru0.3Ti0.7O2 electrodes was evaluated, with good performance. The electrochemical degradation of the UFRN effluent was able to reduce the TOC and COD concentrations with all tested anodes; however, Ti/Ru0.3Ti0.7O2 showed considerable degradation due to the active chlorine species generated in situ. The results obtained from the electrochemical process in the presence of chloride were more satisfactory than those obtained in its absence, and the addition of 0.021 M NaCl resulted in a faster removal of organic matter. Secondly, the Ti/TiO2-nanotubes/PbO2 electrode was prepared and characterized as reported in the literature, but as a disk (10 cm in diameter) with a surface area larger than that described by the same authors, aiming at its application to the textile effluent containing the AB 113 dye. SEM images were taken to observe the growth of the TiO2 nanotubes and to confirm the electrodeposition of PbO2, and atomic force microscopy was also used to confirm the formation of the nanotubes. Furthermore, the Ti/TiO2-nanotubes/PbO2 electrode showed high electrochemical stability in long-term applications, indicating a good electrocatalytic material. The electrochemical oxidation of AB 113 using Ti/Pt, Pb/PbO2, Ti/TiO2-nanotubes/PbO2 and Al/Al (electrocoagulation) was also studied. The best color removal and COD decay were obtained when Ti/TiO2-nanotubes/PbO2 was used as the anode, removing up to 98% of the color and 92.5% of the COD. GC/MS analyses were performed in order to identify possible intermediates formed during the degradation of AB 113.
Abstract:
Studies have shown that a person's socioeconomic status (SES) and the environment in which they are inserted modulate their pro-sociality. While children studying in schools with a more affluent student body tend to be more generous, adults with high SES in both real and experimental situations tend to be more selfish, greedy and individualistic. Another factor that influences pro-sociality is monitoring. When we do something under the supervision of another person, we tend to be more generous and cooperative than in situations in which no one is watching, even if the "observer" is a drawing of eyes. This monitoring effect occurs in both adults and children. To date, no studies have investigated whether SES and the environment influence the pro-sociality of children, nor how the monitoring effect might be influenced by SES and the environment (in this case, whether the environment is a public or private school). Given this context, our main objective was to investigate whether the generosity and cooperation of monitored and unmonitored children are modulated by these factors. To this end, we ran eight rounds of the public goods game, under monitoring and control conditions, with 249 children aged 7 to 10 years enrolled in public and private schools in Natal, state of Rio Grande do Norte (Brazil). The SES of each child's family was assessed according to the Economic Classification Criterion of Brazil (2013). Contrary to our predictions, SES, school environment and experimental condition did not significantly influence cooperation and generosity when analyzed separately. We discuss whether the resources and experimental design adopted for the current study and the historical and economic conditions of Brazil might explain these observations. Interestingly, when SES and school environment were analyzed together, an effect of monitoring on generosity and cooperation was detected. More specifically, monitoring decreased generosity among children with higher SES in private schools and increased cooperation among children with higher SES in public schools. These results suggest that monitoring influences children's pro-sociality in relation to their SES and the environments in which they socialize. We argue that these observations may be explained by different concerns with reputation, according to the environment in which a child is inserted.
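For readers unfamiliar with the protocol, the payoff rule of a standard public goods game is sketched below; the endowment, multiplier and group size are illustrative values, not those used in the study:

```python
# Minimal sketch of a standard public goods game payoff rule, for illustration
# only; the endowment, multiplier and group size used in the study may differ.
def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Each player keeps what they do not contribute plus an equal share
    of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Four players contributing different amounts of their endowment:
print(public_goods_payoffs([0, 5, 5, 10]))  # the free-rider earns the most individually
```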
Abstract:
Solar activity indicators, such as sunspot numbers, sunspot area and flares over the Sun's photosphere, are not symmetric between the northern and southern hemispheres of the Sun. This behavior is known as the North-South asymmetry of the different solar indices. Among the conclusions obtained by several authors, we can point out that the N-S asymmetry is a real and systematic phenomenon and is not due to random variability. In the present work, the probability distributions from the Marshall Space Flight Center (MSFC) database are investigated using a statistical tool arising from the well-known non-extensive statistical mechanics proposed by C. Tsallis in 1988. We present our results and discuss their physical implications with the help of a theoretical model and observations. We find a strong dependence between the non-extensive entropic parameter q and the long-term solar variability present in the sunspot area data. Among the most important results, we highlight that the asymmetry index q reveals the dominance of the North over the South. This behavior has been discussed and confirmed by several authors, but it had never before been attributed to a property of a statistical model. Thus, we conclude that this parameter can be considered an effective measure for diagnosing long-term variations of the solar dynamo. Finally, our dissertation opens a new approach for investigating time series in astrophysics from the perspective of non-extensivity.
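As a reminder of the functional form behind Tsallis statistics (not the dissertation's actual fitting procedure), the q-exponential and the q-Gaussian that generalize the ordinary Gaussian can be written as follows; q → 1 recovers the usual exponential and Gaussian:

```python
# Illustrative sketch of the q-Gaussian from Tsallis non-extensive statistics;
# as q -> 1 it reduces to the ordinary Gaussian. This is not the dissertation's
# analysis, only a reminder of the functional form behind the entropic index q.
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1 - q) x]_+^{1/(1-q)}."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta):
    """Unnormalized q-Gaussian: exp_q(-beta * x^2)."""
    return q_exponential(-beta * np.asarray(x) ** 2, q)

x = np.linspace(-3, 3, 7)
print(q_gaussian(x, q=1.5, beta=1.0))  # heavier tails than the q = 1 (Gaussian) case
```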
Abstract:
Knowledge is only possible because we exist bodily. However, during the educational experience, the epistemic potency of the body is neglected, narrowing the registers of intelligibility. The present thesis approaches this problem obliquely, from a philosophy of the body and the image that has revealed other ways of producing those registers in modernity – understood not as a period in itself, but as a qualification of the negotiations between the real and the intelligible. These ways are explored through the works of Merleau-Ponty and Michel Foucault, which offer a spectrum of this new negotiation of the real. In order to approach the problem studied, the visibility and motricity of the human body in the cinema are taken as the object of analysis. This object is analyzed through a corpus of films whose plots are centered on formal education and which demand from characters and spectators an engagement in a visual performance. To approach the object, it is asked how the phenomenon of education is represented by the cinema, how the body is exposed and how spectators can see it. By analyzing the corpus and articulating Merleau-Ponty's and Michel Foucault's theories, it has been possible to state the following thesis: the cinema as an education of the gaze. The general objective of this study is to reveal the educational potency of the filmic experience, which provides a new path of intelligibility for education. In this sense, the body as a visual operator widens the capacity to understand the real. The work is divided into three chapters. The first presents the methodological approach: it shows how the theoretical articulation is arranged; it explains the method of using images as indirect language as part of the description of reality; and it presents the filmic corpus, the criteria for the choice of films and the construction of the instrument adopted in the analysis of the object. The second chapter problematizes the incapacity of Western society to formulate the real discursively, debating Merleau-Ponty's and Foucault's theoretical contributions on the visual performance displayed in the images as the films are watched and analyzed. The third chapter develops the implications of the education of the gaze provided by the cinema, mainly concerning the place attributed to visibility in the formulation of the real. Finally, paths are designed for the construction of another approach to visibility in education. Assuming the gaze as an experience of knowledge, this study aims to present other ways of being, seeing, thinking and feeling the world. It is, therefore, a proposal to reset the epistemic and subjectification patterns in the educational context.
Abstract:
This study investigates the relationship between the terms of trade and the long-term growth of the Brazilian economy, from the perspective of the external constraint, over the period 1994 to 2014. For this purpose, it builds on Thirlwall's (1979) original contribution, in order to empirically test the contribution of the terms of trade to determining Brazil's potential growth rate consistent with balance of payments equilibrium. Using the cointegration method, which seeks to analyze the long-term relationship between the variables, and subdividing the period into two sub-periods, 1994-2004 and 2004-2014, we estimate and compare real and hypothetical income elasticities and predicted and observed growth rates, with and without the terms of trade, for each period. The results show that the inclusion of the terms of trade in the empirical procedure used to test the validity of Thirlwall's Law leads to higher growth rates obtained by the model (hypothetical), both for the whole period 1994-2014 and for the sub-period 2004-2014. This "theoretical" relaxation of the external constraint, caused by the inclusion of the terms of trade in the traditional Thirlwall rule, overestimated the average real growth rate for these periods, while the traditional Thirlwall's Law - without terms of trade - fits the real behavior of the Brazilian economy better. Thus, although the terms of trade potentially contributed to relaxing the external constraint on Brazilian growth, their effect may have been offset by the negative performance of other Balance of Payments components, such as capital flows and payments abroad of interest, profits and dividends.
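For reference, Thirlwall's (1979) simple rule and its textbook extension including relative prices (terms of trade) are sketched below in the usual notation; this is the generic formulation, not the exact specification estimated in the study:

```latex
% Thirlwall's "simple rule": the balance-of-payments-constrained growth rate
% equals export growth divided by the income elasticity of imports; the
% extended form keeps the terms-of-trade (relative price) term.
\begin{align}
  y_B &= \frac{x}{\pi} = \frac{\varepsilon\, z}{\pi}, \\
  y_B &= \frac{(1 + \eta + \psi)\,(p_d - p_f - e) + \varepsilon\, z}{\pi},
\end{align}
% where z is world income growth, \varepsilon and \pi are the income
% elasticities of exports and imports, \eta and \psi the price elasticities,
% and (p_d - p_f - e) the growth rate of the terms of trade.
```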
Abstract:
The Vale do Amanhecer (Valley of the Dawn) is a genuinely Brazilian religious movement that emerged in the 1960s in the Federal District. This research aims to investigate the religious visual culture of the Vale do Amanhecer as a key element of its interpretation and of the construction of its postmodern religious narrative. It starts from the hypothesis that its iconography, by using elements of science fiction, represents a new and rare way of situating itself in contemporaneity, taking into account aspects of twentieth-century cosmology in order to construct an adapted religious narrative. As a theoretical framework, we use Edgar Morin's approach to the intersection of cinema and the imaginary and Joseph Campbell's reflection as the model of the hero's journey, or monomyth. As methodology, we start from Gillian Rose's proposal for the interpretation of visual culture. We expect to show the importance of Spiritist literary narratives and science fiction cinema in the constitution of the pictorial narrative of the Vale do Amanhecer, and how this turned the movement into one of the main religious phenomena to assume the new cosmology.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection – FBP vs. Advanced Modeled Iterative Reconstruction – ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
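As a generic illustration of the simplest of these metrics (not the dissertation's implementation), CNR and a sample-based detectability index for a non-prewhitening matched-filter observer can be estimated from signal-present and signal-absent ROIs roughly as follows; all names and values below are illustrative:

```python
# Generic sketch of two image-quality metrics mentioned above: contrast-to-noise
# ratio (CNR) and a sample-based detectability index for a non-prewhitening (NPW)
# matched-filter observer. Illustration only; not the dissertation's implementation.
import numpy as np

def cnr(signal_roi, background_roi):
    """CNR = |mean(signal ROI) - mean(background ROI)| / std(background ROI)."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

def npw_dprime(signal_images, background_images):
    """NPW observer: use the mean signal difference as the template, apply it to
    each image, and compute d' from the two distributions of test statistics."""
    template = signal_images.mean(axis=0) - background_images.mean(axis=0)
    t_sig = np.array([np.sum(template * img) for img in signal_images])
    t_bkg = np.array([np.sum(template * img) for img in background_images])
    return (t_sig.mean() - t_bkg.mean()) / np.sqrt(0.5 * (t_sig.var() + t_bkg.var()))

# Toy example: a faint square "lesion" on independent noisy backgrounds.
rng = np.random.default_rng(0)
lesion = np.zeros((32, 32)); lesion[12:20, 12:20] = 5.0
background_images = rng.normal(0, 10, (50, 32, 32))
signal_images = lesion + rng.normal(0, 10, (50, 32, 32))
print(cnr(signal_images[0, 12:20, 12:20], background_images[0]))
print(npw_dprime(signal_images, background_images))
```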
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
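The image subtraction technique mentioned above generally amounts to subtracting two repeated, registered acquisitions so that the deterministic background cancels and only noise remains; a minimal sketch of that general idea (not the exact procedure used in the study):

```python
# Minimal sketch of measuring quantum noise by image subtraction: subtracting two
# repeated, registered scans cancels the (identical) phantom background, leaving
# only noise; dividing by sqrt(2) restores the single-image noise magnitude.
# Illustration of the general technique, not the dissertation's exact code.
import numpy as np

def noise_from_subtraction(scan_a: np.ndarray, scan_b: np.ndarray) -> float:
    difference = scan_a.astype(float) - scan_b.astype(float)
    return difference.std() / np.sqrt(2.0)

# Toy example: the same textured background with independent noise realizations.
rng = np.random.default_rng(1)
background = rng.uniform(0, 100, (64, 64))       # stands in for the phantom texture
scan1 = background + rng.normal(0, 8, (64, 64))
scan2 = background + rng.normal(0, 8, (64, 64))
print(noise_from_subtraction(scan1, scan2))      # close to the true sigma of 8
```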
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
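For context, the conventional NPS estimate from square ROIs of noise-only images is sketched below; the dissertation's contribution is extending such an estimate to irregularly shaped ROIs, which is not reproduced here:

```python
# Conventional NPS estimate from square ROIs of mean-subtracted noise images:
# NPS(u, v) = (dx * dy / (Nx * Ny)) * < |FFT(ROI - mean(ROI))|^2 >, averaged over ROIs.
# Sketch of the standard approach only; the irregular-ROI method described above
# is the dissertation's extension and is not shown here.
import numpy as np

def nps_2d(rois, pixel_size_mm: float) -> np.ndarray:
    """rois: array of shape (n_rois, N, N) containing noise-only ROIs."""
    n_rois, nx, ny = rois.shape
    spectra = []
    for roi in rois:
        zero_mean = roi - roi.mean()                 # remove the DC / background term
        spectra.append(np.abs(np.fft.fft2(zero_mean)) ** 2)
    scale = (pixel_size_mm ** 2) / (nx * ny)
    return scale * np.mean(spectra, axis=0)          # ensemble-averaged 2D NPS

rng = np.random.default_rng(2)
noise_rois = rng.normal(0, 10, (50, 64, 64))         # stand-in for subtracted-scan ROIs
print(nps_2d(noise_rois, pixel_size_mm=0.5).shape)   # (64, 64)
```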
To move beyond assessing only noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
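A highly simplified sketch of the kind of analytical, insertable lesion model described above (a size, a contrast, and a smooth edge profile, voxelized and added to an image to form a "hybrid" image); the dissertation's actual models are more sophisticated:

```python
# Highly simplified sketch of an analytical lesion model with a size, contrast,
# and smooth (sigmoid) edge profile, voxelized and inserted into an image to form
# a "hybrid" image. The dissertation's lesion models are more sophisticated;
# this only illustrates the idea of an equation-defined, insertable lesion.
import numpy as np

def lesion_model(shape, center, radius_px, contrast_hu, edge_width_px=1.5):
    """Radially symmetric lesion: contrast_hu inside, rolling off smoothly at the edge."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    edge = 1.0 / (1.0 + np.exp((r - radius_px) / edge_width_px))  # sigmoid edge profile
    return contrast_hu * edge

def make_hybrid(patient_image, lesion):
    """Insert the voxelized lesion additively; the ground-truth shape and location are known."""
    return patient_image + lesion

background = np.full((128, 128), 60.0)                      # stand-in for liver HU values
lesion = lesion_model(background.shape, (64, 64), radius_px=6, contrast_hu=-15.0)
hybrid = make_hybrid(background, lesion)
print(hybrid.min())  # about 45 HU at the lesion center
```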
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.