1000 results for X-ray patterns


Relevance: 100.00%

Abstract:

The morphology of three alumina samples is investigated. Infrared spectra are analysed in terms of particle morphology using the theory of the average dielectric constant. Crystal shape is obtained from X-ray diffraction patterns via reflection intensity ratios. For scanning electron microscopy, a shape factor was obtained as the average axial ratio of the particles. Comparison of the results shows agreement among these techniques, and infrared spectra can be used to determine the morphology of alumina particles from 2.7 to 10 µm, even for heterogeneous samples. (C) 1999 Elsevier B.V. All rights reserved.
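The shape factor mentioned above is just the mean ratio of particle axes. A minimal sketch in Python (with made-up axis measurements, not data from the paper) could look like:

```python
# Shape factor as the average axial ratio of particles
# (hypothetical SEM measurements in micrometres; illustrative only).
def shape_factor(axes):
    """axes: list of (major, minor) axis lengths."""
    ratios = [major / minor for major, minor in axes]
    return sum(ratios) / len(ratios)

particles = [(4.2, 2.8), (5.1, 3.4), (3.9, 2.6)]  # (major, minor) pairs
print(round(shape_factor(particles), 2))          # mean axial ratio -> 1.5
```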

Abstract:

Zein films plasticized with oleic acid were formed by solution casting, by the stretching of moldable resins, and by blown film extrusion. The effects of the forming process on film structure were investigated by X-ray diffraction. Wide-angle X-ray scattering (WAXS) patterns showed d-spacings at 4.5 and 10 angstrom, which were attributed to the zein alpha-helix backbone and inter-helix packing, respectively. The 4.5 angstrom d-spacing remained stable under processing, while the 10 angstrom d-spacing varied with the processing treatment. Small-angle X-ray scattering (SAXS) detected a long-range periodicity for the formed films but not for unprocessed zein, which suggests that the forming process promoted film structure development, possibly aided by oleic acid. The SAXS d-spacing varied among the samples (130-238 angstrom) according to zein origin and film-forming method. The X-ray scattering data suggest that the zein molecular structure resists processing but that the zein supramolecular arrangements in the formed films depend on the processing method.

Abstract:

The C-form structure of nominally pure Gd2O3 obtained from basic carbonate fine spherical particles is reported, together with the differences between its XRD data and literature patterns, analysed by the Rietveld method. Gd2O3:Eu3+ from basic carbonate and Gd2O3 from oxalate were also investigated. All samples, except the one from the oxalate precursor, have a narrow size distribution of 100-200 nm. Only non-doped Gd2O3 from basic carbonate presents XRD data with smaller d(hkl) values than the literature ones. From the Rietveld refinement, non-doped Gd2O3 from basic carbonate has the smallest crystallite size and the sample from oxalate the largest. The unit cell parameters also indicate a lattice-plane contraction of the Gd2O3 from basic carbonate. The presence of Eu3+ increases the crystallite size when the basic carbonate precursor is used to prepare Gd2O3 and prevents the plane contraction. The structural differences observed among the Gd2O3 samples are related to the type of precursor and to the presence or absence of the doping ion. (C) 2003 Elsevier Science (USA). All rights reserved.

Abstract:

Seyfert galaxies are the closest active galactic nuclei and can therefore be used to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was intended to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was intended to compare the properties of low-luminosity sources with those of higher-luminosity ones and was thus also used to test emission-mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2 and 100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off, and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from the measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. 
These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW proportional to L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, pointing to a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, emerges naturally if the accretion disk penetrates the central corona to different depths depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks extending down to the last stable orbit, while the lower-accreting systems host truncated disks. On the other hand, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies spans values between 10^(38) and 10^(43) erg s^(−1), i.e. covering a huge range of accretion rates. The less efficient systems have been proposed to host ADAF flows without an accretion disk. However, the study of the X-CfA sample has also established correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered, so the emission mechanism must be similar in luminous and weak systems. A possible scenario reconciling these somewhat contradictory indications is that the ADAF and the two-phase mechanism co-exist, with their relative importance changing from low- to high-accretion systems (as suggested by the Gamma vs. R relation). 
The present data require that no abrupt transition between the two regimes be present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects with sufficient count statistics have been analysed using deep X-ray observations taken with XMM-Newton. The results show that the accretion flow can differ significantly between objects when analysed in the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form, spiraling inwards, in the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulation has been measured both in the continuum emission and in the broad emission-line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission-line component. Blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter that is not confined to the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, of the formation of ejecta/jets, and of the rate of kinetic energy injected by AGNs into the ISM and IGM. 
Future high energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.

Abstract:

The purpose of this work was to understand microbeam radiation therapy at the ESRF, in order to find the best compromise between curing tumors and sparing normal tissues, to obtain a better understanding of survival curves, and to report its efficiency. This method uses synchrotron-generated X-ray microbeams. Rats were implanted with 9L gliosarcomas and the tumors were diagnosed by MRI. They were irradiated 14 days after implantation by arrays of 25 µm wide microbeams in unidirectional mode, with a skin entrance dose of 625 Gy. The effect of using 200 or 100 µm center-to-center spacing between the microbeams was compared. The median survival time (post-implantation) was 40 and 67 days at 200 and 100 µm spacing, respectively. However, 72% of rats irradiated at 100 µm spacing showed abnormal clinical signs and weight patterns, whereas only 12% of rats were affected at 200 µm spacing. In parallel, histological lesions of the normal brain were found in the 100 µm series only. Although the increase in lifespan was 273% and 102% for the 100 and 200 µm series, respectively, the 200 µm spacing protocol provides better sparing of healthy tissue and may prove useful in combination with other radiation modalities or additional drugs.

Abstract:

In the present study the challenge of analyzing complex micro X-ray diffraction (microXRD) patterns from cement–clay interfaces has been addressed. In order to extract the maximum information concerning both the spatial distribution and the crystal structure type associated with each of the many diffracting grains in heterogeneous, polycrystalline samples, an approach has been developed in which microXRD was applied to thin sections that were rotated in the X-ray beam. The data analysis, performed on microXRD patterns collected from a filled vein of a cement–clay interface from the natural analogue in Maqarin (Jordan) and from a sample of a two-year-old altered interface between cement and argillaceous rock, demonstrates the potential of this method.

Abstract:

The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only for examining the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also for fully quantifying their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Optical non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. The combination of XRCT sectional image data with 2D DIC only allows quantification of the 2D displacement and strain components in the section direction, which completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply DVC techniques to XRCT scan data of “solid” analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully be used to quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for the 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. 
Furthermore, we discuss various options for optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
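The core idea behind digital image/volume correlation can be sketched in a few lines. The toy below (illustrative only, not the authors' code) reduces the problem to one dimension with made-up grey values: the displacement of a reference template is taken as the integer shift that maximizes the normalized cross-correlation with the deformed data. A real DVC implementation applies the same idea to 3D subvolumes with subvoxel interpolation.

```python
# Minimal 1-D sketch of (volume) correlation: find the integer shift that
# maximizes normalized cross-correlation between a template and a signal.
def best_shift(template, signal):
    def ncc(a, b):
        ma = sum(a) / len(a)
        mb = sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db)
    n = len(template)
    scores = {s: ncc(template, signal[s:s + n])
              for s in range(len(signal) - n + 1)}
    return max(scores, key=scores.get)

reference = [0, 1, 4, 9, 4, 1, 0]           # grey-value pattern in the reference scan
deformed = [0, 0, 0] + reference + [0, 0]   # same pattern displaced by 3 voxels
print(best_shift(reference, deformed))       # -> 3
```

In 3D the template becomes a subvolume of the reference XRCT scan, and the search runs over shifts in all three axes, yielding a displacement vector per subvolume.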

Abstract:

The current standard treatment for head and neck cancer at our institution uses intensity-modulated x-ray therapy (IMRT), which improves target coverage and sparing of critical structures by delivering complex fluence patterns from a variety of beam directions to conform dose distributions to the shape of the target volume. The standard treatment for breast patients is field-in-field forward-planned IMRT, with initial tangential fields and additional reduced-weight tangents with blocking to minimize hot spots. For these treatment sites, the addition of electrons has the potential of improving target coverage and sparing of critical structures due to rapid dose falloff with depth and reduced exit dose. In this work, the use of mixed-beam therapy (MBT), i.e., combined intensity-modulated electron and x-ray beams using the x-ray multi-leaf collimator (MLC), was explored. The hypothesis of this study was that addition of intensity-modulated electron beams to existing clinical IMRT plans would produce MBT plans that were superior to the original IMRT plans for at least 50% of selected head and neck and 50% of breast cases. Dose calculations for electron beams collimated by the MLC were performed with Monte Carlo methods. An automation system was created to facilitate communication between the dose calculation engine and the treatment planning system. Energy and intensity modulation of the electron beams was accomplished by dividing the electron beams into 2x2-cm2 beamlets, which were then beam-weight optimized along with intensity-modulated x-ray beams. Treatment plans were optimized to obtain equivalent target dose coverage, and then compared with the original treatment plans. MBT treatment plans were evaluated by participating physicians with respect to target coverage, normal structure dose, and overall plan quality in comparison with original clinical plans. 
The physician evaluations did not support the hypothesis for either site, with MBT selected as superior in 1 of the 15 head and neck cases (p=1) and 6 of the 18 breast cases (p=0.95). While MBT was not shown to be superior to IMRT, reductions were observed in doses to critical structures distal to the target along the electron beam direction and to non-target tissues, at the expense of target coverage and dose homogeneity.
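The beamlet-weight optimization described above can be illustrated with a deliberately tiny example (hypothetical dose matrix and prescription, not clinical data): nonnegative beamlet weights are fitted by projected gradient descent so that the summed dose approaches the prescribed dose in each voxel.

```python
# Toy beam-weight optimization sketch. Each row of the dose matrix is a voxel;
# each column holds the dose a unit-weight beamlet deposits in that voxel.
def optimize_weights(dose_per_beamlet, prescription, steps=2000, lr=0.01):
    n_beamlets = len(dose_per_beamlet[0])
    w = [0.0] * n_beamlets
    for _ in range(steps):
        delivered = [sum(d[j] * w[j] for j in range(n_beamlets))
                     for d in dose_per_beamlet]
        # gradient of sum_i (delivered_i - prescribed_i)^2 w.r.t. each weight
        grad = [sum(2 * (delivered[i] - prescription[i]) * dose_per_beamlet[i][j]
                    for i in range(len(prescription)))
                for j in range(n_beamlets)]
        # projected step keeps weights physical (nonnegative)
        w = [max(0.0, w[j] - lr * grad[j]) for j in range(n_beamlets)]
    return w

# Two voxels, two beamlets (made-up unit doses and prescription):
D = [[1.0, 0.5],
     [0.2, 1.0]]
target = [2.0, 1.5]
w = optimize_weights(D, target)
delivered = [sum(D[i][j] * w[j] for j in range(2)) for i in range(2)]
print([round(x, 2) for x in delivered])  # -> [2.0, 1.5]
```

Clinical systems solve far larger versions of this problem, with dose-volume objectives rather than a simple least-squares target, but the weight-fitting principle is the same.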

Abstract:

Fine-grained sediment depocenters on continental shelves are of increased scientific interest since they sensitively record environmental changes. A north-south elongated mud depocenter extends along the Senegalese coast in a mid-shelf position. Shallow-acoustic profiling was carried out to determine the extent, geometry and internal structures of this sedimentary body. In addition, four sediment cores were retrieved with the main aim of identifying how paleoclimatic signals and coastal changes have controlled the formation of this mud depocenter. A general paleoclimatic pattern in terms of fluvial input appears to be recorded in this depositional archive. Intervals characterized by high terrigenous input, high sedimentation rates and fine grain sizes occur roughly contemporaneously in all cores and are interpreted as corresponding to intensified river discharge related to more humid conditions in the hinterland. From 2750 to 1900 and from 1000 to 700 cal a BP, wetter conditions are recorded off Senegal, an observation which is in accordance with other records from NW Africa. Nevertheless, the three employed proxies (sedimentation rate, grain size and elemental distribution) do not always display consistent inter-core patterns. Major differences between the individual core records are attributed to sediment remobilization linked to local hydrographic variations as well as reorganizations of the coastal system. The Senegal mud belt is a layered, inhomogeneous sedimentary body deposited on an irregular erosive surface. Early Holocene deceleration of the sea-level rise could have enabled initial mud deposition on the shelf. These favorable conditions for mud deposition coincide with a humid period over NW Africa and thus high river discharge. Sedimentation started preferentially in the northern areas of the mud belt. During the mid-Holocene, a marine incursion led to the formation of an embayment. 
Afterwards, sedimentation in the north was interrupted, in association with a remarkable southward shift in the location of the active depocenter, as reflected by the sedimentary architecture and confirmed by radiocarbon dates. These sub-recent shifts in depocenter location were caused by migrations of the Senegal River mouth. During late Holocene times, the weakening of river discharge allowed the longshore currents to build up a chain of beach barriers, which forced the river mouth to shift southwards.

Abstract:

This paper studies the fracturing process in low-porosity rocks during uniaxial compressive tests, considering both the original defects and the new mechanical cracks in the material. For this purpose, five different kinds of rocks with carbonate mineralogy and low porosity (lower than 2%) were chosen. The characterization of the fracture damage is carried out using three different techniques: ultrasound, mercury porosimetry and X-ray computed tomography. The proposed methodology allows quantifying the evolution of the porous system as well as studying the location of new cracks in the rock samples. Intercrystalline porosity (the smallest pores, with pore radius < 1 μm) shows a limited development during loading, disappearing rapidly from the porosimetry curves, and is directly related to the initial plastic behaviour in the stress–strain patterns. However, the biggest pores (corresponding to the cracks) undergo a continuous enlargement until the unstable propagation of fractures. The measured crack initiation stress varies between 0.25 σp and 0.50 σp for marbles and between 0.50 σp and 0.85 σp for micrite limestones. The unstable propagation of cracks is assumed to occur very close to the peak strength. Crack propagation through the sample is completely independent of pre-existing defects (porous bands, stylolites, fractures and veins). The ultrasonic response in the time domain is less sensitive to the fracture damage than that in the frequency domain. P-wave velocity increases during the loading test until the beginning of the unstable crack propagation. This increase is higher for marbles (between 15% and 30% of the initial vp values) and lower for micrite limestones (between 5% and 10%). When the mechanical cracks propagate unstably, the velocity stops increasing and decreases only when the rock damage is very high. Frequency analysis of the ultrasonic signals shows clear changes during the loading process. 
The spectra of the processed waveforms show two main frequency peaks centred at low (~20 kHz) and high (~35 kHz) values. When new fractures appear and grow, the amplitude of the high-frequency peak decreases, while that of the low-frequency peak increases. In addition, a slight shift towards higher frequencies is observed.
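The kind of frequency analysis described here amounts to computing an amplitude spectrum and locating its peaks. A minimal sketch with a synthetic two-component waveform (the 20 and 35 kHz peak positions come from the abstract; every other number is illustrative):

```python
import math

# Plain DFT amplitude spectrum of a sampled waveform (illustrative sketch).
def amplitude_spectrum(samples, dt):
    n = len(samples)
    freqs, amps = [], []
    for k in range(n // 2):
        re = sum(s * math.cos(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        freqs.append(k / (n * dt))
        amps.append(2 * math.hypot(re, im) / n)
    return freqs, amps

dt = 2e-6                       # 500 kHz sampling (hypothetical)
n = 500
t = [i * dt for i in range(n)]
# synthetic signal: strong 35 kHz component plus a weaker 20 kHz one
wave = [0.4 * math.sin(2 * math.pi * 20e3 * ti) +
        1.0 * math.sin(2 * math.pi * 35e3 * ti) for ti in t]
freqs, amps = amplitude_spectrum(wave, dt)
peak = freqs[amps.index(max(amps))]
print(round(peak / 1e3), "kHz")  # dominant peak -> 35 kHz
```

In the study, tracking how the relative amplitudes of such peaks evolve during loading is what reveals the growth of new fractures.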

Abstract:

Eocene sediments drilled at the East Tasman Plateau (ETP) exhibit well-defined cycles, high-resolution magnetic stratigraphy, and environmentally controlled dinoflagellate and diatom distribution patterns. We derive a cyclostratigraphy from the spectral analysis of high-resolution elemental concentration records (Ca, Fe) for this shallow-marine time series spanning the middle to early late Eocene (C16n.2n - C21). Changes in carbonate content, the ratio between Gonyaulacoid and Peridinioid dinocysts, and the relative abundance of "oligotrophic" diatoms serve as proxies for a high-resolution climatic and sea-level history, with high values representing high sea-level stands and decreased eutrophy of surface waters. Changing ratios between high-latitude and cosmopolitan dinocyst species provide clues on sea surface temperature trends and water mass exchange. Our results show that the relatively shallow-water middle Eocene environments of the ETP were influenced by orbitally forced climatic cycles superimposed on third-order relative sea-level changes. The change in the dominant Milankovitch frequency at ~38.6 Ma (late Eocene) is related to an initial deepening step within the Tasmanian Gateway prior to the major deepening during the middle late Eocene (~35.5 Ma). Decreasing sedimentation rates at 38 Ma and 37.2 Ma reflect winnowing associated with sea-level fall. This episode is followed by renewed transgression. Dinocyst distribution patterns indicate high-latitude, probably cool temperate surface water conditions throughout, with the exception of a sudden surge in cosmopolitan species near the base of subchron C18.2r, at ~41 Ma; this event is tentatively correlated to the Middle Eocene Climatic Optimum.

Abstract:

Based on a well-established stratigraphic framework and 47 AMS-14C dated sediment cores, the distribution of facies types on the NW Iberian margin is analysed in response to the last deglacial sea-level rise, thus providing a case study on the sedimentary evolution of a high-energy, low-accumulation shelf system. Altogether, four main types of sedimentary facies are defined. (1) A gravel-dominated facies occurs mostly as time-transgressive ravinement beds, which initially developed as shoreface and storm deposits in shallow waters on the outer shelf during the last sea-level lowstand; (2) A widespread, time-transgressive mixed siliceous/biogenic-carbonaceous sand facies indicates areas of moderate hydrodynamic regimes, high contribution of reworked shelf material, and fluvial supply to the shelf; (3) A glaucony-containing sand facies in a stationary position on the outer shelf formed mostly during the last-glacial sea-level rise by reworking of older deposits as well as authigenic mineral formation; and (4) A mud facies is mostly restricted to confined Holocene fine-grained depocentres, which are located in a mid-shelf position. The observed spatial and temporal distribution of these facies types on the high-energy, low-accumulation NW Iberian shelf was essentially controlled by the local interplay of sediment supply, shelf morphology, and strength of the hydrodynamic system. These patterns are in contrast to high-accumulation systems, where extensive sediment supply is the dominant control on the facies distribution. This study emphasises the importance of large-scale erosion and material recycling on the sedimentary buildup during the deglacial drowning of the shelf. The presence of a homogeneous and up to 15 m thick transgressive cover above a lag horizon contradicts the common assumption of sparse and laterally confined sediment accumulation on high-energy shelf systems during deglacial sea-level rise. 
In contrast to this extensive sand cover, laterally very confined mud depocentres, at most 4 m thick, developed during the Holocene sea-level highstand. This restricted formation of fine-grained depocentres was related to the combination of: (1) frequently occurring high-energy hydrodynamic conditions; (2) low overall terrigenous input from the adjacent rivers; and (3) the large distance of the Galicia Mud Belt from its main sediment supplier.

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for the image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
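Under common Gaussian assumptions, the third component reduces to a detectability index d' computed from the observer's scores on signal-present and signal-absent images. A minimal sketch with hypothetical scores (illustrative only):

```python
import math
import statistics

# Detectability index d' from two score distributions: the separation of the
# means normalized by the pooled standard deviation of the scores.
def detectability_index(scores_signal, scores_noise):
    mu_s = statistics.mean(scores_signal)
    mu_n = statistics.mean(scores_noise)
    var_s = statistics.variance(scores_signal)
    var_n = statistics.variance(scores_noise)
    return (mu_s - mu_n) / math.sqrt(0.5 * (var_s + var_n))

# Hypothetical observer scores for signal-present and signal-absent images:
present = [2.1, 2.4, 1.9, 2.6, 2.2]
absent = [0.9, 1.2, 0.8, 1.1, 1.0]
print(round(detectability_index(present, absent), 2))  # -> 5.6
```

A larger d' means the two score distributions overlap less, i.e. the task is performed more reliably at a given dose.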

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of the image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models ranged from simple metrics of image quality, such as the contrast-to-noise ratio (CNR), to more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
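As an illustration of the simplest member of the matched-filter family, the sketch below (synthetic 1-D "images", not the study's data) scores each image by its dot product with the expected signal template and summarizes performance as an SNR. A channelized Hotelling observer follows the same pattern but first projects images onto a small set of channels and prewhitens with the channel covariance.

```python
import math
import random
import statistics

# Non-prewhitening matched filter observer: decision variable is the dot
# product of the expected signal template with each image.
def npw_scores(images, template):
    return [sum(t * p for t, p in zip(template, img)) for img in images]

random.seed(0)
template = [0.0, 0.5, 1.0, 0.5, 0.0]   # expected lesion profile (hypothetical)
noise_only = [[random.gauss(0, 0.3) for _ in template] for _ in range(200)]
with_signal = [[t + random.gauss(0, 0.3) for t in template] for _ in range(200)]

s = npw_scores(with_signal, template)
n = npw_scores(noise_only, template)
snr = (statistics.mean(s) - statistics.mean(n)) / math.sqrt(
    0.5 * (statistics.variance(s) + statistics.variance(n)))
print(snr > 2)  # the embedded signal is readily detectable -> True
```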

The uniform-background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
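The image-subtraction technique rests on the fact that the fixed background cancels in the difference of two repeated scans, while the two independent noise realizations add in quadrature, so the per-image noise is the standard deviation of the difference divided by sqrt(2). A minimal sketch with synthetic pixel values (all numbers illustrative):

```python
import random
import statistics

# Quantum noise from two repeated scans of the same object: the deterministic
# background cancels in the difference; independent noise adds in quadrature.
def quantum_noise(img_a, img_b):
    diff = [a - b for a, b in zip(img_a, img_b)]
    return statistics.pstdev(diff) / 2 ** 0.5

random.seed(1)
texture = [50 + 10 * (i % 7) for i in range(10000)]   # fixed background pattern
scan1 = [t + random.gauss(0, 5) for t in texture]     # true noise sigma = 5
scan2 = [t + random.gauss(0, 5) for t in texture]
print(round(quantum_noise(scan1, scan2)))             # recovers sigma -> 5
```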

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., the NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
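A minimal 1-D version of an NPS estimate (synthetic white noise, not the phantom data) averages the squared DFT magnitudes of zero-mean noise realizations; a handy sanity check is that the NPS should integrate back to the pixel variance.

```python
import cmath
import random
import statistics

# 1-D NPS sketch: ensemble average of squared DFT magnitudes of noise
# realizations, normalized so the NPS integrates to the pixel variance.
def nps_1d(realizations, dx):
    n = len(realizations[0])
    spectra = []
    for noise in realizations:
        dft = [sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i, s in enumerate(noise)) for k in range(n)]
        spectra.append([abs(c) ** 2 * dx / n for c in dft])
    return [statistics.mean(col) for col in zip(*spectra)]

random.seed(2)
n, dx = 64, 1.0
noise_rois = [[random.gauss(0, 1) for _ in range(n)] for _ in range(50)]
nps = nps_1d(noise_rois, dx)
variance = sum(nps) * (1 / (n * dx))   # integrate the NPS over frequency
print(round(variance, 1))              # recovers the pixel variance -> 1.0
```

The 2-D case works the same way on ROI patches; the irregular-ROI method described above generalizes this by handling patches that are not rectangular.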

To move beyond assessing noise properties in textured phantoms and toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized with a genetic algorithm to match the texture of the liver region in actual patient CT images. The so-called “Clustered Lumpy Background” texture-synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
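A channelized Hotelling observer can be illustrated on simulated data as follows. This toy version uses random orthonormal channels augmented with the signal direction, rather than the multi-slice channels of the study, and all dimensions, signal values, and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

npix, nchan, ntrain = 64, 5, 500

# Small low-contrast square "lesion" as the known signal.
signal = np.zeros((npix, npix))
signal[28:36, 28:36] = 2.0
signal = signal.ravel()

# Orthonormal channels. Real CHOs use e.g. Gabor or Laguerre-Gauss channels;
# here we seed the first channel with the signal so the toy observer is
# sensitive, and fill the rest with random directions.
base = rng.normal(size=(npix * npix, nchan))
base[:, 0] = signal
channels, _ = np.linalg.qr(base)

# Signal-absent and signal-present image ensembles with white noise.
absent = rng.normal(0.0, 10.0, size=(ntrain, npix * npix))
present = rng.normal(0.0, 10.0, size=(ntrain, npix * npix)) + signal

v0 = absent @ channels          # channelized data, shape (ntrain, nchan)
v1 = present @ channels

dv = v1.mean(axis=0) - v0.mean(axis=0)
S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
w = np.linalg.solve(S, dv)              # Hotelling template in channel space
d_prime = float(np.sqrt(dv @ w))        # detectability index
print(round(d_prime, 2))
```

The channelization is what makes the Hotelling observer tractable: the covariance to invert is 5x5 rather than 4096x4096.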

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
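The kind of analytical lesion model described above can be sketched as follows. The actual parameterization used in the dissertation may differ; the ellipsoidal shape, sigmoid edge profile, and all numeric values here are illustrative:

```python
import numpy as np

def lesion(shape, center, radii, contrast, edge_width):
    """Voxelized ellipsoidal lesion with an analytic sigmoid edge profile.

    Size and shape come from `radii`, amplitude from `contrast`, and edge
    sharpness from `edge_width` (smaller = sharper edge).
    """
    grids = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    # Normalized radial coordinate: equals 1.0 on the ellipsoid surface.
    r = np.sqrt(sum(((g - c) / a) ** 2
                    for g, c, a in zip(grids, center, radii)))
    return contrast / (1.0 + np.exp((r - 1.0) / edge_width))

vol = lesion((64, 64, 64), center=(32, 32, 32), radii=(10, 8, 6),
             contrast=-15.0, edge_width=0.05)   # subtle hypodense lesion (HU)

patient = np.zeros((64, 64, 64))   # stand-in for a real CT volume
hybrid = patient + vol             # "hybrid" image: ground truth known exactly
print(round(float(hybrid[32, 32, 32]), 1))   # -15.0 at the lesion center
```

Because the lesion is an analytic function, its true size, contrast, and edge profile are known exactly wherever it is inserted, which is the key advantage of hybrid images for observer studies.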

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patients at two dose levels (50% and 100%) along with three reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
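Quantitative feature extraction of this kind can be illustrated with a toy example. The three features below (size, mean attenuation, and pixel-value spread as a crude texture proxy) are generic stand-ins, not the 23 features used in the study:

```python
import numpy as np

def extract_features(image, mask, pixel_area_mm2):
    """Extract simple quantitative features from a segmented lesion."""
    vals = image[mask]
    return {
        "area_mm2": float(mask.sum() * pixel_area_mm2),   # lesion size
        "mean_hu": float(vals.mean()),                    # mean attenuation
        "hu_std": float(vals.std(ddof=1)),                # crude texture proxy
    }

# Synthetic test image: uniform 40 HU background with a circular 65 HU lesion.
img = np.full((32, 32), 40.0)
yy, xx = np.mgrid[:32, :32]
mask = (xx - 16) ** 2 + (yy - 16) ** 2 <= 36      # radius-6-pixel "lesion"
img[mask] = 65.0

feats = extract_features(img, mask, pixel_area_mm2=0.25)
print(feats["mean_hu"])   # 65.0
```

Running the same extraction on images of the same lesion reconstructed at different doses or with different algorithms is how feature stability was quantified.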

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
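The non-prewhitening matched-filter detectability index can be sketched as follows: the template is the expected lesion profile, and for white noise the index reduces to d' = ||w|| / sigma. The Gaussian lesion shape, pixel size, and noise level below are illustrative (only the -15 HU contrast echoes the study):

```python
import numpy as np

n, pixel = 128, 0.5                      # assumed grid size and pixel (mm)
x = (np.arange(n) - n // 2) * pixel
xx, yy = np.meshgrid(x, x, indexing="ij")

# Expected lesion profile used as the NPW template: a Gaussian blob with
# 15 HU amplitude and 3 mm width (illustrative stand-in for a liver lesion).
contrast, width = 15.0, 3.0
w = contrast * np.exp(-(xx**2 + yy**2) / (2.0 * width**2))

# For white noise of standard deviation sigma, the NPW index
# d' = (w.w) / sqrt(w^T K w) simplifies to ||w|| / sigma.
sigma = 10.0
d_prime = float(np.sqrt((w**2).sum()) / sigma)
print(round(d_prime, 2))
```

With correlated noise one would instead evaluate the index in the frequency domain, weighting the template's power spectrum by the measured NPS, which is why the NPS work earlier in the project feeds directly into detectability estimates like this.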

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The continental shelf adjacent to the Río de la Plata (RdlP) exhibits extremely complex hydrographic and ecological characteristics which are of great socioeconomic importance. Since the long-term environmental variations related to the atmospheric (wind fields), hydrologic (freshwater plume), and oceanographic (currents and fronts) regimes are little known, the aim of this study is to reconstruct the changes in the terrigenous input into the inner continental shelf during the late Holocene period (associated with the RdlP sediment discharge) and to unravel the climatic forcing mechanisms behind them. To achieve this, we retrieved a 10 m long sediment core from the RdlP mud depocenter at 57 m water depth (GeoB 13813-4). The radiocarbon age control indicated an extremely high sedimentation rate of 0.8 cm per year, encompassing the past 1200 years (AD 750-2000). We used element ratios (Ti / Ca, Fe / Ca, Ti / Al, Fe / K) as regional proxies for the fluvial input signal and the variations in relative abundance of salinity-indicative diatom groups (freshwater versus marine-brackish) to assess the variability in terrigenous freshwater and sediment discharges. Ti / Ca, Fe / Ca, Ti / Al, Fe / K and the freshwater diatom group showed the lowest values between AD 850 and 1300, while the highest values occurred between AD 1300 and 1850. The variations in the sedimentary record can be attributed to the Medieval Climatic Anomaly (MCA) and the Little Ice Age (LIA), both of which had a significant impact on rainfall and wind patterns over the region. During the MCA, a weakening of the South American summer monsoon system (SAMS) and the South Atlantic Convergence Zone (SACZ), could explain the lowest element ratios (indicative of a lower terrigenous input) and a marine-dominated diatom record, both indicative of a reduced RdlP freshwater plume. 
In contrast, during the LIA, a strengthening of SAMS and SACZ may have led to an expansion of the RdlP river plume to the far north, as indicated by higher element ratios and a marked freshwater diatom signal. Furthermore, a possible multidecadal oscillation probably associated with Atlantic Multidecadal Oscillation (AMO) since AD 1300 reflects the variability in both the SAMS and SACZ systems.