870 results for Peak-to-average Ratio (PAR)
Abstract:
Advanced engineering tools are in great demand in biology, biochemistry, and medicine, yet many existing instruments are expensive and require special facilities. With the advent of nanotechnology in the past decade, new approaches to developing devices and tools have been generated by academia and industry. One such technology, NMR spectroscopy, has been used by biochemists for more than two decades to study the molecular structure of chemical compounds. However, NMR spectrometers are very expensive and require special laboratory rooms for proper operation; magnetic fields with strengths on the order of several tesla make these instruments unaffordable to most research groups. This doctoral research proposes a new technology for NMR spectrometers that can operate at field strengths below 0.5 tesla using an inexpensive permanent magnet and spin-dependent nanoscale magnetic devices. The portable NMR system is intended to analyze samples as small as a few nanoliters. The main problem to resolve when downscaling is obtaining an NMR signal with a high signal-to-noise ratio (SNR); a special tunneling magneto-resistive (TMR) sensor design was developed to achieve this goal. The minimum specifications for each component of the proposed NMR system were established, and a complete NMR system was designed around these minimum requirements. The goal throughout was to find cost-effective, realistic components. The novel design of the NMR system uses technologies such as Direct Digital Synthesis (DDS), Digital Signal Processing (DSP), and a backpropagation neural network that finds the best match of the NMR spectrum. The system was designed, calculated, and simulated with excellent results. In addition, a general method for designing TMR sensors was developed; the technique was automated, and a computer program was written to help the designer perform this task interactively.
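For orientation on the numbers involved, a back-of-envelope sketch (my own illustration, not material from the thesis) of the proton resonance frequency at the sub-0.5 T fields targeted here; the resulting low-MHz frequencies are well within reach of inexpensive DDS/DSP electronics:

```python
GAMMA_P_HZ_PER_T = 42.577478e6  # proton gyromagnetic ratio / (2*pi), Hz per tesla

def larmor_hz(b0_tesla: float) -> float:
    """NMR resonance frequency f0 = (gamma / 2*pi) * B0 for protons."""
    return GAMMA_P_HZ_PER_T * b0_tesla

for b0 in (0.1, 0.5, 1.5):
    print(f"B0 = {b0:>3} T  ->  f0 = {larmor_hz(b0) / 1e6:5.1f} MHz")
```

A commonly cited motivation for magnetoresistive detection at low field is that the induced EMF in a conventional Faraday coil scales roughly with B0 squared, whereas a field sensor such as a TMR device responds to the magnetization (proportional to B0) directly.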
Abstract:
Zinc oxide and graphene nanostructures are important technological materials because of their unique properties and potential applications in future generations of electronic and sensing devices. This dissertation presents strategies to grow zinc oxide nanostructures (thin film and nanowire) and graphene, and their applications as enhanced field-effect transistors, chemical sensors, and transparent flexible electrodes. Nanostructured zinc oxide (ZnO) and low-gallium-doped zinc oxide (GZO) thin films were synthesized by a magnetron sputtering process, and zinc oxide nanowires (ZNWs) were grown by a chemical vapor deposition method. Field-effect transistors (FETs) of ZnO and GZO thin films and of ZNWs were fabricated by standard photo- and electron-beam lithography processes. The electrical characteristics of these devices were investigated with nondestructive surface cleaning and ultraviolet irradiation treatment at high temperature and under vacuum. GZO thin-film transistors showed a mobility of ∼5.7 cm²/V·s at a low operating voltage of <5 V, a low turn-on voltage of ∼0.5 V, and a subthreshold swing of ∼85 mV/decade. Bottom-gated FETs fabricated from ZNWs exhibited a very high on-to-off ratio (∼10⁶) and mobility (∼28 cm²/V·s). A bottom-gated FET showed large hysteresis of ∼5.0 to 8.0 V, which was reduced significantly, to ∼1.0 V, by the surface treatment process. The results demonstrate that charge transport in ZnO nanostructures depends strongly on the surface environment and can be explained by the formation of a depletion layer at the surface by various surface states. A nitric oxide (NO) gas sensor using a single ZNW functionalized with Cr nanoparticles was developed. The sensor exhibited an average sensitivity of ∼46% and a minimum detection limit of ∼1.5 ppm for NO gas, and it was selective towards NO as demonstrated by a cross-sensitivity test with N2, CO, and CO2 gases. Graphene film on copper foil was synthesized by a chemical vapor deposition method, and a hot-press lamination process was developed for transferring the graphene film to a flexible polymer substrate. The graphene/polymer film exhibited a high-quality, flexible, transparent conductive structure with unique electrical-mechanical properties: ∼88.80% light transmittance and ∼1.174 kΩ/sq sheet resistance. The application of the graphene/polymer film as a flexible and transparent electrode for field emission displays was demonstrated.
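The two figures of merit quoted above are conventionally extracted as follows; this is a generic sketch using textbook definitions, not the dissertation's actual extraction procedure, and all numerical values are illustrative:

```python
def linear_mobility_cm2_per_vs(gm_s: float, c_ox_f_per_cm2: float,
                               w_cm: float, l_cm: float, v_ds: float) -> float:
    """Linear-regime field-effect mobility: mu = gm * L / (W * Cox * Vds)."""
    return gm_s * l_cm / (w_cm * c_ox_f_per_cm2 * v_ds)

def gas_sensitivity_percent(r_gas_ohm: float, r_air_ohm: float) -> float:
    """Relative resistance change on gas exposure, as a percentage."""
    return 100.0 * abs(r_gas_ohm - r_air_ohm) / r_air_ohm

# Illustrative resistances chosen to reproduce the quoted ~46% sensitivity
print(gas_sensitivity_percent(r_gas_ohm=7.3e6, r_air_ohm=5.0e6))  # -> 46.0
```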
Abstract:
A report from the National Institutes of Health defines a disease biomarker as a “characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention.” Early diagnosis is a crucial factor for incurable diseases such as cancer and Alzheimer’s disease (AD). During the last decade, researchers have discovered that biochemical changes caused by a disease can be detected considerably earlier than physical manifestations/symptoms. In this dissertation, electrochemical detection was utilized as the detection strategy, as it offers high sensitivity/specificity, ease of operation, and capability for miniaturization and multiplexed detection. Electrochemical detection of biological analytes is an established field that has matured at a rapid pace during the last 50 years, adapting itself to advances in micro/nanofabrication procedures. Carbon fiber microelectrodes were utilized as the platform sensor due to their high signal-to-noise ratio, ease and low cost of fabrication, biocompatibility, and active carbon surface, which allows conjugation with biorecognition moieties. This dissertation specifically focuses on the detection of three extensively validated biomarkers for cancer and AD. First, vascular endothelial growth factor (VEGF), a cancer biomarker, was detected using a one-step, reagentless immunosensing strategy. This strategy allowed rapid and sensitive VEGF detection with a detection limit of about 38 pg/mL and a linear dynamic range of 0–100 pg/mL. Direct detection of the AD-related biomarker amyloid beta (Aβ) was achieved by exploiting its inherent electroactivity. Quantification of the Aβ1-40/42 ratio (the Aβ ratio) has been established through human clinical trials as a reliable test to diagnose AD. Triple-barrel carbon fiber microelectrodes were used to detect Aβ1-40 and Aβ1-42 simultaneously in cerebrospinal fluid from rats, within detection ranges of 100 nM to 1.2 μM and 400 nM to 1 μM, respectively. In addition, the release of the DNA damage/repair biomarker 8-hydroxydeoxyguanosine (8-OHdG) under reactive oxidative stress from a single lung endothelial cell was monitored using an activated carbon fiber microelectrode. The sensor was used to test the influence of nicotine, one of the most biologically active chemicals present in cigarette smoke and smokeless tobacco.
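Detection limits of the kind quoted for the VEGF sensor are commonly estimated from a calibration line via LOD = 3.3 × σ(blank) / slope; a minimal sketch with illustrative numbers (not the dissertation's data):

```python
import numpy as np

conc = np.array([0, 20, 40, 60, 80, 100])            # pg/mL, within the linear range
signal = np.array([1.0, 3.1, 5.0, 7.2, 9.1, 11.0])   # sensor response, arbitrary units
slope, intercept = np.polyfit(conc, signal, 1)
sigma_blank = 0.12                                    # assumed std. dev. of blank replicates
lod = 3.3 * sigma_blank / slope
print(f"slope = {slope:.3f} a.u. per pg/mL, LOD ~= {lod:.1f} pg/mL")
```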
Abstract:
In an article entitled “The Specialist: Coming Soon to Your Local Hotel,” by Stan Bromley, Regional Vice President and General Manager, Four Seasons Clift Hotel, San Francisco, the author’s introduction states: “An experienced hotelier discusses the importance of delivering a high quality-to-value ratio consistently to guests, particularly as the hotel market becomes specialized and a distinction is drawn between a ‘property’ and a ‘hotel.’” The author’s primary intention is to make you, the reader, aware of changes in the hospitality/hotel marketplace. From the embryo to the contemporary, the hotel market has consistently evolved; this includes but is not limited to mission statement, marketing, management, facilities, and all the tangibles and intangibles of the total hotel experience. “Although we are knocking ourselves out trying to be everything to everyone, I don’t think hotel consumers are as interested in ‘mixing and matching’ as they were in the past,” Bromley says. “Today’s hotel guest is looking for ‘specialized care,’ and is increasingly skeptical of our industry-wide hotel ads and promises of greatness.” As an example, Bromley draws an analogy with retail outlets such as Macy’s, Saks, and Sears, each of which caters to its own unique market segment. Hotels now follow the same outline, he allows. “In my view, two key factors will make a hotel a success,” advises Bromley. “First, know your specialty and market to that segment. Second, make sure you consistently offer a high quality-to-value ratio. That means every day.” To emphasize that second point, Bromley offers this bolstering thought: “The second factor that will make or break your business is your ability to deliver a high quality/value ratio, and to do so consistently.” The author evidently considers the quality-to-value ratio to be an important element. Bromley also emphasizes the importance of convention and trade show business to the hotel industry; that business element cannot be overestimated, in his opinion. This doesn’t mean an operator who can accommodate that type of business should exclude other client opportunities outside the target market. It does mean, however, that these secondary opportunities should only be addressed after pursuing the primary target strategy. After all, the largest profit margin lies in the center of the target. To amplify the above statement, and in reference to his own experience, Bromley says, “Being in the luxury end of the business I, on the other hand, need to uncover and book individuals and small corporate meetings more than convention or association business.”
Abstract:
The two-photon exchange phenomenon is believed to be responsible for the discrepancy observed between measurements of the ratio of the proton electric and magnetic form factors by the Rosenbluth and polarization transfer methods. This disagreement is about a factor of three at Q² of 5.6 GeV². Precise knowledge of the proton form factors is of critical importance in understanding the structure of this nucleon. The theoretical models that estimate the size of the two-photon exchange (TPE) radiative correction are poorly constrained. The correction can be measured directly by taking the ratio of the electron-proton and positron-proton elastic scattering cross sections, as the TPE effect changes sign with the charge of the incident particle. A test run of a modified beamline was conducted with the CEBAF Large Acceptance Spectrometer (CLAS) at the Thomas Jefferson National Accelerator Facility. This test run demonstrated the feasibility of producing a mixed electron/positron beam of good quality. Extensive simulations performed prior to the run were used to reduce the background rate that limits the production luminosity. A 3.3 GeV primary electron beam was used, resulting in a secondary lepton beam with an average energy of 1 GeV. As a result, elastic scattering data for both lepton types were obtained at scattering angles up to 40 degrees for Q² up to 1.5 GeV². The cross-section ratio displayed an ε dependence that was itself Q²-dependent at the smaller Q² limits. The magnitude of the average ratio as a function of ε was consistent with previous measurements and with the elastic (Blunden) model to within the experimental uncertainties. Ultimately, higher luminosity is needed to extend the data range to lower ε, where the TPE effect is predicted to be largest.
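Schematically, at leading order beyond the Born approximation (a standard textbook relation rather than a formula quoted from the dissertation), the interference term flips sign with lepton charge, so the measured ratio probes the TPE correction directly:

```latex
R(\varepsilon, Q^2) \;=\; \frac{\sigma(e^{+}p)}{\sigma(e^{-}p)}
\;\approx\; \frac{1-\delta_{2\gamma}}{1+\delta_{2\gamma}}
\;\approx\; 1 - 2\,\delta_{2\gamma},
```

where δ2γ is the two-photon exchange correction relative to the one-photon (Born) cross section.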
Abstract:
Hydrology drives the carbon balance of wetlands by controlling the uptake and release of CO2 and CH4. Longer dry periods between heavier precipitation events, as predicted for the Everglades region, may alter the stability of the large carbon pools in this wetland's ecosystems. To determine the effects of drought on CO2 fluxes and CH4 emissions, we simulated changes in hydroperiod with three scenarios that differed in the onset rate of drought (gradual, intermediate, and rapid transition into drought) on 18 freshwater wetland monoliths collected from an Everglades short-hydroperiod marsh. Simulated drought, regardless of the onset rate, resulted in higher net CO2 losses (net ecosystem exchange, NEE) over the 22-week manipulation. Drought caused extensive vegetation dieback, increased ecosystem respiration (Reco), and reduced carbon uptake (gross ecosystem exchange, GEE). Photosynthetic potential, measured by reflective indices (photochemical reflectance index, water index, normalized phaeophytinization index, and the normalized difference vegetation index), indicated that water stress limited GEE and inhibited Reco. As a result of drought-induced dieback, NEE did not offset methane production during periods of inundation: the average ratio of net CH4 emission to NEE over the study period was 0.06, surpassing the 100-year greenhouse warming compensation point for CH4 (0.04). Drought-induced dieback of sawgrass (C3) led to the establishment of the invasive species torpedograss (C4) when water was resupplied. These changes in structure and function indicate that freshwater marsh ecosystems can become a net source of CO2 and CH4 to the atmosphere, even following an extended drought. Future changes in precipitation patterns and drought occurrence/duration can change the carbon storage capacity of freshwater marshes from sinks to sources of carbon to the atmosphere. Therefore, climate change will impact the carbon storage capacity of freshwater marshes by influencing water availability and the potential for positive feedbacks on radiative forcing.
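Using only the numbers quoted above, the margin by which the marsh exceeded the greenhouse compensation point can be made explicit (a trivial check, included for clarity):

```python
ch4_to_nee = 0.06          # measured average ratio of net CH4 emission to NEE
compensation_point = 0.04  # 100-year greenhouse warming compensation point for CH4
excess = ch4_to_nee / compensation_point
print(f"ratio exceeds the compensation point by {excess:.1f}x -> net radiative warming")
# Equivalently, net CO2 uptake (NEE) would have to be ~1.5x larger to offset
# the measured CH4 flux over a 100-year horizon.
```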
Abstract:
Average daily gain (ADG), feed conversion ratio (FCR), and carcass classification (CC) were compared between two commercial crossbreds (A and B), both obtained from terminal crossing of Piétrain boars with F1 Large White x Landrace sows. The study sought to identify the main environmental effects influencing these traits. A total of 200 pigs (males and females) from the two distinct commercial crossbreds, sourced from two commercial multiplication units, were used. ADG and FCR were determined over two periods (63-119 and 120-158 days of age). At the end of the trial, carcasses were classified according to the SEUROP system. An analysis of variance was carried out to identify the main environmental effects influencing ADG, FCR, and CC, and the regression coefficient of ADG on live weight at the start of fattening was determined. Overall, crossbred B was superior in ADG (+74.6 g; p<0.01) and FCR (-0.07; p<0.05). ADG increased on average by 5.8 g per kg increase in live weight at the start of the trial. The superiority of crossbred B was further evidenced in the SEUROP carcass classification, with a significant (p<0.01) increase of 2.3% in lean meat.
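A minimal sketch of the reported analysis pattern (fixed effects plus the regression of ADG on initial live weight), with synthetic data standing in for the trial records and effect sizes borrowed from the abstract; all column names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # trial size from the abstract
df = pd.DataFrame({
    "crossbred": rng.choice(["A", "B"], n),
    "sex": rng.choice(["M", "F"], n),
    "initial_lw": rng.normal(25.0, 3.0, n),  # live weight at start of fattening, kg
})
df["adg"] = (800.0 + 74.6 * (df.crossbred == "B")   # crossbred-B advantage
             + 5.8 * (df.initial_lw - 25.0)         # +5.8 g ADG per kg initial LW
             + rng.normal(0.0, 40.0, n))            # residual variation

fit = smf.ols("adg ~ C(crossbred) + C(sex) + initial_lw", data=df).fit()
print(fit.summary().tables[1])
```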
Abstract:
In this work, mathematical procedures were developed for analyzing interference from electric and magnetic fields, taking maximum permissible intensity values as the parameter, and two virtual computing systems supporting the CDMA and WCDMA technology families were produced. For the first family, computational resources were developed to solve electric and magnetic field calculations and power densities at radio base stations (RBS) using CDMA technology in the 800 MHz band, taking into account the permissible values referenced by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The first family is divided into two calculation segments carried out in virtual operation. The first segment computes the interference field radiated by the base station from inputs such as radio channel power, antenna gain, number of radio channels, operating frequency, cable losses, directional attenuation, minimum distance, and reflections. This computing system quickly yields, without the need to deploy measurement instruments, the following calculated values: effective radiated power; sector power density; electric field in the sector; magnetic field in the sector; magnetic flux density; and the point of maximum permissible exposure for electric field and power density. The results are presented in charts for a clear view of the sector power density and of the coverage-area definition. The computing module also includes specification folders for the antennas, cables, and towers used in cellular telephony from the manufacturers RFS World, Andrew, Kathrein, and BRASILSAT, along with Internet links that supplement the cable and antenna specifications. The second segment of the first family works with more variables, performing calculations quickly and safely to assist in obtaining the radio signal losses produced by the RBS. This module displays screens representing two propagation systems, denominated "A" and "B". With propagation "A", radio signal attenuation is calculated for urban, dense urban, suburban, and open rural area models. The reflection calculations include the reflection coefficients, the standing wave ratio, the return loss, the reflected power ratio, and the signal loss due to impedance mismatch. With propagation "B", radio signal losses are computed for line-of-sight and non-line-of-sight paths, together with the effective area, the power density, the received power, the coverage radius, the conversion levels, and the gain of radiant conversion systems. The second family of the virtual computing system consists of 7 modules, of which 5 are geared towards WCDMA design and 2 towards telephone traffic calculations serving CDMA and WCDMA; it also includes a portfolio of the radiant systems used on site. In virtual operation, module 1 computes frequency reuse distance, channel capacity with and without noise, Doppler frequency, modulation rate, and channel efficiency. Module 2 computes cell area, thermal noise, noise power (dB), noise figure, signal-to-noise ratio, and bit power (dBm). Module 3 calculates the breakpoint, processing gain (dB), path loss from the BTS, noise power (W), chip period, and frequency reuse factor. Module 4 scales effective radiated power, sectorization gain, voice activity, and load effect. Module 5 calculates processing gain (Hz/bps), bit time, and bit energy (Ws).
Module 6 deals with the first type of telephone traffic and scales traffic volume, occupancy intensity, average occupancy time, traffic intensity, completed calls, and congestion. Module 7 deals with the second type of telephone traffic and allows calculating calls completed and not completed during the busy hour. Field tests of mobile network performance were carried out for the calculation of data relating to CINP, CPI, RSRP, RSRQ, EARFCN, Drop Call, Block Call, Pilot, Data BLER, RSCP, Short Call, Long Call, and Data Call; Ec/Io for Short Call and Long Call; and Data Call Throughput. In addition, surveys of the electric and magnetic fields at an RBS were conducted to assess the degree of exposure to non-ionizing radiation experienced by the general public and by occupational workers. The results were compared with the health exposure limits endorsed by ICNIRP and CENELEC.
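A minimal sketch of the kind of calculation the first segment automates: far-field sector power density from the effective radiated power, checked against the ICNIRP (1998) general-public reference level for the 800 MHz band. All input values below are illustrative, not taken from the dissertation:

```python
import math

def power_density_w_m2(p_ch_w: float, n_channels: int, gain_dbi: float,
                       cable_loss_db: float, d_m: float) -> float:
    """Far-field power density S = EIRP / (4 * pi * d^2)."""
    eirp_w = p_ch_w * n_channels * 10 ** ((gain_dbi - cable_loss_db) / 10)
    return eirp_w / (4 * math.pi * d_m ** 2)

f_mhz = 800
icnirp_limit = f_mhz / 200  # ICNIRP general-public reference level, W/m^2 (400-2000 MHz)
s = power_density_w_m2(p_ch_w=10, n_channels=4, gain_dbi=15, cable_loss_db=2, d_m=30)
print(f"S = {s:.3f} W/m^2  (ICNIRP limit: {icnirp_limit:.0f} W/m^2)")
```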
Abstract:
The hydrocycloning operation separates solid-liquid suspensions and liquid-liquid emulsions through the action of centrifugal force. Hydrocyclones are compact devices used in both clarification and thickening; they are employed in many areas, such as petrochemical and mineral processing, and offer advantages such as versatility and low maintenance cost. However, the demand to improve the process and to reduce costs has motivated several equipment-optimization studies. The filtering hydrocyclone is a non-conventional device developed at FEQUI/UFU with the objective of improving hydrocycloning separation efficiency. The purpose of this study is to evaluate the effects of feed concentration and underflow diameter on the performance of a filtering geometry optimized to minimize energy costs. The filtration effect was investigated by comparing the performance of the Optimized Filtering Hydrocyclone (HCOF) and the Optimized Concentrator Hydrocyclone (HCO). Because the two hydrocyclones performed similarly, filtration had no significant effect on the performance of the HCOF. In this geometry, decreasing the underflow diameter proved very favorable for thickening: the concentration of a quartzite suspension at 1.0% solids by volume was increased about 42 times when a 3 mm underflow diameter was used. Increasing the feed solids percentage reduced the energy expenditure, such that a minimum Euler number of 730 was achieved at CVA = 10.0% by volume; however, a greater amount of solids in suspension leads to lower equipment efficiency. Therefore, to minimize the underflow-to-throughput ratio while keeping efficiency high, it is advisable to work with a dilute suspension (CVA = 1.0%) and a 3 mm underflow diameter (η = 67%); if a high feed concentration is necessary, a 5 mm underflow diameter provides a rise in efficiency. The HCO hydrocyclone was also compared with the traditional Rietema family of hydrocyclones and showed advantages such as higher efficiency (34% higher on average) and lower energy costs (20% lower on average). Finally, efficiency curves and a project equation were obtained for the HCO hydrocyclone, each with satisfactory fit.
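For reference, the Euler number used above as the energy-cost metric follows the standard hydrocyclone-literature definition sketched below; the values are illustrative, chosen to land near the reported minimum of 730, and are not the thesis's actual operating data:

```python
import math

def euler_number(delta_p_pa: float, rho_kg_m3: float, q_m3_s: float, d_c_m: float) -> float:
    """Eu = pressure drop over dynamic pressure, with the characteristic
    velocity taken in the cylindrical section: u_c = 4*Q / (pi * Dc^2)."""
    u_c = 4.0 * q_m3_s / (math.pi * d_c_m ** 2)
    return delta_p_pa / (0.5 * rho_kg_m3 * u_c ** 2)

eu = euler_number(delta_p_pa=147e3, rho_kg_m3=1000.0, q_m3_s=1600 / 3.6e6, d_c_m=0.03)
print(f"Eu ~= {eu:.0f}")  # ~740 with these illustrative values
```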
Abstract:
Aims: To reassess the utilisation rate of urinary albumin-to-creatinine ratio (ACR) screening in our centre, and the rate of repeat testing where appropriate; and to examine risk factors for albuminuria in our outpatient population. Methods: All patients attending one of our two weekly diabetes outpatient clinics in 2011–2012 were enrolled in this study. Demographic and relevant clinical data were extracted from electronic care records and analysed using SPSS 21. Results: Our study cohort comprised 998 people (51.4% men; 59.6% White, 30.5% Southeast Asian, 9.9% Afro-Caribbean), most of whom had Type 2 diabetes (82.6%). The ACR testing rate in our centre was 62.8% (2012–2013 data; previously 62.4%). The incidence of initial albuminuria was 32.2% in women vs 42.8% in men. Just 48.7% of patients (44.4% of women, 51.8% of men) with initial albuminuria were retested: 36.4% of women and 19.7% of men with initial albuminuria had no evidence of it on follow-up. Logistic regression modelling confirmed an association of high systolic blood pressure with albuminuria [odds ratio 1.92 (1.01–3.70 in women, 1.08–3.57 in men)]. Treatment with an angiotensin-converting enzyme inhibitor (ACEi) or angiotensin 2 receptor blocker (A2RB) was negatively associated with albuminuria in men [odds ratio 0.42 (0.20–0.89)], but not in women. Conclusions: A relatively high, albeit suboptimal, albuminuria screening rate in our outpatient population has been sustained. High systolic blood pressure was confirmed as a risk factor for albuminuria. The incidence of albuminuria was higher in men, who had a lower rate of negative repeat testing and appeared to benefit more from ACEi/A2RB therapy. More rigorous screening for albuminuria is warranted to identify at-risk individuals.
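The regression analysis reported above follows a standard pattern; a minimal sketch in Python (the study itself used SPSS 21), with synthetic data and hypothetical column names, showing how odds ratios and confidence intervals of this form are obtained:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 998  # cohort size from the abstract
df = pd.DataFrame({
    "systolic_bp": rng.normal(135.0, 15.0, n),
    "on_acei_a2rb": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
})
# Synthetic outcome with an assumed harmful BP effect and protective ACEi/A2RB effect
lp = -9.0 + 0.06 * df.systolic_bp - 0.6 * df.on_acei_a2rb + 0.4 * df.male
df["albuminuria"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lp))).astype(int)

fit = smf.logit("albuminuria ~ systolic_bp + on_acei_a2rb + male", data=df).fit(disp=0)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```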
Abstract:
X-ray computed tomography (CT) is one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared with other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm), low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
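To make the contrast between CNR and a model observer concrete, here is a minimal sketch of a non-prewhitening (NPW) matched-filter detectability computation on a synthetic disk-detection task in white noise; this is illustrative only, as the dissertation's observers operated on real CT images and included channelized Hotelling variants:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
signal = 20.0 * ((x**2 + y**2) <= 6**2)   # 20 HU disk lesion, 6-pixel radius
sigma = 15.0                              # noise standard deviation

cnr = 20.0 / sigma                        # simple contrast-to-noise ratio

def npw_dprime(sig: np.ndarray, sigma: float, trials: int = 2000) -> float:
    """NPW observer: template = expected signal; d' from test-statistic moments."""
    t_present, t_absent = [], []
    for _ in range(trials):
        noise = rng.normal(0.0, sigma, sig.shape)
        t_present.append(np.sum(sig * (sig + noise)))
        t_absent.append(np.sum(sig * noise))
    t_p, t_a = np.array(t_present), np.array(t_absent)
    return (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))

print(f"CNR = {cnr:.2f}, NPW d' = {npw_dprime(signal, sigma):.1f}")
# Unlike CNR, d' grows with lesion area, one reason observer models track
# human performance across reconstruction algorithms better than CNR does.
```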
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate for assessing the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform); for SAFIRE, however, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., the NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
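For reference, the conventional rectangular-ROI ensemble estimator of the NPS, which the irregular-ROI method generalizes, can be sketched as follows (synthetic white noise stands in for the 50 repeated scans):

```python
import numpy as np

def nps_2d(rois: np.ndarray, pixel_mm: float) -> np.ndarray:
    """NPS(u,v) = (dx*dy / (Nx*Ny)) * <|DFT of zero-mean noise ROI|^2>.
    rois: (N, ny, nx) stack of the same ROI from N repeated scans."""
    n, ny, nx = rois.shape
    noise = rois - rois.mean(axis=0)          # remove the (structured) ensemble mean
    dft = np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))
    return (pixel_mm ** 2 / (nx * ny)) * np.mean(np.abs(dft) ** 2, axis=0)

rng = np.random.default_rng(0)
rois = rng.normal(0.0, 10.0, (50, 64, 64))    # 50 repeats, as in the study
nps = nps_2d(rois, pixel_mm=0.5)
df = 1.0 / (0.5 * 64)                         # frequency bin width, mm^-1
print(nps.sum() * df ** 2, rois.var())        # Parseval check: both ~100 HU^2
```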
To move beyond assessing noise properties in textured phantoms and towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized with a genetic algorithm to match the texture in the liver regions of actual patient CT images. The so-called “clustered lumpy background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
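One plausible instance of such an analytical lesion model, with size, contrast, and edge profile as the controllable parameters (an illustrative form; the dissertation's exact parameterization is not reproduced here):

```python
import numpy as np

def lesion_model(shape, center_px, radius_mm, contrast_hu, edge_mm, pixel_mm):
    """Radially symmetric lesion: full contrast inside, sigmoid roll-off at the
    edge; returns an image patch to be added to (inserted into) patient data."""
    grids = np.indices(shape)
    r_mm = np.sqrt(sum((g - c) ** 2 for g, c in zip(grids, center_px))) * pixel_mm
    return contrast_hu / (1.0 + np.exp((r_mm - radius_mm) / edge_mm))

patch = lesion_model((64, 64), center_px=(32, 32), radius_mm=4.0,
                     contrast_hu=-15.0, edge_mm=0.7, pixel_mm=0.5)
# "hybrid" image = patient ROI + patch, with ground-truth morphology known exactly
```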
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
A high-resolution carbon isotope profile through the uppermost Neoproterozoic-Lower Cambrian part of the Sukharikha section at the northwestern margin of the Siberian platform shows prominent secular oscillations of δ13C with a peak-to-peak range of 6-10‰. There are six minima, 1n-6n, and seven maxima, 1p-7p, in the Sukharikha Formation, and a rising trend of δ13C from the minimum 1n of -8.6‰ to the maximum 6p of +6.4‰. The trough 1n probably coincides with the isotopic minimum at the Precambrian-Cambrian boundary worldwide. Highly positive δ13C values of peaks 5p and 6p are typical of the upper portion of the Precambrian-Cambrian transitional beds just beneath the Tommotian Stage in Siberia. A second rising trend of δ13C is observed through the Krasnoporog and lower Shumny formations. It consists of four excursions with four major maxima that can be correlated with Tommotian-Botomian peaks II, IV, V, and VII of the reference profile from the southeastern Siberian platform. According to the chemostratigraphic correlation, the first appearances of the index forms of archaeocyaths are earlier in the Sukharikha section than in the Lena-Aldan region.
Abstract:
Twenty-one core samples from DSDP/IPOD Leg 63 were analyzed for products of chlorophyll diagenesis. In addition to the tetrapyrrole pigments, perylene and carotenoid pigments were isolated and identified. The 16 core samples from the San Miguel Gap site (467) and the five from the Baja California borderland location (471) afforded the unique opportunity of examining tetrapyrrole diagenesis in clay-rich marine sediments that are very high in total organic matter. The chelation reaction, whereby free-base porphyrins give rise to metalloporphyrins (viz., nickel), is well documented within the downhole sequence of sediments from the San Miguel Gap (Site 467). Recognition of unique arrays of highly dealkylated copper and nickel ETIO-porphyrins, exhibiting nearly identical carbon-number homologies (viz., C-23 to C-30; mode = C-26), enabled subtraction of this component (thought to be derived from an allochthonous source) and thus permitted description of the actual in situ diagenesis of autochthonous chlorophyll derivatives.
Abstract:
Aimed at year-round recording of the chemical aerosol composition in central Antarctica, an unattended aerosol sampler was successfully deployed at the EPICA deep drilling site in Dronning Maud Land (Kohnen Station). Analyses of teflon/nylon filter packs collected consecutively over bi-weekly intervals from February 2003 to December 2005 allowed evaluation of seasonal concentration variations of methane sulphonate (MS), Cl⁻, NO3⁻, non-sea-salt (nss-)SO4²⁻, and Na⁺, while NH4⁺ and mineral-dust-related ion results remained below detection limits. For MS and nss-SO4²⁻, distinct late-summer maxima around 44 and 200 ng/m³, respectively, were found, while (total) NO3⁻ showed a broad November maximum of about 52 ng/m³. In contrast, the highest concentrations of Na⁺, with peak values of up to 160 ng/m³, were observed during the winter half-year. The seasonality of these species broadly coincided with long-term observations at the coastal Neumayer Station, including surprisingly comparable NO3⁻ levels. However, the biogenic sulphur and sea salt concentrations were lower at Kohnen, typically by factors of 2-3 and 10, respectively. The arrival of sea-ice-derived sea salt particles at Kohnen could not be clearly detected, since even during mid-winter the nss-SO4²⁻ to Na⁺ ratio was generally too high to unambiguously identify a sulphur-depleted sea salt SO4²⁻ fraction.