Abstract:
We present a comparative study of the influence of dispersion-induced phase noise in n-level PSK systems. From the analysis, we conclude that the phase noise influence in classical homodyne/heterodyne PSK systems is entirely determined by the modulation complexity (expressed in terms of the constellation diagram) and the analogue demodulation format. On the other hand, the use of digital signal processing (DSP) in homodyne/intradyne systems introduces a fiber-length dependence originating from the generation of equalization-enhanced phase noise. Future high-capacity systems must use high-order constellations in order to lower the symbol rate to practically manageable speeds, and this fact puts severe requirements on the signal and local oscillator (LO) linewidths. Our results for the bit-error-rate (BER) floor caused by phase noise in QPSK, 16PSK, and 64PSK systems outline tolerance limits for the LO performance: 5 MHz linewidth (at the 3-dB level) for 100 Gbit/s QPSK; 1 MHz for 400 Gbit/s QPSK; 0.1 MHz for 400 Gbit/s 16PSK and 1 Tbit/s 64PSK systems. This defines design constraints on the phase noise of distributed feedback (DFB) or distributed Bragg reflector (DBR) semiconductor lasers that would allow moving the system capacity from 100 Gbit/s to 400 Gbit/s in 3 years (1 Tbit/s in 5 years). At the same time, it is imperative to increase the analogue-to-digital conversion (ADC) speed such that the single-quadrature symbol rate goes from today's 25 GS/s to 100 GS/s (using two samples per symbol). © 2014 by Walter de Gruyter Berlin/Boston.
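The linewidth tolerances above trace back to the standard model of laser phase noise as a Wiener process, whose per-symbol variance is set by the linewidth-symbol-time product. A minimal sketch (the variance formula is the standard Lorentzian-lineshape model; the BER-floor analysis itself is not reproduced here):

```python
import numpy as np

def phase_noise_walk(linewidth_hz, symbol_rate_hz, n_symbols, rng=None):
    """Laser phase noise as a Wiener process: per-symbol phase increments
    are Gaussian with variance 2*pi*linewidth*Ts (Lorentzian lineshape)."""
    rng = rng or np.random.default_rng(0)
    ts = 1.0 / symbol_rate_hz
    sigma = np.sqrt(2.0 * np.pi * linewidth_hz * ts)
    return np.cumsum(rng.normal(0.0, sigma, n_symbols))

# 1 MHz linewidth at a 100 GBd single-quadrature symbol rate
sigma = np.sqrt(2.0 * np.pi * 1e6 / 100e9)   # per-symbol phase std, rad
phases = phase_noise_walk(1e6, 100e9, 10_000)
```

The BER floor then follows from how this accumulated phase interacts with the angular spacing of the n-PSK constellation, which is why tighter constellations demand proportionally narrower linewidths.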
Abstract:
Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems have significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semi-analytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors. In this way, we relate pre-FEC statistics to post-FEC BER and BCH codes. Our method is applicable to pre-FEC BER around 10⁻³ and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10⁻⁵. Codes dimensioned with our bivariate binomial model meet the target within 0.2-dB signal-to-noise ratio.
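The abstract does not spell out the bivariate model, but the i.i.d.-error baseline it generalizes is easy to state: a t-error-correcting BCH code of length n fails only when a block collects more than t errors. A hedged sketch of that AWGN-only baseline (code parameters are illustrative, not taken from the paper):

```python
from math import lgamma, log, exp

def post_fec_ber(n, t, p):
    """Post-FEC BER of a t-error-correcting, length-n block code under
    i.i.d. (AWGN-only) bit errors:
        (1/n) * sum_{i>t} i * C(n,i) * p^i * (1-p)^(n-i).
    Computed in log space to avoid overflow for large n."""
    def log_comb(n, i):
        return lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
    return sum(i * exp(log_comb(n, i) + i * log(p) + (n - i) * log(1.0 - p))
               for i in range(t + 1, n + 1)) / n

# illustrative BCH-like block of length 1023 correcting t = 16 errors
ber_out = post_fec_ber(1023, 16, 1e-3)
```

Phase noise makes bit errors bursty rather than i.i.d., which is exactly why the paper replaces this univariate binomial with a bivariate one parameterized from pre-FEC simulations.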
Abstract:
We present a performance evaluation of a non-conventional approach to implementing phase-noise-tolerant optical systems with multilevel modulation formats. The performance of normalized Viterbi-Viterbi carrier phase estimation (V-V CPE) is investigated in detail for circular m-level quadrature amplitude modulation (C-mQAM) signals. The intrinsic property of C-mQAM constellation points, a uniform phase separation, allows a straightforward employment of V-V CPE without the need to adapt the constellation. Compared with conventional feed-forward CPE for square QAM signals, simulation results show an enhanced tolerance of the linewidth-symbol-duration product (ΔνTs) at a low sensitivity penalty when using a feed-forward CPE structure with C-mQAM. This scheme can easily be upgraded to higher-order modulations without incurring considerable complexity.
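The V-V estimator the abstract relies on can be sketched compactly: raising each received sample to the m-th power strips m-ary phase modulation, so the block-averaged argument, divided by m, tracks the carrier phase (up to the usual m-fold ambiguity). A noise-free illustration for m = 4:

```python
import numpy as np

def vv_phase_estimate(symbols, m, block=64):
    """Feed-forward Viterbi-Viterbi CPE: the m-th power removes m-ary
    phase modulation; the averaged argument divided by m estimates the
    carrier phase per block (up to the inherent m-fold ambiguity)."""
    est = np.empty(len(symbols))
    for start in range(0, len(symbols), block):
        blk = symbols[start:start + block]
        est[start:start + block] = np.angle(np.sum(blk ** m)) / m
    return est

# noise-free demo: QPSK (m = 4) with a constant 0.1 rad carrier offset
rng = np.random.default_rng(1)
tx = np.exp(1j * (rng.integers(0, 4, 256) * np.pi / 2 + 0.1))
est = vv_phase_estimate(tx, 4)
```

The appeal of C-mQAM noted above is that its rings share a uniform phase grid, so this same m-th-power trick applies directly, without the constellation partitioning that square QAM requires.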
Abstract:
With the advantages and popularity of Permanent Magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. Although the generation of audible noise and associated vibration modes is characteristic of all electric motors, it is especially problematic in low-speed sensorless rotary actuation applications using the high-frequency voltage injection technique. This dissertation is aimed at optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low-speed sensorless algorithm is simulated using an improved Phase Variable Model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time.

A neural-network-based modeling approach was used to predict the audible noise due to the high-frequency injected carrier signal. This model was created based on noise measurements in a specially built chamber. The developed noise model is then integrated into the high-frequency-injection-based sensorless control scheme so that appropriate tradeoffs and mitigation techniques can be devised. This improves the position estimation and control performance while keeping the noise below a specified level. Genetic algorithms were used to include the noise optimization parameters in the developed control algorithm.

A novel wavelet-based filtering approach is proposed in this dissertation for the sensorless control algorithm at low speed. This novel filter is capable of extracting the position information at low values of injection voltage where conventional filters fail. This filtering approach can be used in practice to reduce the injected voltage in the sensorless control algorithm, resulting in a significant reduction of noise and vibration. Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and to improve position estimation performance. The results obtained are important and represent original contributions that can be helpful in choosing optimal parameters for sensorless control algorithms in many practical applications.
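The dissertation's actual wavelet filter design is not given in the abstract; as a generic illustration of the idea, here is one-level Haar wavelet shrinkage, which suppresses small high-frequency detail coefficients while preserving the low-frequency content a position estimator needs:

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet shrinkage: small detail (high-frequency)
    coefficients are zeroed, larger ones kept, then the signal is
    reconstructed. Generic illustration, not the dissertation's filter."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (len(x) % 2)                    # use an even sample count
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)     # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)     # detail coefficients
    d = np.where(np.abs(d) > threshold, d, 0.0)  # hard threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

In practice a multi-level transform with a smoother wavelet would be used; the point is that thresholding in the wavelet domain can recover a weak injected-carrier signature that a fixed band-pass filter loses at low injection voltage.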
Abstract:
Experimental and theoretical studies of noise processes in various kinds of AlGaAs/GaAs heterostructures with a quantum well are reported. The measurement setup, involving a Fast Fourier Transform and an analog wave analyzer covering the frequency range from 10 Hz to 1 MHz, a computerized data storage and processing system, and a cryostat covering the temperature range from 78 K to 300 K, is described in detail. The current noise spectra are obtained with the “three-point method”, using a Quan-Tech and an avalanche noise source for calibration.

The properties of both GaAs and AlGaAs materials and of field effect transistors based on the two-dimensional electron gas in the interface quantum well are discussed. Extensive measurements are performed on three types of heterostructures, viz., Hall structures with a large spacer layer, modulation-doped non-gated FETs, and more standard gated FETs; all structures are grown by MBE techniques.

The Hall structures show Lorentzian generation-recombination noise spectra with nearly temperature-independent relaxation times. This noise is attributed to g-r processes in the 2D electron gas. For the TEGFET structures, we observe several Lorentzian g-r noise components which have strongly temperature-dependent relaxation times. This noise is attributed to trapping processes in the doped AlGaAs layer. The trap level energies are determined from an Arrhenius plot of log(τT²) versus 1/T as well as from the plateau values. The theory to interpret these measurements and to extract the defect level data is reviewed and further developed. Good agreement with the data is found for all reported devices.
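The Arrhenius extraction described above can be sketched directly: for thermally activated emission, τT² ∝ exp(Ea/kBT), so ln(τT²) plotted against 1/T is a line whose slope times kB gives the trap energy. A sketch with synthetic data (the 0.35 eV value is illustrative, not a result from the thesis):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def trap_energy_ev(T, tau):
    """Trap activation energy from an Arrhenius plot: for thermally
    activated emission tau*T^2 ~ exp(Ea/(kB*T)), so ln(tau*T^2) vs 1/T
    is a line of slope Ea/kB."""
    T = np.asarray(T, dtype=float)
    tau = np.asarray(tau, dtype=float)
    slope, _ = np.polyfit(1.0 / T, np.log(tau * T ** 2), 1)
    return slope * K_B

# synthetic relaxation times for a 0.35 eV trap over the 78-300 K range
T = np.linspace(78.0, 300.0, 12)
tau = 1e-9 * np.exp(0.35 / (K_B * T)) / T ** 2
```

The plateau values of the Lorentzian components give an independent estimate of the trap occupancy, which is how the thesis cross-checks the energies obtained from the slope.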
Abstract:
Electronic noise has been investigated in AlxGa1−xN/GaN Modulation-Doped Field Effect Transistors (MODFETs) of submicron dimensions, grown for us by MBE (Molecular Beam Epitaxy) techniques at Virginia Commonwealth University by Dr. H. Morkoç and coworkers. Some 20 devices were grown on a GaN substrate, four of which have leads bonded to source (S), drain (D), and gate (G) pads, respectively. Conduction takes place in the quasi-2D layer of the junction (xy plane), which is perpendicular to the quantum well (z-direction) of average triangular width ∼3 nm. A non-doped intrinsic buffer layer of ∼5 nm separates the Si-doped donors in the AlxGa1−xN layer from the 2D transistor plane, which affords a very high electron mobility, thus enabling high-speed devices. Since all contacts (S, D, and G) must reach through the AlxGa1−xN layer to connect internally to the 2D plane, parallel conduction through this layer is a feature of all modulation-doped devices. While this shunting effect may account for no more than a few percent of the current IDS, it is responsible for most of the excess noise, over and above the thermal noise of the device.

The excess noise has been analyzed as a sum of Lorentzian spectra and 1/f noise. The Lorentzian noise has been ascribed to trapping of the carriers in the AlxGa1−xN layer. A detailed, multitrapping generation-recombination noise theory is presented, which shows that an exponential relationship exists for the time constants obtained from the spectral components as a function of 1/kT. The trap depths have been obtained from Arrhenius plots of log(τT²) vs. 1000/T. Comparison with previous noise results for GaAs devices shows that: (a) many more trapping levels are present in these nitride-based devices; (b) the traps are deeper (farther below the conduction band) than for GaAs. Furthermore, the magnitude of the noise is strongly dependent on the level of depletion of the AlxGa1−xN donor layer, which can be altered by a negative or positive gate bias VGS.

Altogether, these frontier nitride-based devices are promising for bluish-light optoelectronic devices and lasers; however, the noise, though well understood, indicates that the purity of the constituent layers should be greatly improved for future technological applications.
Abstract:
Persistence of HIV-1 reservoirs within the Central Nervous System (CNS) remains a significant challenge to the efficacy of potent anti-HIV-1 drugs. The primary human Brain Microvascular Endothelial Cells (HBMVEC) constitute the Blood Brain Barrier (BBB), which interferes with anti-HIV drug delivery into the CNS. The ATP binding cassette (ABC) transporters expressed on HBMVEC can efflux HIV-1 protease inhibitors (HPI), enabling the persistence of HIV-1 in the CNS. Constitutive low-level expression of several ABC transporters, such as MDR1 (a.k.a. P-gp) and MRPs, is documented in HBMVEC. Although it is recognized that inflammatory cytokines and exposure to xenobiotic drug substrates (e.g., HPI) can augment the expression of these transporters, it is not known whether concomitant exposure to virus and anti-retroviral drugs can increase drug-efflux functions in HBMVEC. Our in vitro studies showed that exposure of HBMVEC to HIV-1 significantly up-regulates both MDR1 gene expression and protein levels; however, no significant increases in either MRP-1 or MRP-2 were observed. Furthermore, calcein-AM dye-efflux assays using HBMVEC showed that, compared to virus exposure alone, the MDR1-mediated drug-efflux function was significantly induced following concomitant exposure to both HIV-1 and saquinavir (SQV). This increase in MDR1-mediated drug-efflux was further substantiated via increased intracellular retention of radiolabeled 3H-SQV. The crucial role of MDR1 in 3H-SQV efflux from HBMVEC was further confirmed by using both an MDR1-specific blocker (PSC-833) and MDR1-specific siRNAs. Therefore, MDR1-specific drug-efflux function increases in HBMVEC following co-exposure to HIV-1 and SQV, which can reduce the penetration of HPIs into the infected brain reservoirs of HIV-1. A targeted suppression of MDR1 in the BBB may thus provide a novel strategy to suppress residual viral replication in the CNS by augmenting the therapeutic efficacy of HAART drugs.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
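For reference, the two simplest metrics compared above can be written in a few lines; the NPW expression below is the white-noise special case (real CT noise is correlated, which is precisely why the channelized models are needed):

```python
import numpy as np

def cnr(roi, bg):
    """Contrast-to-noise ratio: lesion-vs-background contrast over
    background noise."""
    return (roi.mean() - bg.mean()) / bg.std()

def npw_dprime(signal, noise_std):
    """Non-prewhitening matched-filter detectability of a known signal
    in *white* noise: d' = ||s|| / sigma. A sketch; the full NPW model
    used in CT operates on the correlated noise power spectrum."""
    return np.sqrt(np.sum(np.asarray(signal) ** 2)) / noise_std
```

Unlike CNR, d' grows with the spatial extent of the signal template, which is one reason matched-filter observers track human detection performance across reconstruction algorithms while CNR does not.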
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
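The ensemble NPS estimation used here can be sketched for the standard rectangular-ROI case (the chapter's novel irregular-ROI estimator is more involved and is not reproduced):

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Ensemble noise power spectrum from repeated-scan noise ROIs:
    NPS = (dx*dy/(Nx*Ny)) * <|FFT2(roi - mean(roi))|^2> over the ensemble."""
    rois = np.asarray(rois, dtype=float)
    _, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    psd = np.abs(np.fft.fft2(detrended)) ** 2
    return pixel_mm ** 2 / (nx * ny) * psd.mean(axis=0)

# white-noise sanity check: 200 ROIs of unit-variance noise
rng = np.random.default_rng(2)
nps = nps_2d(rng.normal(0.0, 1.0, (200, 16, 16)), pixel_mm=0.5)
```

For white noise the estimate is flat at (pixel area) × (variance); the structured, background-dependent spectra reported for SAFIRE are exactly the departure from this flat baseline.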
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
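One minimal instance of such an analytical lesion model is a radially symmetric profile with a sigmoid edge, voxelized and added to a background to form a hybrid image (illustrative form only; the dissertation's models also parameterize shape and texture):

```python
import numpy as np

def lesion_model(shape, center, radius_mm, contrast_hu, edge_mm, pixel_mm=1.0):
    """Radially symmetric analytical lesion: full contrast inside
    radius_mm, rolling off over a sigmoid edge of width ~edge_mm,
    voxelized on a pixel grid."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1]) * pixel_mm
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

# hybrid image: insert a subtle -15 HU lesion into a (flat) background
background = np.zeros((64, 64))
hybrid = background + lesion_model((64, 64), (32, 32), radius_mm=6.0,
                                   contrast_hu=-15.0, edge_mm=1.0)
```

Because the inserted morphology and location are known exactly, detectability and estimability can be scored against ground truth, which is the key advantage of hybrid images over collecting real lesion cases.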
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Acute exposures to some individual polycyclic aromatic hydrocarbons (PAHs) and complex PAH mixtures are known to cause cardiac malformations and edema in the developing fish embryo. However, the heart is not the only organ impacted by developmental PAH exposure. The developing brain is also affected, resulting in lasting behavioral dysfunction. While acute exposures to some PAHs are teratogenically lethal in fish, little is known about the later-life consequences of early-life, lower-dose subteratogenic PAH exposures. We sought to determine and characterize the long-term behavioral consequences of subteratogenic developmental PAH mixture exposure in both naive killifish and PAH-adapted killifish using sediment pore water derived from the Atlantic Wood Industries Superfund Site. Killifish offspring were embryonically treated with two low-level PAH mixture dilutions of Elizabeth River sediment extract (ERSE) (TPAH 5.04 μg/L and 50.4 μg/L) at 24 h post-fertilization. Following exposure, killifish were raised to larval, juvenile, and adult life stages and subjected to a series of behavioral tests including: a locomotor activity test (4 days post-hatch), a sensorimotor response tap/habituation test (3 months post-hatch), and a novel tank diving and exploration test (3 months post-hatch). Killifish were also monitored for survival at 1, 2, and 5 months over the 5-month rearing period. Developmental PAH exposure caused short-term as well as persistent behavioral impairments in naive killifish. In contrast, the PAH-adapted killifish did not show behavioral alterations following PAH exposure. PAH mixture exposure caused increased mortality in reference killifish over time; yet the PAH-adapted killifish, while demonstrating long-term rearing mortality, had no significant changes in mortality associated with ERSE exposure.
This study demonstrated that early embryonic exposure to PAH-contaminated sediment pore water caused long-term locomotor and behavioral alterations in killifish, and that locomotor alterations could be observed in early larval stages. Additionally, our study highlights the resistance to behavioral alterations caused by low-level PAH mixture exposure in the adapted killifish population. Furthermore, this is the first longitudinal behavioral study to use killifish, an environmentally important estuarine teleost fish, and this testing framework can be used for future contaminant assessment.
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus imperative to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to quantify patient-specific radiation dose prospectively and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
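The convolution-based idea can be sketched in one step: convolve the z-axis tube-current profile with a dose-spread kernel to approximate the local radiation field, then average over the organ's slices and scale by a dose coefficient (the kernel and coefficient below are placeholders, not validated values from the thesis):

```python
import numpy as np

def organ_dose_tcm(ma_profile, kernel, organ_mask, coeff):
    """Sketch of convolution-based organ dose under tube current
    modulation: the z-axis mA profile is convolved with a dose-spread
    kernel to approximate the local radiation field, averaged over the
    organ's slices, and scaled by a size-specific dose coefficient.
    Kernel and coefficient here are placeholders."""
    field = np.convolve(ma_profile, kernel, mode='same')
    return coeff * field[organ_mask].mean()

# demo: flat 200 mA profile, delta kernel, organ spanning slices 40-59
ma = np.full(100, 200.0)
organ = np.zeros(100, dtype=bool)
organ[40:60] = True
dose = organ_dose_tcm(ma, np.array([1.0]), organ, coeff=0.01)
```

With a flat profile and a delta kernel this reduces to coefficient × mean mA, i.e. the constant-current case of Chapter 3; the modulation profile and scatter kernel are what the TCM extension adds, validated against Monte Carlo as described above.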
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose in place, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
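Step (1) of the strategy above, image-based noise addition, can be sketched in a simplified form (white Gaussian noise, no noise power spectrum shaping, which a clinically validated tool would include). For quantum-limited CT, noise variance scales inversely with dose, so simulating a scan at fraction f of the original dose requires adding noise with standard deviation sigma_orig * sqrt(1/f - 1). All numeric values below are illustrative.

```python
import numpy as np

def simulate_reduced_dose(image, sigma_orig, dose_fraction, rng=None):
    """Synthesize a reduced-dose image by injecting zero-mean noise.

    sigma_orig: noise magnitude measured in the original full-dose image.
    dose_fraction: target dose as a fraction of the original (0 < f <= 1).
    The added noise brings total variance up to sigma_orig**2 / dose_fraction.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    sigma_add = sigma_orig * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)

# Hypothetical uniform 64x64 region at 50 HU with 10 HU of original noise;
# at half dose the added noise component also has sigma = 10 HU.
full_dose = np.full((64, 64), 50.0)
half_dose = simulate_reduced_dose(full_dose, sigma_orig=10.0, dose_fraction=0.5)
```

Validating the realism of such simulated images, step (2) of the strategy, is essential precisely because this simple model ignores the spatial correlation of reconstructed CT noise.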
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Resumo:
Flame retardants (FRs) are added to materials to enhance the fire safety level of readily combustible polymers. Although they have been purported to aid in preventing fires in some cases, they have also become a significant cause for concern given the vast data on environmental persistence and adverse health effects in humans and animals. Evidence since the 1980s has shown that Canadians, Americans, and Europeans have detectable levels of FRs in their bodies. North Americans in particular have high levels of these chemicals due to stringent flammability standards and the greater use of polybrominated diphenyl ethers (PBDEs) in North America as opposed to Europe. FRs have been detected in household dust, and some evidence suggests that TVs could be a significant source of exposure to FRs. It is imperative to revisit the flammability standard (UL94V) that allows FR use in TV plastic materials by providing a risk-versus-benefit analysis, to determine whether this standard provides a fire safety benefit and whether it plays a major role in FR exposure. This report first examined the history of televisions and the progression to the UL94V flammability test standard, to understand why FRs were first added to polymers used in the manufacturing of TVs. This was found to be due to fire hazards arising from the use of plastic materials in cathode-ray tube (CRT) TVs, which had an “instant-on” feature and high operating voltages and temperatures.
In providing a risk-versus-benefit analysis, this paper argues that 1) a market survey shows the current flammability test standard (UL94V) is outdated and lacks relevance to current technology, as flat, thin, energy-efficient liquid crystal displays (LCDs) now dominate over the traditionally used heavy, bulky, and energy-intensive CRTs; 2) FRs do not impart fire safety benefits, given the lack of a valid fire safety concern (such as reduced internal and external ignition and fire hazard) and the lack of valid fire data and hazard for television fires in general; and 3) the standard is overly stringent, as it does not consider the risk of exposure to FRs in household dust arising from the proliferation and greater use of televisions in households. Therefore, this report argues that the UL94V standard has become trapped in history and needs to be updated, as it may play a major role in FR exposure.
Resumo:
The number of dietary exposure assessment studies focusing on children is very limited. Children are, however, a vulnerable group due to their higher food consumption per kg of body weight. Therefore, the EXPOCHI project aims (1) to create a relational network of individual food consumption databases for children, covering different geographical areas within Europe, and (2) to use these data to assess the usual intake of lead, chromium, selenium and food colours.
Resumo:
Purpose: To evaluate whether physical measures of noise predict image quality at high and low noise levels. Method: Twenty-four images were acquired on a DR system using a Pehamed DIGRAD phantom at three kVp settings (60, 70 and 81) across a range of mAs values. The acquisition setup consisted of 14 cm of PMMA slabs with the phantom placed in the middle, at 120 cm SID. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for each image using ImageJ software, and 14 observers performed image scoring. Images were scored according to each observer's evaluation of the objects visualized within the phantom. Results: The R2 values of the non-linear relationship between object visibility (OV) score and CNR (60 kVp R2 = 0.902; 70 kVp R2 = 0.913; 81 kVp R2 = 0.757) demonstrate a better fit for all three kVp settings than the linear R2 values. As CNR increases, object visibility also increases for all kVp settings. The largest increase in SNR at low exposure values (up to 2 mGy) is observed at 60 kVp, compared with 70 or 81 kVp; the CNR response to exposure is similar. Pearson's r was calculated to assess the correlations between score, OV, SNR and CNR. None of the correlations reached statistical significance (p > 0.01). Conclusion: For object visibility and SNR, tube potential variations may play a role in object visibility. Higher-energy X-ray beam settings give lower SNR but higher object visibility. Object visibility and CNR are similar at all three tube potentials, resulting in a strong positive relationship between CNR and object visibility score. At low doses, radiographic noise does not strongly influence object visibility scores, because objects could still be identified in noisy images.
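The ROI-based metrics used in the study follow standard definitions (the study computed them in ImageJ); a minimal sketch with synthetic pixel data, assuming a uniform object ROI and background ROI:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean ROI signal over its standard deviation."""
    return roi.mean() / roi.std(ddof=1)

def cnr(object_roi, background_roi):
    """Contrast-to-noise ratio: object/background contrast over background noise."""
    noise = background_roi.std(ddof=1)
    return abs(object_roi.mean() - background_roi.mean()) / noise

# Synthetic pixel values standing in for measured ROIs (illustrative only):
# background mean 100 and object mean 120, both with noise sigma = 5.
rng = np.random.default_rng(1)
background = rng.normal(100.0, 5.0, size=1000)
obj = rng.normal(120.0, 5.0, size=1000)
```

With these parameters the expected CNR is about (120 - 100) / 5 = 4, which is the kind of physical quantity the study correlated against observer visibility scores.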
Resumo:
Risk assessment considerations - The concept that “safe levels of exposure” for humans can be identified for individual chemicals is central to the risk assessment of compounds with known toxicological profiles. Selection of agents for combination chemotherapy regimens involves minimizing overlap in mechanisms of action, antitumor activity, and toxicity profiles. Although the toxicological profile and mechanism of action of each individual drug are well characterized, toxicological interactions between drugs are likely but poorly established in the occupational exposure context. The synergistic nature of these interactions may help explain the adverse health effects observed in healthcare workers, whose exposure situations are characterized by complex mixtures of chemical agents in which the levels of individual agents are often not high enough to explain the health complaints. However, if a substance is a genotoxic carcinogen, this would be the “lead effect”; normally, no OEL based on a NOEL would be derived, and the exposure limit would be set so low that other effects would be unlikely. Aim of the study - A recent research project developed in Portuguese hospitals characterized occupational exposure to antineoplastic agents and the related health effects. The project aimed to assess the exposure of the different risk groups that handle antineoplastic agents in the hospital setting, namely during the preparation and administration of these drugs. Here we present and discuss the results of a study conducted in two hospitals in Lisbon.
Resumo:
Human exposure to Bisphenol A (BPA) results mainly from ingestion of food and beverages. Information regarding BPA effects on colon cancer, one of the major causes of death in developed countries, is still scarce. Likewise, little is known about BPA drug interactions although its potential role in doxorubicin (DOX) chemoresistance has been suggested. This study aims to assess potential interactions between BPA and DOX on HT29 colon cancer cells. HT29 cell response was evaluated after exposure to BPA, DOX, or co-exposure to both chemicals. Transcriptional analysis of several cancer-associated genes (c-fos, AURKA, p21, bcl-xl and CLU) shows that BPA exposure induces slight up-regulation exclusively of bcl-xl without affecting cell viability. On the other hand, a sub-therapeutic DOX concentration (40 nM) results in highly altered c-fos, bcl-xl, and CLU transcript levels, and this is not affected by co-exposure with BPA. Conversely, DOX at a therapeutic concentration (4 μM) results in distinct and very severe transcriptional alterations of c-fos, AURKA, p21 and CLU that are counteracted by co-exposure with BPA resulting in transcript levels similar to those of control. Co-exposure with BPA slightly decreases apoptosis in relation to DOX 4 μM alone without affecting DOX-induced loss of cell viability. These results suggest that BPA exposure can influence chemotherapy outcomes and therefore emphasize the necessity of a better understanding of BPA interactions with chemotherapeutic agents in the context of risk assessment.