963 results for multi-source noise
Abstract:
We propose a mathematically well-founded approach for locating the source (initial state) of density functions evolved within a nonlinear reaction-diffusion model. The reconstruction of the initial source is an ill-posed inverse problem since the solution is highly unstable with respect to measurement noise. To address this instability, we introduce a regularization procedure based on the nonlinear Landweber method for the stable determination of the source location. This amounts to solving a sequence of well-posed forward reaction-diffusion problems. The developed framework is general, and as a special instance we consider the problem of source localization of brain tumors. We show numerically that the source of the initial tumor cell densities is reconstructed well on imaging data consisting of both simple and complex geometric structures.
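As an illustration of the Landweber-type regularization loop described above, the following is a minimal sketch for a generic inverse problem; the linear blur operator, step size, and stopping rule are illustrative stand-ins, not the authors' reaction-diffusion solver.

```python
import numpy as np

def landweber(forward, adjoint, y, x0, step, max_iter=500, tol=1e-3):
    """Landweber iteration x_{k+1} = x_k - step * F'(x_k)^* (F(x_k) - y).

    forward : the forward map F (a well-posed forward solve in the paper;
              a linear blur is used below purely for illustration)
    adjoint : adjoint of F (for nonlinear F, the adjoint of its Jacobian)
    y       : noisy measurement of the evolved density
    x0      : initial guess for the source
    """
    x = x0.copy()
    for _ in range(max_iter):
        residual = forward(x) - y
        if np.linalg.norm(residual) < tol:   # discrepancy-type stopping rule
            break
        x = x - step * adjoint(residual)     # gradient step on 0.5*||F(x) - y||^2
    return x

# Toy usage: a Gaussian blur matrix stands in for the forward problem.
rng = np.random.default_rng(0)
grid = np.arange(50)
A = np.exp(-0.5 * (np.subtract.outer(grid, grid) / 3.0) ** 2)
x_true = np.zeros(50); x_true[20] = 1.0                 # point-like "source"
y = A @ x_true + 0.01 * rng.standard_normal(50)         # noisy measurement
step = 1.0 / np.linalg.norm(A, 2) ** 2                  # ensures convergence
x_rec = landweber(lambda x: A @ x, lambda r: A.T @ r, y, np.zeros(50), step)
```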
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not adequate for modern CT scanners that implement the aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index by up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
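For context, a common Fourier-domain form of the detectability index for a non-prewhitening observer of this kind is (a textbook expression under local stationarity assumptions, not necessarily the exact variant implemented in the dissertation):

$$ d'^{\,2}_{\mathrm{NPW}} = \frac{\left[\iint |W(u,v)|^{2}\,\mathrm{TTF}^{2}(u,v)\,du\,dv\right]^{2}}{\iint |W(u,v)|^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,du\,dv}, $$

where $W(u,v)$ is the task function (the Fourier transform of the object to be detected), $\mathrm{TTF}$ is the task transfer function describing resolution, and $\mathrm{NPS}$ is the noise power spectrum.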
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included simple metrics of image quality, such as the contrast-to-noise ratio (CNR), as well as more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
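A minimal sketch of the two simplest scorers compared above, the CNR and a non-prewhitening matched-filter statistic, is given below; the channelized Hotelling observer additionally projects each ROI onto a channel basis before forming a Hotelling template. Array names and shapes are illustrative assumptions.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio computed from two ROIs of an image."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

def npw_score(roi, template):
    """Non-prewhitening matched filter: correlate the ROI with the expected
    noise-free signal template."""
    return float(np.sum(roi * template))

def dprime(scores_present, scores_absent):
    """Detectability index from the score distributions of signal-present
    and signal-absent ROIs (equal-variance Gaussian assumption)."""
    return (np.mean(scores_present) - np.mean(scores_absent)) / np.sqrt(
        0.5 * (np.var(scores_present) + np.var(scores_absent)))
```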
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
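In its simplest form, the image-subtraction noise measurement referred to above is the standard deviation of the difference of two repeated scans, scaled by 1/sqrt(2); a hedged sketch (the ROI masking is an assumption):

```python
import numpy as np

def quantum_noise_std(scan_a, scan_b, roi_mask):
    """Estimate quantum noise by subtracting two repeated scans of the same
    phantom: the deterministic background cancels, and dividing by sqrt(2)
    accounts for the doubled variance of a difference of two independent
    noise realizations."""
    diff = scan_a[roi_mask] - scan_b[roi_mask]
    return diff.std() / np.sqrt(2.0)
```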
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
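For reference, the conventional ensemble NPS estimate from square ROIs looks roughly as follows; the dissertation's contribution of handling irregularly shaped ROIs is not reproduced here.

```python
import numpy as np

def nps_2d(noise_rois, pixel_size_mm):
    """Conventional ensemble 2-D noise power spectrum from square,
    mean-subtracted noise ROIs. (The dissertation's method generalizes this
    to irregularly shaped ROIs, which is not reproduced here.)"""
    n = noise_rois[0].shape[0]
    accum = np.zeros((n, n))
    for roi in noise_rois:
        roi = roi - roi.mean()                              # remove DC/background
        accum += np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    return accum * (pixel_size_mm ** 2) / (len(noise_rois) * n * n)
```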
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final phase of this project aimed to develop methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve of 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
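As one illustrative (assumed) analytical form of the kind of lesion morphology model described, a radially symmetric lesion with contrast $C$, radius $R$ and edge-blur width $\sigma$ can be written as

$$ L(r) = \frac{C}{1 + \exp\!\big((r - R)/\sigma\big)}, $$

where $r$ is the distance from the lesion centre; non-spherical shapes follow by replacing $r$ with an ellipsoidal or perturbed radial coordinate before voxelizing the model and inserting it into the patient image.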
Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patients at two dose levels (50% and 100%), each reconstructed with three algorithms on a GE 750HD CT system (GE Healthcare): FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.
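Under the usual equal-variance Gaussian assumptions, the observer-model detectability index relates to two-alternative forced-choice performance via

$$ P_{C}^{\,2\mathrm{AFC}} = \Phi\!\left(\frac{d'}{\sqrt{2}}\right), $$

where $\Phi$ is the standard normal cumulative distribution function; this is the standard link used when comparing model detectability values with human 2AFC accuracy.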
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Advancements in retinal imaging technologies have drastically improved the quality of eye care in the past couple of decades. Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) are two examples of critical imaging modalities for the diagnosis of retinal pathologies. However, current-generation SLO and OCT systems have limitations in diagnostic capability due to the following factors: the use of bulky tabletop systems, monochromatic imaging, and resolution degradation due to ocular aberrations and diffraction.
Bulky tabletop SLO and OCT systems are incapable of imaging patients who are supine, under anesthesia, or otherwise unable to maintain the required posture and fixation. Monochromatic SLO and OCT imaging prevents the identification of various color-specific diagnostic markers visible with color fundus photography, such as those of neovascular age-related macular degeneration. Resolution degradation due to ocular aberrations and diffraction has prevented the imaging of photoreceptors close to the fovea without the use of adaptive optics (AO), which requires bulky and expensive components that limit the potential for widespread clinical use.
In this dissertation, techniques for extending the diagnostic capability of SLO and OCT systems are developed. These techniques include design strategies for miniaturizing and combining SLO and OCT into multi-modal, lightweight handheld probes that extend high-quality retinal imaging to pediatric eye care. In addition, a method for extending true-color retinal imaging to SLO, enabling high-contrast, depth-resolved, high-fidelity color fundus imaging, is demonstrated using a supercontinuum light source. Finally, the development and combination of SLO with a super-resolution confocal microscopy technique known as optical photon reassignment (OPRA) is demonstrated, enabling high-resolution imaging of retinal photoreceptors without the use of adaptive optics.
Abstract:
As a device, the laser is an elegant conglomerate of elementary physical theories and state-of-the-art techniques, ranging from quantum mechanics and thermal and statistical physics to material growth and non-linear mathematics. The laser has been a commercial success in medicine and telecommunication, while driving the development of highly optimised devices specifically designed for a plethora of uses. Due to their low cost and large-scale predictability, many aspects of modern life would not function without lasers. However, the laser is also a window into a system that is closely emulated by non-linear mathematical models; it is an exceptional apparatus for the development of non-linear dynamics and is often used in the teaching of non-trivial mathematics. While single-mode semiconductor lasers have been well studied, a unified comparison of single- and two-mode lasers is still needed to extend the knowledge of semiconductor lasers, as well as to test the limits of current models. Secondly, this work aims to utilise the optically injected semiconductor laser as a tool to study non-linear phenomena from other fields of study, namely 'rogue waves', which have previously been witnessed in oceanography and are suspected of having non-linear origins. The first half of this thesis presents a reliable and fast technique to categorise the dynamical state of optically injected two-mode and single-mode lasers. Analysis of the experimentally obtained time traces revealed regions of various dynamics and allowed the automatic identification of their respective stability. The method is also extended to the detection of regions containing bi-stabilities. The second half of the thesis presents an investigation into the origins of rogue waves in single-mode lasers. After confirming their existence in single-mode lasers, their distribution in time and sudden appearance in the time series are studied to justify their name. An examination is also performed into the existence of paths that make rogue waves possible, and the impact of noise on their distribution is also studied.
Abstract:
Complex network theory is a framework increasingly used in the study of air transport networks, thanks to its ability to describe the structures created by networks of flights and their influence on dynamical processes such as delay propagation. While many works consider only a fraction of the network, created for example by major airports or airlines, it is not clear if and how such a sampling process biases the observed structures and processes. In this contribution, we tackle this problem by studying how some observed topological metrics depend on the way the network is reconstructed, i.e. on the rules used to sample nodes and connections. Both structural and simple dynamical properties are considered, for eight major air networks and different source datasets. Results indicate that using a subset of airports strongly distorts our perception of the network, even when just small ones are discarded; at the same time, considering a subset of airlines yields a better and more stable representation. This allows us to provide some general guidelines on the way airports and connections should be sampled.
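A minimal sketch of this kind of sampling experiment, using networkx, is given below; the data format, sampling rules, and metric set are assumptions for illustration, not the paper's exact protocol.

```python
import networkx as nx

def sampled_metrics(flights, keep_airports=None, keep_airlines=None):
    """Rebuild the flight network under a given sampling rule and report a few
    topological metrics (an illustrative metric set, not the paper's exact one).

    flights: iterable of (origin, destination, airline) tuples.
    """
    g = nx.DiGraph()
    for origin, destination, airline in flights:
        if keep_airports and not {origin, destination} <= set(keep_airports):
            continue
        if keep_airlines and airline not in keep_airlines:
            continue
        g.add_edge(origin, destination)
    und = g.to_undirected()
    n = und.number_of_nodes()
    return {
        "airports": n,
        "links": und.number_of_edges(),
        "mean_degree": 2 * und.number_of_edges() / n if n else 0.0,
        "clustering": nx.average_clustering(und) if n else 0.0,
    }
```

Comparing, for example, `sampled_metrics(flights, keep_airlines={"XY"})` against `sampled_metrics(flights, keep_airports=major_hubs)` makes the distortion introduced by each sampling rule directly visible.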
Abstract:
The branched vs. isoprenoid tetraether (BIT) index is based on the relative abundance of branched tetraether lipids (brGDGTs) and the isoprenoidal GDGT crenarchaeol. In Lake Challa sediments the BIT index has been applied as a proxy for local monsoon precipitation on the assumption that the primary source of brGDGTs is soil washed in from the lake's catchment. Since then, microbial production within the water column has been identified as the primary source of brGDGTs in Lake Challa sediments, meaning that either an alternative mechanism links BIT index variation with rainfall or that the proxy's application must be reconsidered. We investigated GDGT concentrations and BIT index variation in Lake Challa sediments at a decadal resolution over the past 2200 years, in combination with GDGT time-series data from 45 monthly sediment-trap samples and a chronosequence of profundal surface sediments.
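For reference, the BIT index is commonly computed from the abundances of the three major brGDGTs relative to crenarchaeol:

$$ \mathrm{BIT} = \frac{[\mathrm{brGDGT\ I}] + [\mathrm{brGDGT\ II}] + [\mathrm{brGDGT\ III}]}{[\mathrm{brGDGT\ I}] + [\mathrm{brGDGT\ II}] + [\mathrm{brGDGT\ III}] + [\mathrm{crenarchaeol}]}, $$

so the index approaches 1 when branched GDGTs dominate and 0 when crenarchaeol dominates.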
Our 2200-year geochemical record reveals high-frequency variability in GDGT concentrations, and therefore in the BIT index, superimposed on distinct lower-frequency fluctuations at multi-decadal to century timescales. These changes in BIT index are correlated with changes in the concentration of crenarchaeol but not with those of the brGDGTs. A clue for understanding the indirect link between rainfall and crenarchaeol concentration (and thus thaumarchaeotal abundance) was provided by the observation that surface sediments collected in January 2010 show a distinct shift in GDGT composition relative to sediments collected in August 2007. This shift is associated with increased bulk flux of settling mineral particles with high Ti / Al ratios during March–April 2008, reflecting an event of unusually high detrital input to Lake Challa concurrent with intense precipitation at the onset of the principal rain season that year. Although brGDGT distributions in the settling material are initially unaffected, this soil-erosion event is succeeded by a massive dry-season diatom bloom in July–September 2008 and a concurrent increase in the flux of GDGT-0. Complete absence of crenarchaeol in settling particles during the austral summer following this bloom indicates that no Thaumarchaeota bloom developed at that time. We suggest that increased nutrient availability, derived from the eroded soil washed into the lake, caused the massive bloom of diatoms and that the higher concentrations of ammonium (formed from breakdown of this algal matter) resulted in a replacement of nitrifying Thaumarchaeota, which in typical years prosper during the austral summer, by nitrifying bacteria. The decomposing dead diatoms passing through the suboxic zone of the water column probably also formed a substrate for GDGT-0-producing archaea. Hence, through a cascade of events, intensive rainfall affects thaumarchaeotal abundance, resulting in high BIT index values.
Decade-scale BIT index fluctuations in Lake Challa sediments exactly match the timing of three known episodes of prolonged regional drought within the past 250 years. Additionally, the principal trends of inferred rainfall variability over the past two millennia are consistent with the hydroclimatic history of equatorial East Africa, as documented from other (but less well dated) regional lake records. We therefore propose that variation in GDGT production originating from the episodic recurrence of strong soil-erosion events, when integrated over (multi-)decadal and longer timescales, generates a stable positive relationship between the sedimentary BIT index and monsoon rainfall at Lake Challa. Application of this paleoprecipitation proxy at other sites requires ascertaining the local processes that affect the production of crenarchaeol by Thaumarchaeota and of brGDGTs.
Abstract:
Background and aims: Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging.
Materials and methods: The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: ‘semi-structured’ and ‘unstructured’. The second technique was developed using the open source language engineering framework GATE and aimed at the prediction of chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested respectively on sets of 635 and 163 manually classified or annotated reports, from the Northern Ireland Cancer Registry.
Results: The best result of 99.4% accuracy (which included only one semi-structured report predicted as unstructured) was produced by the layout classifier with the k-nearest-neighbour algorithm, using the binary term occurrence word vector type with stop-word filtering and pruning. For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports, precision ranged from 0.97 to 0.94 and recall from 0.92 to 0.83, while for unstructured reports precision ranged from 0.91 to 0.64 and recall from 0.68 to 0.41. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured ones.
Conclusions: These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.
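A minimal sketch of the layout-classification step described above (binary term occurrence, stop-word filtering, k-nearest-neighbour), using scikit-learn as a stand-in for the RapidMiner workflow; the example reports and labels are placeholders, not registry data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder training data standing in for the 635 manually classified reports.
reports = [
    "Tumour size: 12 mm. ER status: positive. Nodes positive: 0/3.",
    "The specimen shows an invasive carcinoma described in free running text ...",
]
labels = ["semi-structured", "unstructured"]

layout_clf = make_pipeline(
    CountVectorizer(binary=True, stop_words="english"),  # binary term occurrence
    KNeighborsClassifier(n_neighbors=1),                 # k-NN layout classifier
)
layout_clf.fit(reports, labels)
print(layout_clf.predict(["Morphology: ductal carcinoma. Size: 8 mm. Nodes: 1/12."]))
```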
Abstract:
We investigate the secrecy performance of dual-hop amplify-and-forward (AF) multi-antenna relaying systems over Rayleigh fading channels, taking into account the direct link between the source and destination. In order to exploit the available direct link and the multiple antennas for secrecy improvement, different linear processing schemes at the relay and different diversity combining techniques at the destination are proposed, namely: 1) Zero-forcing/Maximal ratio combining (ZF/MRC), 2) ZF/Selection combining (ZF/SC), 3) Maximal ratio transmission/MRC (MRT/MRC) and 4) MRT/Selection combining (MRT/SC). For all these schemes, we present new closed-form approximations for the secrecy outage probability. Moreover, we investigate a benchmark scheme, i.e., cooperative jamming/ZF (CJ/ZF), for which the secrecy outage probability is obtained in exact closed form. In addition, we present asymptotic secrecy outage expressions for all the proposed schemes in the high signal-to-noise ratio (SNR) regime, in order to characterize key design parameters such as the secrecy diversity order and secrecy array gain. The outcomes of this paper can be summarized as follows: a) MRT/MRC and MRT/SC achieve a full diversity order of M + 1, ZF/MRC and ZF/SC achieve a diversity order of M, while CJ/ZF only achieves unit diversity order, where M is the number of antennas at the relay. b) ZF/MRC (ZF/SC) outperforms the corresponding MRT/MRC (MRT/SC) in the low SNR regime, while it becomes inferior to the corresponding MRT/MRC (MRT/SC) at high SNR. c) All of the proposed schemes tend to outperform CJ/ZF with a moderate number of antennas, and linear processing schemes with MRC attain better performance than those with SC.
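For reference, the secrecy outage probability analysed above is conventionally defined through the instantaneous secrecy capacity (standard definitions, with notation assumed here):

$$ C_{s} = \Big[\log_{2}\!\big(1+\gamma_{D}\big) - \log_{2}\!\big(1+\gamma_{E}\big)\Big]^{+}, \qquad P_{\mathrm{out}}(R_{s}) = \Pr\big(C_{s} < R_{s}\big), $$

where $\gamma_{D}$ and $\gamma_{E}$ are the post-combining SNRs at the destination and eavesdropper, $R_{s}$ is the target secrecy rate, and $[x]^{+} = \max(x, 0)$.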
Abstract:
This paper considers a wirelessly powered wiretap channel, where an energy-constrained multi-antenna information source, powered by a dedicated power beacon, communicates with a legitimate user in the presence of a passive eavesdropper. Based on a simple time-switching protocol where power transfer and information transmission are separated in time, we investigate two popular multi-antenna transmission schemes at the information source, namely maximum ratio transmission (MRT) and transmit antenna selection (TAS). Closed-form expressions are derived for the achievable secrecy outage probability and average secrecy rate for both schemes. In addition, simple approximations are obtained in the high signal-to-noise ratio (SNR) regime. Our results demonstrate that by exploiting full knowledge of the channel state information (CSI), we can achieve a better secrecy performance; e.g., with full CSI of the main channel, the system can achieve substantial secrecy diversity gain. On the other hand, without the CSI of the main channel, no diversity gain can be attained. Moreover, we show that the additional level of randomness induced by wireless power transfer does not affect the secrecy performance in the high SNR regime. Finally, our theoretical claims are validated by numerical results.
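Under a time-switching protocol of the kind considered, with a fraction $\alpha$ of each block of duration $T$ used for power transfer, the harvested energy and resulting source transmit power are commonly modelled as (notation assumed for illustration, not necessarily the paper's exact formulation):

$$ E_{h} = \eta\, \alpha T\, P_{b}\, \|\mathbf{h}\|^{2}, \qquad P_{s} = \frac{E_{h}}{(1-\alpha)T} = \frac{\eta\, \alpha\, P_{b}\, \|\mathbf{h}\|^{2}}{1-\alpha}, $$

where $\eta$ is the energy-conversion efficiency, $P_{b}$ the power beacon's transmit power, and $\mathbf{h}$ the beacon-to-source channel vector.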
Abstract:
Combustion noise is becoming increasingly important as a major noise source in aeroengines and ground-based gas turbines. This is partially because advances in design have reduced the other noise sources, and partially because next-generation combustion modes burn more unsteadily, resulting in increased external noise from the combustion. This review reports recent progress made in understanding combustion noise through theoretical, numerical and experimental investigations. We first discuss the fundamentals of sound emission from a combustion region. Then the noise of open turbulent flames is summarized. We subsequently address the effects of confinement on combustion noise. In this case, not only is the sound generated by the combustion influenced by its transmission through the boundaries of the combustion chamber, but there is also the possibility of a significant additional source, the so-called ‘indirect’ combustion noise. This involves hot spots (entropy fluctuations) or vorticity perturbations produced by temporal variations in combustion, which generate pressure waves (sound) as they accelerate through any restriction at the exit of the combustor. We describe the general characteristics of direct and indirect noise. To gain further insight into the physical phenomena of direct and indirect sound, we investigate a simple configuration consisting of a cylindrical or annular combustor with a low Mach number flow in which a flame zone burns unsteadily. Using a low Mach number approximation, exact algebraic solutions are developed so that the parameters controlling the generation of acoustic, entropic and vortical waves can be investigated. The validity of the low Mach number approximation is then verified by solving the linearized Euler equations numerically for a wide range of inlet Mach numbers, stagnation temperature ratios, and frequencies and mode numbers of the heat release fluctuations. The effects of these parameters on the magnitude of the waves produced by the unsteady combustion are investigated. In particular, the magnitude of the indirect and direct noise generated in a model combustor with a choked outlet is analyzed for a wide range of frequencies, inlet Mach numbers and stagnation temperature ratios. Finally, we summarize some of the unsolved questions that need to be the focus of future research.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Current hearing-assistive technology performs poorly in noisy multi-talker conditions. The goal of this thesis was to establish the feasibility of using EEG to guide acoustic processing in such conditions. To attain this goal, this research developed a model via the constructive research method, relying on a literature review. Several approaches have been shown to improve the performance of hearing-assistive devices under multi-talker conditions, namely beamforming spatial filtering, model-based sparse coding shrinkage, and onset enhancement of the speech signal. Prior research has shown that electroencephalography (EEG) signals contain information about whether the person is actively listening, what the listener is listening to, and where the attended sound source is. This thesis constructed a model for using EEG information to control beamforming, model-based sparse coding shrinkage, and onset enhancement of the speech signal. The purpose of this model is to propose a framework for using EEG signals to control sound processing so as to select a single talker in a noisy environment containing multiple talkers speaking simultaneously. On a theoretical level, the model showed that EEG can control acoustical processing. An analysis of the model identified a requirement for real-time processing and showed that the model inherits the computationally intensive properties of acoustical processing, although the model itself is of low complexity, placing a relatively small load on computational resources. A research priority is to develop a prototype that controls hearing-assistive devices with EEG. This thesis concludes by highlighting challenges for future research.
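As a sketch of the beamforming component that the model proposes to steer with EEG-decoded attention, the following is a minimal frequency-domain delay-and-sum beamformer; the plane-wave assumption, array geometry, and the idea that `look_direction` comes from an EEG attention decoder are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_signals, mic_positions, look_direction, fs):
    """Time-align the microphone channels for a plane wave arriving from
    `look_direction` (unit vector) and average them. In the proposed model,
    `look_direction` would be selected by the EEG-based attention decoder."""
    # Relative arrival-time differences across the array for the look direction.
    delays = mic_positions @ look_direction / SPEED_OF_SOUND
    delays -= delays.min()
    n_mics, n_samples = mic_signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        spectrum = np.fft.rfft(mic_signals[m])
        # Apply a fractional-sample phase shift to compensate each channel's delay.
        out += np.fft.irfft(spectrum * np.exp(2j * np.pi * freqs * delays[m]),
                            n=n_samples)
    return out / n_mics
```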