914 results for power spectrum peak
Abstract:
We present a model for detection of the states of coupled quantum dots (a qubit) by a quantum point contact. Most proposals for measuring the states of quantum systems are idealized; in a real laboratory, however, measurements cannot be perfect because of the practical devices and circuits involved. Models that assume ideal devices are therefore not sufficient to describe the information obtained when detecting the states of quantum systems. Our model consequently includes an extension to the case of a non-ideal measurement device, using an equivalent circuit. We derive a quantum trajectory that describes the stochastic evolution of the state of the combined system of the qubit and the measuring device. We calculate the noise power spectrum of tunnelling events for an ideal and a non-ideal quantum point contact measurement, respectively. We find that, in the strong-coupling case, it is difficult to obtain information about the quantum processes in the qubit from measurements with a non-ideal quantum point contact. The noise spectra can also be used to estimate the limits of applicability of the ideal model.
Abstract:
Boyd's SBS model, which includes distributed thermal acoustic noise (DTAN), has been enhanced to enable the Stokes-spontaneous density depletion noise (SSDDN) component of the transmitted optical field to be simulated, probably for the first time, as well as the full transmitted field. SSDDN would not be generated by previous SBS models in which a Stokes seed replaces DTAN. SSDDN becomes the dominant form of transmitted SBS noise as the model fibre length (MFL) is increased, but its optical power spectrum remains independent of MFL. Simulations of the full transmitted field and of SSDDN for different MFLs allow prediction of the optical power spectrum, or of system performance parameters which depend on it, for typical communication link lengths that are too long for direct simulation. The SBS model has also been improved by allowing the Brillouin Shift Frequency (BSF) to vary over the model fibre length, in the nonuniform fibre model (NFM) mode, or to remain constant, in the uniform fibre model (UFM) mode. The assumption of a Gaussian probability density function (pdf) for the BSF in the NFM has been confirmed by an analysis of reported Brillouin amplified power spectral measurements for the simple case of a nominally step-index single-mode pure silica core fibre. The BSF pdf could be modified to match the Brillouin gain spectra of other fibre types if required. For both models, simulated backscattered and output powers as functions of input power agree well with those from a reported experiment when the fitted Brillouin gain coefficients are close to theoretical values. The NFM and UFM Brillouin gain spectra are then very similar from half to full maximum but diverge at lower values. Consequently, NFM and UFM transmitted SBS noise powers inferred for long MFLs differ by 1-2 dB over the input power range of 0.15 dBm. This difference could be significant for AM-VSB CATV links at some channel frequencies. The modelled characteristic of Carrier-to-Noise Ratio (CNR) as a function of input power for a single intensity-modulated subcarrier is in good agreement with the characteristic reported for an experiment, whether the UFM or the NFM is used. The difference between the two modelled characteristics would have been more noticeable for a longer fibre length or a lower subcarrier frequency.
Abstract:
Cardiotocographic data provide physicians with information about foetal development and permit the assessment of conditions such as foetal distress. An incorrect evaluation of foetal status can, of course, be very dangerous. To improve the interpretation of cardiotocographic recordings, great interest has been dedicated to foetal heart rate variability spectral analysis. It is worth noting, however, that the foetal heart rate is intrinsically an unevenly sampled series, so zero-order, linear or cubic spline interpolation can be employed to produce an evenly sampled series. This is not suitable for frequency analyses because interpolation introduces alterations in the foetal heart rate power spectrum. In particular, the interpolation process can produce alterations of the power spectral density that, for example, affect the estimation of the sympatho-vagal balance (computed as the low-frequency/high-frequency ratio), which is an important clinical parameter. In this work, in order to estimate the alterations of the foetal heart rate variability spectrum due to interpolation and cardiotocographic storage rates, we simulated uneven foetal heart rate series with set characteristics and their evenly spaced versions (with different orders of interpolation and storage rates), and computed the sympatho-vagal balance values from the power spectral density. For power spectral density estimation we chose the Lomb method, as suggested by other authors for studying uneven heart rate series in adults. In summary, the results show that evaluating the sympatho-vagal balance on the evenly spaced foetal heart rate series leads to its overestimation, owing both to the interpolation process and to the storage rate; cubic spline interpolation, however, produces more robust and accurate results. © 2010 Elsevier Ltd. All rights reserved.
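A minimal sketch of this kind of analysis, assuming SciPy's Lomb periodogram and purely illustrative LF/HF band edges (the band limits, frequency grid and simulation settings below are assumptions, not values taken from the abstract above):

```python
import numpy as np
from scipy.signal import lombscargle

def band_power(freqs, pgram, band):
    # Integrate the periodogram over one frequency band
    m = (freqs >= band[0]) & (freqs < band[1])
    return np.trapz(pgram[m], freqs[m])

def svb_lomb(t, fhr, lf_band=(0.03, 0.15), hf_band=(0.15, 1.0)):
    """t: beat times in seconds (unevenly spaced); fhr: heart-rate samples (bpm)."""
    y = fhr - np.mean(fhr)                        # remove the mean (DC) component
    freqs = np.linspace(0.01, 1.5, 2000)          # Hz; grid covering LF and HF bands
    pgram = lombscargle(t, y, 2 * np.pi * freqs)  # lombscargle expects angular frequencies
    return band_power(freqs, pgram, lf_band) / band_power(freqs, pgram, hf_band)
```

Comparing this estimate on the original uneven beat times against the same estimate computed on an interpolated, evenly resampled version of the series would expose the kind of SVB bias described above.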
Abstract:
Cardiotocographic data provide physicians with information about foetal development and, through the assessment of specific parameters (such as accelerations, uterine contractions, ...), permit the assessment of conditions such as foetal distress. An incorrect evaluation of foetal status can, of course, be very dangerous. In recent decades, to improve the interpretation of cardiotocographic recordings, great interest has been dedicated to FHRV spectral analysis. It is worth noting that the FHR is intrinsically an unevenly sampled series and that, to obtain an evenly sampled series, many commercial cardiotocographs use zero-order interpolation (with a CTG data storage rate of 4 Hz). This is not suitable for frequency analyses because interpolation introduces alterations in the FHR power spectrum. In particular, this interpolation process can produce artifacts and an attenuation of the high-frequency components of the PSD that, for example, affect the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency ratio), which is an important clinical parameter. In this work, in order to estimate the frequency spectrum alterations due to zero-order interpolation and other CTG storage rates, we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different storage rates), and computed SVB values from the PSD. For PSD estimation we chose the Lomb method, as suggested by other authors for application to uneven HR series. ©2009 IEEE.
Abstract:
Nicotine administration in humans and rodents enhances memory and attention, and also has a positive effect in Alzheimer's disease. The Medial Septum / Diagonal Band of Broca complex (MS/DBB) – a main cholinergic system – projects massively to the hippocampus through the fimbria-fornix, and this pathway is called the septohippocampal pathway. It has been demonstrated that the MS/DBB acts directly on the local field potential (LFP) rhythmic organization of the hippocampus, especially in the rhythmogenesis of Theta (4-8 Hz) – an oscillation intrinsically linked to hippocampal mnemonic function. In vitro experiments gave evidence that nicotine applied to the MS/DBB generates a local network Theta rhythm within the MS/DBB. Thus, the present study aims to elucidate the effect of nicotine in the MS/DBB on the septohippocampal pathway. In vivo experiments compared the effect of MS/DBB microinfusion of saline (n=5) and nicotine (n=8) in ketamine/xylazine-anaesthetized mice. We observed an increase in power spectral density in the Gamma range (35 to 55 Hz) in both structures (Wilcoxon rank-sum test, p=0.038), but no change in coherence between these structures in the same range (Wilcoxon rank-sum test, p=0.60). There was also a decrease in the power of the ketamine-induced Delta oscillation (1 to 3 Hz). We also performed in vitro experiments on the effect of nicotine on membrane voltage and action potentials. We patch-clamped 22 neurons in current-clamp mode; 12 neurons were responsive to nicotine, half of them increasing their firing rate and the other 6 decreasing it, and the two groups differed significantly in action potential threshold (-47.3±0.9 mV vs. -41±1.9 mV, p=0.007) and half-width (1.6±0.08 ms vs. 2±0.12 ms, p=0.01). Furthermore, we performed another set of in vitro experiments concerning the connectivity of the three major neuronal populations of the MS/DBB, which use acetylcholine, GABA or glutamate as their neurotransmitter. Paired patch-clamp recordings showed that glutamatergic and GABAergic neurons form intra-septal connections that produce sizable currents in postsynaptic MS/DBB neurons. The connection probabilities between the different neuronal populations defined an MS/DBB topology that was implemented in a realistic model, which corroborates that the network readily supports the generation of Gamma rhythm. Together, the data from the full set of experiments suggest that nicotine may act as a cognitive enhancer by inducing Gamma oscillations in the local circuitry of the MS/DBB.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would yield image quality metrics that best correlated with human detection performance. The models ranged from simple metrics of image quality, such as the contrast-to-noise ratio (CNR), to more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
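As an illustration of the simpler end of that model family, the sketch below (an assumption-laden outline, not the dissertation's implementation) estimates the detectability index d' of a non-prewhitening matched-filter observer from a lesion template and an ensemble of signal-absent ROIs:

```python
import numpy as np

def npw_dprime(template, noise_rois):
    """template: 2-D expected lesion profile; noise_rois: (N, H, W) signal-absent ROIs."""
    s = template.ravel()
    # Template response to each noise-only realization (mean-subtracted per ROI)
    responses = np.array([np.dot(s, roi.ravel() - roi.mean()) for roi in noise_rois])
    signal_response = np.dot(s, s)            # expected response difference with vs. without lesion
    return signal_response / responses.std()  # d'_NPW = (s.s) / sqrt(var of template response to noise)
```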
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
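A minimal sketch of ensemble NPS estimation from repeated scans, using plain rectangular ROIs (the irregular-ROI method referred to above is more involved and is not reproduced here; the array shapes and scaling convention are assumptions):

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """rois: (N, H, W) array of ROIs taken at the same location in N repeated scans."""
    mean_image = rois.mean(axis=0)            # ensemble mean removes the deterministic phantom structure
    noise = rois - mean_image                 # noise-only realizations
    h, w = noise.shape[1:]
    dft2 = np.abs(np.fft.fft2(noise, axes=(1, 2))) ** 2
    nps = dft2.mean(axis=0) * pixel_size_mm ** 2 / (h * w)  # ensemble-averaged, area-normalized
    return np.fft.fftshift(nps)               # zero spatial frequency at the centre
```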
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized, using a genetic algorithm, to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
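One plausible analytical lesion form of the kind described above (the specific radial profile, parameters and voxelization below are illustrative assumptions, not the dissertation's exact models):

```python
import numpy as np

def lesion_model(shape_vox, radius_vox, contrast_hu, edge_sharpness=4.0):
    """Return a voxelized, rotationally symmetric lesion: contrast_hu at the centre,
    ~0 well outside the radius, with a tunable edge profile."""
    zz, yy, xx = np.indices(shape_vox)
    centre = (np.array(shape_vox) - 1) / 2.0
    r = np.sqrt((zz - centre[0])**2 + (yy - centre[1])**2 + (xx - centre[2])**2)
    profile = 1.0 / (1.0 + (r / radius_vox) ** edge_sharpness)  # smooth, sigmoid-like edge
    return contrast_hu * profile

# A "hybrid" image is then formed by adding the voxelized lesion to a patient ROI, e.g.
# hybrid_roi = patient_roi + lesion_model(patient_roi.shape, radius_vox=5, contrast_hu=-15)
```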
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The compensation of the phase mismatch in frequency doubling, using the spatial frequency chirp introduced by prisms, is analysed and calculated. The result shows that a suitable lens can compensate, to a certain extent, the phase mismatch resulting from the wide femtosecond spectrum when the spectrum is spatially chirped. Using this method, a second harmonic generation (SHG) experiment is carried out with a home-made femtosecond KLM Ti:sapphire laser and a BBO crystal. The conversion efficiency of SHG is 63%. The average output power of the blue light is 320 mW. The central wavelength is 420 nm. The spectral bandwidth is 5.5 nm, which can sustain a pulse width of 33.6 fs. The tuning range of the blue light is 404-420 nm when the femtosecond Ti:sapphire pulse is tuned using the prisms in the cavity.
Abstract:
Parallel combinatory orthogonal frequency division multiplexing (PC-OFDM) yields a lower maximum peak-to-average power ratio (PAR), higher bandwidth efficiency and a lower bit error rate (BER) on Gaussian channels compared to OFDM systems. However, PC-OFDM does not improve the statistics of the PAR significantly. In this chapter, the use of a set of fixed permutations to improve the statistics of the PAR of a PC-OFDM signal is presented. In this technique, interleavers are used to produce K-1 permuted sequences from the same information sequence. The sequence with the lowest PAR among the K sequences is chosen for transmission. The PAR of a PC-OFDM signal can be further reduced by 3-4 dB with this technique. Mathematical expressions for the complementary cumulative distribution function (CCDF) of the PAR of the PC-OFDM signal and the interleaved PC-OFDM signal are also presented.
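A minimal sketch of the selection step described above, assuming the subcarrier symbols are already available as a complex vector (modulation details and side-information signalling are omitted):

```python
import numpy as np

def par_db(freq_symbols):
    """Peak-to-average power ratio (dB) of the OFDM time-domain signal."""
    x = np.fft.ifft(freq_symbols)             # time-domain OFDM symbol
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def select_min_par(freq_symbols, interleavers):
    """interleavers: list of K-1 fixed permutations of the subcarrier indices."""
    candidates = [freq_symbols] + [freq_symbols[p] for p in interleavers]
    pars = [par_db(c) for c in candidates]
    best = int(np.argmin(pars))               # index of the chosen sequence (sent as side information)
    return candidates[best], best, pars[best]
```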
Abstract:
The concept of moving block signalling (MBS) has been adopted in a few mass transit railway systems. When a dense queue of trains begins to move from a complete stop, the trains can re-start in very close succession under MBS. The nearby feeding substations are then likely to be overloaded, and service will inevitably be disturbed unless substations of higher power rating are used. By introducing starting time delays among the trains or limiting the trains’ acceleration rate to a certain extent, the peak energy demand can be contained; however, delay is introduced and the quality of service is degraded. An expert system approach is presented to provide a supervisory tool for the operators. As the knowledge base is vital to the quality of the decisions made, the study focuses on its formulation, balancing delay against peak power demand.
Abstract:
Results of experimental investigations on the relationship between the nanoscale morphology of carbon-doped hydrogenated silicon oxide (SiOCH) low-k films and their electron spectrum of defect states are presented. The SiOCH films were deposited using a trimethylsilane (3MS)-oxygen mixture in a 13.56 MHz plasma enhanced chemical vapor deposition (PECVD) system at variable RF power densities (from 1.3 to 2.6 W/cm^2) and gas pressures of 3, 4, and 5 Torr. The atomic structure of the SiOCH films is a mixture of amorphous-nanocrystalline SiO2-like and SiC-like phases. Results of FTIR spectroscopy and atomic force microscopy suggest that the volume fraction of the SiC-like phase increases from ∼0.2 to 0.4 with RF power. The average size of the nanoscale surface morphology elements of the SiO2-like matrix can be controlled by the RF power density and the source gas flow rates. The electron density of defect states N(E) of the SiOCH films has been investigated with the DLTS technique in the energy range up to 0.6 eV from the bottom of the conduction band. Distinct N(E) peaks at 0.25-0.35 eV and 0.42-0.52 eV below the conduction band bottom have been observed. The first N(E) peak is identified as originating from E1-like centers in the SiC-like phase. The volume density of the defects can vary from 10^11 to 10^17 cm^-3 depending on the specific conditions of the PECVD process.
Abstract:
Scan circuits generally cause excessive switching activity compared to normal circuit operation. The higher switching activity in turn causes a higher peak power supply current, which results in supply voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum peak power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by including a minimum number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. Results on several benchmarks show that the method can reduce peak power by up to 27%.
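A greedy illustration of the idea, using the Hamming distance between consecutive scan vectors as a proxy for shift-induced switching (this heuristic is an assumption for illustration, not the paper's exact re-ordering algorithm or its lower-bound computation):

```python
import numpy as np

def reorder_vectors(vectors):
    """vectors: (N, bits) 0/1 array of scan test vectors.
    Greedily orders vectors so each next vector is the closest remaining one."""
    remaining = list(range(len(vectors)))
    order = [remaining.pop(0)]                 # start from the first vector
    while remaining:
        last = vectors[order[-1]]
        dists = [np.count_nonzero(vectors[i] != last) for i in remaining]
        order.append(remaining.pop(int(np.argmin(dists))))  # pick the closest vector next
    peak = max(np.count_nonzero(vectors[a] != vectors[b])
               for a, b in zip(order, order[1:]))
    return order, peak                         # peak transition count under this ordering
```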
Abstract:
We consider an optimal power and rate scheduling problem for a multiaccess fading wireless channel with the objective of minimising a weighted sum of mean packet transmission delays subject to a peak power constraint. The base station acts as a controller which, depending upon the buffer lengths and the channel state of each user, allocates transmission rate and power to individual users. We assume perfect channel state information at the transmitter and the receiver. We also assume a Markov model for the fading and packet arrival processes. The policy obtained exhibits a form of indexability.
Abstract:
Scan is a widely practised DFT technology. The scan testing procedure consists of state initialization, test application, response capture and observation. During the state initialization process, the scan vectors are shifted into the scan cells and, simultaneously, the responses captured in the last cycle are shifted out. During this shift operation, the transitions that arise in the scan cells are propagated to the combinational circuit, which in turn creates many more toggling activities in the combinational block and hence increases the dynamic power consumption. The dynamic power consumed during the scan shift operation is much higher than that of normal-mode operation.
Abstract:
This paper proposes a method of short-term load forecasting with limited data, applicable even at 11 kV substation levels where total power demand is relatively low and somewhat random, and where weather data are usually not available, as in most developing countries. A Kalman filtering technique has been modified and used to forecast daily and hourly loads. Generation planning and interstate energy exchange scheduling at the load dispatch centre, and decentralized Demand Side Management at the substation level, are intended to be carried out with the help of this short-term load forecasting technique, especially to achieve peak power control without enforcing load-shedding.
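A minimal sketch of Kalman-filter load forecasting under a simple local-level-plus-trend state model (the state model, noise variances and initialization below are assumptions; the paper's modified formulation is not reproduced):

```python
import numpy as np

def kalman_load_forecast(loads, q=1.0, r=25.0):
    """loads: observed hourly demand (e.g., MW). Returns one-step-ahead forecasts."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])     # state transition: [level, trend]
    H = np.array([[1.0, 0.0]])                 # we observe the level only
    Q = q * np.eye(2)                          # process-noise covariance (assumed)
    R = np.array([[r]])                        # measurement-noise variance (assumed)
    x = np.array([loads[0], 0.0])              # initial state
    P = np.eye(2) * 100.0                      # initial state covariance (assumed)
    forecasts = []
    for z in loads:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        forecasts.append(float(H @ x))          # one-step-ahead forecast before seeing z
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([z]) - H @ x)     # update with the new observation
        P = (np.eye(2) - K @ H) @ P
    return np.array(forecasts)
```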