970 results for POWER SPECTRUM


Relevance:

60.00%

Publisher:

Abstract:

The practice of yoga has become increasingly popular, not only for its physical benefits but mainly for the psychological well-being it brings. One of the components of yoga is Pranayama, or breath control. Attention and respiration are two physiological and involuntary mechanisms required for the execution of Pranayama. The main objective of this study was to verify whether continuous EEG variables (the power of the different frequency bands that compose it) would be modulated by respiratory control, comparing the two phases of the respiratory cycle (inspiration and expiration) separately under spontaneous and controlled breathing. Nineteen subjects took part in the study (7 men/12 women, mean age 36.89, SD = ±14.46), who were invited to take part in the research on the premises of the Faculty of Health of the Universidade Metodista de São Paulo. The electroencephalogram was recorded with a system of five Ag/AgCl electrodes (FPz, Fz, Cz, Pz and Oz) attached to a quick-positioning cap (Quick-Cap, Neuromedical Supplies®), following the 10-20 system. Maximum power amplitude values (power spectrum in the frequency domain) were obtained for the theta, alpha, beta and delta bands, and the theta/beta ratio was calculated for the two phases of the respiratory cycle (inspiration and expiration), separately, under spontaneous breathing and respiratory control. The respiratory cycle was recorded with an M01 respiratory effort belt (plethysmograph). The results show significant differences between the spontaneous and controlled breathing conditions, with mean theta/beta ratio values lower during controlled breathing than during spontaneous breathing and mean alpha power values consistently higher during respiratory control.
Significant differences were also found between inspiration and expiration during controlled breathing, with a decrease in mean theta/beta ratio values during inspiration and an increase in mean alpha power values, especially during expiration. The findings of this study provide evidence that respiratory control modulates electrophysiological variables related to attention, reflecting a state of alertness that is nevertheless more relaxed than during spontaneous breathing.
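
The spectral quantities in this abstract (band power and the theta/beta ratio) can be sketched in a few lines. The snippet below is a minimal illustration, not the study's actual pipeline: it assumes a 256 Hz sampling rate, conventional band limits (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz), and a synthetic one-channel trace in place of real EEG.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Mean PSD of x inside [f_lo, f_hi] Hz, estimated with Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    return pxx[(f >= f_lo) & (f <= f_hi)].mean()

fs = 256.0                         # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic trace: strong alpha (10 Hz), weaker theta (6 Hz) and beta (20 Hz)
eeg = (1.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 6 * t)
       + 0.3 * np.sin(2 * np.pi * 20 * t)
       + 0.2 * rng.standard_normal(t.size))

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 13, 30)
theta_beta_ratio = theta / beta    # the attention-related ratio from the abstract
```

In a real analysis each respiratory phase (inspiration/expiration) would be epoched separately, using the plethysmograph trace, before computing these band powers.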

Relevance:

60.00%

Publisher:

Abstract:

Boyd's SBS model, which includes distributed thermal acoustic noise (DTAN), has been enhanced to enable the Stokes-spontaneous density depletion noise (SSDDN) component of the transmitted optical field to be simulated, probably for the first time, as well as the full transmitted field. SSDDN would not be generated by previous SBS models in which a Stokes seed replaces DTAN. SSDDN becomes the dominant form of transmitted SBS noise as model fibre length (MFL) is increased, but its optical power spectrum remains independent of MFL. Simulations of the full transmitted field and SSDDN for different MFLs allow prediction of the optical power spectrum, or of system performance parameters which depend on it, for typical communication link lengths which are too long for direct simulation. The SBS model has also been improved by allowing the Brillouin shift frequency (BSF) to vary over the model fibre length, in the nonuniform fibre model (NFM) mode, or to remain constant, in the uniform fibre model (UFM) mode. The assumption of a Gaussian probability density function (pdf) for the BSF in the NFM has been confirmed by an analysis of reported Brillouin amplified power spectral measurements for the simple case of a nominally step-index single-mode pure silica core fibre. The BSF pdf could be modified to match the Brillouin gain spectra of other fibre types if required. For both models, simulated backscattered and output powers as functions of input power agree well with those from a reported experiment, for fitted Brillouin gain coefficients close to theoretical values. The NFM and UFM Brillouin gain spectra are then very similar from half to full maximum but diverge at lower values. Consequently, NFM and UFM transmitted SBS noise powers inferred for long MFLs differ by 1-2 dB over the input power range of 0.15 dBm. This difference could be significant for AM-VSB CATV links at some channel frequencies.
The modelled characteristic of Carrier-to-Noise Ratio (CNR) as a function of input power for a single intensity modulated subcarrier is in good agreement with the characteristic reported for an experiment when either the UFM or NFM is used. The difference between the two modelled characteristics would have been more noticeable for a higher fibre length or a lower subcarrier frequency.

Relevance:

60.00%

Publisher:

Abstract:

Cardiotocographic data provide physicians with information about foetal development and permit the assessment of conditions such as foetal distress. An incorrect evaluation of the foetal status can, of course, be very dangerous. To improve the interpretation of cardiotocographic recordings, great interest has been dedicated to foetal heart rate variability spectral analysis. It is worth remembering, however, that the foetal heart rate is intrinsically an unevenly sampled series, so in order to produce an evenly sampled series a zero-order, linear or cubic spline interpolation can be employed. This is problematic for frequency analyses because interpolation introduces alterations in the foetal heart rate power spectrum. In particular, the interpolation process can produce alterations of the power spectral density that, for example, affect the estimation of the sympatho-vagal balance (computed as the low-frequency/high-frequency ratio), which represents an important clinical parameter. In order to estimate the frequency spectrum alterations of the foetal heart rate variability signal due to interpolation and cardiotocographic storage rates, in this work we simulated uneven foetal heart rate series with set characteristics, generated their evenly spaced versions (with different orders of interpolation and storage rates), and computed the sympatho-vagal balance values from the power spectral density. For power spectral density estimation we chose the Lomb method, as suggested by other authors for the study of uneven heart rate series in adults. In summary, the obtained results show that evaluating the sympatho-vagal balance on the evenly spaced foetal heart rate series leads to its overestimation, due both to the interpolation process and to the storage rate. However, cubic spline interpolation produces more robust and accurate results. © 2010 Elsevier Ltd. All rights reserved.
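
The abstract's point that zero-order interpolation distorts the resampled series far more than a cubic spline can be illustrated directly. This is a hedged sketch on synthetic data: a smooth FHR-like tone sampled at uneven "beat" times, then resampled to a 4 Hz grid both by sample-and-hold and by scipy's CubicSpline; the signal shape and rates are illustrative, not the paper's.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
true_fhr = lambda t: 140 + 5 * np.sin(2 * np.pi * 0.4 * t)  # FHR-like 0.4 Hz component

t_un = np.sort(rng.uniform(0, 60, 240))        # uneven beat times (s)
samples = true_fhr(t_un)
t_ev = np.arange(t_un[0], t_un[-1], 0.25)      # evenly spaced 4 Hz storage grid

# Zero-order hold: each grid point takes the most recent uneven sample
idx = np.clip(np.searchsorted(t_un, t_ev, side="right") - 1, 0, len(t_un) - 1)
zoh = samples[idx]
# Cubic spline through the same uneven samples
cub = CubicSpline(t_un, samples)(t_ev)

rms_zoh = np.sqrt(np.mean((zoh - true_fhr(t_ev)) ** 2))
rms_cub = np.sqrt(np.mean((cub - true_fhr(t_ev)) ** 2))   # markedly smaller
```

The step discontinuities of the zero-order hold are exactly what injects spurious power into the FHR spectrum and biases LF/HF estimates.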

Relevance:

60.00%

Publisher:

Abstract:

Cardiotocographic data provide physicians with information about foetal development and, through the assessment of specific parameters (such as accelerations and uterine contractions), permit the assessment of conditions such as foetal distress. An incorrect evaluation of the foetal status can, of course, be very dangerous. In recent decades, to improve the interpretation of cardiotocographic recordings, great interest has been dedicated to FHRV spectral analysis. It is worth remembering that the FHR is intrinsically an unevenly sampled series and that, to obtain an evenly sampled series, many commercial cardiotocographs use a zero-order interpolation (with a CTG data storage rate of 4 Hz). This is problematic for frequency analyses because interpolation introduces alterations in the FHR power spectrum. In particular, this interpolation process can produce artifacts and an attenuation of the high-frequency components of the PSD that, for example, affect the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency ratio), which represents an important clinical parameter. In order to estimate the frequency spectrum alterations due to zero-order interpolation and other CTG storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different storage rates), and computed SVB values from the PSD. For PSD estimation we chose the Lomb method, as suggested by other authors for application to uneven HR series. ©2009 IEEE.
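
The Lomb method mentioned here estimates a PSD directly from unevenly spaced samples, so no interpolation (and none of its spectral artifacts) is needed. Below is a minimal sketch using scipy.signal.lombscargle on a synthetic uneven FHR-like series; the LF/HF band limits and the signal itself are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import lombscargle

def svb_lomb(t, fhr, lf=(0.03, 0.15), hf=(0.15, 0.5)):
    """Sympatho-vagal balance (LF/HF power ratio) of an unevenly sampled
    heart-rate series via the Lomb periodogram; no interpolation involved."""
    y = fhr - fhr.mean()
    freqs = np.linspace(0.01, 1.0, 500)                 # Hz
    pgram = lombscargle(t, y, 2 * np.pi * freqs)        # takes angular frequencies
    lf_p = pgram[(freqs >= lf[0]) & (freqs < lf[1])].sum()
    hf_p = pgram[(freqs >= hf[0]) & (freqs < hf[1])].sum()
    return lf_p / hf_p

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 120, 600))                   # uneven sample times (s)
# Synthetic series with a dominant HF (0.4 Hz) and a weaker LF (0.08 Hz) component
fhr = 140 + 2 * np.sin(2 * np.pi * 0.4 * t) + 0.5 * np.sin(2 * np.pi * 0.08 * t)

svb = svb_lomb(t, fhr)      # < 1 here, since the HF component dominates
```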

Relevance:

60.00%

Publisher:

Abstract:

Nicotine administration in humans and rodents enhances memory and attention, and also has a positive effect in Alzheimer's disease. The medial septum / diagonal band of Broca complex (MS/DBB), a main cholinergic system, projects massively to the hippocampus through the fimbria-fornix, a pathway called the septohippocampal pathway. It has been demonstrated that the MS/DBB acts directly on the rhythmic organization of the local field potential (LFP) of the hippocampus, especially in the rhythmogenesis of theta (4-8 Hz), an oscillation intrinsically linked to hippocampal mnemonic function. In vitro experiments provided evidence that nicotine applied to the MS/DBB generates a local network theta rhythm within the MS/DBB. Thus, the present study set out to elucidate the effect of nicotine in the MS/DBB on the septohippocampal pathway. In vivo experiments compared the effect of MS/DBB microinfusion of saline (n=5) and nicotine (n=8) in ketamine/xylazine-anaesthetized mice. We observed an increase in power spectral density in the gamma range (35 to 55 Hz) in both structures (Wilcoxon rank-sum test, p=0.038) but no change in coherence between these structures in the same range (Wilcoxon rank-sum test, p=0.60). There was also a decrease in the power of the ketamine-induced delta oscillation (1 to 3 Hz). We also performed in vitro experiments on the effect of nicotine on membrane voltage and action potentials. We patch-clamped 22 neurons in current-clamp mode; 12 neurons were responsive to nicotine, of which 6 increased their firing rate and the other 6 decreased it, and the two groups differed significantly in action potential threshold (-47.3±0.9 mV vs. -41±1.9 mV, respectively, p=0.007) and half-width time (1.6±0.08 ms vs. 2±0.12 ms, respectively, p=0.01). Furthermore, we performed another set of in vitro experiments concerning the connectivity of the three major neuronal populations of the MS/DBB, which use acetylcholine, GABA or glutamate as neurotransmitters.
Paired patch-clamp recordings showed that glutamatergic and GABAergic neurons form intra-septal connections that produce sizable currents in MS/DBB postsynaptic neurons. The probabilities of connectivity between the different neuronal populations gave rise to an MS/DBB topology that was implemented in a realistic model, which corroborates that the network is highly sensitive to the generation of gamma rhythm. Together, the data from the full set of experiments suggest that nicotine may act as a cognitive enhancer by inducing gamma oscillations in the local circuitry of the MS/DBB.
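
The gamma-band comparison reported here (power spectral density in 35-55 Hz, Wilcoxon rank-sum across groups) can be sketched as follows. The data are synthetic stand-ins, not the study's recordings: hypothetical "saline" and "nicotine" LFP traces with group sizes matching the abstract, where the nicotine group simply carries an added 40 Hz component.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ranksums

def gamma_power(lfp, fs, band=(35, 55)):
    """Mean PSD of an LFP trace in the gamma band, via Welch's method."""
    f, pxx = welch(lfp, fs=fs, nperseg=min(len(lfp), 2048))
    return pxx[(f >= band[0]) & (f <= band[1])].mean()

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)

# Hypothetical groups (n=5 saline, n=8 nicotine); nicotine traces carry an
# added 40 Hz component to mimic the reported gamma increase
saline = [rng.standard_normal(t.size) for _ in range(5)]
nicotine = [rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 40 * t)
            for _ in range(8)]

p_sal = [gamma_power(x, fs) for x in saline]
p_nic = [gamma_power(x, fs) for x in nicotine]
stat, p_value = ranksums(p_sal, p_nic)   # nonparametric group comparison
```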

Relevance:

60.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for modern CT scanners that implement the aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., filtered back-projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
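
The non-prewhitening matched filter observer named above admits a compact sketch: the template equals the expected lesion profile, and the detectability index is the template's response to the signal divided by the standard deviation of its response to signal-absent noise. Everything below (lesion size, contrast, noise levels) is an illustrative assumption, not the dissertation's data.

```python
import numpy as np

def dprime_npw(signal_img, noise_rois):
    """Detectability index of the non-prewhitening (NPW) matched filter
    observer: template = expected signal; d' = (s.s) / std of the template's
    response to signal-absent noise ROIs."""
    s = signal_img.ravel()
    responses = [s @ roi.ravel() for roi in noise_rois]
    return (s @ s) / np.std(responses)

rng = np.random.default_rng(4)
x = np.arange(32) - 15.5
xx, yy = np.meshgrid(x, x)
# Hypothetical Gaussian lesion: ~20 HU contrast, ~3-pixel radius
lesion = 20.0 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))

noise_lo = [5.0 * rng.standard_normal((32, 32)) for _ in range(200)]
noise_hi = [15.0 * rng.standard_normal((32, 32)) for _ in range(200)]

d_lo = dprime_npw(lesion, noise_lo)   # lower noise -> higher detectability
d_hi = dprime_npw(lesion, noise_hi)
```

Channelized Hotelling observers add a prewhitening step through a channelized noise covariance, which is why they track correlated (non-white) CT noise better than CNR does.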

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
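
The image subtraction technique works because the deterministic background is identical in repeated scans of the same phantom and cancels in the difference, leaving only quantum noise. A minimal sketch with synthetic images (the noise level and background values are assumptions for illustration):

```python
import numpy as np

def noise_from_repeats(scan_a, scan_b):
    """Noise magnitude from two repeated scans: the identical background
    cancels in the difference; dividing by sqrt(2) undoes the doubling of
    the noise variance that subtraction introduces."""
    diff = scan_a.astype(float) - scan_b.astype(float)
    return diff.std() / np.sqrt(2)

rng = np.random.default_rng(5)
# Synthetic "textured" background shared by both scans (values in HU, assumed)
background = 50.0 * np.sin(np.linspace(0, 3 * np.pi, 256))[None, :] * np.ones((256, 1))
sigma = 12.0                                   # assumed per-scan noise level (HU)
scan1 = background + sigma * rng.standard_normal((256, 256))
scan2 = background + sigma * rng.standard_normal((256, 256))

noise = noise_from_repeats(scan1, scan2)       # recovers ~sigma regardless of texture
```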

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
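
NPS estimation can be sketched for the simpler case of square ROIs (the dissertation's contribution is extending it to irregularly shaped ROIs). As a sanity check, integrating the NPS of white noise over frequency recovers the pixel variance; the pixel size and noise level below are assumptions.

```python
import numpy as np

def nps_2d(noise_rois, pixel_size):
    """Ensemble 2D noise power spectrum from mean-subtracted square noise ROIs:
    NPS = <|DFT(roi)|^2> * pixel_area / (Nx * Ny)."""
    nx, ny = noise_rois[0].shape
    acc = np.zeros((nx, ny))
    for roi in noise_rois:
        roi = roi - roi.mean()
        acc += np.abs(np.fft.fft2(roi)) ** 2
    return acc / len(noise_rois) * (pixel_size ** 2) / (nx * ny)

rng = np.random.default_rng(6)
dx = 0.5                                        # assumed pixel size (mm)
rois = [10.0 * rng.standard_normal((64, 64)) for _ in range(100)]  # white noise, sigma = 10

nps = nps_2d(rois, dx)
# Sanity check: integral of the NPS over spatial frequency = pixel variance
variance = nps.sum() / (nps.size * dx ** 2)     # ~100 for sigma = 10
```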

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

60.00%

Publisher:

Abstract:

This paper proposes a novel hierarchical data fusion technique for the non-destructive testing (NDT) and condition assessment of timber utility poles. The new method analyzes stress wave data from multisensor and multiexcitation guided wave testing using a hierarchical data fusion model consisting of feature extraction, data compression, pattern recognition, and decision fusion algorithms. The researchers validate the proposed technique using guided wave tests of a sample of in situ timber poles. The actual health states of these poles are known from autopsies conducted after the testing, forming a ground truth for supervised classification. In the proposed method, a data fusion level extracts the main features from the sampled stress wave signals using power spectrum density (PSD) estimation, wavelet packet transform (WPT), and empirical mode decomposition (EMD). These features are then compiled into a feature vector via real-number encoding and sent to the next level for further processing. Principal component analysis (PCA) is also adopted for feature compression and to minimize information redundancy and noise interference. In the feature fusion level, two classifiers based on support vector machine (SVM) are applied to sensor-separated data of the two excitation types and the pole condition is identified. In the decision-making fusion level, the Dempster-Shafer (D-S) evidence theory is employed to integrate the results from the individual sensors and obtain a final decision. The results of the in situ timber pole testing show that the proposed hierarchical data fusion model was able to distinguish between healthy and faulty poles, demonstrating the effectiveness of the new method.
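
A compressed sketch of the fusion chain, feature extraction, compression, then classification, is given below with synthetic guided-wave data. It keeps only the PSD features (omitting WPT and EMD), uses a plain numpy PCA, and substitutes a nearest-centroid rule for the paper's SVM and D-S fusion stages; the pole resonance frequencies and noise levels are invented for illustration.

```python
import numpy as np
from scipy.signal import welch

def psd_features(wave, fs, n_bands=8):
    """Log band-averaged PSD of a stress-wave record as a fixed-length feature
    vector (a stand-in for the paper's PSD/WPT/EMD feature set)."""
    _, pxx = welch(wave, fs=fs, nperseg=256)
    return np.log([b.mean() for b in np.array_split(pxx, n_bands)])

def pca_fit(X, k):
    """Principal components via SVD of the centred feature matrix."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k].T

rng = np.random.default_rng(7)
fs = 10000.0
t = np.arange(0, 0.1, 1 / fs)
# Hypothetical responses: "healthy" poles ring at 2 kHz, "faulty" at 1.2 kHz
record = lambda f0: (np.sin(2 * np.pi * f0 * t) * np.exp(-40 * t)
                     + 0.3 * rng.standard_normal(t.size))
healthy = np.array([psd_features(record(2000), fs) for _ in range(20)])
faulty = np.array([psd_features(record(1200), fs) for _ in range(20)])

X = np.vstack([healthy, faulty])
mu, W = pca_fit(X, k=3)                 # feature compression, as with PCA in the paper
Z = (X - mu) @ W

# Nearest-centroid decision in PCA space (a simple stand-in for the SVM stage)
c_h, c_f = Z[:20].mean(axis=0), Z[20:].mean(axis=0)
pred_faulty = np.linalg.norm(Z - c_f, axis=1) < np.linalg.norm(Z - c_h, axis=1)
labels = np.r_[np.zeros(20, bool), np.ones(20, bool)]
accuracy = (pred_faulty == labels).mean()
```

In the full method, each sensor/excitation combination yields its own classifier output, and D-S evidence theory combines those per-sensor beliefs into the final pole-condition decision.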

Relevance:

40.00%

Publisher:

Abstract:

One of the most important factors that affects the performance of energy detection (ED) is the fading channel between the wireless nodes. This article investigates the performance of ED-based spectrum sensing, for cognitive radio (CR), over two-wave with diffuse power (TWDP) fading channels. The TWDP fading model characterizes a variety of fading channels, including well-known canonical fading distributions, such as Rayleigh and Rician, as well as worse than Rayleigh fading conditions modeled by the two-ray fading model. Novel analytic expressions for the average probability of detection over TWDP fading that account for single-user and cooperative spectrum sensing as well as square law selection diversity reception are derived. These expressions are used to analyze the behavior of ED-based spectrum sensing over moderate, severe and extreme fading conditions, and to investigate the use of cooperation and diversity as a means of mitigating the fading effects. Our results indicate that TWDP fading conditions can significantly degrade the sensing performance; however, it is shown that detection performance can be improved when cooperation and diversity are employed. The presented outcomes enable us to identify the limits of ED-based spectrum sensing and quantify the trade-offs between detection performance and energy efficiency for cognitive radio systems deployed within confined environments such as in-vehicular wireless networks.
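
Energy detection itself is simple to simulate, which also makes the fading comparison concrete. The Monte-Carlo sketch below is not the article's analysis: it uses Rayleigh fading in place of the TWDP model (which needs the extra specular-component parameters) and a max-gain branch selection as a rough stand-in for square-law selection diversity; the SNR, sample count and false-alarm rate are arbitrary.

```python
import numpy as np

def energy_detect_pd(snr_db, n, pfa, gains, rng):
    """Monte-Carlo probability of detection for an energy detector: declare
    'signal present' when the received energy over n samples exceeds a
    threshold set for false-alarm rate pfa under noise only."""
    trials = len(gains)
    snr = 10 ** (snr_db / 10)
    noise_e = (rng.standard_normal((trials, n)) ** 2).sum(axis=1)
    thr = np.quantile(noise_e, 1 - pfa)
    sig = np.sqrt(snr) * gains[:, None] * rng.standard_normal((trials, n))
    rx_e = ((sig + rng.standard_normal((trials, n))) ** 2).sum(axis=1)
    return (rx_e > thr).mean()

rng = np.random.default_rng(8)
trials, n = 20000, 50

awgn = np.ones(trials)                                       # no fading
rayleigh = rng.rayleigh(scale=1 / np.sqrt(2), size=trials)   # E[gain^2] = 1

pd_awgn = energy_detect_pd(5.0, n, 0.01, awgn, rng)
pd_ray = energy_detect_pd(5.0, n, 0.01, rayleigh, rng)
# Crude selection diversity: keep the stronger of two independent branches
best2 = np.maximum(rayleigh, rng.rayleigh(scale=1 / np.sqrt(2), size=trials))
pd_sel = energy_detect_pd(5.0, n, 0.01, best2, rng)
```

The same pattern (fading degrades Pd, diversity recovers part of it) is what the article's closed-form TWDP expressions quantify analytically.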

Relevance:

40.00%

Publisher:

Abstract:

The simplest model for describing the random distributed feedback (RDFB) Raman fiber laser is a power balance model describing the evolution of the intensities of the waves over the fiber length. The model predicts the power performance of the RDFB fiber laser well, including the generation threshold, the output power, and the longitudinal distributions of the pump and generation wave intensities along the fiber. In the present work, we extend the power balance model and modify the equations so that they now describe frequency-dependent spectral power density instead of intensities integrated over the spectrum. We calculate the generation spectrum by using the depleted pump wave longitudinal distribution derived from the conventional power balance model. We found the spectral balance model to be sufficient to account for the spectral narrowing in the RDFB laser above the generation threshold. © 2014 SPIE.
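
The power balance idea can be illustrated in its simplest single-pass form: coupled ODEs for the pump and Stokes powers along the fibre. The sketch below Euler-integrates these equations; the coefficients are illustrative (not fitted), and the real RDFB model additionally couples forward and backward waves and the random distributed Rayleigh feedback.

```python
import numpy as np

# Illustrative coefficients: Raman gain gR in 1/(W km), losses in 1/km,
# lam_ratio = lambda_s / lambda_p accounts for the photon-energy difference
gR, a_p, a_s, lam_ratio = 1.3, 0.058, 0.046, 1550.0 / 1455.0

def power_balance(pp0, ps0, length_km, steps=10000):
    """Euler integration of the balance equations
    dPp/dz = -a_p*Pp - lam_ratio*gR*Pp*Ps,  dPs/dz = -a_s*Ps + gR*Pp*Ps."""
    dz = length_km / steps
    pp, ps = pp0, ps0
    pp_z, ps_z = [pp], [ps]
    for _ in range(steps):
        dpp = (-a_p * pp - lam_ratio * gR * pp * ps) * dz
        dps = (-a_s * ps + gR * pp * ps) * dz
        pp, ps = pp + dpp, ps + dps
        pp_z.append(pp)
        ps_z.append(ps)
    return np.array(pp_z), np.array(ps_z)

# 3 W pump and a tiny spontaneous Stokes seed over 10 km: the Stokes wave grows
# from noise and then depletes the pump (threshold-like behaviour)
pp_z, ps_z = power_balance(3.0, 1e-6, 10.0)
```

The spectral extension described in the abstract replaces the single Stokes power Ps by a spectral density Ps(z, f), with a frequency-dependent gain profile, which is what reproduces the narrowing above threshold.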

Relevance:

30.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.
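The KID's whiteness check on the innovation spectrum might be sketched as below. Under a valid Kalman model each periodogram ordinate of the (white) innovation is approximately a scaled chi-squared variate with 2 degrees of freedom, i.e. exponential; the Bonferroni-corrected tail threshold used here is one plausible realisation of the chi-squared test of [14, 15], and the function name is illustrative.

```python
import numpy as np

def innovation_spectrum_alarm(innov, fs, alpha=0.01):
    """KID-style sketch: test a Kalman innovation sequence for whiteness.

    Returns the frequencies of periodogram bins that exceed the
    chi-squared (exponential) threshold; a non-empty result both sets
    off the alarm and points at the offending modal frequency.
    """
    innov = innov - innov.mean()
    n = len(innov)
    spec = np.abs(np.fft.rfft(innov)) ** 2 / n        # periodogram
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec, freqs = spec[1:-1], freqs[1:-1]             # drop DC and Nyquist
    sigma2 = innov.var()
    # For white innovations each interior bin ~ Exp(mean = sigma2):
    # P(bin > T) = exp(-T / sigma2); Bonferroni correction over all bins
    T = -sigma2 * np.log(alpha / len(spec))
    peaks = freqs[spec > T]
    return peaks, spec, T
```

If the innovation stays white, `peaks` is (with probability about 1 - alpha) empty; a model-invalidating change concentrates power at the changed modal frequency.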
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
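The discrete cubic phase function underlying the PPM can be sketched as follows on a synthetic quadratic-FM (chirp) signal; the evaluation grid, signal parameters, and function name are illustrative, and this shows only the basic CP transform of [16], not the thesis's generalised polynomial phase class.

```python
import numpy as np

def cubic_phase(s, t_idx, omegas):
    """Discrete cubic phase (CP) function evaluated at sample t_idx.

    CP(t, W) = sum_m s[t+m] s[t-m] exp(-j W m^2).  For a polynomial-phase
    signal, |CP| peaks at W equal to the instantaneous frequency rate
    (second derivative of the phase) at time t.
    """
    n = len(s)
    m_max = min(t_idx, n - 1 - t_idx)
    m = np.arange(0, m_max + 1)
    prod = s[t_idx + m] * s[t_idx - m]            # symmetric bilinear product
    # Evaluate the transform over the candidate frequency-rate grid
    kernel = np.exp(-1j * np.outer(omegas, m ** 2))
    return kernel @ prod
```

For s[t] = exp(j(a1*t + a2*t^2)) the bilinear product reduces to a constant times exp(j*2*a2*m^2), so the peak location directly estimates the frequency rate 2*a2; a sudden frequency shift shows up through both this estimate and the phase-coefficient information.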

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This paper provides a new general approach for defining coherent generators in power systems based on coherency in low-frequency inter-area modes. Rather than a single fault, the disturbance is considered to be distributed across the network by applying random load changes (a random-walk representation of real loads), and coherent generators are obtained by spectral analysis of the generators' velocity variations. In order to find the coherent areas and their borders in interconnected networks, non-generating buses are assigned to each group of coherent generators using similar coherency detection techniques. The method is evaluated on two test systems, and coherent generators and areas are obtained for different operating points, yielding a grouping approach that remains valid across a range of realistic operating points of the system.
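The coherency-detection step might be sketched as below: Welch coherence between generator speed deviations is averaged over an inter-area frequency band, and generators whose mutual coherence exceeds a threshold are grouped. The band, threshold, greedy grouping, and function name are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np
from scipy.signal import coherence

def coherent_groups(speeds, fs, band=(0.1, 1.0), gamma=0.9):
    """Group generators by spectral coherence of their speed deviations.

    speeds : array (n_gen, n_samples) of generator velocity variations
             under distributed random (random-walk) load changes
    band   : inter-area mode frequency band of interest (Hz)
    gamma  : mean-coherence threshold for declaring two generators coherent
    """
    n_gen = speeds.shape[0]
    groups, assigned = [], set()
    for i in range(n_gen):
        if i in assigned:
            continue
        group = [i]
        assigned.add(i)
        for j in range(i + 1, n_gen):
            if j in assigned:
                continue
            # Magnitude-squared coherence between the two speed signals
            f, cxy = coherence(speeds[i], speeds[j], fs=fs, nperseg=512)
            mask = (f >= band[0]) & (f <= band[1])
            if cxy[mask].mean() > gamma:
                group.append(j)
                assigned.add(j)
        groups.append(group)
    return groups
```

Non-generating buses could then be attached to whichever group their own bus signals are most coherent with, in the same spirit.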

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Like many cautionary tales, The Hunger Games takes as its major premise an observation about contemporary society, measuring its ballistic arc in order to present graphically its logical conclusions. The Hunger Games gazes back to the panem et circenses of Ancient Rome, staring equally cynically forward, following the trajectory of reality television to its unbearably barbaric end point: a sadistic voyeurism for an effete elite of consumers. At each end of the historical spectrum (and in the present), the prevailing social form is Arendt’s animal laborans. Consumer or consumed, Panem’s population is (with the exception of the inner circle) either deprived of the possibility of, or distracted from, political action. Within the confines of the Games themselves, Law is abandoned or de‐realised: Law, an elided Other in the pseudo‐Hobbesian nightmare that is the Arena. The Games are played out, as were gladiatorial combats and other diversions of the Roman Empire, against a background resonant of Juvenal’s concern for his contemporaries’ attachment to short-term gratification at the expense of the civic virtues of justice and caring which are (or would be) constitutive of a contemporary form of Arendt’s homo politicus. While the Games are, on their face, ‘reality’, they are (like the realities presented in contemporary reality television) a simulated reality, de‐realised in a Foucauldian set design constructed as a distraction for Capitol and, for the residents of the Districts, a constant reminder of their subservience to Capitol. Yet contemporary Western culture, for which manipulative reality TV is but a symptom of an underlying malaise, is inscribed at least as an incipient Panem. Its public/political space is diminished by the effective slavery of the poor, the preoccupation with and distractions of materiality and modern media, and the increasing concentration of power/wealth in a smaller proportion of the population.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The Foetal Alcohol Syndrome has long gone unrecognised and undiagnosed in Australia. In the early years of the 21st century (2010-14), health practitioners are finally seeking ways of diagnosing the effects of alcohol in pregnancy on the next generation. The author offers a PowerPoint presentation which gives guidance on making an accurate diagnosis.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This e-book is devoted to the use of spreadsheets in the service of education in a broad spectrum of disciplines: science, mathematics, engineering, business, and general education. The effort is aimed at collecting the works of prominent researchers and educators who make use of spreadsheets as a means to communicate concepts with high educational value. The e-book brings some of the most recent applications of spreadsheets in education and research to the fore. To offer the reader a broad overview of the diversity of applications, carefully chosen articles from engineering (power systems and control), mathematics (calculus, differential equations, and probability), science (physics and chemistry), and education are provided. Some of these applications make use of Visual Basic for Applications (VBA), a versatile computer language that further expands the functionality of spreadsheets. The material included in this e-book should inspire readers to devise their own applications and enhance their teaching and/or learning experience.