967 results for "Difference test"
Abstract:
This article has been edited from a transcript of the keynote address to the combined ALEA/MTE National Conference, Hobart, Tasmania, August 2001. In this talk Allan reflects on some of the difficulties facing makers of literacy policy in 'New Times'. His reflections are informed by some important research that is having an impact on literacy teaching in Australia, and he raises various issues, ranging from what he sees as a 'dumbing down' of curriculum, to addressing the needs of 'at risk' students, to issues of lifelong education in a rapidly changing world.
Abstract:
Adiabatic compression testing of components in gaseous oxygen is a test method that is utilized worldwide and is commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies; however, a thorough evaluation of the test parameters and test system influences on the thermal energy produced during the test has not yet been performed. This paper presents a background for adiabatic compression testing and discusses an approach to estimating potential differences in the thermal profiles produced by different test laboratories. A “Thermal Profile Test Fixture” (TPTF) is described that is capable of measuring and characterizing the thermal energy for a typical pressure shock by any test system. The test systems at Wendell Hull & Associates, Inc. (WHA) in the USA and at the BAM Federal Institute for Materials Research and Testing in Germany are compared in this manner and some of the data obtained are presented. The paper also introduces a new way of comparing the test method to idealized processes to perform system-by-system comparisons. Thus, the paper introduces an “Idealized Severity Index” (ISI) of the thermal energy to characterize a rapid pressure surge. From the TPTF data a “Test Severity Index” (TSI) can also be calculated so that the thermal energies developed by different test systems can be compared to each other and to the ISI for the equivalent isentropic process. Finally, a “Service Severity Index” (SSI) is introduced to characterize the thermal energy of actual service conditions. This paper is the second in a series of publications planned on the subject of adiabatic compression testing.
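As a rough illustration of the idealized reference process mentioned above, the following Python sketch computes the final gas temperature for an ideal adiabatic (isentropic) compression of oxygen. The pressure values, and the use of the isentropic temperature ratio as a stand-in for the paper's severity indices, are assumptions for illustration; the actual ISI/TSI definitions are given in the paper itself.

```python
# Sketch: final gas temperature for ideal adiabatic (isentropic) compression,
# the idealized reference process against which the paper's severity indices
# are compared. Values and the index interpretation are illustrative.

GAMMA_O2 = 1.4  # ratio of specific heats for oxygen (ideal-gas approximation)

def isentropic_final_temperature(t1_kelvin: float, p1_pa: float, p2_pa: float,
                                 gamma: float = GAMMA_O2) -> float:
    """T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma) for isentropic compression."""
    return t1_kelvin * (p2_pa / p1_pa) ** ((gamma - 1.0) / gamma)

# Example: a pressure surge from 0.1 MPa to 25 MPa starting at 20 degrees C.
t2 = isentropic_final_temperature(t1_kelvin=293.15, p1_pa=0.1e6, p2_pa=25.0e6)
print(f"Idealized final temperature: {t2:.0f} K")  # roughly 1400 K
```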
Abstract:
A television series is tagged with the label "cult" by the media, advertisers, and network executives when it is considered edgy or offbeat, when it appeals to nostalgia, or when it is considered emblematic of a particular subculture. By these criteria, almost any series could be described as cult. Yet certain programs exert an uncanny power over their fans, encouraging them to immerse themselves within a fictional world. In Cult Television leading scholars examine such shows as The X-Files; The Avengers; Doctor Who; Babylon Five; Star Trek; Xena: Warrior Princess; and Buffy the Vampire Slayer to determine the defining characteristics of cult television and map the contours of this phenomenon within the larger scope of popular culture. Contributors: Karen Backstein; David A. Black, Seton Hall U; Mary Hammond, Open U; Nathan Hunt, U of Nottingham; Mark Jancovich; Petra Kuppers, Bryant College; Philippe Le Guern, U of Angers, France; Alan McKee; Toby Miller, New York U; Jeffrey Sconce, Northwestern U; Eva Vieth. Sara Gwenllian-Jones is a lecturer in television and digital media at Cardiff University and co-editor of Intensities: The Journal of Cult Media. Roberta E. Pearson is a reader in media and cultural studies at Cardiff University. She is the author of the forthcoming book Small Screen, Big Universe: Star Trek and Television.
Abstract:
While spatial determinants of emmetropization have been examined extensively in animal models and spatial processing of human myopes has also been studied, there have been few studies investigating temporal aspects of emmetropization and temporal processing in human myopia. The influence of temporal light modulation on eye growth and refractive compensation has been observed in animal models and there is evidence of temporal visual processing deficits in individuals with high myopia or other pathologies. Given this, the aims of this work were to examine the relationships between myopia (i.e. degree of myopia and progression status) and temporal visual performance and to consider any temporal processing deficits in terms of the parallel retinocortical pathways. Three psychophysical studies investigating temporal processing performance were conducted in young adult myopes and non-myopes: (1) backward visual masking, (2) dot motion perception and (3) phantom contour. For each experiment there were approximately 30 young emmetropes, 30 low myopes (myopia less than 5 D) and 30 high myopes (5 to 12 D). In the backward visual masking experiment, myopes were also classified according to their progression status (30 stable myopes and 30 progressing myopes). The first study was based on the observation that the visibility of a target is reduced by a second target, termed the mask, presented quickly after the first target. Myopes were more affected by the mask when the task was biased towards the magnocellular pathway; myopes had a 25% mean reduction in performance compared with emmetropes. However, there was no difference in the effect of the mask when the task was biased towards the parvocellular system. For all test conditions, there was no significant correlation between backward visual masking task performance and either the degree of myopia or myopia progression status. The dot motion perception study measured detection thresholds for the minimum displacement of moving dots, the maximum displacement of moving dots and the degree of motion coherence required to correctly determine the direction of motion. The visual processing of these tasks is dominated by the magnocellular pathway. Compared with emmetropes, high myopes had reduced ability to detect the minimum displacement of moving dots for stimuli presented at the fovea (20% higher mean threshold) and possibly at the inferior nasal retina. The minimum displacement threshold was significantly and positively correlated with myopia magnitude and axial length, and significantly and negatively correlated with retinal thickness for the inferior nasal retina. The performance of emmetropes and myopes on all the other dot motion perception tasks was similar. In the phantom contour study, the highest temporal frequency of the flickering phantom pattern at which the contour was visible was determined. Myopes had significantly lower flicker detection limits (21.8 ± 7.1 Hz) than emmetropes (25.6 ± 8.8 Hz) for tasks biased towards the magnocellular pathway for both high (99%) and low (5%) contrast stimuli. There was no difference in flicker limits for a phantom contour task biased towards the parvocellular pathway. For all phantom contour tasks, there was no significant correlation between flicker detection thresholds and magnitude of myopia. Of the psychophysical temporal tasks studied here, those primarily involving processing by the magnocellular pathway revealed differences in the performance of the refractive error groups. While there are a number of interpretations of these data, they suggest that there may be a temporal processing deficit in some myopes that is selective for the magnocellular system. The minimum displacement dot motion perception task appears to be the most sensitive test, of those studied, for investigating changes in visual temporal processing in myopia. Data from the visual masking and phantom contour tasks suggest that the alterations to temporal processing occur at an early stage of myopia development. In addition, the link between increased minimum displacement threshold and decreasing retinal thickness suggests that there is a retinal component to the observed modifications in temporal processing.
Abstract:
Difference and Dispersion is the fourth in a series of annual research papers produced by doctoral students from The Graduate School of Education, The University of Queensland, following their presentation at the School’s annual Postgraduate Research Conference in Education. The work featured herein celebrates the diversity of cultural and disciplinary backgrounds of education researchers who come from as far afield as Germany, Hong Kong, China, Nigeria, Russia, Singapore, Thailand and, of course, different parts of Australia. In keeping with a postmodern epistemology, ‘difference’ and ‘dispersion’ are key themes in apprehending the multiplicity of their research topics, methodologies, methods and speaking/writing positions. From widely differing contexts and situations, these writers address the consequences, implications and possibilities for education at the beginning of the third millennium. Their interests range from location-specific issues in schools and classrooms, to change in learning contexts and processes, to educational discourses and relations of power in diverse geographical settings, and to the differing articulations of the local and the global in situated policy contexts. Conceived and developed in a spirit of ongoing dialogue with, and insight into, alternative views and visions of education and society, this edited collection exemplifies the quality in diversity and the high levels of scholarship and supervision at one of Australia’s finest Graduate Schools of Education.
Abstract:
In a much anticipated judgment, the Federal Circuit has sought to clarify the standards applicable in determining whether a claimed method constitutes patent-eligible subject matter. In Bilski, the Federal Circuit identified a test to determine whether a patentee has made claims that pre-empt the use of a fundamental principle or an abstract idea or whether those claims cover only a particular application of a fundamental principle or abstract idea. It held that the sole test for determining subject matter eligibility for a claimed process under § 101 is that: (1) it is tied to a particular machine or apparatus, or (2) it transforms a particular article into a different state or thing. The court termed this the “machine-or-transformation test.” In so doing, it overruled its earlier State Street decision to the extent that it deemed that decision’s “useful, tangible and concrete result” test inadequate to determine whether an alleged invention recites patent-eligible subject matter.
Abstract:
In the student learning literature, the traditional view holds that when students are faced with a heavy workload, poor teaching, and content that they cannot relate to (important aspects of the learning context), they will more likely utilise the surface approach to learning due to stress, lack of understanding and lack of perceived relevance of the content (Kreber, 2003; Lizzio, Wilson, & Simons, 2002; Ramsden, 1989; Ramsden, 1992; Trigwell & Prosser, 1991; Vermunt, 2005). For example, in studies involving health and medical sciences students, courses that utilised student-centred, problem-based approaches to teaching and learning were found to elicit a deeper approach to learning than the teacher-centred, transmissive approach (Patel, Groen, & Norman, 1991; Sadlo & Richardson, 2003). It is generally accepted that the line of causation runs from the learning context (or rather students’ self-reported data on the learning context) to students’ learning approaches. That is, it is the learning context as revealed by students’ self-reported data that elicits the associated learning behaviour. However, other research studies have found that the same teaching and learning environment can be perceived differently by different students. In a study of students’ perceptions of assessment requirements, Sambell and McDowell (1998) found that students “are active in the reconstruction of the messages and meanings of assessment” (p. 391), and that their interpretations are greatly influenced by their past experiences and motivations. In a qualitative study of Hong Kong tertiary students, Kember (2004) found that students using the surface learning approach reported a heavier workload than students using the deep learning approach. According to Kember, if students learn by extracting meanings from the content and making connections, they will more likely see the higher order intentions embodied in the content and the high cognitive abilities being assessed. On the other hand, if they rote-learn for the graded task, they fail to see the hierarchical relationship in the content and to connect the information. These rote-learners will tend to see the assessment as requiring memorising and regurgitation of a large amount of unconnected knowledge, which explains why they experience a high workload. Kember (2004) thus postulates that it is the learning approach that influences how students perceive workload. Campbell and her colleagues made a similar observation in their interview study of secondary students’ perceptions of teaching in the same classroom (Campbell et al., 2001). The above discussion suggests that students’ learning approaches can influence their perceptions of assessment demands and other aspects of the learning context, such as relevance of content and teaching effectiveness. In other words, perceptions of elements in the teaching and learning context are endogenously determined. This study investigated the causal relationships at the individual level between learning approaches and perceptions of the learning context in economics education. Students’ learning approaches and their perceptions of the learning context were measured. The elements of the learning context investigated include teaching effectiveness, workload and content. The authors are aware of the existence of other elements of the learning context, such as generic skills, goal clarity and career preparation. These aspects, however, were not within the scope of the present study and were therefore not investigated.
Abstract:
Objectives: To explore whether people's organ donation consent decisions occur via a reasoned and/or social reaction pathway. --------- Design: We prospectively examined students' and community members' decisions to register consent on a donor register and to discuss organ donation wishes with family. --------- Method: Participants completed items assessing the theory of planned behaviour (TPB; attitude, subjective norm, perceived behavioural control (PBC)), the prototype/willingness model (PWM; donor prototype favourability/similarity, past behaviour), and proposed additional influences (moral norm, self-identity, recipient prototypes) for registering (N=339) and discussing (N=315) intentions/willingness. Participants self-reported their registering (N=177) and discussing (N=166) behaviour 1 month later. The utility of the (1) TPB, (2) PWM, (3) augmented TPB with PWM, and (4) augmented TPB with PWM and extensions was tested using structural equation modelling for registering and discussing intentions/willingness, and logistic regression for behaviour. --------- Results: While the TPB proved a more parsimonious model, fit indices suggested that the other proposed models offered viable options, explaining greater variance in communication intentions/willingness. The TPB, the augmented TPB with PWM, and the extended augmented TPB with PWM best explained registering and discussing decisions. The proposed and revised PWM also proved an adequate fit for discussing decisions. Respondents with stronger intentions (and PBC for registering) had a higher likelihood of registering and discussing. --------- Conclusions: People's decisions to communicate donation wishes may be better explained via a reasoned pathway (especially for registering); however, discussing involves more reactive elements. The roles of moral norm, self-identity, and prototypes as influences predicting communication decisions were also highlighted.
Abstract:
Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: Participants were 6,594 patients (4,854 men, 1,740 women) who had been referred to a hospital alcohol and drug service for alcohol-use disorders and who voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to that of the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and the AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
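As an illustration of the kind of factor-model comparison described above, the sketch below fits one- and two-factor models to simulated item-level data and compares their average log-likelihoods. The simulated responses and the use of scikit-learn's FactorAnalysis are assumptions for illustration; the study itself used exploratory factor analysis followed by structural equation modelling on real patient data.

```python
# Sketch: fitting one- and two-factor models to simulated item responses and
# comparing average log-likelihoods. Stand-in for the paper's EFA/SEM analysis;
# the data, loadings and sample size here are invented for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_subjects, n_items = 500, 9          # nine items, as in the final bMAST model
latent = rng.normal(size=(n_subjects, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, n_items))
loadings[0, 5:] = 0.0                 # factor 1 drives the first five items
loadings[1, :5] = 0.0                 # factor 2 drives the remaining four
items = latent @ loadings + rng.normal(scale=0.5, size=(n_subjects, n_items))

for k in (1, 2):
    fa = FactorAnalysis(n_components=k).fit(items)
    print(f"{k}-factor model: mean log-likelihood = {fa.score(items):.3f}")
# The higher log-likelihood should favour the two-factor structure here,
# mirroring the paper's finding that a two-factor model fit best.
```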
Abstract:
Introduction: Ovine models are widely used in orthopaedic research. To better understand the impact of orthopaedic procedures, computer simulations are necessary. 3D finite element (FE) models of bones allow implant designs to be investigated mechanically, thereby reducing mechanical testing. Hypothesis: We present the development and validation of an ovine tibia FE model for use in the analysis of tibia fracture fixation plates. Material & Methods: Mechanical testing of the tibia consisted of an offset 3-pt bend test with three repetitions of loading to 350 N and return to 50 N. Tri-axial stacked strain gauges were applied to the anterior and posterior surfaces of the bone, and two rigid bodies, consisting of eight infrared active markers, were attached to the ends of the tibia. Positional measurements were taken with a FARO arm 3D digitiser. The FE model was constructed with both geometry and material properties derived from CT images of the bone. The elasticity-density relationship used for material property determination was validated separately using mechanical testing. This model was then transformed to the same coordinate system as the in vitro mechanical test and loads were applied. Results: Comparison between the mechanical testing and the FE model showed good correlation in surface strains (difference: anterior 2.3%, posterior 3.2%). Discussion & Conclusion: This method of model creation provides a simple means of generating subject-specific FE models from CT scans. The use of the CT data set for both the geometry and the material properties ensures a more accurate representation of the specific bone. This is reflected in the similarity of the surface strain results.
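The validation step reduces to a simple comparison between predicted and measured surface strains. A minimal sketch, with placeholder strain values chosen only to reproduce the reported percentage differences:

```python
# Sketch: percentage difference between FE-predicted and gauge-measured surface
# strains. The strain values are placeholders chosen to match the reported
# 2.3% (anterior) and 3.2% (posterior) differences.

def percent_difference(fe_strain: float, measured_strain: float) -> float:
    """Relative difference of the FE prediction from the measurement."""
    return abs(fe_strain - measured_strain) / abs(measured_strain) * 100.0

# Hypothetical principal strains (microstrain) at the two gauge sites:
sites = {"anterior": (1023.0, 1000.0), "posterior": (968.0, 1000.0)}
for site, (fe, measured) in sites.items():
    print(f"{site}: {percent_difference(fe, measured):.1f}% difference")
```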
Abstract:
This study assessed the reliability and validity of a palm-top-based electronic appetite rating system (EARS) in relation to the traditional paper and pen method. Twenty healthy subjects [10 male (M) and 10 female (F)] — mean age M=31 years (S.D.=8), F=27 years (S.D.=5); mean BMI M=24 (S.D.=2), F=21 (S.D.=5) — participated in a 4-day protocol. Measurements were made on days 1 and 4. Subjects were given paper and an EARS to log hourly subjective motivation to eat during waking hours. Food intake and meal times were fixed. Subjects were given a maintenance diet (comprising 40% fat, 47% carbohydrate and 13% protein by energy) calculated at 1.6×Resting Metabolic Rate (RMR), as three isoenergetic meals. Bland and Altman's test for bias between two measurement techniques found significant differences between EARS and paper and pen for two of eight responses (hunger and fullness). Regression analysis confirmed that there were no day, sex or order effects between ratings obtained using either technique. For 15 subjects, there was no significant difference between results, with a linear relationship between the two methods that explained most of the variance (r² ranged from 62.6 to 98.6). The slope for all subjects was less than 1, which was partly explained by a tendency for bias at the extreme end of results on the EARS technique. These data suggest that the EARS is a useful and reliable technique for real-time data collection in appetite research, but that it should not be used interchangeably with paper and pen techniques.
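The agreement analysis referred to above (Bland and Altman's method) computes the mean difference between paired ratings (the bias) and its 95% limits of agreement. A minimal sketch on simulated paired ratings, not the study's data:

```python
# Sketch: Bland-Altman bias and 95% limits of agreement between two rating
# methods. The paired ratings are simulated, not the study's data; the slope
# below 1 mimics the bias the study observed at the extremes of the EARS scale.
import numpy as np

rng = np.random.default_rng(1)
paper = rng.uniform(0.0, 100.0, size=40)            # e.g. 100 mm VAS ratings
ears = 0.9 * paper + rng.normal(0.0, 5.0, size=40)  # slope < 1, added noise

diff = ears - paper
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)                # 95% limits of agreement
print(f"bias = {bias:.1f}; limits of agreement = "
      f"[{bias - half_width:.1f}, {bias + half_width:.1f}]")
```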
Abstract:
Exercise is known to cause physiological changes that could affect the impact of nutrients on appetite control. This study was designed to assess the effect of drinks containing either sucrose or high-intensity sweeteners on food intake following exercise. Using a repeated-measures design, three drink conditions were employed: plain water (W), a low-energy drink sweetened with the artificial sweeteners aspartame and acesulfame-K (L), and a high-energy, sucrose-sweetened drink (H). Following a period of challenging exercise (70% VO2 max for 50 min), subjects consumed freely from a particular drink before being offered a test meal at which energy and nutrient intakes were measured. The degree of pleasantness (palatability) of the drinks was also measured before and after exercise. At the test meal, energy intake following the artificially sweetened (L) drink was significantly greater than after the water (W) and sucrose (H) drinks (p < 0.05). Compared with the artificially sweetened (L) drink, the high-energy (H) drink suppressed intake by approximately the energy contained in the drink itself. However, there was no difference between the water (W) and the sucrose (H) drinks in test meal energy intake. When the net effects were compared (i.e., drink + test meal energy intake), total energy intake was significantly lower after the water (W) drink than after the two sweet (L and H) drinks. The exercise period brought about changes in the perceived pleasantness of the water, but had no effect on that of either sweet drink. The remarkably precise energy compensation demonstrated after the higher energy sucrose drink suggests that exercise may prime the system to respond sensitively to nutritional manipulations. The results may also have implications for the effect on short-term appetite control of different types of drinks used to quench thirst during and after exercise.
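The "energy compensation" described above can be expressed as a simple index: the reduction in test meal intake relative to the energy of the preload drink. The sketch below uses hypothetical intake values, not the study's results:

```python
# Sketch: energy compensation after a caloric preload. 100% compensation means
# the test meal intake fell by exactly the energy the drink contained.
# All kJ values are hypothetical, not the study's measurements.

def compensation_percent(meal_after_low_kj: float, meal_after_high_kj: float,
                         drink_energy_kj: float) -> float:
    """Reduction in meal intake expressed as a share of the drink's energy."""
    return (meal_after_low_kj - meal_after_high_kj) / drink_energy_kj * 100.0

print(f"compensation = {compensation_percent(4200.0, 3150.0, 1050.0):.0f}%")
```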
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and tumour volume definition based on the PET image data, for radiotherapy for lung cancer patients, and then to test these protocols with respect to levels of accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom image based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol. Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient’s planning CT scans. Conclusions: The developed clinical protocols allow a patient’s whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols which factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
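The adaptive threshold contouring described in the conclusions reduces, at its core, to selecting voxels above a percentage of the peak uptake within a volume of interest. A minimal sketch on a synthetic PET volume (the 40% threshold is illustrative only; as noted above, the appropriate value depends on target-to-background ratio and motion):

```python
# Sketch: percentage-of-maximum threshold contouring of a PET volume of
# interest. The synthetic volume and the 40% threshold are illustrative only;
# the appropriate threshold depends on target-to-background ratio and motion.
import numpy as np

def threshold_contour(voi: np.ndarray, percent_of_max: float) -> np.ndarray:
    """Boolean mask of voxels at or above the given percentage of peak uptake."""
    return voi >= (percent_of_max / 100.0) * voi.max()

rng = np.random.default_rng(2)
voi = rng.uniform(0.5, 1.5, size=(32, 32, 32))  # background uptake
voi[12:20, 12:20, 12:20] += 8.0                 # hot "tumour" region
mask = threshold_contour(voi, percent_of_max=40.0)
print(f"tumour voxels: {mask.sum()} of {mask.size}")
```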
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying the one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard. The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
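As an illustration of the KID's alarm rule described above, the sketch below tests the whiteness of a simulated innovation sequence: its normalised periodogram should stay below a chi-squared threshold while the Kalman model is valid, and a persistent spectral peak flags a modal change at that frequency. The simulated signals, the Bonferroni-corrected threshold and the parameter choices are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of the KID alarm rule: while the Kalman model is valid the innovations
# are white, so each bin of their normalised periodogram is roughly chi-squared
# with 2 degrees of freedom. A persistent peak above threshold flags a modal
# change at that frequency. Signals and parameters here are simulated.
import numpy as np
from scipy.stats import chi2

def kid_alarm(innovations: np.ndarray, noise_var: float, fs: float,
              false_alarm_prob: float = 1e-3):
    """Return (alarm_raised, peak_frequency_hz) from the innovation spectrum."""
    n = len(innovations)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # 2|X_k|^2 / (n * sigma^2) is ~chi-squared(2) per bin for white innovations.
    spectrum = 2.0 * np.abs(np.fft.rfft(innovations)) ** 2 / (n * noise_var)
    # Bonferroni correction so the spectrum as a whole rarely false-alarms.
    threshold = chi2.ppf(1.0 - false_alarm_prob / len(freqs), df=2)
    peak = spectrum[1:].argmax() + 1  # ignore the DC bin
    return bool(spectrum[peak] > threshold), float(freqs[peak])

rng = np.random.default_rng(3)
fs, n = 10.0, 600                     # one minute of data at 10 Hz
white = rng.normal(size=n)            # healthy system: white innovations
tone = 0.8 * np.sin(2 * np.pi * 0.5 * np.arange(n) / fs)  # 0.5 Hz mode leaks in
for label, x in (("healthy", white), ("degraded", white + tone)):
    alarm, f_hz = kid_alarm(x, noise_var=1.0, fs=fs)
    print(f"{label}: alarm={alarm}, peak at {f_hz:.2f} Hz")
```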