956 results for MODIFIED HODGE TEST
Abstract:
An algorithm based on the concept of combining Kalman filter and Least Error Square (LES) techniques is proposed in this paper. The algorithm is intended to estimate signal attributes such as amplitude, frequency and phase angle in online mode. This technique can be used in protection relays, digital AVRs, DGs, DSTATCOMs, FACTS and other power electronics applications. The Kalman filter is modified to operate on a fictitious input signal and provides precise estimation results insensitive to noise and other disturbances. At the same time, the LES system is arranged to operate in critical transient cases to compensate for the delay and inaccuracy arising from the response of the standard Kalman filter. Practical considerations such as the effect of noise, higher-order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations and a laboratory test are presented to highlight the usefulness of the proposed method. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even when the signal is highly distorted due to the presence of non-linear loads and noise.
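As a hedged sketch of the Kalman-filter half of such an estimator (the paper's combined Kalman/LES formulation and its fictitious input signal are not reproduced here, so the state model and noise parameters below are illustrative assumptions), a two-state linear filter can track the in-phase and quadrature components of a sinusoid of known frequency, from which amplitude and phase angle follow:

```python
import math

def kalman_estimate(samples, freq_hz, fs, q=1e-6, r=0.1):
    """Estimate amplitude and phase of a sinusoid of known frequency with a
    2-state linear Kalman filter (illustrative sketch, not the paper's
    combined Kalman/LES scheme). State x = [in-phase, quadrature], so the
    measurement model is s[k] = x1*cos(w k T) - x2*sin(w k T)."""
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    w = 2.0 * math.pi * freq_hz
    for k, z in enumerate(samples):
        t = k / fs
        h = (math.cos(w * t), -math.sin(w * t))
        # predict: the state is modelled as constant, so only P grows by q
        P[0][0] += q
        P[1][1] += q
        # innovation, innovation variance and Kalman gain
        y = z - (h[0] * x[0] + h[1] * x[1])
        Ph = [P[0][0] * h[0] + P[0][1] * h[1],
              P[1][0] * h[0] + P[1][1] * h[1]]
        s = h[0] * Ph[0] + h[1] * Ph[1] + r
        K = [Ph[0] / s, Ph[1] / s]
        # update state and covariance
        x[0] += K[0] * y
        x[1] += K[1] * y
        P = [[P[0][0] - K[0] * Ph[0], P[0][1] - K[0] * Ph[1]],
             [P[1][0] - K[1] * Ph[0], P[1][1] - K[1] * Ph[1]]]
    amp = math.hypot(x[0], x[1])       # A = sqrt(x1^2 + x2^2)
    phase = math.atan2(x[1], x[0])     # phi = atan2(x2, x1)
    return amp, phase
```

Frequency estimation, which the paper also addresses, would require a non-linear (extended) state model; this sketch assumes the frequency is known.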
Abstract:
In rural low-voltage networks, distribution lines are usually highly resistive. When many distributed generators are connected to such lines, power sharing among them is difficult using conventional droop control, as the real and reactive power are strongly coupled with each other. A high droop gain can alleviate this problem but may drive the system to instability. To overcome this, two droop control methods are proposed for accurate load sharing with a frequency droop controller. The first method requires no communication among the distributed generators and regulates the output voltage and frequency, ensuring acceptable load sharing. For this purpose, the droop equations are modified with a transformation matrix based on the line R/X ratio. The second proposed method, with minimal low-bandwidth communication, modifies the reference frequency of the distributed generators based on the active and reactive power flow in the lines connected to the points of common coupling. The performance of these two proposed controllers is compared, through time-domain simulation of a test system, with that of a controller that relies on an expensive high-bandwidth communication system. The magnitudes of the errors in power sharing between these three droop control schemes are evaluated and tabulated.
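The transformation-matrix idea in the first method can be illustrated with the commonly used rotation based on the line impedance angle; the matrix form, gain values and sign conventions below are assumptions for illustration, not the paper's exact droop equations:

```python
import math

def transformed_droop(P, Q, R, X, f0=50.0, V0=230.0, m=1e-4, n=1e-3):
    """Droop control sketch using an R/X-based power transformation
    (illustrative; the paper's actual matrix and gains may differ).
    Measured powers are rotated by the line impedance angle so that the
    frequency/voltage droops act on decoupled 'virtual' powers."""
    theta = math.atan2(X, R)                      # line impedance angle
    Pp = P * math.sin(theta) - Q * math.cos(theta)  # virtual active power
    Qp = P * math.cos(theta) + Q * math.sin(theta)  # virtual reactive power
    f = f0 - m * Pp                               # frequency droop
    V = V0 - n * Qp                               # voltage droop
    return f, V
```

For a purely inductive line (theta = 90 degrees) the transformation reduces to conventional P-f / Q-V droop; for a resistive rural line it swaps the roles of P and Q, which is the decoupling effect the abstract describes.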
Abstract:
The design of driven pile foundations involves an iterative process requiring an initial estimate of the refusal level to determine the depth of boreholes for subsequent analyses. Current procedures for determining borehole depths incorporate parameters typically unknown at the investigation stage. Thus, a quantifiable procedure more applicable at this preliminary stage would provide greater confidence in estimating the founding level of driven piles. This paper examines the effectiveness of the Standard Penetration Test (SPT) in directly estimating driven pile refusal levels. A number of significant correlations were obtained between SPT information and pile penetration records, demonstrating the potential application of the SPT. Results indicated that pile penetration was generally best described as a function of both the pile toe and cumulative shaft SPT values. The influence of the toe SPT increased when piles penetrated rock. A refusal criterion was established from the results to guide both the estimation of borehole depths and likely pile lengths during the design stage.
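The correlations referred to above are ordinary correlation coefficients between SPT blow counts and penetration records; as a minimal, self-contained sketch (the paper's data are not reproduced, so any inputs are hypothetical):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, of the kind used to relate SPT
    values to pile penetration records (illustrative only)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```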
Abstract:
There is currently a migration trend from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart grid based approach to critical infrastructure management. This project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart grid based architectures, and proposes a set of reference requirements which test bed implementations should satisfy. A high-level design for smart grid test beds is proposed and an initial implementation performed, based on the proposed design, using open source and freely available software tools. The project examines the move towards smart grid based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly to develop, implement and test network security measures such as intrusion detection and network forensics. The project proposes and develops an architecture for the emulation of some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further smart grid emulated components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus based smart meter emulator was developed to provide the basic functionality of a smart meter.
Further code was developed to send Modbus request messages to the emulated smart meter and receive Modbus responses from it. Although the functionality of the emulated components was limited, they provide a starting point for further research and development, and the design is extensible to enable the implementation of additional SCADA protocols. The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed according to these criteria. The results of the experiments are collated and presented, and conclusions drawn from them to facilitate discussion of the test bed implementation. The discussion also presents possible future work.
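The Modbus request/response exchange described above can be sketched at the frame level; the helper names are hypothetical, but the byte layout follows the standard Modbus/TCP framing (MBAP header plus PDU) for function code 0x03, Read Holding Registers:

```python
import struct

def modbus_read_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' (0x03) request frame,
    as the test code described above might send to the emulated meter.
    MBAP header: transaction id, protocol id (0), remaining length, unit id."""
    pdu = struct.pack('>BHH', 0x03, start_addr, count)
    length = len(pdu) + 1                 # PDU bytes plus the unit-id byte
    mbap = struct.pack('>HHHB', transaction_id, 0, length, unit_id)
    return mbap + pdu

def parse_read_response(frame):
    """Extract register values from a 0x03 response frame."""
    tid, pid, length, uid, func, byte_count = struct.unpack('>HHHBBB', frame[:9])
    assert func == 0x03, "not a read-holding-registers response"
    return list(struct.unpack('>%dH' % (byte_count // 2), frame[9:9 + byte_count]))
```

In the test bed, frames like these would be written to and read from a TCP socket connected to the emulated meter (conventionally on port 502).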
Abstract:
The human knee acts as a sophisticated shock absorber during landing movements. The ability of the knee to perform this function in the real world is remarkable given that the context of the landing movement may vary widely between performances. For this reason, humans must be capable of rapidly adjusting the mechanical properties of the knee under impact load in order to satisfy many competing demands. However, the processes involved in regulating these properties in response to changing constraints remain poorly understood. In particular, the effects of muscle fatigue on knee function during step landing are yet to be fully explored. Fatigue of the knee muscles is significant for 2 reasons. First, it is thought to have detrimental effects on the ability of the knee to act as a shock absorber and is considered a risk factor for knee injury. Second, fatigue of knee muscles provides a unique opportunity to examine the mechanisms by which healthy individuals alter knee function. A review of the literature revealed that the effect of fatigue on knee function during landing has been assessed by comparing pre- and post-fatigue measurements, with fatigue induced by a voluntary exercise protocol. The available information is limited by inconsistent results, with key measures such as knee stiffness variously showing increased stiffness, decreased stiffness or no detectable change following fatigue. Further consideration of the literature questions the validity of the models used to induce and measure fatigue, as well as the pre-post study design, which may explain the lack of consensus in the results. These limitations cast doubt on the usefulness of the available information and identify a need to investigate alternative approaches.
Based on the results of this review, the aims of this thesis were to:
• evaluate the methodological procedures used in validation of a fatigue model
• investigate the adaptation and regulation of post-impact knee mechanics during repeated step landings
• use this new information to test the effects of fatigue on knee function during a step-landing task.
To address these aims, 3 related experiments were conducted that collected kinetic, kinematic and electromyographic data from 3 separate samples of healthy male participants. The methodologies involved optoelectronic motion capture (VICON), isokinetic dynamometry (System3 Pro, BIODEX) and wireless surface electromyography (Zerowire, Aurion, Italy). Fatigue indicators and knee function measures used in each experiment were derived from the data. Study 1 compared the validity and reliability of repetitive stepping and isokinetic contractions with respect to fatigue of the quadriceps and hamstrings. Fifteen participants performed 50 repetitions of each exercise twice in randomised order, over 4 sessions. Sessions were separated by a minimum of 1 week’s rest to ensure full recovery. Validity and reliability depended on a complex interaction between the exercise protocol, the fatigue indicator, the individual and the muscle of interest. Nevertheless, differences between exercise protocols indicated that stepping was less effective than isokinetic exercise in eliciting valid and reliable changes in peak power and spectral compression. A key finding was that fatigue progressed in a biphasic pattern during both exercises. The point separating the 2 phases, known as the transition point, demonstrated superior between-test reliability during the isokinetic protocol compared with stepping. However, a correction factor should be used to accurately apply this technique to the study of fatigue during landing.
Study 2 examined alterations in knee function during repeated landings, with a different sample (N = 12) performing 60 consecutive step landing trials. Each landing trial was separated by a 1-minute rest period. The results provided new information in relation to the pre-post study design in the context of detecting adjustments in knee function during landing. First, participants significantly increased or decreased pre-impact muscle activity or post-impact mechanics despite environmental and task constraints remaining unchanged. This is the 1st study to demonstrate this effect in healthy individuals without external feedback on performance. Second, single-subject analysis was more effective in detecting alterations in knee function compared to group-level analysis. Finally, repeated landing trials did not reduce inter-trial variability of knee function in some participants, contrary to assumptions underpinning previous studies. The results of Studies 1 and 2 were used to modify the design of Study 3 relative to previous research. These alterations included a modified isokinetic fatigue protocol, multiple pre-fatigue measurements and single-subject analysis to detect fatigue-related changes in knee function. The study design incorporated new analytical approaches to investigate fatigue-related alterations in knee function during landing. Participants (N = 16) were measured during multiple pre-fatigue baseline trial blocks prior to the fatigue model. A final block of landing trials was recorded once the participant met the operational fatigue definition that was identified in Study 1. The analysis revealed that the effects of fatigue in this context are heavily dependent on the compensatory response of the individual. A continuum of responses was observed within the sample for each knee function measure. Overall, pre-impact preparation and post-impact mechanics of the knee were altered with highly individualised patterns.
Moreover, participants used a range of active or passive pre-impact strategies to adapt post-impact mechanics in response to quadriceps fatigue. The unique patterns identified in the data represented an optimisation of knee function based on the priorities of the individual. The findings of these studies explain the lack of consensus within the literature regarding the effects of fatigue on knee function during landing. First, functional fatigue protocols lack validity in inducing fatigue-related changes in mechanical output and spectral compression of surface electromyography (sEMG) signals, compared with isokinetic exercise. Second, fatigue-related changes in knee function during landing are confounded by inter-individual variation, which limits the sensitivity of group-level analysis. By addressing these limitations, the 3rd study demonstrated the efficacy of new experimental and analytical approaches for observing fatigue-related alterations in knee function during landing. Consequently, this thesis provides new perspectives on the effects of fatigue on knee function during landing. In conclusion:
• The effects of fatigue on knee function during landing depend on the response of the individual, with considerable variation present between study participants, despite similar physical characteristics.
• In healthy males, adaptation of pre-impact muscle activity and post-impact knee mechanics is unique to the individual and reflects their own optimisation of demands such as energy expenditure, joint stability, sensory information and loading of knee structures.
• The results of these studies should guide future exploration of adaptations in knee function to fatigue. However, research in this area should continue with reduced emphasis on the directional response of the population and a greater focus on individual adaptations of knee function.
Abstract:
OBJECTIVE: The accurate quantification of human diabetic neuropathy is important to define at-risk patients, anticipate deterioration, and assess new therapies. ---------- RESEARCH DESIGN AND METHODS: A total of 101 diabetic patients and 17 age-matched control subjects underwent neurological evaluation, neurophysiology tests, quantitative sensory testing, and evaluation of corneal sensation and corneal nerve morphology using corneal confocal microscopy (CCM). ---------- RESULTS: Corneal sensation decreased significantly (P = 0.0001) with increasing neuropathic severity and correlated with the neuropathy disability score (NDS) (r = 0.441, P < 0.0001). Corneal nerve fiber density (NFD) (P < 0.0001), nerve fiber length (NFL) (P < 0.0001), and nerve branch density (NBD) (P < 0.0001) decreased significantly with increasing neuropathic severity and correlated with NDS (NFD r = −0.475, P < 0.0001; NBD r = −0.511, P < 0.0001; and NFL r = −0.581, P < 0.0001). NBD and NFL demonstrated a significant and progressive reduction with worsening heat pain thresholds (P = 0.01). Receiver operating characteristic curve analysis for the diagnosis of neuropathy (NDS >3) defined an NFD of <27.8/mm2 with a sensitivity of 0.82 (95% CI 0.68–0.92) and specificity of 0.52 (0.40–0.64), and for detecting patients at risk of foot ulceration (NDS >6) defined an NFD cutoff of <20.8/mm2 with a sensitivity of 0.71 (0.42–0.92) and specificity of 0.64 (0.54–0.74). ---------- CONCLUSIONS: CCM is a noninvasive clinical technique that may be used to detect early nerve damage and stratify diabetic patients with increasing neuropathic severity. Established diabetic neuropathy leads to pain and foot ulceration. Detecting neuropathy early may allow intervention with treatments to slow or reverse this condition (1).
Recent studies suggested that small unmyelinated C-fibers are damaged early in diabetic neuropathy (2–4) but can only be detected using invasive procedures such as sural nerve biopsy (4,5) or skin-punch biopsy (6–8). Our studies have shown that corneal confocal microscopy (CCM) can identify early small nerve fiber damage and accurately quantify the severity of diabetic neuropathy (9–11). We have also shown that CCM relates to intraepidermal nerve fiber loss (12) and a reduction in corneal sensitivity (13) and detects early nerve fiber regeneration after pancreas transplantation (14). Recently we have also shown that CCM detects nerve fiber damage in patients with Fabry disease (15) and idiopathic small fiber neuropathy (16) when results of electrophysiology tests and quantitative sensory testing (QST) are normal. In this study we assessed corneal sensitivity and corneal nerve morphology using CCM in diabetic patients stratified for the severity of diabetic neuropathy using neurological evaluation, electrophysiology tests, and QST. This enabled us to compare CCM and corneal esthesiometry with established tests of diabetic neuropathy and define their sensitivity and specificity to detect diabetic patients with early neuropathy and those at risk of foot ulceration.
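The sensitivity and specificity figures quoted for the NFD cutoffs follow from a simple counting rule ("NFD below the cutoff is test-positive"); a minimal sketch with hypothetical data, not the study's patient records:

```python
def sens_spec(values, labels, cutoff):
    """Sensitivity and specificity of the rule 'value < cutoff => disease',
    mirroring the NFD threshold analysis (illustrative data only).
    labels: True where the reference standard (e.g. NDS > 3) is positive."""
    tp = sum(1 for v, d in zip(values, labels) if d and v < cutoff)
    fn = sum(1 for v, d in zip(values, labels) if d and v >= cutoff)
    tn = sum(1 for v, d in zip(values, labels) if not d and v >= cutoff)
    fp = sum(1 for v, d in zip(values, labels) if not d and v < cutoff)
    return tp / (tp + fn), tn / (tn + fp)
```

A full ROC analysis sweeps the cutoff over all observed values and picks the operating point trading off these two quantities.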
Abstract:
Background: A number of studies have examined the relationship between high ambient temperature and mortality. Recently, concern has arisen about whether this relationship is modified by socio-demographic factors. However, data for this type of study is relatively scarce in subtropical/tropical regions where people are well accustomed to warm temperatures. Objective: To investigate whether the relationship between daily mean temperature and daily all-cause mortality is modified by age, gender and socio-economic status (SES) in Brisbane, Australia. Methods: We obtained daily mean temperature and all-cause mortality data for Brisbane, Australia during 1996–2004. A generalised additive model was fitted to assess the percentage increase in all deaths with every one degree increment above the threshold temperature. Different age, gender and SES groups were included in the model as categorical variables and their modification effects were estimated separately. Results: A total of 53,316 non-external deaths were included during the study period. There was a clear increasing trend in the harmful effect of high temperature on mortality with age. The effect estimate among women was more than 20 times that among men. We did not find an SES effect on the percent increase associated with temperature. Conclusions: The effects of high temperature on all deaths were modified by age and gender but not by SES in Brisbane, Australia.
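In models of this kind, the coefficient on temperature above the threshold is typically reported as a percentage increase in deaths per degree; the helpers below sketch that conversion and the threshold term (a simplified assumption about the model form, since the paper's GAM specification is not reproduced here):

```python
import math

def percent_increase_per_degree(beta):
    """Percent increase in daily deaths per 1 degree C above the threshold,
    from a log-linear (e.g. Poisson/GAM) coefficient beta (sketch)."""
    return (math.exp(beta) - 1.0) * 100.0

def excess_temp(temps, threshold):
    """Threshold term used in such models: degrees above the threshold,
    zero below it."""
    return [max(0.0, t - threshold) for t in temps]
```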
Abstract:
While in many travel situations consumers have an almost limitless range of destinations to choose from, their actual decision set will usually comprise only between two and six destinations. One of the greatest challenges facing destination marketers is positioning their destination into consumer decision sets, against the myriad of competing places that offer similar features. Since positioning requires a narrow focus, marketing communications must present a succinct and meaningful proposition, the selection of which is often problematic for destination marketing organisations (DMOs), which deal with a diverse and often eclectic range of attributes in addition to numerous self-interested and demanding stakeholders. This paper reports the application of two qualitative techniques used to explore the range of cognitive attributes, consequences and personal values that represent potential positioning opportunities in the context of short break holidays. The Repertory Test is an effective technique for understanding the salient attributes used by a traveller to differentiate destinations, while Laddering Analysis enables the researcher to explore the smaller set of personal values guiding such decision making. A key finding of the research was that while individuals might vary in their repertoire of salient attributes, there was a commonality of shared consequences and values.
Abstract:
Process modeling is an emergent area of Information Systems research that is characterized by an abundance of conceptual work and little empirical research. To fill this gap, this paper reports on the development and validation of an instrument to measure user acceptance of process modeling grammars. We advance an extended model for a multi-stage measurement instrument development procedure, which incorporates feedback from both expert and user panels. We identify two main contributions: First, we provide a validated measurement instrument for the study of user acceptance of process modeling grammars, which can be used to assist in further empirical studies that investigate phenomena associated with the business process modeling domain. Second, in doing so, we describe in detail a procedural model for developing measurement instruments that ensures high levels of reliability and validity, which may assist fellow scholars in executing their empirical research.
Abstract:
It is important to understand how student performance is affected when higher education is delivered via new technology. Podcasting is a relatively recent technology gaining widespread use across the world. We present the results of a quasi-experimental research project which finds that when podcasts are used as a revision tool, student performance in Accounting improves. We highlight that aligning podcast use with pedagogical design is important and discuss constraints on, and barriers to, the use of podcasting in higher education.
Abstract:
The development and use of a virtual assessment tool for a signal processing unit is described. It allows students to take a test from anywhere using a web browser to connect to the university server that hosts the test. While student responses are of the multiple choice type, students have to work out problems to arrive at the answer to be entered. CGI programming is used to verify student identification information and record scores, as well as provide immediate feedback once the test is complete. The tool has been used at QUT for the past 3 years and student feedback is discussed. The virtual assessment tool is an efficient alternative to marking written assignment reports, which can often take a lecturer or tutor more hours than actual lecture-hall contact. It is especially attractive for the very large classes that are now the norm at many universities in the first two years.
Abstract:
Increasingly, large amounts of public and private money are being invested in education and as a result, schools are becoming more accountable to stakeholders for this financial input. In terms of the curriculum, governments worldwide are frequently tying school funding to students' and schools' academic performances, which are monitored through high-stakes testing programs. To accommodate the resultant pressures from these testing initiatives, many principals are re-focussing their school's curriculum on the testing requirements. Such a re-focussing, which was examined critically in this thesis, constituted an externally facilitated rapid approach to curriculum change. In line with previously enacted change theories and recommendations from these, curriculum change in schools has tended to be a fairly slow, considered, collaborative process that is facilitated internally by a deputy-principal (curriculum). However, theoretically based research has shown that such a process has often proved to be difficult and very rarely successful. The present study reports and theorises the experiences of an externally facilitated process that emerged from a practitioner model of change. This case study of the development of the controlled rapid approach to curriculum change began by establishing the reasons three principals initiated curriculum change and why they then engaged an outsider to facilitate the process. It also examined this particular change process from the perspectives of the research participants. The investigation led to the revision of the practitioner model as used in the three schools and challenged the current thinking about the process of school curriculum change. The thesis aims to offer principals and the wider education community an alternative model for consideration when undertaking curriculum change.
Finally, the thesis warns that, in the longer term, the application of the study's revised model (the Controlled Rapid Approach to Curriculum Change [CRACC] Model) may have less than desirable educational consequences.
Abstract:
Burnout has been identified as a significant factor in HIV/AIDS volunteering. It has been associated with depression, anxiety and the loss of volunteers from the health care delivery system. The aim of this study was to test the independence of the health and motivational processes hypothesized within the Job Demands – Resources model of burnout in HIV/AIDS volunteers. Participants were 307 HIV/AIDS volunteers from state AIDS Councils throughout Australia who completed self-report measures pertaining to role ambiguity and role conflict, social support, burnout, intrinsic and organizational satisfaction, and depression. Findings suggested that the independence of the dual processes hypothesized by the model was only partially supported. These findings provide a model for burnout which gives a framework for interventions at both the individual and organizational level which would contribute to the prevention of burnout, depression, and job dissatisfaction in HIV/AIDS volunteers.
Abstract:
Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness and a Brisbane-based trial of its use within the quality improvement context determined the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish validity and reliability of the CCI Tool, and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N = 498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scales (RCS) were compared using Spearman's rank order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents.
Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high care and low care groups (t199 = 10.77, p < 0.001), while the differences between male and female residents were not significant (t414 = 0.56, p = 0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to small sample size, enable users to make judgements about quality within and between facilities. Thus it is recommended the ResCareQA be adopted for wider use.
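The inter-rater and test-retest figures above are kappa statistics; as a minimal sketch of Cohen's kappa for two raters scoring the same items (illustrative only, not the study's computation pipeline):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters.
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(ratings_a)
    cats = set(ratings_a) | set(ratings_b)
    # observed proportion of agreement
    po = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    # agreement expected by chance from each rater's marginal frequencies
    pe = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```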
A Modified inverse integer Cholesky decorrelation method and the performance on ambiguity resolution
Abstract:
One of the research focuses in the integer least squares problem is the decorrelation technique used to reduce the number of integer parameter search candidates and improve the efficiency of the integer parameter search method. It remains a challenging issue for determining carrier phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, three main decorrelation techniques are employed: the integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be directly utilised as a performance measure, as it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always imply a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves the decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of decorrelated ambiguities was calculated for all the different methods to investigate the performance of ambiguity validation. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ an isotropic probabilistic model with a predetermined eigenvalue and without any geometry or weighting system constraints.
The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint respectively. The real data experiment scenarios involve both a single constellation case and a dual constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method can significantly improve the efficiency of reducing the condition number, by 78.65% and 97.78% in the single and dual constellation cases respectively. It also shows improvements in the number of search candidate points of 98.92% and 100% in the single and dual constellation cases.
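The condition-number criterion can be illustrated with a single textbook integer Gaussian decorrelation step on a 2x2 ambiguity covariance matrix; this is not the MIICD algorithm itself, just a minimal example of how an integer transformation reduces the condition number:

```python
import math

def cond_2x2(Q):
    """Condition number (ratio of extreme eigenvalues) of a symmetric,
    positive-definite 2x2 matrix."""
    a, b, c = Q[0][0], Q[0][1], Q[1][1]
    root = math.sqrt((a - c) ** 2 + 4 * b ** 2)
    lmax = (a + c + root) / 2
    lmin = (a + c - root) / 2
    return lmax / lmin

def gauss_decorrelate(Q):
    """One integer Gaussian decorrelation step: substitute a1' = a1 - mu*a2
    with mu = round(q12/q22), which shrinks the off-diagonal covariance
    while keeping the transformation integer and volume-preserving."""
    mu = round(Q[0][1] / Q[1][1])
    a = Q[0][0] - 2 * mu * Q[0][1] + mu * mu * Q[1][1]   # var(a1')
    b = Q[0][1] - mu * Q[1][1]                           # cov(a1', a2)
    return [[a, b], [b, Q[1][1]]]
```

For strongly correlated ambiguities the decorrelated matrix has a markedly smaller condition number, which is exactly the quantity the comparisons above are measured on.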