Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
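The threshold logic of the EBD can be conveyed with a toy sliding-window sketch. Everything here is an illustrative assumption: the window length, the simple mean-plus-k-sigma threshold, and the function name are not from the thesis, which derives its threshold from a statistical model of the disturbance energy fitted to the power system under consideration.

```python
import numpy as np

def energy_change_detector(x, win, n_baseline, k=4.0):
    """Illustrative energy-based detector: flag sliding windows whose
    energy exceeds a threshold set from baseline window statistics.
    (Assumed mean + k*std rule; the thesis uses a fitted statistical model.)"""
    n_win = len(x) // win
    e = np.array([np.sum(x[i * win:(i + 1) * win] ** 2) for i in range(n_win)])
    mu = e[:n_baseline].mean()
    sd = e[:n_baseline].std(ddof=1)
    thresh = mu + k * sd
    return e, thresh, np.flatnonzero(e > thresh)
```

A lightly damped mode decays slowly, so its windowed energy stands far above the baseline noise-floor energy, which is what the alarm exploits.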
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.
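The whiteness test at the heart of the KID can be sketched as a periodogram check: under the white-noise null, each normalised spectral bin is approximately chi-squared with two degrees of freedom (i.e. exponential with mean equal to the innovation variance), so a per-bin tail quantile gives an alarm threshold. The function name, the Bonferroni correction, and the test parameters are assumptions for illustration; the Kalman model itself is application-specific and not reproduced.

```python
import numpy as np

def innovation_whiteness_alarm(innov, alpha=0.01):
    """Sketch of a chi-squared whiteness test on the innovation spectrum.
    Each periodogram bin of white noise is ~ Exp(mean = variance), so bins
    exceeding an exponential tail quantile flag a model change, and the
    exceeding bin index points at the responsible modal frequency."""
    n = len(innov)
    P = np.abs(np.fft.rfft(innov - innov.mean())) ** 2 / n
    sigma2 = innov.var()
    alpha_bin = alpha / len(P)            # Bonferroni over spectral bins
    thresh = -sigma2 * np.log(alpha_bin)  # exponential (chi-squared_2) quantile
    return P, thresh, np.flatnonzero(P > thresh)
```

When the innovation stays white, essentially no bin crosses the threshold; when a modal deterioration colours the innovation, the offending frequency bin crosses it strongly.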
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
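A minimal discrete form of the cubic phase function conveys the idea: correlate the symmetric product z[n+m]·z[n-m] against quadratic-phase kernels and locate the peak over the kernel parameter, which estimates the second derivative of the signal phase at time n. The indices, lag limit, and search grid below are illustrative choices, not the thesis's generalised polynomial phase construction.

```python
import numpy as np

def cubic_phase_function(z, n, omegas, max_lag):
    """Discrete cubic phase (CP) function sketch: for a polynomial-phase
    signal, the product z[n+m]*z[n-m] has phase 2*phi(n) + phi''(n)*m^2
    (plus higher-order terms), so |CP| peaks at omega = phi''(n)."""
    m = np.arange(max_lag)
    prod = z[n + m] * z[n - m]
    return np.array([np.abs(np.sum(prod * np.exp(-1j * w * m ** 2)))
                     for w in omegas])
```

For a purely quadratic phase the peak location recovers phi'' exactly, which is what makes sudden frequency-slope changes visible.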
Abstract:
Reliability analysis has several important engineering applications. Designers and operators of equipment are often interested in the probability of the equipment operating successfully to a given age - this probability is known as the equipment's reliability at that age. Reliability information is also important to those charged with maintaining an item of equipment, as it enables them to model and evaluate alternative maintenance policies for the equipment. In each case, information on failures and survivals of a typical sample of items is used to estimate the required probabilities as a function of the item's age, this process being one of many applications of the statistical techniques known as distribution fitting. In most engineering applications, the estimation procedure must deal with samples containing survivors (suspensions or censorings); this thesis focuses on several graphical estimation methods that are widely used for analysing such samples. Although these methods have been current for many years, they share a common shortcoming: none of them is continuously sensitive to changes in the ages of the suspensions, and we show that the resulting reliability estimates are therefore more pessimistic than necessary. We use a simple example to show that the existing graphical methods take no account of any service recorded by suspensions beyond their respective previous failures, and that this behaviour is inconsistent with one's intuitive expectations. In the course of this thesis, we demonstrate that the existing methods are only justified under restricted conditions. We present several improved methods and demonstrate that each of them overcomes the problem described above, while reducing to one of the existing methods where this is justified. Each of the improved methods thus provides a realistic set of reliability estimates for general (unrestricted) censored samples. Several related variations on these improved methods are also presented and justified. 
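For context, one standard estimator for censored samples (not necessarily one of the specific graphical methods the thesis analyses or improves) is the Kaplan-Meier product-limit estimator, sketched below. Note how a suspension reduces the at-risk count without contributing a failure step; whether the suspension's recorded age beyond the previous failure is fully exploited is exactly the kind of behaviour the thesis examines.

```python
import numpy as np

def product_limit_reliability(ages, is_failure):
    """Kaplan-Meier product-limit reliability estimate for a sample with
    suspensions. At each failure age the reliability is multiplied by the
    fraction of at-risk items surviving that failure; suspensions only
    shrink the at-risk count."""
    order = np.argsort(ages)
    at_risk = len(ages)
    R = 1.0
    steps = []  # (age, reliability after that failure)
    for i in order:
        if is_failure[i]:
            R *= (at_risk - 1) / at_risk
            steps.append((float(ages[i]), R))
        at_risk -= 1
    return steps
```

For a sample of four items with a suspension at age 3, the suspension changes the denominator of later factors (1/2 instead of 1/3 at the next failure), which is how survivors make the estimate less pessimistic.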
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
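As background on optimal reflection coefficients: for a stationary input they follow from the Levinson-Durbin recursion on the autocorrelation sequence. The sketch below is textbook material, not the thesis's derivation for frequency modulated signals; names and the sign convention are illustrative.

```python
import numpy as np

def reflection_coefficients(r, order):
    """Levinson-Durbin recursion: from autocorrelations r[0..order],
    return the lattice reflection coefficients k_1..k_order of the
    optimal linear predictor (sign convention: k = -acc/E)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    E = r[0]          # prediction error power
    ks = []
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / E
        ks.append(k)
        a_new = a.copy()
        for i in range(1, m):
            a_new[i] = a[i] + k * a[m - i]
        a_new[m] = k
        a = a_new
        E *= (1.0 - k * k)
    return ks
```

For an AR(1) input the recursion yields a single non-zero reflection coefficient, illustrating why the lattice orders beyond the process order carry no information.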
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which produces desirable results for finite-variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to the use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is only investigated using extensive computer simulations.
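The flavour of a least-mean p-norm update can be suggested with a normalised transversal (non-lattice) AR(1) estimator. Everything below is an illustrative stand-in: Student-t noise replaces alpha-stable noise, the step size and p are arbitrary, and the thesis's lattice recursions are not reproduced. The key ingredient is the error nonlinearity |e|^(p-1)·sign(e) arising from the minimum dispersion criterion with p below the stable index alpha.

```python
import numpy as np

def nlmp_ar1(x, mu=0.02, p=1.2, eps=1e-6):
    """Normalised least-mean p-norm (LMP) sketch for estimating an AR(1)
    coefficient. The fractional-power error nonlinearity tames impulsive
    samples that would destabilise an ordinary (p = 2) LMS update."""
    w = 0.0
    for n in range(1, len(x)):
        e = x[n] - w * x[n - 1]
        g = (abs(e) ** (p - 1)) * np.sign(e)   # minimum-dispersion gradient
        w += mu * g * x[n - 1] / (abs(x[n - 1]) ** p + eps)
    return w
```

With p = 2 and no normalisation this reduces to plain LMS, whose step explodes on impulsive samples; the fractional power keeps each update bounded.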
Abstract:
Despite recent developments in fixed-film combined biological nutrient removal (BNR) technology, fixed-film systems (i.e., biofilters) are still at the early stages of development and their application has been limited to a few laboratory-scale experiments. Achieving enhanced biological phosphorus removal in fixed-film systems requires exposing the micro-organisms and the waste stream to alternating anaerobic/aerobic or anaerobic/anoxic conditions in cycles. The concept of cycle duration (CD) as a process control parameter is unique to fixed-film BNR systems, has not been previously investigated, and can be used to optimise the performance of such systems. The CD refers to the elapsed time before the biomass is re-exposed to the same environmental conditions in cycles. Fixed-film systems offer many advantages over suspended growth systems, such as reduced operating costs, simplicity of operation, absence of sludge recycling problems, and compactness. Controlling nutrient discharges to water bodies improves water quality and fish production, and allows water reuse. The main objective of this study was to develop a fundamental understanding of the effect of CD on the transformations of nutrients in fixed-film biofilter systems subjected to alternating aeration/no-aeration cycles. A fixed-film biofilter system consisting of three up-flow biofilters connected in series was developed and tested. The first and third biofilters were operated in a cyclic mode in which the biomass was subjected to aeration/no-aeration cycles. The influent wastewater was simulated aquaculture wastewater whose composition was based on actual water quality parameters of aquaculture wastewater from a prawn grow-out facility. The influent contained 8.5-9.3 mg/L ammonia-N, 8.5-8.7 mg/L phosphate-P, and 45-50 mg/L acetate.
Two independent studies were conducted at two biofiltration rates to evaluate and confirm the effect of CD on nutrient transformations in the biofilter system for application in aquaculture. A third study was conducted to enhance denitrification in the system using an external carbon source at a rate varying from 0 to 24 mL/min. The CD was varied in the range of 0.25-120 hours for the first two studies and fixed at 12 hours for the third study. This study identified the CD as an important process control parameter that can be used to optimise the performance of full-scale fixed-film systems for BNR, which represents a novel contribution in this field of research. The CD resulted in environmental conditions that inhibited or enhanced nutrient transformations. The effect of CD on BNR in fixed-film systems, in terms of phosphorus biomass saturation and depletion, has been established. Short CDs did not permit the establishment of anaerobic activity in the un-aerated biofilter and thus inhibited phosphorus release. Long CDs resulted in extended anaerobic activity and thus in active phosphorus release. Long CDs, however, resulted in depleting the biomass phosphorus reservoir in the releasing biofilter and saturating the biomass phosphorus reservoir in the up-taking biofilter in the cycle. This phosphorus biomass saturation/depletion phenomenon imposes a practical limit on how short or long the CD can be. The length of the CD should be set just before saturation or depletion occurs; for the system and biofiltration rates tested, the optimal CD was 12 hours. The system achieved limited net phosphorus removal due to the limited sludge wasting and lack of external carbon supply during phosphorus uptake. The phosphorus saturation and depletion reflected the need to extract phosphorus from the phosphorus-rich micro-organisms, for example, through back-washing.
The major challenges of achieving phosphorus removal in the system included: (1) overcoming the deterioration in the performance of the system during the transition period following the start of each new cycle; and (2) wasting excess phosphorus-saturated biomass following the aeration cycle. Denitrification occurred in poorly aerated sections of the third biofilter and generally declined as the CD increased and as time progressed in the individual cycle. Denitrification and phosphorus uptake were supplied by an internal organic carbon source, and the addition of an external carbon source (acetate) to the third biofilter improved the denitrification efficiency of the system from 18.4% without supplemental carbon to 88.7% when the carbon dose reached 24 mL/min. The removal of TOC and nitrification improved as the CD increased, as a result of the reduction in the frequency of transition periods between the cycles. A conceptual design of an effective fixed-film BNR biofilter system for the treatment of the influent simulated aquaculture wastewater was proposed based on the findings of the study.
Abstract:
During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling.
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
Abstract:
This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL to FSK demodulation is also considered.
This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviours of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behaviour of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and have proved efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
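The basic idea of IF estimation from peaks in the time-frequency domain can be illustrated with a plain spectrogram, standing in here for the adaptive T-class distributions (which offer better energy concentration around the IF). Window length, hop size, and function names are illustrative assumptions.

```python
import numpy as np

def if_estimate_spectrogram(z, win=128, hop=32):
    """IF estimation by peak-picking a windowed spectrum (normalised
    frequency units). For each window, the frequency bin with maximum
    energy is taken as the IF estimate at the window centre."""
    w = np.hanning(win)
    centers, fhat = [], []
    for s in range(0, len(z) - win + 1, hop):
        S = np.abs(np.fft.fft(z[s:s + win] * w))
        fhat.append(np.argmax(S) / win)   # peak bin -> normalised frequency
        centers.append(s + win // 2)
    return np.array(centers), np.array(fhat)
```

The resolution is limited by the bin spacing 1/win and by chirp smearing within each window, which is precisely the trade-off that better-concentrated TFDs improve on.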
Abstract:
Environmental education is a field which has only come of age since the late nineteen sixties. While its content and practice have been widely debated and researched, its leadership has been minimally studied and, therefore, is only partially understood. The role of mentoring in the development of leaders has been alluded to, but has attracted scant research. Therefore, this study explores the importance of mentoring during the personal and professional development of leaders in environmental education. Four major research questions were investigated. Firstly, have leaders been mentored during their involvement with environmental education? Secondly, when and how has that mentoring taken place? Thirdly, what was the personal and professional effectiveness of the mentoring relationship? Fourthly, is there any continuation of the mentoring process which might be appropriate for professional development within the field of environmental education? Leaders were solicited from a broad field of environmental educators including teachers, administrators, academics, natural resource personnel, and business and community persons. They had to be recognized as active leaders across several environmental education networks. The research elicited qualitative and quantitative survey data from fifty-seven persons in Queensland, Australia and Colorado, USA. Seventeen semi-structured interviews were subsequently conducted with selected leaders who had nominated their mentors. This led to a further thirteen 'linked interviews' with some of the mentors' mentors and new mentorees. The interview data is presented as four cases reflecting pairs, triads, chains and webs of relationships - a major finding of the research process. The analysis of the data from the interviews and the surveys was conducted according to a grounded theory approach and was facilitated by NUD.IST, a computer program for non-numerical text analysis.
The findings of the study revealed many variations on the classical mentoring patterns found in the literature. Gender and age were not seen as important factors, as there were examples of contemporaries in age, older men to younger women, older women to younger men, and women to women. Personal compatibility, professional respect and philosophical congruence were critical. Mentoring was initiated from early, mid and late career stages, with the average length of the relationship being fourteen years. There was seldom an example of the mentoree using the mentor for hierarchical career climbing, although frequent career changes were made. However, leadership actions were found to increase after the intervention of a mentoring relationship. Three major categories of informal mentoring were revealed - perceived, acknowledged and deliberate. Further analysis led to the evolution of the core concept, a 'cascade of influence'. The major finding of this study was that this sample of leaders, mentors and new mentorees moved from the perception of having been mentored to the acknowledgment of these relationships and an affirmation of their efficacy for both personal and professional growth. Hence, the participants were more likely to continue future mentoring, not as a serendipitous happening, but through a deliberate choice. Heightened awareness and more frequent 'cascading' of mentoring have positive implications for the professional development of future leaders in environmental education in both formal and informal settings. Effective mentoring in environmental education does not seek to create 'clones' of the mentors, but rather to foster the development of autonomous mentorees who share a philosophical grounding. It is a deliberate invitation to 'join the clan'.
Abstract:
Two perceptions of the marginality of home economics are widespread across educational and other contexts. One is that home economics and those who engage in its pedagogy are inevitably marginalised within patriarchal relations in education and culture. This is because home economics is characterised as women's knowledge, for the private domain of the home. The other perception is that only orthodox epistemological frameworks of inquiry should be used to interrogate this state of affairs. These perceptions have prompted leading theorists in the field to call for non-essentialist approaches to research in order to re-think the thinking that has produced this cul-de-sac positioning of home economics as a body of knowledge and a site of teacher practice. This thesis takes up the challenge of working to locate a space outside the frame of modernist research theory and methods, recognising that this shift in epistemology is necessary to unsettle the idea that home economics is inevitably marginalised. The purpose of the study is to reconfigure how we have come to think about home economics teachers and the profession of home economics as a site of cultural practice, in order to think it otherwise (Lather, 1991). This is done by exploring how the culture of home economics is being contested from within. To do so, the thesis uses a 'posthumanist' approach, which rejects the conception of the individual as a unitary and fixed entity, viewing the individual instead as a subject in process, shaped by desires and language which are not necessarily consciously determined. This posthumanist project focuses attention on pedagogical body subjects as the 'unsaid' of home economics research. It works to transcend the modernist dualism of mind/body, and other binaries central to modernist work, including private/public, male/female, paid/unpaid, and valued/unvalued. In so doing, it refuses the simple margin/centre geometry so characteristic of current perceptions of home economics itself.
Three studies make up this work. Studies one and two serve to document the disciplined body of home economics knowledge, the governance of which works towards normalisation of the 'proper' home economics teacher. The analysis of these accounts of home economics teachers by home economics teachers reveals that home economics teachers are 'skilled' yet they 'suffer' for their profession. Further, home economics knowledge is seen to be complicit in reinforcing the traditional roles of masculinity and femininity, thereby reinforcing heterosexual normativity which is central to patriarchal society. The third study looks to four 'atypical' subjects who defy the category of 'proper' and 'normal' home economics teacher. These 'atypical' bodies are 'skilled' but fiercely reject the label of 'suffering'. The discussion of the studies is a feminist poststructural account, using Russo's (1994) notion of the grotesque body, which is emergent from Bakhtin's (1968) theory of the carnivalesque. It draws on the 'shreds' of home economics pedagogy, scrutinising them for their subversive, transformative potential. In this analysis, the giving and taking of pleasure and fun in the home economics classroom presents moments of surprise and of carnival. Foucault's notion of the construction of the ethical individual shows these 'atypical' bodies to be 'immoderate' yet striving hard to be 'continent' body subjects. This research captures moments of transgression which suggest that transformative moments are already embodied in the pedagogical practices of home economics teachers, and these can be 'seen' when re-looking through postmodernist lenses. Hence, the cultural practices of home economics as inevitably marginalised are being contested from within. Until now, home economics as a lived culture has failed to recognise possibilities for reconstructing its own field beyond the confines of modernity.
This research is an example of how to think about home economics teachers and the profession as a reconfigured cultural practice. Future research about home economics as a body of knowledge and a site of teacher practice need not retell a simple story of oppression. Using postmodernist epistemologies is one way to provide opportunities for new ways of looking.
Abstract:
This thesis is the result of an investigation of a Queensland example of curriculum reform based on outcomes, a type of reform common to many parts of the world during the last decade. The purpose of the investigation was to determine the impact of outcomes on teacher perspectives of professional practice. The focus was chosen to permit investigation not only of changes in behaviour resulting from the reform but also of teachers' attitudes and beliefs developed during implementation. The study is based on qualitative methodology, chosen because of its suitability for the investigation of attitudes and perspectives. The study exploits the researcher's opportunities for prolonged, direct contact with groups of teachers through the selection of an over-arching ethnography approach, an approach designed to capture the holistic nature of the reform and to contextualise the data within a broad perspective. The selection of grounded theory as a basis for data analysis reflects the open nature of this inquiry and demonstrates the study's constructivist assumptions about the production of knowledge. The study also constitutes a multi-site case study by virtue of the choice of three individual school sites as objects to be studied and to form the basis of the report. Three primary school sites administered by Brisbane Catholic Education were chosen as the focus of data collection. Data were collected from three school sites as teachers engaged in the first year of implementation of Student Performance Standards, the Queensland version of English outcomes based on the current English syllabus. Teachers' experience of outcomes-driven curriculum reform was studied by means of group interviews conducted at individual school sites over a period of fourteen months, researcher observations and the collection of artefacts such as report cards. Analysis of data followed grounded theory guidelines based on a system of coding. 
Though classification systems were not generated prior to data analysis, the labelling of categories called on standard, non-idiosyncratic terminology and analytic frames and concepts from existing literature wherever practicable in order to permit possible comparisons with other related research. Data from school sites were examined individually and then combined to determine teacher understandings of the reform, changes that have been made to practice and teacher responses to these changes in terms of their perspectives of professionalism. Teachers in the study understood the reform as primarily an accountability mechanism. Though teachers demonstrated some acceptance of the intentions of the reform, their responses to its conceptualisation, supporting documentation and implications for changing work practices were generally characterised by reduced confidence, anger and frustration. Though the impact of outcomes-based curriculum reform must be interpreted through the inter-relationships of a broad range of elements which comprise teachers' work and their attitudes towards their work, it is proposed that the substantive findings of the study can be understood in terms of four broad themes. First, when the conceptual design of outcomes did not serve teachers' accountability requirements and outcomes were perceived to be expressed in unfamiliar technical language, most teachers in the study lost faith in the value of the reform and lost confidence in their own abilities to understand or implement it. Second, this reduction of confidence was intensified when the scope of outcomes was outside the scope of the teachers' existing curriculum and assessment planning and teachers were confronted with the necessity to include aspects of syllabuses or school programs which they had previously omitted because of a lack of understanding or appreciation. The corollary was that outcomes promoted greater syllabus fidelity when frameworks were closely aligned. 
Third, other benefits the teachers associated with outcomes included the development of whole school curriculum resources and greater opportunity for teacher collaboration, particularly among schools. The teachers, however, considered a wide range of factors when determining the overall impact of the reform, and perceived a number of them in terms of the costs of implementation. These included the emergence of ethical dilemmas concerning relationships with students, colleagues and parents; reduced individual autonomy, particularly with regard to the selection of valued curriculum content; and intensification of workload with the capacity to erode the relationships with students which teachers strongly associated with the rewards of their profession. Finally, in banding together at the school level to resist aspects of implementation, some teachers showed growing awareness of a collective authority capable of being exercised in response to top-down reform. These findings imply that Student Performance Standards require review and additional implementation resourcing to support teachers through times of reduced confidence in their own abilities. Outcomes prove an effective means of high-fidelity syllabus implementation, and, provided they are expressed in an accessible way and aligned with syllabus frameworks and terminology, should be considered for inclusion in future syllabuses across a range of learning areas. The study also identifies a range of unintended consequences of outcomes-based curriculum and acknowledges the complexity of relationships among all the aspects of teachers' work. It also notes that the impact of reform on teacher perspectives of professional practice may alter teacher-teacher and school-system relationships in ways that have the potential to influence the effectiveness of future curriculum reform.
Abstract:
The primary purpose of this research was to examine individual differences in learning from worked examples. By integrating cognitive style theory and cognitive load theory, it was hypothesised that an interaction existed between individual cognitive style and the structure and presentation of worked examples in their effect upon subsequent student problem solving. In particular, it was hypothesised that Analytic-Verbalisers, Analytic-Imagers, and Wholist-Imagers would perform better on a posttest after learning from structured-pictorial worked examples than after learning from unstructured worked examples. For Analytic-Verbalisers it was reasoned that the cognitive effort required to impose structure on unstructured worked examples would hinder learning. Alternatively, it was expected that Wholist-Verbalisers would display superior performances after learning from unstructured worked examples than after learning from structured-pictorial worked examples. The images of the structured-pictorial format, incongruent with the Wholist-Verbaliser style, would be expected to split attention between the text and the diagrams. The information contained in the images would also be a source of redundancy and not easily ignored in the integrated structured-pictorial format. Despite a number of authors having emphasised the need to include individual differences as a fundamental component of problem solving within domain-specific subjects such as mathematics, few studies have attempted to investigate a relationship between mathematical or science instructional method, cognitive style, and problem solving. Cognitive style theory proposes that the structure and presentation of learning material is likely to affect each of the four cognitive styles differently. No study could be found which has used Riding's (1997) model of cognitive style as a framework for examining the interaction between the structural presentation of worked examples and an individual's cognitive style.
269 Year 12 Mathematics B students from five urban and rural secondary schools in Queensland, Australia participated in the main study. A factorial (three treatments by four cognitive styles) between-subjects multivariate analysis of variance indicated a statistically significant interaction. As the difficulty of the posttest components increased, the empirical evidence supporting the research hypotheses became more pronounced. The rigour of the study's theoretical framework was further tested by the construction of a measure of instructional efficiency, based on an index of cognitive load, and the construction of a measure of problem-solving efficiency, based on problem-solving time. The consistent empirical evidence within this study that learning from worked examples is affected by an interaction of cognitive style and the structure and presentation of the worked examples emphasises the need to consider individual differences among senior secondary mathematics students to enhance educational opportunities. Implications for teaching and learning are discussed and recommendations for further research are outlined.
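The abstract does not specify how its instructional-efficiency measure was constructed beyond it being "based on an index of cognitive load". One widely used formulation in the cognitive-load literature (Paas and van Merriënboer, 1993) combines standardised performance and mental-effort scores; a minimal sketch of that formulation, with fabricated data rather than the study's own, is:

```python
from statistics import mean, stdev

def z_scores(xs):
    """Standardise a list of raw scores to zero mean, unit variance."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def instructional_efficiency(performance, effort):
    """Paas & van Merrienboer (1993) efficiency: E = (zP - zE) / sqrt(2).
    Positive E indicates high performance for relatively low invested effort."""
    zp, ze = z_scores(performance), z_scores(effort)
    return [(p - e) / 2 ** 0.5 for p, e in zip(zp, ze)]

# Hypothetical posttest scores and self-reported mental-effort ratings
perf = [55, 70, 80, 90, 65]
effort = [6, 5, 3, 4, 7]
E = instructional_efficiency(perf, effort)
```

Because both inputs are standardised, the efficiencies sum to zero across the sample; conditions (or cognitive-style groups) are then compared on their mean E.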
Abstract:
Literally, the word compliance suggests conformity in fulfilling official requirements. The thesis presents the results of the analysis and design of a class of protocols called compliant cryptologic protocols (CCP). The thesis presents a notion for compliance in cryptosystems that is conducive as a cryptologic goal. CCP are employed in security systems used by at least two mutually mistrusting sets of entities. The individuals in the sets of entities only trust the design of the security system and any trusted third party the security system may include. Such a security system can be thought of as a broker between the mistrusting sets of entities. In order to provide confidence in operation for the mistrusting sets of entities, CCP must provide compliance verification mechanisms. These mechanisms are employed either by all the entities or a set of authorised entities in the system to verify the compliance of the behaviour of various participating entities with the rules of the system. It is often stated that confidentiality, integrity and authentication are the primary interests of cryptology. It is evident from the literature that authentication mechanisms employ confidentiality and integrity services to achieve their goal. Therefore, the fundamental services that any cryptographic algorithm may provide are confidentiality and integrity only. Since controlling the behaviour of the entities is not a feasible cryptologic goal, the verification of the confidentiality of any data is a futile cryptologic exercise. For example, there exists no cryptologic mechanism that would prevent an entity from willingly or unwillingly exposing its private key corresponding to a certified public key. The confidentiality of the data can only be assumed. Therefore, any verification in cryptologic protocols must take the form of integrity verification mechanisms. Thus, compliance verification must take the form of integrity verification in cryptologic protocols.
A definition of compliance that is conducive as a cryptologic goal is presented as a guarantee on the confidentiality and integrity services. The definitions are employed to provide a classification mechanism for various message formats in a cryptologic protocol. The classification assists in the characterisation of protocols, which assists in providing a focus for the goals of the research. The resulting concrete goal of the research is the study of those protocols that employ message formats to provide restricted confidentiality and universal integrity services to selected data. The thesis proposes an informal technique to understand, analyse and synthesise the integrity goals of a protocol system. The thesis contains a study of key recovery, electronic cash, peer-review, electronic auction, and electronic voting protocols. All these protocols contain message formats that provide restricted confidentiality and universal integrity services to selected data. The study of key recovery systems aims to achieve robust key recovery relying only on the certification procedure and without the need for tamper-resistant system modules. The result of this study is a new technique for the design of key recovery systems called hybrid key escrow. The thesis identifies a class of compliant cryptologic protocols called secure selection protocols (SSP). The uniqueness of this class of protocols is the similarity in the goals of the member protocols, namely peer-review, electronic auction and electronic voting. The problem statement describing the goals of these protocols contains a tuple, (I, D), where I usually refers to an identity of a participant and D usually refers to the data selected by the participant. SSP are interested in providing confidentiality service to the tuple for hiding the relationship between I and D, and integrity service to the tuple after its formation to prevent the modification of the tuple.
The thesis provides a schema to solve the instances of SSP by employing electronic cash technology. The thesis makes a distinction between electronic cash technology and electronic payment technology. It treats electronic cash technology as a certification mechanism that allows the participants to obtain a certificate on their public key, without revealing the certificate or the public key to the certifier. The thesis abstracts the certificate and the public key as the data structure called an anonymous token. It proposes design schemes for the peer-review, e-auction and e-voting protocols by employing the schema with the anonymous token abstraction. The thesis concludes by providing a variety of problem statements for future research that would further enrich the literature.
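The anonymous-token idea described above, obtaining a certificate on a value without the certifier ever seeing that value, is classically realised with a blind signature. A toy RSA blinding sketch illustrates the mechanism; the textbook-sized parameters are for illustration only and are utterly insecure:

```python
# Toy RSA blind signature: the requester obtains the certifier's
# signature on a message without revealing the message to the certifier.
# Classic textbook key (n = 61 * 53); illustrative only, not secure.
n, e, d = 3233, 17, 2753          # modulus, public exponent, private exponent

m = 1234                           # message, e.g. a digest of the requester's public key
r = 7                              # blinding factor; must satisfy gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n          # requester blinds the message
signed_blinded = pow(blinded, d, n)       # certifier signs without seeing m
r_inv = pow(r, -1, n)                     # modular inverse of the blinding factor
signature = (signed_blinded * r_inv) % n  # requester unblinds: m^d mod n

# Anyone holding the public key can verify the token
assert pow(signature, e, n) == m
```

The unblinding works because (m * r^e)^d = m^d * r (mod n), so multiplying by r's inverse leaves exactly the ordinary RSA signature on m.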
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is, "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes Organizational-Impact and Individual-Impact dimensions; the "quality" half includes System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employs perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that 'says what is', base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?"
The study identifies the main attributes of analytic theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along the two main dimensions - rigor and relevance. Those Analytic Theory attributes associated with the 'rigor' of the IS-Impact model, namely completeness, mutual exclusivity, parsimony and appropriate hierarchy, have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), attention to relevance has seldom been given the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory 'relevance' attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a design science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves), and instantiations in the form of management information (IS-Impact data organised and presented for management decision making). In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study aims to also evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?"
The study employs a longitudinal design entailing three surveys over 4 years (the 1st involving secondary data) of the Oracle-Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian Universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the 2nd survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of Universities. Aligned with the track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions, "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach; and also thereby motivating continuing and further research on the validity of IS-Impact, and research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of analytic theory attributes.
Abstract:
Radioactive wastes are by-products of the use of radiation technologies. As with many technologies, the wastes are required to be disposed of in a safe manner so as to minimise risk to human health. This study examines the requirements for a hypothetical repository and develops techniques for decision making to permit the establishment of a shallow ground burial facility to receive an inventory of low-level radioactive wastes. Australia’s overall inventory is used as an example. Essential and desirable siting criteria are developed and applied to Australia's Northern Territory resulting in the selection of three candidate sites for laboratory investigations into soil behaviour. The essential quantifiable factors which govern radionuclide migration and ultimately influence radiation doses following facility closure are reviewed. Simplified batch and column procedures were developed to enable laboratory determination of distribution and retardation coefficient values for use in one-dimensional advection-dispersion transport equations. Batch and column experiments were conducted with Australian soils sampled from the three identified candidate sites using a radionuclide representative of the current national low-level radioactive waste inventory. The experimental results are discussed and site soil performance compared. The experimental results are subsequently used to compare the relative radiation health risks between each of the three sites investigated. A recommendation is made as to the preferred site to construct an engineered near-surface burial facility to receive the Australian low-level radioactive waste inventory.
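The distribution coefficients measured in such batch experiments feed into the one-dimensional advection-dispersion transport equation via the retardation coefficient. In the standard linear-sorption formulation, R = 1 + (ρ_b/θ)·K_d, and the contaminant front migrates at the pore-water velocity divided by R. A minimal sketch with illustrative values (the soil parameters and K_d values below are hypothetical, not the study's measurements):

```python
def retardation(bulk_density, porosity, kd):
    """Linear-sorption retardation coefficient R = 1 + (rho_b / theta) * Kd.
    bulk_density in g/cm^3, porosity dimensionless, kd in mL/g (= cm^3/g)."""
    return 1.0 + (bulk_density / porosity) * kd

# Hypothetical distribution coefficients for three candidate-site soils
sites = {"site_A": 25.0, "site_B": 80.0, "site_C": 5.0}  # Kd in mL/g
rho_b, theta = 1.6, 0.35                                  # g/cm^3, dimensionless

v = 0.10  # illustrative pore-water velocity, m/yr
# Retarded front velocity at each site: higher Kd -> larger R -> slower migration
front_velocity = {s: v / retardation(rho_b, theta, kd) for s, kd in sites.items()}
```

Comparing sites on this basis directly supports the kind of relative radiation-health-risk ranking the study performs: the soil with the largest retardation coefficient delays radionuclide breakthrough the longest.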
Abstract:
Patterns of connectivity among local populations influence the dynamics of regional systems, but most ecological models have concentrated on explaining the effect of connectivity on local population structure using dynamic processes covering short spatial and temporal scales. In this study, a model was developed in an extended spatial system to examine the hypothesis that long term connectivity levels among local populations are influenced by the spatial distribution of resources and other habitat factors. The habitat heterogeneity model was applied to local wild rabbit populations in the semi-arid Mitchell region of southern central Queensland (the Eastern system). Species-specific population parameters appropriate for the rabbit in this region were used. The model predicted a wide range of long term connectivity levels among sites, ranging from the extreme isolation of some sites to relatively high interaction probabilities for others. The validity of model assumptions was assessed by regressing model output against independent population genetic data, and explained over 80% of the variation in the highly structured genetic data set. Furthermore, the model was robust, explaining a significant proportion of the variation in the genetic data over a wide range of parameters. The performance of the habitat heterogeneity model was further assessed by simulating the widely reported recent range expansion of the wild rabbit into the Mitchell region from the adjacent, panmictic Western rabbit population system. The model explained well the independently determined genetic characteristics of the Eastern system at different hierarchic levels, from site specific differences (for example, fixation of a single allele in the population at one site), to differences between population systems (absence of an allele in the Eastern system which is present in all Western system sites).
The model therefore explained the past and long term processes which have led to the formation and maintenance of the highly structured Eastern rabbit population system. Most animals exhibit sex biased dispersal which may influence long term connectivity levels among local populations, and thus the dynamics of regional systems. When appropriate sex specific dispersal characteristics were used, the habitat heterogeneity model predicted substantially different interaction patterns between female-only and combined male and female dispersal scenarios. In the latter case, model output was validated using data from a bi-parentally inherited genetic marker. Again, the model explained over 80% of the variation in the genetic data. The fact that such a large proportion of variability is explained in two genetic data sets provides very good evidence that habitat heterogeneity influences long term connectivity levels among local rabbit populations in the Mitchell region for both males and females. The habitat heterogeneity model thus provides a powerful approach for understanding the large scale processes that shape regional population systems in general. Therefore the model has the potential to be useful as a tool to aid in the management of those systems, whether it be for pest management or conservation purposes.
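The validation step described above, regressing model output against independent genetic data and reporting the variance explained, amounts to computing R² from an ordinary least-squares fit. A minimal sketch with fabricated data (the study's actual connectivity and genetic values are not reproduced here):

```python
def r_squared(x, y):
    """Coefficient of determination for a simple least-squares fit of y on x.
    For simple linear regression, R^2 equals the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

# Fabricated example: model-predicted interaction probabilities between
# site pairs vs an independent genetic-similarity measure for those pairs
model_connectivity = [0.05, 0.10, 0.30, 0.55, 0.70, 0.90]
genetic_similarity = [0.12, 0.18, 0.35, 0.50, 0.75, 0.88]
r2 = r_squared(model_connectivity, genetic_similarity)
```

An R² above 0.8 on such a regression is the kind of evidence the study cites for the habitat heterogeneity hypothesis: most of the structure in the genetic data is accounted for by the predicted connectivity alone.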
Abstract:
Prostate cancer is an important male health issue. The strategies used to diagnose and treat prostate cancer underscore the cell and molecular interactions that promote disease progression. Prostate cancer is histologically defined by increasingly undifferentiated tumour cells and therapeutically targeted by androgen ablation. Even as the normal glandular architecture of the adult prostate is lost, prostate cancer cells remain dependent on the androgen receptor (AR) for growth and survival. This project focused on androgen-regulated gene expression, altered cellular differentiation, and the nexus between these two concepts. The AR controls prostate development, homeostasis and cancer progression by regulating the expression of downstream genes. Kallikrein-related serine peptidases are prominent transcriptional targets of AR in the adult prostate. Kallikrein 3 (KLK3), which is commonly referred to as prostate-specific antigen, is the current serum biomarker for prostate cancer. Other kallikreins are potential adjunct biomarkers. As secreted proteases, kallikreins act through enzyme cascades that may modulate the prostate cancer microenvironment. Both as a panel of biomarkers and cascade of proteases, the roles of kallikreins are interconnected. Yet the expression and regulation of different kallikreins in prostate cancer has not been compared. In this study, a spectrum of prostate cell lines was used to evaluate the expression profile of all 15 members of the kallikrein family. A cluster of genes was co-ordinately expressed in androgen-responsive cell lines. This group of kallikreins included KLK2, 3, 4 and 15, which are located adjacent to one another at the centromeric end of the kallikrein locus. KLK14 was also of interest, because it was ubiquitously expressed among the prostate cell lines. Immunohistochemistry showed that these 5 kallikreins are co-expressed in benign and malignant prostate tissue.
The androgen-regulated expression of KLK2 and KLK3 is well-characterised, but has not been compared with other kallikreins. Therefore, KLK2, 3, 4, 14 and 15 expression were all measured in time course and dose response experiments with androgens, AR-antagonist treatments, hormone deprivation experiments and cells transfected with AR siRNA. Collectively, these experiments demonstrated that prostatic kallikreins are specifically and directly regulated by the AR. The data also revealed that kallikrein genes are differentially regulated by androgens; KLK2 and KLK3 were strongly up-regulated, KLK4 and KLK15 were modestly up-regulated, and KLK14 was repressed. Notably, KLK14 is located at the telomeric end of the kallikrein locus, far away from the centromeric cluster of kallikreins that are stimulated by androgens. These results show that the expression of KLK2, 3, 4, 14 and 15 is maintained in prostate cancer, but that these genes exhibit different responses to androgens. This makes the kallikrein locus an ideal model to investigate AR signalling. The increasingly dedifferentiated phenotype of aggressive prostate cancer cells is accompanied by the re-expression of signalling molecules that are usually expressed during embryogenesis and foetal tissue development. The Wnt pathway is one developmental cascade that is reactivated in prostate cancer. The canonical Wnt cascade regulates the intracellular levels of β-catenin, a potent transcriptional co-activator of T-cell factor (TCF) transcription factors. Notably, β-catenin can also bind to the AR and synergistically stimulate androgen-mediated gene expression. This is at the expense of typical Wnt/TCF target genes, because the AR:β-catenin and TCF:β-catenin interactions are mutually exclusive. The effect of β-catenin on kallikrein expression was examined to further investigate the role of β-catenin in prostate cancer. 
Stable knockdown of β-catenin in LNCaP prostate cancer cells attenuated the androgen-regulated expression of KLK2, 3, 4 and 15, but not KLK14. To test whether KLK14 is instead a TCF:β-catenin target gene, the endogenous levels of β-catenin were increased by inhibiting its degradation. Although KLK14 expression was up-regulated by these treatments, siRNA knockdown of β-catenin demonstrated that this effect was independent of β-catenin. These results show that β-catenin is required for maximal expression of KLK2, 3, 4 and 15, but not KLK14. Developmental cells and tumour cells express a similar repertoire of signalling molecules, which means that these different cell types are responsive to one another. Previous reports have shown that stem cells and foetal tissues can reprogram aggressive cancer cells to less aggressive phenotypes by restoring the balance to developmental signalling pathways that are highly dysregulated in cancer. To investigate this phenomenon in prostate cancer, DU145 and PC-3 prostate cancer cells were cultured on matrices pre-conditioned with human embryonic stem cells (hESCs). Soft agar assays showed that prostate cancer cells exposed to hESC conditioned matrices had reduced clonogenicity compared with cells harvested from control matrices. A recent study demonstrated that this effect was partially due to hESC-derived Lefty, an antagonist of Nodal. A member of the transforming growth factor β (TGFβ) superfamily, Nodal regulates embryogenesis and is re-expressed in cancer. The role of Nodal in prostate cancer has not previously been reported. Therefore, the expression and function of the Nodal signalling pathway in prostate cancer was investigated. Western blots confirmed that Nodal is expressed in DU145 and PC-3 cells. Immunohistochemistry revealed greater expression of Nodal in malignant versus benign glands. Notably, the Nodal inhibitor, Lefty, was not expressed at the mRNA level in any prostate cell lines tested. 
The Nodal signalling pathway is functionally active in prostate cancer cells. Recombinant Nodal treatments triggered downstream phosphorylation of Smad2 in DU145 and LNCaP cells, and stably-transfected Nodal increased the clonogenicity of LNCaP cells. Nodal was also found to modulate AR signalling. Nodal reduced the activity of an androgen-regulated KLK3 promoter construct in luciferase assays and attenuated the endogenous expression of AR target genes including prostatic kallikreins. These results demonstrate that Nodal is a novel example of a developmental signalling molecule that is re-expressed in prostate cancer and may have a functional role in prostate cancer progression. In summary, this project clarifies the role of androgens and changing cellular differentiation in prostate cancer by characterising the expression and function of the downstream genes encoding kallikrein-related serine proteases and Nodal. Furthermore, this study emphasises the similarities between prostate cancer and early development, and the crosstalk between developmental signalling pathways and the AR axis. The outcomes of this project also affirm the utility of the kallikrein locus as a model system to monitor tumour progression and the phenotype of prostate cancer cells.