949 results for Performance degradation
Abstract:
Since 2007, the Kite Arts Education Program (KITE), based at the Queensland Performing Arts Centre (QPAC), has been engaged in delivering a series of theatre-based experiences for children in low socio-economic primary schools in Queensland. The artist-in-residence (AIR) project titled Yonder includes performances developed by the children, with the support and leadership of teacher artists from KITE, for their community and parents/carers, supported by a peak community cultural institution. In 2009, the Queensland Performing Arts Centre partnered with the Queensland University of Technology (QUT) Creative Industries Faculty (Drama) to conduct a three-year evaluation of the Yonder project to understand its operational dynamics, artistic outputs and educational benefits. This paper outlines the research findings for children engaged in the Yonder project in the interrelated areas of literacy development and social competencies. Findings are drawn from six iterations of the project in suburban locations on the edge of Brisbane city and in regional Queensland.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were built on the theory of the Proportional Hazards Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related, and more pressing, question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and their association with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data are sparse and their analysis often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified: extending the new parameter estimation method to time-dependent covariate effects and missing data, applying EHM to both repairable and non-repairable systems using field data, and developing a decision support model linked to the estimated reliability results.
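The covariate-based hazard class that the abstract reviews can be sketched numerically. The snippet below is a generic illustration of a PHM with a Weibull baseline hazard, not the EHM itself (whose formulation the abstract does not give); the shape/scale parameters and covariate coefficients are hypothetical.

```python
import math

def weibull_baseline_hazard(t, shape=2.0, scale=100.0):
    # Weibull baseline hazard: h0(t) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def covariate_hazard(t, covariates, coeffs, shape=2.0, scale=100.0):
    # PHM-style hazard: h(t | z) = h0(t) * exp(sum_i gamma_i * z_i),
    # where z_i are covariate values and gamma_i their coefficients.
    link = math.exp(sum(g * z for g, z in zip(coeffs, covariates)))
    return weibull_baseline_hazard(t, shape, scale) * link

def reliability(t, covariates, coeffs, shape=2.0, scale=100.0, steps=2000):
    # R(t) = exp(-integral_0^t h(u | z) du), trapezoidal approximation
    dt = t / steps
    cum = 0.0
    for i in range(steps):
        h0 = covariate_hazard(i * dt, covariates, coeffs, shape, scale)
        h1 = covariate_hazard((i + 1) * dt, covariates, coeffs, shape, scale)
        cum += 0.5 * (h0 + h1) * dt
    return math.exp(-cum)
```

A covariate acting as a failure accelerator (positive coefficient, positive value) lowers the reliability curve relative to the baseline; this is the "proportional" behaviour that EHM is said to relax.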
Abstract:
Adolescent idiopathic scoliosis is a complex three-dimensional deformity affecting 2-3% of the general population. Resulting spine deformities include progressive coronal curvature, hypokyphosis or frank lordosis in the thoracic spine, and vertebral rotation in the axial plane with posterior elements turned into the curve concavity. The potential for curve progression is heightened during the adolescent growth spurt. Success of scoliosis deformity correction depends on solid bony fusion between adjacent vertebrae after the intervertebral discs have been surgically cleared and the disc spaces filled with graft material. Problems with bone graft harvest site morbidity as well as limited bone availability have led to the search for bone graft substitutes. Recently, a bioactive and resorbable scaffold fabricated from medical grade polycaprolactone (PCL) has been developed for bone regeneration at load-bearing sites. Combined with recombinant human bone morphogenetic protein-2 (rhBMP-2), this has been shown to act successfully as a bone graft substitute in a porcine lumbar interbody fusion model when compared to autologous bone graft. This in vivo sheep study intends to evaluate the suitability of a custom-designed medical grade PCL scaffold in combination with rhBMP-2 as a bone graft substitute in the setting of mini-thoracotomy surgery, as a platform for ongoing research to benefit patients with adolescent idiopathic scoliosis.
Abstract:
The determination of performance standards and assessment practices in regard to student work placements is an essential and important task. Inappropriate, inadequate, or excessively complex assessment tasks can influence levels of student engagement and the quality of learning outcomes. Critical to determining appropriate standards and assessment tasks is an understanding and knowledge of key elements of the learning environment and the extent to which opportunities are provided for students to engage in critical reflection and judgement of their own performance in the contexts of the work environment. This paper focuses on the development of essential skills and knowledge (capabilities) that provide evidence of learning in work placements by describing an approach taken in the science and technology disciplines. Assessment matrices are presented to illustrate a method of assessment for use within the context of the learning environment centred on work placements in science and technology. This study contributes to the debate on the meaning of professional capability, performance standards and assessment practices in work placement programs by providing evidence of an approach that can be adapted by other programs to achieve similar benefits. The approach may also be valuable to other learning contexts where capability and performance are being judged in situations that are outside a controlled teaching and learning environment, i.e. in other life-wide learning contexts.
Abstract:
Return side streams from anaerobic digesters and dewatering facilities at wastewater treatment plants (WWTPs) contribute a significant proportion of the total nitrogen load on a mainstream process. Similarly, significant phosphate loads are also recirculated in biological nutrient removal (BNR) wastewater treatment plants. Ion exchange using a new material, known by the name MesoLite, shows strong potential for the removal of ammonia from these side streams and an opportunity to concurrently reduce phosphate levels. A pilot plant was designed and operated for several months on an ammonia-rich centrate from a dewatering centrifuge at the Oxley Creek WWTP, Brisbane, Australia. The system ran with a detention time on the order of one hour and operated for 12 to 24 hours prior to regeneration with a sodium-rich solution. The same pilot plant was used to demonstrate removal of phosphate from an abattoir wastewater stream at similar flow rates. Using MesoLite materials, >90% reduction of ammonia was achieved in the centrate side stream. A full-scale process would reduce the total nitrogen load at the Oxley Creek WWTP by at least 18%. This reduction in nitrogen load consequently improves the TKN/COD ratio of the influent and enhances the nitrogen removal performance of the biological nutrient removal process.
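The plant-wide effect of side-stream treatment follows from simple proportions. The sketch below uses hypothetical figures chosen to be consistent with the abstract's numbers (a side stream carrying 20% of the total nitrogen load, treated at 90% removal, yields the stated 18% plant-wide reduction); the actual load split at Oxley Creek is not given in the abstract.

```python
def plant_n_load_reduction(sidestream_fraction, removal_efficiency):
    # Plant-wide nitrogen load reduction from treating the side stream:
    # (fraction of total N load carried by the side stream) x (removal efficiency)
    return sidestream_fraction * removal_efficiency

# Hypothetical split: centrate carries 20% of total N load, 90% of it removed.
reduction = plant_n_load_reduction(0.20, 0.90)  # 0.18, i.e. an 18% reduction
```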
Abstract:
The challenge of persistent appearance-based navigation and mapping is to develop an autonomous robotic vision system that can simultaneously localize, map and navigate over the lifetime of the robot. However, the computation time and memory requirements of current appearance-based methods typically scale not only with the size of the environment but also with the operation time of the platform; also, repeated revisits to locations will develop multiple competing representations which reduce recall performance. In this paper we present a solution to the persistent localization, mapping and global path planning problem in the context of a delivery robot in an office environment over a one-week period. Using a graphical appearance-based SLAM algorithm, CAT-Graph, we demonstrate constant time and memory loop closure detection with minimal degradation during repeated revisits to locations, along with topological path planning that improves over time without using a global metric representation. We compare the localization performance of CAT-Graph to openFABMAP, an appearance-only SLAM algorithm, and the path planning performance to occupancy-grid based metric SLAM. We discuss the limitations of the algorithm with regard to environment change over time and illustrate how the topological graph representation can be coupled with local movement behaviors for persistent autonomous robot navigation.
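CAT-Graph's internals are not described in the abstract, but path planning over a topological graph, as opposed to a global metric map, can be sketched with a standard shortest-path search over places and learned traversal costs. The place names, graph, and edge weights below are entirely hypothetical.

```python
import heapq

def topological_shortest_path(graph, start, goal):
    # Dijkstra's algorithm over a topological place graph: nodes are places,
    # edge weights are traversal costs (e.g. learned from past traversals).
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if goal not in visited:
        return None  # goal unreachable in the current graph
    # Reconstruct the place sequence for local movement behaviours to follow.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))
```

Because costs attach to graph edges rather than a metric grid, revisits can refine the weights, which is one way planning can "improve over time" without a global metric representation.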
Abstract:
Stories by children’s writer Dr. Seuss have often been utilised as non-traditional narrative reflections regarding the issues of ethics and morality (Greenwood, 2000). Such case studies are viewed as effective teaching and learning tools due to the associated analytical and decision-making frameworks that are represented within the texts, and focus upon the exploration of universally general virtues and approaches to ethics (Hankes, 2012). Whilst Dr. Seuss did not create a story directly related to the sport, exercise or performance domains, many of his narratives possess psychological implications that are applicable in any situation that requires ethical consideration of the thinking and choices people make. The following exploration of the ‘ethical places you’ll go’ draws upon references to his work as a guide to navigating this interesting and sometimes challenging landscape for sport, exercise, and performance psychologists (SEPPs).
Abstract:
Sport and exercise psychologists are often sought after to apply their knowledge, skills and experience from a sporting context into other performance-related industries and endeavours. Over the past two decades, this has noticeably expanded out from a natural progression into the performing arts with other ‘typical’ performers (e.g., dancers, actors, musicians, singers) through to people who work in high pressure environments that consist of clear performance outputs and requirements that are usually linked to high impact consequences for non-achievement (e.g., lawyers, surgeons, executives, military personnel, safety professionals). Whilst these areas of application continue to increase in popularity and performance psychology is more readily recognised as an important factor in people performance across industries, the use of psychology within the performing arts continues to deepen and solidify its value as an essential and critical factor for success. This article focuses on the contribution of psychology to the performing arts that I have observed over more than 20 years – obtained through a variety of roles primarily within the dance sector including as performer, educator, health professional, researcher, commentator and senior leader.
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students vs. the top performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each of these. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
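The paper's conceptual model is not specified in detail in the abstract, but the core comparison of intended learning specification versus demonstrated performance can be sketched as follows. The topic names, Bloom-level mapping, item weights, and pass threshold below are all hypothetical illustrations, not the paper's actual scheme.

```python
# Bloom's Taxonomy as an ordinal mastery scale (per the abstract).
BLOOM = {"remember": 1, "understand": 2, "apply": 3,
         "analyse": 4, "evaluate": 5, "create": 6}

# Hypothetical mapping: each exam item targets one topic at one Bloom level.
items = [
    {"topic": "recursion", "level": "apply",      "max": 10},
    {"topic": "recursion", "level": "understand", "max": 5},
    {"topic": "sorting",   "level": "analyse",    "max": 10},
]
# Hypothetical intended learning specification per topic.
expected = {"recursion": "apply", "sorting": "analyse"}

def demonstrated_levels(items, grades, pass_mark=0.5):
    # Highest Bloom level per topic at which the student scored >= pass_mark.
    best = {}
    for item, grade in zip(items, grades):
        if grade / item["max"] >= pass_mark:
            lvl = BLOOM[item["level"]]
            best[item["topic"]] = max(best.get(item["topic"], 0), lvl)
    return best

def gaps(expected, demonstrated):
    # Topics where demonstrated mastery falls short of the intended level,
    # reported as (expected_level, demonstrated_level).
    return {t: (BLOOM[lvl], demonstrated.get(t, 0))
            for t, lvl in expected.items()
            if demonstrated.get(t, 0) < BLOOM[lvl]}
```

Aggregating such per-student gaps across a cohort is one way the spread between bare-passing and top-performing students could be surfaced against the learning specification.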
Abstract:
The current gold standard for the design of orthopaedic implants is 3D models of long bones obtained using computed tomography (CT). However, high-resolution CT imaging involves high radiation exposure, which limits its use in healthy human volunteers. Magnetic resonance imaging (MRI) is an attractive alternative for the scanning of healthy human volunteers for research purposes. Current limitations of MRI include difficulties of tissue segmentation within joints and long scanning times. In this work, we explore the possibility of overcoming these limitations through the use of MRI scanners operating at a higher field strength. We quantitatively compare the quality of anatomical MR images of long bones obtained at 1.5 T and 3 T and optimise the scanning protocol of 3 T MRI. FLASH images of the right leg of five human volunteers acquired at 1.5 T and 3 T were compared in terms of signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). The comparison showed a relatively high CNR and SNR at 3 T for most regions of the femur and tibia, with the exception of the distal diaphyseal region of the femur and the mid diaphyseal region of the tibia. This was accompanied by an ~65% increase in the longitudinal spin relaxation time (T1) of the muscle at 3 T compared to 1.5 T. The results suggest that MRI at 3 T may be able to enhance the segmentability and potentially improve the accuracy of 3D anatomical models of long bones, compared to 1.5 T. We discuss how the total imaging times at 3 T can be kept short while maximising the CNR and SNR of the images obtained.
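SNR and CNR can be computed from region-of-interest pixel intensities. The definitions below are the common ones (signal mean over noise standard deviation, and absolute difference of tissue means over noise standard deviation); the study's exact ROI placement and noise-estimation conventions may differ.

```python
import statistics

def snr(signal_pixels, noise_pixels):
    # Signal-to-noise ratio: mean intensity in the signal ROI divided by
    # the standard deviation of a background (noise-only) ROI.
    return statistics.mean(signal_pixels) / statistics.stdev(noise_pixels)

def cnr(tissue_a, tissue_b, noise_pixels):
    # Contrast-to-noise ratio: separability of two tissues (e.g. cortical
    # bone vs. muscle) relative to the background noise level.
    return (abs(statistics.mean(tissue_a) - statistics.mean(tissue_b))
            / statistics.stdev(noise_pixels))
```

Higher CNR between bone and surrounding soft tissue is what would make segmentation of the femur and tibia easier in the 3 T images.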
Abstract:
The dynamic capabilities view (DCV) focuses on renewal of firms’ strategic knowledge resources so as to sustain competitive advantage within turbulent markets. Within the context of the DCV, the focus of knowledge management (KM) is to develop the KMC through deploying knowledge governance mechanisms that are conducive to facilitating knowledge processes so as to produce superior business performance over time. The essence of KM performance evaluation is to assess how well the KMC is configured with knowledge governance mechanisms and processes that enable a firm to achieve superior performance through matching its knowledge base with market needs. However, little research has been undertaken to evaluate KM performance from the DCV perspective. This study employed a survey study design and adopted hypothesis-testing approaches to develop a capability-based KM evaluation framework (CKMEF) that upholds the basic assertions of the DCV. Under the governance of the framework, a KM index (KMI) and a KM maturity model (KMMM) were derived not only to indicate the extent to which a firm’s KM implementations fulfil its strategic objectives and to identify the evolutionary phase of its KMC, but also to benchmark the KMC against the research population. The research design ensured that the evaluation framework and instruments have statistical significance and good generalizability to be applied in the research population, namely construction firms operating in the dynamic Hong Kong construction market. The study demonstrated the feasibility of quantitatively evaluating the development of the KMC and revealing the performance heterogeneity associated with the development.
Abstract:
In recent times, light gauge steel frame (LSF) wall systems have been increasingly used in the building industry. They are usually made of cold-formed and thin-walled steel studs that are fire-protected by two layers of plasterboard on both sides. A composite LSF wall panel system was developed recently, where an insulation layer was used externally between the two plasterboards to improve the fire performance of LSF wall panels. In this research, finite element thermal models of the new composite panels were developed using a finite element program, SAFIR, to simulate their thermal performance under both standard and Eurocode design fire curves. Suitable apparent thermal properties of both the gypsum plasterboard and insulation materials were proposed and used in the numerical models. The developed models were then validated by comparing their results with available standard fire test results of composite panels. This paper presents the details of the finite element models of composite panels, the thermal analysis results in the form of time-temperature profiles under standard and Eurocode design fire curves, and their comparisons with fire test results. Effects of using rockwool, glass fibre and cellulose fibre insulations with varying thickness and density were also investigated, and the results are presented in this paper. The results show that the use of composite panels in LSF wall systems will improve their fire rating, and that Eurocode design fires are likely to cause more severe damage to LSF walls than standard fires.
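The fire exposures driving such thermal analyses are defined by nominal time-temperature curves. The sketch below implements two of the nominal curves from EN 1991-1-2 (the ISO 834-type standard curve and the hydrocarbon curve); the parametric "Eurocode design fires" used in the paper additionally depend on compartment properties not given in the abstract.

```python
import math

def standard_fire_curve(t_minutes):
    # Standard (ISO 834-type) fire curve, EN 1991-1-2:
    # T = 20 + 345 * log10(8t + 1), t in minutes, T in degrees C.
    return 20.0 + 345.0 * math.log10(8.0 * t_minutes + 1.0)

def hydrocarbon_fire_curve(t_minutes):
    # Hydrocarbon fire curve, EN 1991-1-2:
    # T = 20 + 1080 * (1 - 0.325*exp(-0.167t) - 0.675*exp(-2.5t)).
    return 20.0 + 1080.0 * (1.0 - 0.325 * math.exp(-0.167 * t_minutes)
                            - 0.675 * math.exp(-2.5 * t_minutes))
```

These curves supply the furnace-side boundary condition of a wall model; the plasterboard and insulation layers then govern how slowly the temperature rise reaches the steel studs.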