616 results for difficult brands
Abstract:
This case study exemplifies a ‘writing movement’ currently spreading across various parts of Australia with the support of social media. A concept that emerged from the café scene in San Francisco, ‘Shut Up and Write!’ is a meetup group that brings writers together at a specific time and place to write side by side, thus making writing practice social. The concept has been applied to the academic environment, and our case study explores the positive outcomes in two locations: RMIT University and QUT. We believe that this informal learning practice can be implemented to assist research students in developing academic skills. Research students spend the majority of their time outside of formal learning environments. Doctoral candidates enter their degree with a range of experience, knowledge and needs, making it difficult to provide writing assistance in a structured manner. Using a less structured approach to provide writing assistance has been trialled with promising results (Boud, Cohen, & Sampson, 2001; Stracke, 2010; Devenish et al., 2009). Although semi-structured approaches have been developed and examined, informal learning opportunities have received minimal attention. The primary difference between Shut Up and Write! and other writing practices is that individuals do not engage in any structured activity and do not share the outcomes of their writing. The purpose of Shut Up and Write! is to transform writing practice from a solitary experience into a social one. Shut Up and Write! typically takes place outside of formal learning environments, in public spaces such as cafés. The structure of the sessions is simple: participants meet at a specific time and place, chat for a few minutes, and then Shut Up and Write for a predetermined amount of time. Critical to the success of the sessions is that there is no critiquing of the writing, and no competition or formal exercises. Our case study examines the experience of two meetup groups, at RMIT University and QUT, through narrative accounts from participants. These accounts reveal that participants have learned:
• Writing/productivity techniques;
• Social/cloud software;
• Aspects of the PhD; and
• ‘Mundane’ dimensions of academic practice.
In addition, activities such as Shut Up and Write! promote peer-to-peer bonding, knowledge exchange, and informal learning within the higher degree research experience. This case study extends the initial work presented by the authors, in collaboration with Dr. Inger Mewburn, at QPR2012 (Quality in Postgraduate Research Conference, 2012).
Abstract:
Providing help for research degree writing within a formal structure is difficult because research students come into their degree with widely varying needs and levels of experience. Providing writing assistance within a less structured learning context is an approach that has been trialled in higher education with promising results (Boud, Cohen & Sampson, 2001; Stracke, 2010; Devenish et al., 2009). While semi-structured approaches have been the subject of study, little attention has been paid to the processes of informal learning that exist within doctoral education. In this paper we explore a 'writing movement' which has started to be taken up at various locations in Australia through the auspices of social media (Twitter and Facebook). 'Shut Up and Write' is a concept first used in the café scene in San Francisco, where writers converge at a specific time and place and write together, without showing each other the outcomes, temporarily transforming writing from a solitary practice into a social one. In this paper we compare the experience of facilitating Shut Up and Write sessions in two locations: RMIT University and Queensland University of Technology. The authors describe the set-up and functioning of the different groups and report on feedback from regular participants, both physical and virtual. We suggest that informal learning practices can be exploited to assist research students to orientate themselves to the university environment and share vital technical skills, with very minimal input from academic staff. This experience suggests there is untapped potential within these kinds of activities to promote learning within the research degree experience that is sustainable and builds a stronger sense of community.
Abstract:
Research background: Communicating the diverse nature of multimodal practice is inherently difficult for the design-led research academic. Websites are an effective means of displaying images and text, but for the user/viewer the act of viewing is often random and disorienting, due to the non-linear means of accessing the information. This characteristic of websites limits the medium’s efficacy in presenting an overarching philosophical standpoint or theme, the key driver behind most academic research. Research contribution: This website, http://www.ianweirarchitect.com, offers a means of resolving this problem through a deceptively simple graphic and temporal layout, which limits the opportunity for the user/viewer to become disoriented and miss the key themes and issues that bind the otherwise divergent research material together. Research significance: http://www.ianweirarchitect.com is a creative work that supplements Dr Ian Weir’s exhibition “Enacted Cartography”, held in August 2012 in Brisbane and in August/September 2012 in Venice, Italy, for the 13th International Architecture Exhibition (Venice Architecture Biennale). Dr Weir was selected by the Australian Institute of Architects to represent innovation in architectural practice in the Institute’s exhibition and catalogue Formations: New Practices in Australian Architecture, held in the Australian Pavilion, The Giardini, Venice. The website is a creative output that complements Dr Weir’s other multimodal outputs, including photographic artworks, cartographic maps and architectural designs.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability prediction. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). Nevertheless, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; they therefore update and revise the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component. Operating environment indicators in this model are failure accelerators and/or decelerators: they are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nought in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and the hazard of an asset. Furthermore, the proportionality assumption from which most covariate-based hazard models suffer does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
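As a rough illustration of the structure just described, one plausible form of the hazard is sketched below. This is our reading only: the notation, and in particular the PHM-style exponential link for the covariate function, are assumptions rather than the thesis's own formulation.

```latex
% Sketch of the EHM hazard structure described above (illustrative only).
% Z(t): condition indicators (response variables, folded into the baseline);
% E(t): operating environment indicators (explanatory covariates);
% the exponential link is an assumed, PHM-style choice.
\[
h\bigl(t \mid \mathbf{Z}(t), \mathbf{E}(t)\bigr)
  = h_0\bigl(t, \mathbf{Z}(t)\bigr)\,
    \exp\!\bigl(\boldsymbol{\gamma}^{\top}\mathbf{E}(t)\bigr)
\]
```

On this reading, setting E(t) = 0 recovers the baseline h_0(t, Z(t)), consistent with the statement above that operating environment effects may be nought while condition indicators persist.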
Abstract:
Background subtraction is a fundamental low-level processing task in numerous computer vision applications. The vast majority of algorithms process images on a pixel-by-pixel basis, where an independent decision is made for each pixel. A general limitation of such processing is that rich contextual information is not taken into account. We propose a block-based method capable of dealing with noise, illumination variations, and dynamic backgrounds, while still obtaining smooth contours of foreground objects. Specifically, image sequences are analyzed on an overlapping block-by-block basis. A low-dimensional texture descriptor obtained from each block is passed through an adaptive classifier cascade, where each stage handles a distinct problem. A probabilistic foreground mask generation approach then exploits block overlaps to integrate interim block-level decisions into final pixel-level foreground segmentation. Unlike many pixel-based methods, ad-hoc postprocessing of foreground masks is not required. Experiments on the difficult Wallflower and I2R datasets show that the proposed approach obtains on average better results (both qualitatively and quantitatively) than several prominent methods. We furthermore propose the use of tracking performance as an unbiased approach for assessing the practical usefulness of foreground segmentation methods, and show that the proposed approach leads to considerable improvements in tracking accuracy on the CAVIAR dataset.
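To make the overlap-integration idea concrete, here is a minimal sketch (a toy illustration, not the authors' implementation; `classify_block`, the block size, and the step are assumed placeholders standing in for the texture-descriptor and classifier-cascade stages):

```python
# Minimal sketch of the overlap-integration step described above: each
# overlapping block casts a foreground probability, and per-pixel averaging
# of the overlapping votes yields a smooth probabilistic mask without
# ad-hoc postprocessing.
import numpy as np

def integrate_block_decisions(frame, classify_block, block=16, step=8, thresh=0.5):
    """classify_block: any callable mapping a (block x block) patch to a
    foreground probability in [0, 1]; it abstracts the descriptor
    extraction and classifier cascade of the actual pipeline."""
    h, w = frame.shape[:2]
    votes = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            p = classify_block(frame[y:y + block, x:x + block])
            votes[y:y + block, x:x + block] += p
            counts[y:y + block, x:x + block] += 1
    prob = votes / np.maximum(counts, 1)  # per-pixel foreground probability
    return prob > thresh                  # final binary foreground mask

# Toy usage: a brightness-based block "classifier" for demonstration only.
frame = np.random.rand(120, 160)
mask = integrate_block_decisions(frame, lambda b: float(b.mean() > 0.6))
```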
Abstract:
The field of Arts-Health practice and research has grown exponentially in the past 30 years. While researchers are using applied arts as a subject of investigation, the evaluation of practice and participant benefits has received limited general attention. In recent years, the field has witnessed a growing concentration on the evaluation of health outcomes, outputs and tangential benefits for participants engaging in Arts-Health practice. The wide range of methodological approaches that applied arts practitioners implement makes the field difficult to define. This article introduces the term Arts-Health intersections as a model of practice and a framework to promote consistency in design, implementation and evaluative processes in applied arts programmes promoting health outcomes. The article challenges the current trend to evaluate solely health outcomes in the field, and promotes a concurrent, multidisciplinary methodological approach that can be adopted to promote evaluation, consistency and best practice in the field of Arts-Health intersections. The article provides a theoretical overview of Arts-Health intersections, then builds on this theoretical platform to detail a best model of practice for developing Arts-Health intersections, presenting this model as a guide.
Abstract:
The effort to make schools more inclusive, together with the pressure to retain students until the end of secondary school, has greatly increased both the number and the educational requirements of students enrolling in their local school. Of critical concern, despite years of research and improvements in policy, pedagogy and educational knowledge, is the enduring categorisation and marginalisation of students with diverse abilities. Research has shown that it can be difficult for schools to negotiate away from the pressure to categorise or diagnose such students, particularly those with challenging behaviour. In this paper, we highlight instances where some schools have responded to increasing diversity by developing new cultural practices to engage both staff and students, in some cases decreasing suspensions while improving retention, behaviour and performance.
Abstract:
Climate change and land use pressures are making environmental monitoring increasingly important. As environmental health is degrading at an alarming rate, ecologists have tried to tackle the problem by monitoring the composition and condition of the environment. However, traditional monitoring methods relying on experts are manual and expensive; to address this issue, government organisations designed a simpler and faster surrogate-based assessment technique for consultants, landholders and ordinary citizens. Yet the technique remains complex, subjective and error-prone, which makes the collected data difficult to interpret and compare. In this paper we describe a work-in-progress mobile application designed to address these shortcomings through the use of augmented reality and multimedia smartphone technology.
Abstract:
Adolescent idiopathic scoliosis (AIS) is a three-dimensional spinal deformity involving side-to-side curvature of the spine in the coronal plane and axial rotation of the vertebrae in the transverse plane. For patients with a severe or rapidly progressing deformity, corrective instrumented fusion surgery is performed. The wide choice of implants and the large variability between patients make it difficult for surgeons to choose optimal treatment strategies. This paper describes the patient-specific finite element modelling techniques employed and the results of preliminary analyses predicting the surgical outcomes for a series of AIS patients. The report highlights the importance of not only patient-specific anatomy and material parameters, but also patient-specific data on the clinical and physiological loading conditions experienced by the patient undergoing corrective scoliosis surgery.
Abstract:
Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry's technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry's services to be offered through cloud-based “apps.”
Abstract:
Several major human pathogens, including the filoviruses, paramyxoviruses, and rhabdoviruses, package their single-stranded RNA genomes within helical nucleocapsids, which bud through the plasma membrane of the infected cell to release enveloped virions. The virions are often heterogeneous in shape, which makes it difficult to study their structure and assembly mechanisms. We have applied cryo-electron tomography and sub-tomogram averaging methods to derive structures of Marburg virus, a highly pathogenic filovirus, both after release and during assembly within infected cells. The data demonstrate the potential of cryo-electron tomography methods to derive detailed structural information for intermediate steps in biological pathways within intact cells. We describe the location and arrangement of the viral proteins within the virion. We show that the N-terminal domain of the nucleoprotein contains the minimal assembly determinants for a helical nucleocapsid with a variable number of proteins per turn. Lobes protruding from alternate interfaces between each nucleoprotein are formed by the C-terminal domain of the nucleoprotein, together with viral proteins VP24 and VP35. Each nucleoprotein packages six RNA bases. The nucleocapsid interacts in an unusual, flexible "Velcro-like" manner with the viral matrix protein VP40. Determination of the structures of assembly intermediates showed that the nucleocapsid has a defined orientation during transport and budding. Together the data show striking architectural homology between the nucleocapsid helix of rhabdoviruses and filoviruses, but unexpected, fundamental differences in the mechanisms by which the nucleocapsids are then assembled together with matrix proteins and initiate membrane envelopment to release infectious virions, suggesting that the viruses have evolved different solutions to these conserved assembly steps.
Abstract:
Historically, perceptions about mathematics and how it is taught and learned in schools have been mixed and, as a consequence, have an influence on self-efficacy. There are those of us who see mathematics as a logical and enjoyable subject to learn, whilst others see mathematics as irrelevant, difficult and contributing to their school failure. Research has shown that Aboriginal and Torres Strait Islander, low-SES and ESL students are over-represented in the latter group. These students are the focus of the YuMi Deadly Centre (YDC) professional learning and research work at the Queensland University of Technology in Brisbane.
Abstract:
There is global competition for engineering talent, with some industries struggling to attract quality candidates. The ‘brands’ of industries and organisations are important elements in attracting talent in a competitive environment. Using brand equity and signalling theory, this paper reports a quantitative study examining the factors that attract graduating engineers and technicians to engineering careers in an industry with a weak brand profile. The survey measures graduating engineers’ preferences for career benefits and their perceptions of the rail industry, which has identified a significant skilled labour shortfall. Knowledge of young engineers’ preferences for certain benefits, and segmentation of those preferences, can inform branding and communications strategies. The findings have implications for all industries and organisations, especially those with a weaker brand profile and difficulty attracting talent.
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms. Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
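As a hedged sketch of the final combination step described above (an illustration only, not MCDTK's actual code; `mu_to_dose` is an assumed calibration factor linking monitor units to absolute dose), per-beam Monte Carlo dose grids might be weighted and summed like this:

```python
# Sketch of combining per-beam Monte Carlo dose grids according to the
# monitor units from the exported plan (illustrative, not MCDTK itself).
import numpy as np

def combine_beam_doses(beam_doses, monitor_units, mu_to_dose=1.0):
    """beam_doses: list of 3D numpy arrays (e.g. dose per simulated particle);
    monitor_units: list of MU values taken from the treatment plan;
    mu_to_dose: assumed calibration factor from MU to absolute dose."""
    total = np.zeros_like(beam_doses[0])
    for dose, mu in zip(beam_doses, monitor_units):
        total += dose * mu * mu_to_dose  # weight each beam by its MU
    return total  # combined 3D dose distribution for the whole plan

# Toy usage: two beams on a small grid.
beams = [np.random.rand(4, 4, 4), np.random.rand(4, 4, 4)]
plan_dose = combine_beam_doses(beams, monitor_units=[120.0, 80.0])
```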
Abstract:
Changing environments present a number of challenges to mobile robots, one of the most significant being mapping and localisation. This problem is particularly significant in vision-based systems, where illumination and weather changes can cause feature-based techniques to fail. In many applications only sections of an environment undergo extreme perceptual change. Some range-based sensor mapping approaches exploit this property by combining occasional place recognition with the assumption that odometry is accurate over short periods of time. In this paper, we develop this idea in the visual domain, by using occasional vision-driven loop closures to infer loop closures in nearby locations where visual recognition is difficult due to extreme change. We demonstrate successful map creation in an environment in which change is significant but constrained to one area, and in which both the vanilla CAT-Graph and a Sum of Absolute Differences matcher fail; we use the described techniques to link dissimilar images from matching locations, and test the robustness of the system against false inferences.
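A minimal sketch of the inference step described here (our simplification, not the paper's CAT-Graph machinery; it assumes both traversals are sampled at comparable spatial intervals, and `window` is an assumed tuning parameter):

```python
# Sketch of propagating one vision-confirmed loop closure to nearby frames,
# trusting that odometry is accurate over short distances even where the
# appearance has changed too much for visual matching (illustrative only).
def infer_nearby_closures(anchor, window=5, len_a=None, len_b=None):
    """anchor: (i, j), a vision-confirmed match between frame i of the first
    traversal and frame j of the second; window: how many frames either side
    to trust short-term odometry (an assumed tuning parameter)."""
    i, j = anchor
    inferred = []
    for k in range(-window, window + 1):
        if k == 0:
            continue  # the anchor itself is a visual, not inferred, closure
        a, b = i + k, j + k
        if a >= 0 and b >= 0 and (len_a is None or a < len_a) and (len_b is None or b < len_b):
            inferred.append((a, b))
    return inferred

# Toy usage: a single confirmed match at frames (42, 317) yields inferred
# closures for the surrounding, perceptually changed section.
print(infer_nearby_closures((42, 317), window=3))
```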