349 results for Context-aware applications
Abstract:
This paper presents a robust place recognition algorithm for mobile robots. The proposed framework combines nonlinear dimensionality reduction, nonlinear regression under noise, and variational Bayesian learning to create consistent probabilistic representations of places from images. These generative models are learnt from a few images and used for multi-class place recognition, where classification is computed from a set of feature vectors. Recognition can be performed in near real time and accounts for complexities such as changes in illumination, occlusion and blurring. The algorithm was tested with a mobile robot in indoor and outdoor environments, with sequences of 1579 and 3820 images respectively. The framework has several potential applications, such as map building, autonomous navigation, search-and-rescue tasks and context recognition.
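The abstract's generative models rely on nonlinear dimensionality reduction and variational Bayesian learning; the sketch below illustrates only the final step, classifying a set of feature vectors against per-place probabilistic models, using a much simpler diagonal-Gaussian model. All function names and data here are invented for illustration and are not from the paper.

```python
import numpy as np

def fit_place_model(features):
    """Fit a diagonal Gaussian 'place model' to low-dimensional feature vectors."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6   # regularise to avoid zero variance
    return mu, var

def log_likelihood(x, model):
    """Per-vector log-likelihood under a diagonal Gaussian."""
    mu, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

def classify_place(feature_set, models):
    """Sum log-likelihoods over the whole set of feature vectors and
    return the index of the most probable place."""
    scores = [log_likelihood(feature_set, m).sum() for m in models]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
place_a = fit_place_model(rng.normal(0.0, 1.0, size=(40, 5)))
place_b = fit_place_model(rng.normal(4.0, 1.0, size=(40, 5)))
query = rng.normal(4.0, 1.0, size=(10, 5))   # a set of features from place B
print(classify_place(query, [place_a, place_b]))  # → 1
```

Classifying from a *set* of vectors, as in the paper, makes the decision far more robust than any single-vector classification, since individual outliers (e.g. from occlusion or blur) are averaged out by the summed likelihoods.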
Abstract:
Alternative dispute resolution (ADR) processes are entrenched in Western-style legal systems. Forms of dispute resolution are utilised within schools and health systems; built into commercial contracts; found in workplaces, clubs and organisations; and accepted in general day-to-day community disputes. The ADR literature includes references to ‘apology’ but is largely silent on ‘forgiveness’. Where an apology is offered as part of a dispute resolution process, practice suggests that formalised ‘forgiveness’ rarely follows. Mediators may agree there is a meaningful place for apology in dispute resolution processes, but are most unlikely to support a view that forgiveness, as a conscious act, has an equivalent place. Yet, if forgiveness is not limited to the ‘pardoning of an offence’ but includes a ‘giving up of resentment’ or the relinquishing of a grudge, then forgiveness may play an underestimated role in dispute management. In the context of some day-to-day dispute management practice, this paper questions whether forgiveness should follow an apology, and concludes that meaningful resolutions can be reached without any formal element of ‘forgiveness’ or absolution. However, dispute management practitioners need to be aware of the latent role other aspects of forgiveness may play for the disputing parties.
Abstract:
Becoming a teacher in technology-rich classrooms is a complex and challenging transition for career-change entrants. Those with generic or specialist Information and Communication Technology (ICT) expertise bring a mindset about purposeful uses of ICT that enrich student learning and school communities. The transition process from a non-education environment is both enhanced and constrained by shifting the technology context of generic or specialist ICT expertise, developed through a former career as well as general life experience. In developing an understanding of the complexity of classrooms and creating a learner-centred way of working, perceptions about learners and learning evolve and shift. Shifts in thinking about how ICT expertise supports learners and enhances learning preceded shifts in perceptions about being a teacher, working with colleagues, and functioning in schools, with varying degrees of intensity and impact on evolving professional identities. Current teacher education and school induction programs are seen to be falling short of meeting the needs of career-change entrants and, as a flow-on, the students they nurture. Research (see, for example, Tigchelaar, Brouwer, & Korthagen, 2008; Williams & Forgasz, 2009) highlights the value of the generic and specialist expertise career-change teachers bring to the profession and draws attention to the challenges such expertise begets (Anthony & Ord, 2008; Priyadharshini & Robinson-Pant, 2003). As such, the study described in this thesis investigated the perceptions of career-change entrants who have generic (Mishra & Koehler, 2006) or specialist expertise, that is, ICT qualifications and work experience in the use of ICT. The career-change entrants’ perceptions were sought as they shifted the technology context and transitioned into teaching in technology-rich classrooms. The research involved an interpretive analysis of qualitative and quantitative data.
The study used explanatory case study methodology (Yin, 1994), enriched through grounded theory processes (Strauss & Corbin, 1998), to develop a theory about professional identity transition from the perceptions of the participants in the study. The study provided insights into the expertise and experiences of career-change entrants, particularly in relation to how professional identities that include generic and specialist ICT knowledge and expertise were reconfigured while transitioning into the teaching profession. This thesis presents the Professional Identity Transition Theory, which encapsulates perceptions about teaching in technology-rich classrooms amongst a selection of the increasing number of career-change entrants. The theory, grounded in the data (Strauss & Corbin, 1998), proposes that career-change entrants experience transition phases of varying intensity that impact on professional identity, retention and development as a teacher. These phases are linked to a shift in perceptions rather than time as a teacher. Generic and specialist expertise in the use of ICT is at once a weight from the past and an asset, which makes the transition process more challenging for career-change entrants. The study showed that career-change entrants used their experiences and perceptions to develop a way of working in a school community. Their way of working initially had an adaptive orientation focussed on immediate needs as their teaching practice developed. Following a shift of thinking, more generative ways of working focussed on the future emerged to enable continual enhancement and development of practice. Sustaining such learning is a personal, school and systemic challenge for the teaching profession.
Abstract:
The ability to reproducibly load bioactive molecules into polymeric microspheres is a challenge. Traditional microsphere fabrication methods typically provide inhomogeneous release profiles and suffer from a lack of batch-to-batch reproducibility, hindering their potential for scale-up and their translation to the clinic. This deficit in homogeneity is in part attributed to broad size distributions and variability in the morphology of particles. It is thus desirable to control the morphology and size of non-loaded particles in the first instance, in preparation for obtaining desired release profiles of loaded particles at a later stage. This is achieved by identifying the key parameters involved in particle production and understanding how adapting these parameters affects the final characteristics of the particles. In this study, electrospraying was presented as a promising technique for generating reproducible particles made of polycaprolactone, a biodegradable, FDA-approved polymer. Narrow size distributions were obtained through control of the electrospraying flow rate and polymer concentration, with average particle sizes ranging from 10 to 20 µm. Particles were shown to be spherical with a homogeneous embossed texture, determined by the polymer entanglement regime taking place during electrospraying. No toxic residue from this process was detected in preliminary cell work using DNA quantification assays, validating this method as suitable for subsequent loading of bioactive components.
Abstract:
Guanxi has become a common term in the wider business environment and has attracted increasing attention from researchers. Despite this, a consistent understanding of the concept continues to prove elusive. We review the extant business literature to highlight the major inconsistencies in the way guanxi is currently conceptualized: its breadth, linguistic-cultural depth, temporality, and level of analysis. We conclude with a clearer conceptualization of guanxi that separates its core elements from its antecedents and consequences. Furthermore, we compare and contrast guanxi with Western correlates such as social networks and social capital to further consolidate our understanding of guanxi.
Abstract:
Detection of regions of interest (ROI) in a video leads to more efficient utilization of bandwidth. This is because any ROIs in a given frame can be encoded in higher quality than the rest of that frame, with little or no degradation of quality as perceived by viewers. Consequently, it is not necessary to uniformly encode the whole video in high quality. One approach to determining ROIs is to use saliency detectors to locate salient regions. This paper proposes a methodology for obtaining ground-truth saliency maps to measure the effectiveness of ROI detection by considering the role of user experience during the labelling process of such maps. User perceptions can be captured and incorporated into the definition of salience in a particular video, taking advantage of human visual recall within a given context. Experiments with two state-of-the-art saliency detectors validate the effectiveness of this approach to validating visual saliency in video. This paper also provides the relevant datasets associated with the experiments.
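One common way to quantify how well a saliency detector matches such user-labelled ground-truth maps is ROC analysis over the pixels of the map. The sketch below is a minimal illustration under our own assumptions; the thresholding scheme and function names are ours, not the paper's.

```python
import numpy as np

def saliency_auc(pred, gt_mask, thresholds=64):
    """ROC-AUC of a predicted saliency map against a binary ground-truth
    ROI mask (True = pixel labelled salient by users)."""
    pred = np.asarray(pred, dtype=float)
    pred = (pred - pred.min()) / (np.ptp(pred) + 1e-12)  # normalise to [0, 1]
    tpr, fpr = [], []
    for t in np.linspace(0.0, 1.0, thresholds):
        hit = pred >= t                      # pixels predicted salient at this threshold
        tp = np.logical_and(hit, gt_mask).sum()
        fp = np.logical_and(hit, ~gt_mask).sum()
        tpr.append(tp / max(gt_mask.sum(), 1))
        fpr.append(fp / max((~gt_mask).sum(), 1))
    fpr, tpr = np.array(fpr), np.array(tpr)
    order = np.argsort(fpr)                  # integrate TPR over FPR (trapezoid rule)
    fpr, tpr = fpr[order], tpr[order]
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2))

# a detector that reproduces the ground truth exactly scores 1.0
gt = np.zeros((10, 10), dtype=bool)
gt[2:5, 2:5] = True
print(saliency_auc(gt.astype(float), gt))  # → 1.0
```

In practice, ground-truth maps built from user labelling are graded rather than binary, and the saliency literature uses several complementary metrics; AUC is shown here only because it is the simplest to sketch.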
Abstract:
Throughout this workshop session we have looked at various configurations of Sage, as well as using the Sage UI to run Sage applications (e.g. the image viewer). More advanced usage of Sage has been demonstrated using a Sage-compatible version of ParaView, highlighting the potential of parallel rendering. The aim of this tutorial session is to give a practical introduction to developing visual content for a tiled display using the Sage libraries. After completing this tutorial you should have the basic tools required to develop your own custom Sage applications. This tutorial is designed for software developers; intermediate programming knowledge is assumed, along with some introductory OpenGL. You will be required to write small portions of C/C++ code to complete this worksheet. However, if you do not feel comfortable writing code (or have never written in C or C++), we will be on hand throughout this session, so feel free to ask for help. We have a number of machines in this lab running a VNC client to a virtual machine running Fedora 12. You should all be able to log in with the username “escience” and password “escience10”. Some of the commands in this worksheet require you to run them as the root user, so note the password as you may need to use it a few times. If you need to access the Internet, use the username “qpsf01”, password “escience10”.
Abstract:
Osteoarthritis (OA) is a chronic, non-inflammatory type of arthritis, which usually affects the movable and weight-bearing joints of the body. It is the most common joint disease in human beings and is especially common in elderly people. To date, there are no safe and effective disease-modifying OA drugs (DMOADs) to treat the millions of patients suffering from this serious and debilitating disease. However, recent studies provide strong evidence for the use of mesenchymal stem cell (MSC) therapy in treating cartilage-related disorders. Due to their natural differentiation properties, MSCs can serve as vehicles for the delivery of effective, targeted treatment to damaged cartilage in OA. In vitro, MSCs can readily be tailored with transgenes with anti-catabolic or pro-anabolic effects to create cartilage-friendly therapeutic vehicles. In parallel, tissue-engineering constructs combining scaffolds and biomaterials hold promise for biological cartilage therapy. Many of these strategies have been validated in a wide range of in vitro and in vivo studies assessing treatment feasibility or efficacy. In this review, we provide an outline of the rationale and status of stem-cell-based treatments for OA cartilage, and we discuss prospects for clinical implementation and the factors crucial for maintaining the drive towards this goal.
Abstract:
Research in structural dynamics has received considerable attention due to problems associated with emerging slender structures, increased vulnerability of structures to random loads and aging infrastructure. This paper briefly describes some such research carried out on i) dynamics of composite floor structure, ii) dynamics of cable supported footbridge, iii) seismic mitigation of frame-shear wall structure using passive dampers and iv) development of a damage assessment model for use in structural health modelling.
Abstract:
Intelligible and accurate risk-based decision-making requires a complex balance of information from different sources, appropriate statistical analysis of this information, and consequent intelligent inference and decisions made on the basis of these analyses. Importantly, this requires an explicit acknowledgement of uncertainty in the inputs and outputs of the statistical model. The aim of this paper is to progress a discussion of these issues in the context of several motivating problems related to the wider scope of agricultural production. These problems include biosecurity surveillance design, pest incursion, environmental monitoring and import risk assessment. The information to be integrated includes observational and experimental data, remotely sensed data and expert information. We describe our efforts in addressing these problems using Bayesian models and Bayesian networks. These approaches provide a coherent and transparent framework for modelling complex systems, combining the different information sources, and allowing for uncertainty in inputs and outputs. While the theory underlying Bayesian modelling has a long and well-established history, its application to complex problems has only recently become practical, due to the increased availability of methodological and computational tools. Of course, there are still hurdles and constraints, which we also address through sharing our endeavours and experiences.
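As a minimal illustration of the coherent uncertainty handling that Bayesian models provide, the sketch below combines an expert-elicited prior with survey data via a conjugate Beta-Binomial update. The numbers, names and scenario are invented for illustration; real surveillance models integrating remotely sensed data and expert information are far richer than this.

```python
import math

def posterior_prevalence(prior_a, prior_b, positives, samples):
    """Conjugate Beta-Binomial update: an expert-elicited Beta(prior_a, prior_b)
    prior on pest prevalence is combined with survey data (positives detected
    out of samples inspected). Returns the posterior mean and standard
    deviation, so uncertainty is carried explicitly into the decision."""
    a = prior_a + positives
    b = prior_b + (samples - positives)
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# expert belief: prevalence is low -> Beta(1, 19), prior mean 0.05;
# a field survey then finds 3 positives in 80 samples
mean, sd = posterior_prevalence(1, 19, positives=3, samples=80)
print(round(mean, 3), round(sd, 3))  # → 0.04 0.019
```

The point of the conjugate form is transparency: the posterior is simply "prior pseudo-counts plus observed counts", so the relative weight given to expert opinion versus field data is explicit and auditable.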
Abstract:
In light of the changing nature of contemporary workplaces, this chapter attempts to identify employer expectations and the associated skills required of workers to function effectively in such workplaces. Workers are required to participate in informed discussion about their specific jobs and to contribute to the overall development of organisations. This requires a deep understanding of domain-specific knowledge, which at times can be very complex. Workers are also required to take responsibility for their actions and are expected to be flexible so that they can be deployed to other related jobs depending on demand. Finally, workers are expected to be proactive, able to anticipate situations, and to continuously update their knowledge to address new situations. This chapter discusses the nature of the knowledge and skills that facilitate the above qualities.
Abstract:
As computer applications become more available, both technically and economically, construction project managers are increasingly able to access advanced computer tools capable of transforming the role that project managers have typically performed. Competence in using these tools requires a dual commitment to training, from the individual and the firm. Improving the computer skills of project managers can provide construction firms with a competitive advantage to differentiate themselves from others in an increasingly competitive international market. Yet few published studies have quantified the existing level of competence of construction project managers. Identification of project managers’ existing computer application skills is a necessary first step to developing more directed training to better capture the benefits of computer applications. This paper discusses the yet-to-be-released results of a series of surveys undertaken in Malaysia, Singapore, Indonesia, Australia and the United States through QUT’s School of Construction Management and Property and the M.E. Rinker, Sr. School of Building Construction at the University of Florida. This international survey reviews the use of, and reported competence in using, a series of commercially available computer applications by construction project managers. The five different country locations of the survey allow cross-national comparisons to be made between project managers undertaking continuing professional development programs. The results highlight a shortfall in the ability of construction project managers to capture the potential benefits provided by advanced computer applications and provide directions for targeted industry training programs. This international survey also provides a unique insight into the cross-national usage of advanced computer applications and forms an important step in this ongoing joint review of technology and the construction project manager.
Abstract:
Queer student activists are a visible aspect of Australian tertiary communities. Institutionally, there are a number of organisations and tools representing and serving gay, lesbian, bisexual, transgender, intersex and ‘otherwise queer identifying’ (GLBTIQ) students. ‘Queer’ is a contentious term, with meanings ranging from a complex deconstructive academic theory to a synonym for ‘gay’. Despite the institutional applications, the definition remains unclear and under debate. In this thesis I examine queer student activists’ production of print media, a previously under-researched area. In queer communities, print media provides crucial grounding for a model of queer. Central to identity formation and activism, this media is a site of textuality for the construction and circulation of discourses of queer student media. Thus, I investigate the various ways Australian queer student activists construct queer, queer identity, and queer activism in their print media. I use discourse analysis, participant observation and semi-structured interviews to enable a thorough investigation of both the process and the products of queer student media. My findings demonstrate that queer student activists’ politics are grounded in a range of ideologies drawing from Marxism, Feminism, Gay Liberation, Anti-assimilation and Queer Theory. Grounded in queer theoretical perspectives of performativity, this research makes relatively new links between Queer Theory and Media Studies in its study of the production contexts of queer student media. In doing so, I show how the university context informs student articulations of queer, demonstrating the necessity of locating research within its social-cultural setting. My research reveals that, much like Queer Theory, these representations of queer are rich with paradox. I argue that queer student activists are actually theorising queer.
I call for a reconceptualisation of Queer Theory and question the current barriers between who is considered a ‘theorist’ of queer and who is an ‘activist’. If we can think about ‘theory’ as encompassing the work of activists, what implications might this have for politics and analysis?
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; however, because of random transmission delays and packet losses, the control performance of a system may deteriorate badly, and the system may even be rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS. To achieve this, both communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy the communication requirements of NCSs, such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. The Markov chain model makes it possible to accurately capture the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow-rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
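The sketch below is a toy, scalar illustration of the general compensation idea: when a sensor packet is lost, the controller propagates its own plant model one step ahead rather than holding a stale measurement. The plant, gain and noise level are all invented for illustration and are unrelated to the thesis's actual co-design.

```python
import random

def simulate(loss_prob=0.3, steps=50, seed=1):
    """Toy model-based compensation for packet loss on the sensor->controller
    link of an NCS. Scalar plant x' = a*x + b*u with small process noise."""
    a, b = 1.05, 0.5           # open-loop unstable plant (invented numbers)
    k = (a - 0.7) / b          # state-feedback gain placing the closed-loop pole at 0.7
    rng = random.Random(seed)
    x, x_hat = 1.0, 1.0        # true state and the controller's estimate of it
    for _ in range(steps):
        if rng.random() > loss_prob:
            x_hat = x          # measurement packet arrived: use the real state
        # else: packet lost -> keep the model's one-step-ahead prediction
        u = -k * x_hat
        x = a * x + b * u + rng.gauss(0.0, 0.01)  # plant step with process noise
        x_hat = a * x_hat + b * u                 # controller's model prediction
    return abs(x)

print(simulate())  # residual stays small: the loop remains stable despite losses
```

With no compensation, a run of lost packets would leave the controller acting on an increasingly stale measurement of an unstable plant; here the model prediction only drifts by the accumulated process noise, which the next successfully delivered packet corrects.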