944 results for Complexity theory
Abstract:
This thesis investigates the place of online moderation in supporting teachers to work in a system of standards-based assessment. The participants of the study were fifty middle school teachers who met online with the aim of developing consistency in their judgement decisions. Data were gathered through observation of the online meetings, interviews, surveys and the collection of artefacts. The data were viewed and analysed through sociocultural theories of learning and sociocultural theories of technology, and demonstrate how utilising these theories can add depth to understanding the added complexity of developing shared meaning of standards in an online context. The findings contribute to current understanding of standards-based assessment by examining the social moderation process as it acts to increase the reliability of judgements made within a standards framework. Specifically, the study investigates the opportunities afforded by conducting social moderation practices in a synchronous online context. The study explicates how the technology affects the negotiation of judgements and the development of shared meanings of assessment standards, while demonstrating how involvement in online moderation discussions can support teachers to become and belong within a practice of standards-based assessment. This research responds to a growing international interest in standards-based assessment and the use of social moderation to develop consistency in judgement decisions. Online moderation is a new practice for addressing these concerns on a systemic basis.
Abstract:
Background: Mentoring is often proposed as a solution to the problem of successfully recruiting and retaining nursing staff. The aim of this constructivist grounded theory study was to explore Australian rural nurses' experiences of mentoring. Design: The research design was reflexive in nature, resulting in a substantive, constructivist grounded theory study. Participants: A national advertising campaign and snowball sampling were used to recruit nine participants from across Australia. Participants were rural nurses who had experience in mentoring others. Methods: Standard grounded theory methods were used: theoretical sampling, concurrent data collection and analysis using open, axial and theoretical coding, a story line technique to develop the core category, and category saturation. To cultivate the reflexivity required of a constructivist study, we also incorporated reflective memoing, situational analysis mapping techniques and frame analysis. Data were generated through eleven interviews, email dialogue and shared situational mapping. Results: Cultivating and growing new or novice rural nurses using supportive relationships such as mentoring was found to be an existing, integral part of experienced rural nurses' practice, motivated by living and working in the same communities. Getting to know a stranger is the first part of the process of cultivating and growing another. New or novice rural nurses gain the attention of experienced rural nurses through showing potential or experiencing a critical incident. Conclusions: The problem of retaining nurses is a global issue. Experienced nurses engaged in clinical practice have the potential to cultivate and grow new or novice nurses; many already do so. Recognising this role and providing opportunities for development will help grow a positive, supportive work environment that nurtures the experienced nurses of tomorrow.
Abstract:
Aim. Our aim in this paper is to explain a methodological/methods package devised to incorporate situational and social world mapping with frame analysis, based on a grounded theory study of Australian rural nurses' experiences of mentoring. Background. Situational analysis, as conceived by Adele Clarke, shifts the research methodology of grounded theory from being located within a postpositivist paradigm to a postmodern paradigm. Clarke uses three types of maps during this process: situational, social world and positional, in combination with discourse analysis. Method. During our grounded theory study, the process of concurrent interview data generation and analysis incorporated situational and social world mapping techniques. An outcome of this was our increased awareness of how outside actors influenced participants in their constructions of mentoring. In our attempts to use Clarke's methodological package, however, it became apparent that our constructivist beliefs about human agency could not be reconciled with the postmodern project of discourse analysis. We then turned to the literature on symbolic interactionism and adopted frame analysis as a method to examine the literature on rural nursing and mentoring as a secondary form of data. Findings. While we found situational and social world mapping very useful, we were less successful in using positional maps. In retrospect, we would argue that collective action framing provides an alternative to analysing such positions in the literature. This is particularly so for researchers who locate themselves within a constructivist paradigm, and who are therefore unwilling to reject the notion of human agency and the ability of individuals to shape their world in some way. Conclusion. 
Our example of using this package of situational and social worlds mapping with frame analysis is intended to assist other researchers to locate participants more transparently in the social worlds that they negotiate in their everyday practice. © 2007 Blackwell Publishing Ltd.
Abstract:
…better health service. Conclusion: This research provides an insight into the perceptions of the rhetoric and reality of community member involvement in the process of developing multi-purpose services. It revealed a grounded theory in which fear and trust were intrinsic to a process of changing from a traditional hospital service to the acceptance of a new model of health care provided at a multi-purpose service.
Abstract:
This research investigates home literacy education practices of Taiwanese families in Australia. As Taiwanese immigrants represent the largest "Chinese Australian" subgroup to have settled in the state of Queensland, teachers in this state often face the challenges of cultural differences between Australian schools and Taiwanese homes. Extensive work by previous researchers suggests that understanding the cultural and linguistic differences that influence how an immigrant child views and interacts with his/her environment is a possible way to minimise the challenges. Cultural practices start from infancy and at home. Therefore, this study is focused on young children who are around the age of four to five. It is a study that examines the form of literacy education that is enacted and valued by Taiwanese parents in Australia. Specifically, this study analyses "what literacy knowledge and skill is taught at home?", "how is it taught?" and "why is it taught?" The study is framed in Pierre Bourdieu's theory of social practice, which defines literacy from a sociological perspective. The aim is to understand the practices through which literacy is taught in the Taiwanese homes. Practices of literacy education are culturally embedded. Accordingly, the study shows the culturally specialised ways of learning and knowing that are enacted in the study homes. The study entailed four case studies that draw on: observations and recording of the interactions between the study parent and child in their literacy events; interviews and dialogues with the parents involved; and a collection of photographs of the children's linguistic resources and artefacts. The methodological arguments and design addressed the complexity of home literacy education where Taiwanese parents raise children in their own cultural ways while adapting to a new country in an immigrant context. 
In other words, the methodology not only involves cultural practices, but also involves change and continuity in home literacy practices. Bernstein's theory of pedagogic discourse was used to undertake a detailed analysis of parents' selection and organisation of content for home literacy education, and the evaluative criteria they established for the selected literacy knowledge and skill. This analysis showed how parents selected and controlled the interactions in their child's literacy learning. Bernstein's theory of pedagogic discourse, specifically the concepts of "classification" and "framing", was used also to analyse change and continuity in home literacy practice. The design of this study aimed to gain an understanding of parents' literacy teaching in an immigrant context. The study found that parents tended to value and enact traditional practices, yet most of the parents were also searching for innovative ideas for their adult-structured learning. Home literacy education of Taiwanese families in this study was found to be complex, multi-faceted and influenced in an ongoing way by external factors. Implications for educators and recommendations for future study are provided. The findings of this study offer early childhood teachers in Australia understandings that will help them build knowledge about home literacy education of Taiwanese Australian families.
Abstract:
Aims and objectives. The purpose of this study was to describe the process of expertise acquisition in nephrology nursing practice. Background. It has been recognized for a number of decades that experts, compared with other practitioners in a number of professions and occupations, are the most knowledgeable and effective, in terms of both the quantity and quality of output. Studies relating to expertise have been undertaken in a range of nursing contexts and specialties; to date, however, none have been undertaken which focus on nephrology nursing. Design. This study, using grounded theory methodology, took place in one renal unit in New South Wales, Australia and involved six non-expert and 11 expert nurses. Methods. Simultaneous data collection and analysis took place using participant observation, semi-structured interviews and review of nursing documentation. Findings. The study revealed a three-stage skills-acquisitive process whose stages were identified as non-expert, experienced non-expert and expert. Each stage was typified by four characteristics, which altered during the acquisitive process: knowledge, experience, skill and focus. Conclusion. This was the first study to explore nephrology nursing expertise; it uncovered new aspects of expertise not documented in the literature, and made explicit other areas that had previously only been implied. Relevance to clinical practice. Of significance to nursing, the exercise of expertise is a function of the recognition of expertise by others, and it includes the blurring of the normal boundaries of professional practice. © 2006 Blackwell Publishing Ltd.
Abstract:
A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site, following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms. 
A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods in testing for path traversability, in losing excess altitude, and in the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is more than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds. A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly. 
This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
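As a point of reference for the lateral guidance work above, the baseline law of Park, Deyst and How that the ENG algorithm enhances commands a lateral acceleration of 2V²·sin(η)/L1 toward a reference point on the path. The following is a minimal sketch of that baseline law only (the thesis's wind-aware enhancements and turning-direction logic are not reproduced; function and variable names are invented for illustration):

```python
import math

def lateral_accel_cmd(pos, vel, ref_point):
    """Baseline nonlinear lateral guidance (after Park, Deyst and How, 2007):
    a = 2 * V^2 * sin(eta) / L1, where L1 is the distance from the aircraft
    to a reference point ahead on the path, and eta is the angle from the
    velocity vector to the line of sight to that point."""
    dx, dy = ref_point[0] - pos[0], ref_point[1] - pos[1]
    l1 = math.hypot(dx, dy)          # distance to the reference point
    v = math.hypot(vel[0], vel[1])   # ground speed
    # signed angle between velocity and line of sight, wrapped to [-pi, pi]
    eta = math.atan2(dy, dx) - math.atan2(vel[1], vel[0])
    eta = (eta + math.pi) % (2 * math.pi) - math.pi
    return 2.0 * v * v * math.sin(eta) / l1

# Flying along +x at 20 m/s: no command when the reference point is dead
# ahead; a 4 m/s^2 turn command when it sits 45 degrees off to the left.
print(lateral_accel_cmd((0, 0), (20, 0), (100, 0)))    # 0.0
print(lateral_accel_cmd((0, 0), (20, 0), (100, 100)))  # ~4.0
```

The thesis's reported improvement comes from folding wind information into this logic, which the sketch deliberately omits.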
Abstract:
The works depicted two ostensibly plaster figures 'cocooned' in protective overalls. The pose of both figures had a sense of instability, balancing improbably due to internal weights. This teetering, arching quality, combined with the empty sleeves of the overalls, made reference to Rodin's Balzac and its aura of heroic subjectivity. As the Tyvek suits depicted in the works are a common part of my studio paraphernalia, these works sought to draw a line between these two opposing aspects of the subjectivity of the artist - the transcendent and the quotidian. The works were shown as part of ‘The Day the Machine Started’ for Dianne Tanzer Gallery + Projects at the 2010 Melbourne Art Fair. The works received citations in The Age and The Australian newspapers.
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
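The variational transformation mentioned in the abstract can be evaluated numerically for a given margin loss; the sketch below is an illustrative implementation of one such transform under standard definitions (the grid search, resolution, and loss choices are assumptions for the example, not taken from the paper):

```python
def psi_transform(phi, theta, half_width=10.0, n=10000):
    """Numerically evaluate a variational transform of a margin loss phi:
    psi(theta) = H^-(eta) - H(eta) with eta = (1 + theta) / 2, where
      H(eta)   = inf over a of [eta*phi(a) + (1-eta)*phi(-a)]   (any a)
      H^-(eta) = the same infimum restricted to a*(2*eta-1) <= 0
    i.e. the optimal conditional surrogate risk vs. the best achievable with
    a wrong-sign prediction. The grid is a crude stand-in for the infimum."""
    step = half_width / n
    grid = [i * step for i in range(-n, n + 1)]  # includes 0.0 exactly
    eta = (1.0 + theta) / 2.0
    cost = [eta * phi(a) + (1.0 - eta) * phi(-a) for a in grid]
    h = min(cost)
    h_minus = min(c for a, c in zip(grid, cost) if a * (2.0 * eta - 1.0) <= 0)
    return h_minus - h

hinge = lambda a: max(0.0, 1.0 - a)   # SVM surrogate
squared = lambda a: (1.0 - a) ** 2    # squared-loss surrogate

# Known closed forms: psi(theta) = |theta| for hinge, theta^2 for squared loss.
print(psi_transform(hinge, 0.5))    # ~0.5
print(psi_transform(squared, 0.5))  # ~0.25
```

The resulting psi then converts excess surrogate risk into a bound on excess 0–1 risk, which is the kind of quantitative relationship the abstract describes.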
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
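As a toy illustration of penalized selection over a family ordered by inclusion (not the paper's construction; the model family, penalty constant and data below are invented for the example), consider choosing the number of bins for a piecewise-constant regression fit, where dyadic bin counts give genuinely nested models:

```python
import random

def empirical_risk(x, y, k):
    """Mean squared error of the best piecewise-constant fit
    with k equal-width bins on [0, 1)."""
    bins = [[] for _ in range(k)]
    for xi, yi in zip(x, y):
        bins[min(int(xi * k), k - 1)].append(yi)
    sse = 0.0
    for b in bins:
        if b:
            m = sum(b) / len(b)  # bin mean is the least-squares fit
            sse += sum((yi - m) ** 2 for yi in b)
    return sse / len(x)

def select_model(x, y, ks=(1, 2, 4, 8), c=2.0):
    """Minimize empirical risk plus a complexity penalty c*k/n over the
    dyadic family k = 1, 2, 4, 8, which is ordered by inclusion."""
    n = len(x)
    scores = {k: empirical_risk(x, y, k) + c * k / n for k in ks}
    return min(scores, key=scores.get)

# Noisy step function: the jump at x = 0.5 is captured exactly with k = 2,
# so larger k buys little fit but pays more penalty.
random.seed(0)
x = [random.random() for _ in range(500)]
y = [(1.0 if xi > 0.5 else 0.0) + random.gauss(0, 0.1) for xi in x]
print(select_model(x, y))  # 2
```

In the paper's setting the penalty is a tight bound on estimation error rather than this ad hoc c*k/n, but the selection mechanism, empirical risk plus penalty minimized over nested models, has the same shape.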
Abstract:
The paper "The Importance of Convexity in Learning with Squared Loss" gave a lower bound on the sample complexity of learning with quadratic loss using a nonconvex function class. The proof contains an error. We show that the lower bound is true under a stronger condition that holds for many cases of interest.
Abstract:
In fault detection and diagnostics, limitations coming from the sensor network architecture are one of the main challenges in evaluating a system’s health status. Usually the design of the sensor network architecture is not based solely on diagnostic purposes; other factors, such as controls, financial constraints, and practical limitations, are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components. This can significantly increase the complexity of diagnostic problems. In this paper a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian-network-based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modeling and measurement constraints.
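A minimal sketch of the kind of latent-variable formulation described, one shared sensor observing two components, can be diagnosed by enumerating the joint posterior over the hidden fault states. The components, probabilities and noisy-OR sensor model below are invented for illustration, not taken from the paper:

```python
from itertools import product

# Hypothetical marginal fault probabilities for two monitored components.
P_FAULT = {"valve": 0.05, "pump": 0.10}

def p_abnormal(valve_faulty, pump_faulty, leak=0.01):
    """Noisy-OR sensor model: one sensor watches both components.
    Each fault independently trips the sensor with its own probability;
    'leak' lets the sensor read abnormal with no fault present."""
    p_quiet = 1.0 - leak
    if valve_faulty:
        p_quiet *= 1.0 - 0.8  # a valve fault trips the sensor w.p. 0.8
    if pump_faulty:
        p_quiet *= 1.0 - 0.6  # a pump fault trips the sensor w.p. 0.6
    return 1.0 - p_quiet

def posterior(reading_abnormal):
    """Exact inference by enumerating the two latent fault variables."""
    joint = {}
    for v, p in product([False, True], repeat=2):
        prior = ((P_FAULT["valve"] if v else 1.0 - P_FAULT["valve"]) *
                 (P_FAULT["pump"] if p else 1.0 - P_FAULT["pump"]))
        like = p_abnormal(v, p)
        if not reading_abnormal:
            like = 1.0 - like
        joint[(v, p)] = prior * like
    z = sum(joint.values())  # normalizing constant
    return {state: w / z for state, w in joint.items()}

post = posterior(reading_abnormal=True)
p_valve = post[(True, False)] + post[(True, True)]  # marginal P(valve faulty)
p_pump = post[(False, True)] + post[(True, True)]   # marginal P(pump faulty)
```

An abnormal reading raises both fault posteriors well above their priors without pinpointing either component, which is precisely the ambiguity that a shared sensor introduces and that further evidence must resolve.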
Abstract:
In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty and interpret ‘desirable’ as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds — for hypotheses and for data — is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less the amount of data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.