765 results for Computer supported collaborative learning
Abstract:
Evaluation of the Get REAL programme in an inclusive primary school setting has indicated its effectiveness in promoting pro-social behaviour for children with high-functioning autism. However, two children with co-morbid diagnoses and complex personal circumstances showed less consistent improvements. To explain their unique trajectories, which are not readily derived from quantitative studies, an exploratory case study approach was used to examine contextual influences on patterns of progress. Multiple data sources included coded video footage from the Get REAL programme, school reports on conduct, and parent and classroom teacher reports using the Strengths and Difficulties Questionnaire. While the results support the efficacy of the Get REAL programme for the two children, they also highlight the value of co-ordinated strategies and collaborative, individualised approaches in more complex cases. This paper outlines the Get REAL intervention and a range of other school and support agency strategies impacting progress.
Abstract:
This paper illustrates the complexity of pointing as it is employed in a design workshop. Using the method of interaction analysis, we argue that pointing is not merely employed to index, locate, or fix reference to an object. It also constitutes a practice for reestablishing intersubjectivity and solving interactional trouble such as misunderstandings or disagreements by virtue of enlisting something as part of the participants’ shared experience. We use this analysis to discuss implications for how such practices might be supported with computer mediation, arguing for a “bricolage” approach to systems development that emphasizes the provision of resources for users to collaboratively negotiate the accomplishment of intersubjectivity rather than systems that try to support pointing as a specific gestural action.
Abstract:
There have been many improvements in Australian engineering education since the 1990s. However, given the recent drive for assuring the achievement of identified academic standards, more progress needs to be made, particularly in the area of evidence-based assessment. This paper reports on initiatives gathered from the literature and from engineering academics in the USA through an Australian National Teaching Fellowship program. The program aims to establish a process that helps academics design and implement evidence-based assessments that meet the needs not only of students and the staff who teach them, but also of industry and accreditation bodies. The paper also examines the kinds and levels of support necessary for engineering academics, especially early-career ones, to meet the expectations of the current drive for assured quality and standards in both research and teaching. Academics face competing demands on their time and energy, with very high expectations of research performance and increased teaching responsibilities, even though many are researchers who have had little pedagogic training. Based on the literature and an investigation of relevant initiatives in the USA, we conducted interviews with several identified experts and change agents who have wrought effective academic cultural change within their institutions and beyond. These interviews reveal that assuring the standards and quality of student learning outcomes through evidence-based assessments cannot be appropriately addressed without also addressing the issue of pedagogic training for academic staff. To be sustainable, such training needs to be complemented by a culture of ongoing mentoring support from senior academics, formalised through the university administration, so that mentors are afforded resources, time, and appropriate recognition.
Abstract:
For interactive systems, recognition, reproduction, and generalization of observed motion data are crucial for successful interaction. In this paper, we present a novel method for the analysis of motion data that we refer to as K-OMM-trees. K-OMM-trees combine Ordered Means Models (OMMs), a model-based machine learning approach for time series, with a hierarchical analysis technique for very large data sets, the K-tree algorithm. The proposed K-OMM-trees enable unsupervised prototype extraction of motion time series data with a hierarchical data representation. After introducing the algorithmic details, we apply the proposed method to a gesture data set that includes substantial inter-class variations. Results from our studies show that K-OMM-trees are able to substantially increase recognition performance and to learn an inherent data hierarchy with meaningful gesture abstractions.
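As a loose illustration of the hierarchical prototype idea, the sketch below builds a k-means tree over flattened, equal-length motion sequences, with each node storing a mean-sequence prototype. This is not the authors' OMM formulation (OMMs handle variable-length series probabilistically); plain k-means and all parameter names here are illustrative assumptions.

```python
# Illustrative sketch only: a k-means-based hierarchy ("K-tree"-style) over
# fixed-length sequences, each node holding a prototype (member mean).
import numpy as np

def build_tree(series, k=2, min_leaf=5, rng=None):
    """Recursively cluster rows of `series`; each node stores a prototype."""
    rng = rng if rng is not None else np.random.default_rng(0)
    node = {"prototype": series.mean(axis=0), "children": []}
    if len(series) <= min_leaf or len(series) < k:
        return node
    # a few Lloyd iterations of k-means with random initial centres
    centers = series[rng.choice(len(series), k, replace=False)]
    labels = np.zeros(len(series), dtype=int)
    for _ in range(10):
        dists = np.linalg.norm(series[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = series[labels == j].mean(axis=0)
    for j in range(k):
        members = series[labels == j]
        if 0 < len(members) < len(series):   # avoid degenerate recursion
            node["children"].append(build_tree(members, k, min_leaf, rng))
    return node
```

Classifying a new sequence would then descend the tree, following the nearest prototype at each level.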
Abstract:
Collaborative methods are promising tools for solving complex security tasks. In this context, the authors present the security overlay framework CIMD (Collaborative Intrusion and Malware Detection), which enables participants to state objectives and interests for joint intrusion detection and to find groups for the exchange of security-related data, such as monitoring or detection results, accordingly; the authors refer to these groups as detection groups. First, the authors present and discuss a tree-oriented taxonomy for the representation of nodes within the collaboration model. Second, they introduce and evaluate an algorithm for the formation of detection groups. After conducting a vulnerability analysis of the system, the authors demonstrate the validity of CIMD by examining two different scenarios inspired by sociology in which the collaboration is advantageous compared to the non-collaborative approach. They evaluate the benefit of CIMD by simulation in a novel packet-level simulation environment called NeSSi (Network Security Simulator) and give a probabilistic analysis for the scenarios.
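A minimal sketch of what detection-group formation could look like, assuming nodes publish a property set plus an interest predicate over other nodes' properties; the actual CIMD taxonomy and grouping algorithm are richer than this toy mutual matching.

```python
# Toy detection-group formation: group two nodes when each node's interest
# predicate matches the other node's properties. All field names are
# illustrative assumptions, not CIMD's taxonomy.
from itertools import combinations

nodes = [
    {"id": "a", "props": {"os": "linux", "role": "web"},  "wants": {"role": "web"}},
    {"id": "b", "props": {"os": "linux", "role": "web"},  "wants": {"role": "web"}},
    {"id": "c", "props": {"os": "win",   "role": "mail"}, "wants": {"os": "win"}},
]

def matches(interest, props):
    return all(props.get(k) == v for k, v in interest.items())

def detection_groups(nodes):
    groups = []
    for a, b in combinations(nodes, 2):
        if matches(a["wants"], b["props"]) and matches(b["wants"], a["props"]):
            groups.append({a["id"], b["id"]})
    return groups

print(detection_groups(nodes))  # [{'a', 'b'}] under these toy interests
```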
Abstract:
Whole-body computer control interfaces present new opportunities to engage children with games for learning. Stomp is a suite of educational games that uses such a technology, allowing young children to use their whole body to interact with a digital environment projected on the floor. To maximise the effectiveness of this technology, tenets of self-determination theory (SDT) are applied to the design of Stomp experiences. By meeting user needs for competence, autonomy, and relatedness, our aim is to increase children's engagement with the Stomp learning platform. Analysis of Stomp's design suggests that these tenets are met. Observations from a case study of Stomp being used by young children show that they were highly engaged and motivated by Stomp. This analysis demonstrates that continued application of SDT to Stomp will further enhance user engagement. It is also suggested that SDT, when applied more widely to other whole-body multi-user interfaces, could instil similar positive effects.
Abstract:
A video detailing our new virtual-world BPMN process modelling tool, developed by Erik Poppe. It enables better situational awareness through the use of remotely connected avatars and a shared 3D process diagram.
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students vs. the top performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each of these. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
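A hedged sketch of the model's core mapping, as I read it from the abstract: assessment items are mapped to syllabus topics at a Bloom's-taxonomy mastery level, then compared against the intended (minimal) mastery. All names, data, and the pass-rate threshold below are hypothetical.

```python
# Toy comparison of intended learning vs. demonstrated performance.
# Topics, levels, and item results are invented for illustration.
BLOOM = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

# learning specification: topic -> (minimal, aspirational) mastery levels
spec = {"recursion": ("apply", "create"), "arrays": ("understand", "apply")}

# exam items: (topic, assessed level, fraction of students answering correctly)
items = [("recursion", "apply", 0.45), ("arrays", "apply", 0.80)]

def compare(spec, items, pass_rate=0.5):
    for topic, level, correct in items:
        minimal, _ = spec[topic]
        demonstrated = BLOOM.index(level) if correct >= pass_rate else None
        met = demonstrated is not None and demonstrated >= BLOOM.index(minimal)
        print(f"{topic}: intended>={minimal}, assessed={level}, "
              f"{correct:.0%} correct -> {'met' if met else 'gap'}")

compare(spec, items)
```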
'Going live': establishing the creative attributes of the live multi-camera television professional
Abstract:
In my capacity as a television professional and teacher specialising in multi-camera live television production for over 40 years, I was drawn to the conclusion that opaque or inadequately formed understandings of how creativity applies to the field of live television have impeded the development of pedagogies suitable for the teaching of live television in universities. In pursuing this hypothesis, the thesis shows that television degrees were born out of film studies degrees, where intellectual creativity was aligned with single-camera production and the 'creative roles' of producers, directors and scriptwriters. At the same time, multi-camera live television production was subsumed under the 'mass communication' banner, leading to an understanding that roles other than producer and director are simply technical, and bereft of creative intent or acumen. The thesis goes on to show that this attitude to other television production personnel, for example the vision mixer, videotape operator and camera operator, relegates their roles to that of 'button pusher'. This has resulted in university teaching models with inappropriate resources and unsuitable teaching practices. As a result, the industry is struggling to find people with the skills to fill the demands of the multi-camera live television sector. In specific terms, the central hypothesis is pursued through the following sequenced approach. Firstly, the thesis outlines the problems and traces the origins of the misconception that intellectual creativity does not exist in live multi-camera television. Secondly, this more adequately conceptualised account of the origins of the misconceptions about live television and creativity is anchored to the field of examination by presenting the foundations of the roles involved in making live television programs using multi-camera production techniques. Thirdly, this more nuanced rendition of the field sets the stage for a thorough analysis of education and training in the industry and of teaching models at Australian universities. The findings clearly establish that the pedagogical models are aimed at single-camera production, a position that de-emphasises the creative aspects of multi-camera live television production. Informed by an examination of theories of learning, qualitative interviews, professional reflective practice and observations, an analysis of the roles of four multi-camera live production crew members (camera operator, vision mixer, EVS/videotape operator and director's assistant) demonstrates the existence of intellectual creativity during live production. Finally, supported by the theories of learning and by the development and explication of a successful teaching model, a new approach to teaching students how to work in live television is proposed and substantiated.
Abstract:
The knowledge economy of the 21st century requires skills such as creativity, critical thinking, problem solving, communication and collaboration (Partnership for 21st century skills, 2011) – skills that cannot easily be learnt from books, but rather through learning-by-doing and social interaction. Big ideas and disruptive innovation often result from collaboration between individuals from diverse backgrounds and areas of expertise. Public libraries, as facilitators of education and knowledge, have been actively seeking responses to such changing needs of the general public...
Abstract:
Service robots that operate in human environments will accomplish tasks most efficiently and least disruptively if they have the capability to mimic and understand the motion patterns of the people in their workspace. This work demonstrates how a robot can create a human-centric navigational map online, and that this map reflects changes in the environment that trigger altered motion patterns of people. An RGBD sensor mounted on the robot is used to detect and track people moving through the environment. The trajectories are clustered online and organised into a tree-like probabilistic data structure which can be used to detect anomalous trajectories. A costmap is reverse engineered from the clustered trajectories that can then inform the robot's onboard planning process. Results show that the resultant paths taken by the robot mimic expected human behaviour and can allow the robot to respond to altered human motion behaviours in the environment.
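The costmap construction might look roughly like the following sketch, which only counts cell traversals and inverts the normalised usage; the paper's online trajectory clustering and anomaly detection are omitted, and all parameters are illustrative assumptions.

```python
# Hedged sketch: turn observed person trajectories into a navigation costmap
# by counting traversals per grid cell and assigning low cost to well-used
# cells, so the planner prefers paths people actually take.
import numpy as np

def costmap_from_trajectories(trajs, shape=(50, 50), cell=0.2):
    counts = np.zeros(shape)
    for traj in trajs:                          # traj: array of (x, y) metres
        ij = (np.asarray(traj) / cell).astype(int)
        ok = ((ij[:, 0] >= 0) & (ij[:, 0] < shape[0]) &
              (ij[:, 1] >= 0) & (ij[:, 1] < shape[1]))
        np.add.at(counts, (ij[ok, 0], ij[ok, 1]), 1)
    usage = counts / max(counts.max(), 1.0)     # normalise usage to [0, 1]
    return 1.0 - usage                          # popular cells -> low cost
```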
Abstract:
This paper proposes an efficient, online learning control system that uses the successful Model Predictive Control (MPC) method in a model-based locally weighted learning framework. The new approach, named Locally Weighted Learning Model Predictive Control (LWL-MPC), is proposed as a solution for learning to control complex and nonlinear Elastic Joint Robots (EJR). Elastic Joint Robots are generally difficult to learn to control because their elastic properties prevent standard model-learning techniques, such as learning computed torque control, from being used. This paper demonstrates the capability of LWL-MPC to perform online and incremental learning while controlling the joint positions of a real three-degree-of-freedom (DoF) EJR. An experiment on a real EJR is presented, and LWL-MPC is shown to successfully learn to control the system to follow two different figure-of-eight trajectories.
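A minimal sketch of the locally weighted learning half of LWL-MPC, assuming a Gaussian-kernel memory of (state, action) to next-state transitions; the MPC optimisation over this learned forward model is not shown, and the class and kernel choices are assumptions.

```python
# Incrementally grown transition memory queried with Gaussian kernel weights.
import numpy as np

class LWLModel:
    def __init__(self, bandwidth=0.5):
        self.X, self.Y, self.h = [], [], bandwidth

    def add(self, state, action, next_state):
        """Online, incremental update: just remember the transition."""
        self.X.append(np.concatenate([state, action]))
        self.Y.append(np.asarray(next_state))

    def predict(self, state, action):
        """Kernel-weighted average of stored outcomes near the query.
        Assumes at least one transition has been added."""
        q = np.concatenate([state, action])
        X, Y = np.array(self.X), np.array(self.Y)
        w = np.exp(-np.sum((X - q) ** 2, axis=1) / (2 * self.h ** 2))
        return (w[:, None] * Y).sum(axis=0) / (w.sum() + 1e-12)
```

An MPC loop would then roll this model forward over a short horizon and pick the action sequence minimising a tracking cost.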
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve that, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making Saudi Arabia an interesting context for testing the external validity of the IS-Impact Model. The study re-visits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of the existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study, "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?". The content analysis used to analyze the Identification Survey data indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study, "Is the Extended IS-Impact Model Valid as a Hierarchical Multidimensional Formative Measurement Model?".
The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve that, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed using the Partial Least Squares (PLS) approach. The CNQ Model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct. However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was also assessed on two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t-value = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and, consequently, the impacts of the system. Therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t-value = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ Model provides a framework for perceptually measuring Computer Network Quality from multiple perspectives, with an easy-to-understand, easy-to-use, and economical survey instrument.
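As a quick consistency check on the reported figures (under the assumption, not stated in the abstract, that each endogenous construct has a single standardized predictor, so the explained variance is simply the squared path coefficient):

```latex
R^2_{\mathrm{IS\text{-}Impact}} \approx \beta_{\mathrm{CNQ}}^{2} = 0.426^{2} \approx 0.18,
\qquad
R^2_{\mathrm{Satisfaction}} \approx \beta_{\mathrm{IS\text{-}Impact}}^{2} = 0.744^{2} \approx 0.55
```

which agrees with the 18% and 55% variance figures reported above.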
Abstract:
Smartphones are becoming increasingly popular, and several malwares have appeared that target these devices. General countermeasures against smartphone malware are currently limited to signature-based antivirus scanners, which efficiently detect known malware but have serious shortcomings with new and unknown malware, creating a window of opportunity for attackers. As smartphones become hosts for sensitive data and applications, extended malware detection mechanisms that comply with the corresponding resource constraints are necessary. The contribution of this paper is twofold. First, we perform static analysis on executables to extract their function calls in the Android environment using the command readelf. Function call lists are compared with those of malware executables in order to classify them with the PART, Prism and Nearest Neighbor algorithms. Second, we present a collaborative malware detection approach that extends these results. Corresponding simulation results are presented.
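The extraction-and-classification step might be sketched as follows, using readelf symbol output and a Jaccard nearest neighbour over function-name sets; the exact readelf flags and parsing are simplified assumptions, and the paper's PART and Prism classifiers are not reproduced here.

```python
# Hedged sketch: pull FUNC symbol names from an executable via readelf,
# then label a sample by its nearest (most similar) labelled reference set.
import subprocess

def function_calls(path):
    """Collect symbol names marked FUNC from `readelf -s` output."""
    out = subprocess.run(["readelf", "-s", path],
                         capture_output=True, text=True).stdout
    return {line.split()[-1] for line in out.splitlines() if " FUNC " in line}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def nearest_neighbour(sample, labelled):
    """labelled: list of (function_call_set, label) reference executables."""
    return max(labelled, key=lambda ref: jaccard(sample, ref[0]))[1]
```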
Abstract:
Complex Internet attacks may come from multiple sources and target multiple networks and technologies. Collaborative Intrusion Detection Systems (CIDS) have emerged as a promising solution, using information from multiple sources to gain a better understanding of the objective and impact of complex Internet attacks. CIDS also help to cope with classical problems of Intrusion Detection Systems (IDS), such as zero-day attacks, high false alarm rates and architectural challenges, e.g., centralized designs exposing a single point of failure. Increased complexity, on the other hand, gives rise to new exploitation opportunities for adversaries. The contribution of this paper is twofold. We first investigate related research on CIDS to identify the common building blocks and to understand vulnerabilities of the Collaborative Intrusion Detection Framework (CIDF). Second, we focus on the problem of anonymity preservation in a decentralized intrusion-detection-related message exchange scheme. We use techniques from design theory to provide a multi-path peer-to-peer communication scheme in which the adversary cannot do better than randomly guessing the originator of an alert message.
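A toy sketch of the multi-path idea, with plain random relay sampling standing in for the paper's design-theoretic path construction: the alert travels along several relay paths, so the destination (or an observer on one path) sees only the last hop, not the originator.

```python
# Illustrative only: forward an alert over k random relay paths. Peer names
# and parameters are invented; real CIDS paths come from block designs.
import random

def random_paths(peers, originator, dest, k=3, hops=3):
    relay_pool = [p for p in peers if p not in (originator, dest)]
    return [[originator] + random.sample(relay_pool, hops) + [dest]
            for _ in range(k)]

peers = [f"n{i}" for i in range(10)]
for path in random_paths(peers, "n0", "n9"):
    print(" -> ".join(path))   # n9 learns only each path's final relay
```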