695 results for mobile learning technologies
Abstract:
How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
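The abstract gives no implementation detail, so the following is only a toy Python sketch of the reactive-to-planned transition it describes: random exploratory movements stored in a motor working memory, reward-enhanced sequence "chunks", and a volitional gate on plan release. All names are invented for illustration; none of SOVEREIGN's neural circuitry is reproduced here.

```python
import random

ACTIONS = ["approach", "orient_left", "orient_right"]

class ToyAnimat:
    """Illustrative stand-in for an animat controller, not the SOVEREIGN model."""

    def __init__(self, learning_rate=0.5):
        self.motor_working_memory = []   # movement sequence of the current episode
        self.plan_strengths = {}         # sequence chunk -> reinforcement-weighted strength
        self.learning_rate = learning_rate

    def reactive_step(self):
        """Exploration: choose a movement at random and store it in working memory."""
        action = random.choice(ACTIONS)
        self.motor_working_memory.append(action)
        return action

    def end_episode(self, rewarded):
        """Reinforcement selectively enhances the chunk for the stored sequence."""
        chunk = tuple(self.motor_working_memory)
        if rewarded:
            old = self.plan_strengths.get(chunk, 0.0)
            self.plan_strengths[chunk] = old + self.learning_rate * (1.0 - old)
        self.motor_working_memory.clear()

    def planned_sequence(self, volition_open=True, threshold=0.5):
        """Once some chunk is strong enough (and volitionally gated), replay it as a plan."""
        if not volition_open or not self.plan_strengths:
            return None
        chunk, strength = max(self.plan_strengths.items(), key=lambda kv: kv[1])
        return list(chunk) if strength >= threshold else None

if __name__ == "__main__":
    random.seed(0)
    animat = ToyAnimat()
    for _ in range(50):
        moves = [animat.reactive_step() for _ in range(3)]
        # pretend that ending with an approach movement reaches the rewarded goal
        animat.end_episode(rewarded=(moves[-1] == "approach"))
    print("preferred plan:", animat.planned_sequence())
```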
Abstract:
This article introduces an unsupervised neural architecture for the control of a mobile robot. The system allows incremental learning of the plant during robot operation, with robust performance despite unexpected changes of robot parameters such as wheel radius and inter-wheel distance. The model combines Vector Associative Map (VAM) learning and associative learning, enabling the robot to reach targets at arbitrary distances without knowledge of the robot kinematics and without trajectory recording, by relating wheel velocities to robot movements.
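As a rough illustration of relating wheel velocities to observed robot movements through incremental learning (not the article's VAM architecture), the sketch below uses a simple delta-rule update to learn a linear wheel-velocity-to-motion map online and re-adapts when a wheel parameter changes mid-run; the hidden plant model and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((2, 2))          # learned map: [v_left, v_right] -> [linear_vel, angular_vel]
learning_rate = 0.1

def true_plant(wheel_velocities, radius=0.05, baseline=0.30):
    """Hidden differential-drive kinematics the controller is never told about."""
    v_l, v_r = wheel_velocities
    linear = radius * (v_l + v_r) / 2.0
    angular = radius * (v_r - v_l) / baseline
    return np.array([linear, angular])

radius = 0.05
for step in range(4000):
    if step == 2000:
        radius = 0.07             # unexpected change of wheel radius mid-run
    u = rng.uniform(-1.0, 1.0, size=2)        # commanded wheel velocities
    observed = true_plant(u, radius=radius)   # resulting robot motion
    predicted = W @ u
    W += learning_rate * np.outer(observed - predicted, u)   # delta-rule update

print("learned wheel-to-motion map:\n", W)
```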
Abstract:
Advanced sensory systems address a number of major obstacles towards the provision of cost-effective and proactive rehabilitation. Many of these systems employ technologies such as high-speed video or motion capture to generate quantitative measurements. However, these solutions are accompanied by some major limitations, including extensive set-up and calibration, restriction to indoor use, high cost and time-consuming data analysis. Additionally, many do not quantify improvement in a rigorous manner, for example offering gait analysis for 5 minutes as opposed to 24-hour ambulatory monitoring. This work addresses these limitations using low-cost, wearable wireless inertial measurement as a mobile and minimal-infrastructure alternative. In cooperation with healthcare professionals, the goal is to design and implement a reconfigurable and intelligent movement capture system. A key component of this work is an extensive benchmark comparison with the 'gold standard' VICON motion capture system.
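For orientation, the snippet below sketches one common building block of wearable inertial movement capture: a complementary filter fusing gyroscope and accelerometer readings into a tilt estimate. It is a generic textbook technique, not the system developed in this work, and the signal values are made up.

```python
import math

def complementary_filter(gyro_rates, accel_samples, dt=0.01, alpha=0.98):
    """gyro_rates: angular rates (rad/s) about one axis; accel_samples: (ax, az) pairs."""
    angle = 0.0
    estimates = []
    for rate, (ax, az) in zip(gyro_rates, accel_samples):
        accel_angle = math.atan2(ax, az)                       # tilt implied by gravity
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Example: a stationary sensor tilted at ~0.1 rad with a slightly biased gyroscope.
gyro = [0.01] * 500                                            # rad/s, drift-like bias
accel = [(math.sin(0.1), math.cos(0.1))] * 500                 # gravity components for 0.1 rad tilt
print("final tilt estimate (rad):", round(complementary_filter(gyro, accel)[-1], 3))
```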
Abstract:
The healthcare industry is beginning to appreciate the benefits which can be obtained from using Mobile Health Systems (MHS) at the point-of-care. As a result, healthcare organisations are investing heavily in mobile health initiatives with the expectation that users will employ the system to enhance performance. Despite widespread endorsement and support for the implementation of MHS, empirical evidence surrounding the benefits of MHS remains to be fully established. For MHS to be truly valuable, it is argued that the technological tool be infused within healthcare practitioners' work practices and used to its full potential in post-adoptive scenarios. Yet, there is a paucity of research focusing on the infusion of MHS by healthcare practitioners. In order to address this gap in the literature, the objective of this study is to explore the determinants and outcomes of MHS infusion by healthcare practitioners. This research study adopts a post-positivist theory-building approach to MHS infusion. Existing literature is utilised to develop a conceptual model by which the research objective is explored. Employing a mixed-method approach, this conceptual model is first advanced through a case study in the UK whereby propositions established from the literature are refined into testable hypotheses. The final phase of this research study involves the collection of empirical data from a Canadian hospital which supports the refined model and its associated hypotheses. The results from both phases of data collection are employed to develop a model of MHS infusion. The study contributes to IS theory and practice by: (1) developing a model with six determinants (Availability, MHS Self-Efficacy, Time-Criticality, Habit, Technology Trust, and Task Behaviour) and individual performance-related outcomes of MHS infusion (Effectiveness, Efficiency, and Learning), (2) examining undocumented determinants and relationships, (3) identifying prerequisite conditions that both healthcare practitioners and organisations can employ to assist with MHS infusion, (4) developing a taxonomy that provides conceptual refinement of IT infusion, and (5) informing healthcare organisations and vendors as to the performance of MHS in post-adoptive scenarios.
Abstract:
The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up times, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the classification performance of automated neurological event detection. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, and utilises supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved. Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner: blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so, the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world, clinical domain one step closer.
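A loose sketch of the general pattern described here, supervised classification of EEG epochs with gyroscope-derived information as additional features, might look as follows; the data are synthetic and the features, labels and classifier settings are assumptions rather than the thesis' actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def epoch_features(n, with_artefact):
    """Two toy per-epoch features: EEG variance and gyroscope energy (both invented)."""
    eeg_var = rng.normal(1.0 + (2.0 if with_artefact else 0.0), 0.3, size=n)
    gyro_energy = rng.normal(0.2 + (1.5 if with_artefact else 0.0), 0.2, size=n)
    return np.column_stack([eeg_var, gyro_energy])

X = np.vstack([epoch_features(200, False), epoch_features(200, True)])
y = np.array([0] * 200 + [1] * 200)          # 1 = head-movement artefact epoch

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("artefact detection accuracy:", clf.score(X_test, y_test))
```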
Abstract:
The enculturation of Irish traditional musicians involves informal, non-formal, and sometimes formal learning processes in a number of different settings, including traditional music sessions, workshops, festivals, and classes. Irish traditional musicians also learn directly from family, peers, and mentors and by using various forms of technology. Each experience contributes to the enculturation process in meaningful and complementary ways. The ethnographic research discussed in this dissertation suggests that within Irish traditional music culture, enculturation occurs most effectively when learners experience a multitude of learning practices. A variety of experiences ensures that novices receive multiple opportunities for engagement and learning. If a learner finds one learning practice ineffective, there are other avenues of enculturation. This thesis explores the musical enculturation of Irish traditional musicians. It focuses on the process of becoming a musician by drawing on methodologies and theories from ethnomusicology, education, and Irish traditional music studies. Data were gathered through multiple ethnographic methodologies. Fieldwork based on participant-observation was carried out in a variety of learning contexts, including traditional music sessions, festivals, workshops, and weekly classes. Additionally, interviews with twenty accomplished Irish traditional musicians provide diverse narratives and firsthand insight into musical development and enculturation. These and other methodologies are discussed in Chapter 1. The three main chapters of the thesis explore various common learning experiences. Chapter 2 explores how Irish traditional musicians learn during social and musical interactions between peers, mentors, and family members, and focuses on live music-making which occurs in private homes, sessions, and concerts. These informal and non-formal learning experiences primarily take place outside of organizations and institutions. The interview data suggest these learning experiences are perhaps the most pervasive and influential in terms of musical enculturation. Chapter 3 discusses learning experiences in more organized settings, such as traditional music classes, workshops, summer schools, and festivals. The roles of organizations such as Comhaltas Ceoltóirí Éireann and pipers’ clubs are discussed from the point of view of the learner. Many of the learning experiences explored in this chapter are informal, non-formal, and sometimes formal in nature, depending on the philosophy of the organization, institution, and individual teacher. The interview data and field observations indicate that learning in these contexts is common and plays a significant role in enculturation, particularly for traditional musicians who were born during and after the 1970s. Chapter 4 explores the ways Irish traditional musicians use technology, including written sources, phonography, videography, websites, and emerging technologies, during the enculturation process. Each type of technology presents different educational implications, and traditional musicians use these technologies in diverse ways, some more than others. For this and other reasons, technology plays a complex role during the process of musical enculturation. Drawing on themes which emerge during Chapters 2, 3, and 4, the final chapter of this dissertation explores overarching patterns of enculturation within Irish traditional music culture. This ethnographic work suggests that longevity of participation and engagement in multiple learning and performance opportunities foster the enculturation of Irish traditional musicians. Through numerous and prolonged participation in music-making, novices become accustomed to and learn musical, social, and cultural behaviours. The final chapter also explores interconnections between learning experiences and proposes directions for future research.
Abstract:
Nearly one billion smart mobile devices are now used for a growing number of tasks, such as browsing the web and accessing online services. In many communities, such devices are becoming the platform of choice for tasks traditionally carried out on a personal computer. However, despite the advances, these devices are still lacking in resources compared to their traditional desktop counterparts. Mobile cloud computing is seen as a new paradigm that can address the resource shortcomings of these devices with the plentiful computing resources of the cloud. This can enable the mobile device to be used for a large range of new applications hosted in the cloud that are too resource-demanding to run locally. Bringing these two technologies together presents various difficulties. In this paper, we examine the advantages of the mobile cloud and the new approaches to applications it enables. We present our own solution for creating a positive user experience for such applications and describe how it enables them.
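The paper's own solution is not specified in this abstract, but a common formulation of the offloading decision in mobile cloud computing compares estimated local execution time with transfer-plus-remote execution time; the sketch below illustrates that generic rule with made-up device and network figures.

```python
def should_offload(cycles, local_speed_hz, cloud_speed_hz,
                   payload_bytes, bandwidth_bps, rtt_s=0.05):
    """Offload when estimated remote time (round trip + transfer + cloud run) beats local."""
    local_time = cycles / local_speed_hz
    remote_time = rtt_s + (payload_bytes * 8) / bandwidth_bps + cycles / cloud_speed_hz
    return remote_time < local_time, local_time, remote_time

offload, local_t, remote_t = should_offload(
    cycles=5e9,               # hypothetical workload size
    local_speed_hz=1.5e9,     # phone CPU
    cloud_speed_hz=12e9,      # effective cloud VM speed
    payload_bytes=200_000,    # input data to ship
    bandwidth_bps=10e6,       # uplink bandwidth
)
print(f"offload={offload}  local={local_t:.2f}s  remote={remote_t:.2f}s")
```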
Abstract:
The pervasive use of mobile technologies has provided new opportunities for organisations to achieve competitive advantage by using a value network of partners to create value for multiple users. The delivery of a mobile payment (m-payment) system is an example of a value network, as it requires the collaboration of multiple partners from diverse industries, each bringing their own expertise, motivations and expectations. Consequently, managing partnerships has been identified as a core competence required by organisations to form viable partnerships in an m-payment value network and an important factor in determining the sustainability of an m-payment business model. However, there is evidence that organisations lack this competence; this deficiency has been witnessed in the m-payment domain, where it has been cited as a contributing factor in a number of failed m-payment initiatives since 2000. In response to this organisational deficiency, this research project leverages the use of design thinking and visualisation tools to enhance communication and understanding between managers who are responsible for managing partnerships within the m-payment domain. By adopting a design science research approach, which is a problem-solving paradigm, the research builds and evaluates a visualisation tool in the form of a Partnership Management Canvas. In doing so, this study demonstrates that when organisations encourage their managers to adopt design thinking, as a way to balance their analytical thinking and intuitive thinking, communication and understanding between the partners increases. This can lead to a shared understanding and a shared commitment between the partners. In addition, the research identifies a number of key business model design issues that need to be considered by researchers and practitioners when designing an m-payment business model. As an applied research project, the study makes valuable contributions to the knowledge base and to the practice of management.
Abstract:
Innovation in technology and communications, and particularly the advent of the Web, is changing the structure of teaching and learning today. While there is much debate about the use of technology in learning and how e-learning is creating new approaches to the delivery of learning, there has been very little, if any, work on the use of emerging technologies to provide student support throughout the learning process. This paper reports on research and development undertaken by the eCentre, based at the University of Greenwich School of Computing, in designing and developing a "Project Blog System" in order to address some long-standing issues related to the supervision of final year degree student projects. The paper reports on the methodology used to design the system and discusses some of the results from student and staff evaluation of the system developed.
Abstract:
This paper describes a Framework for e-Learning and presents the findings of a study investigating whether the use of Blended Learning can fulfil, or at least accommodate, some of the human requirements presently neglected by current e-Learning systems. This study evaluates the in-house system, Teachmat, and discusses how the use of Blended Learning has become increasingly prevalent as a result of its enhancement and expansion, its relationship to the human and pedagogical issues, and both the positive and negative implications of this reality. [From the Authors]
Abstract:
In this paper we revisit a study on e-Learning and suggestions for developing a framework for e-Learning. The original study in 2005 looked at e-Learning, specifically e-Tutoring, and the issues that surround it. However, re-examining these findings led to the realization that whilst most courses were not fully "e", many were in essence using Blended Learning to varying degrees. It is concluded that the encroachment of a Blended Learning approach has been an indirect consequence of the extension and enhancement of the in-house course management technologies now employed. The pros and cons of the situation are identified and discussed. In addition, we summarize the positions of participants of the workshop on Developing a Framework for e-Learning.
Abstract:
The Guardian newspaper (21st October 2005) informed its readers that: "Stanford University in California is to make its course content available on iTunes...The service, Stanford on iTunes, will provide…downloads of faculty lectures, campus events, performances, book readings, music recorded by Stanford students and even podcasts of Stanford football games". The emergence of Podcasting as a means of sending audio data to users has clearly excited educational technologists around the world. This paper will explore the technologies behind Podcasting and how they could be used to develop and deliver new E-Learning material. The paper refers to the work done to create Podcasts of lectures for University of Greenwich students.
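The delivery technology behind podcasting is an RSS feed whose items carry audio enclosures. The snippet below builds a minimal such feed; the URLs and file details are placeholders, not the University of Greenwich's actual podcast feed.

```python
import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Lecture Podcasts"
ET.SubElement(channel, "link").text = "https://example.edu/podcasts"
ET.SubElement(channel, "description").text = "Recorded lectures for download"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Week 1 Lecture"
# The enclosure element is what lets podcast clients fetch the audio file.
ET.SubElement(item, "enclosure", url="https://example.edu/podcasts/week1.mp3",
              length="10485760", type="audio/mpeg")   # size in bytes, MIME type
ET.SubElement(item, "guid").text = "https://example.edu/podcasts/week1.mp3"

print(ET.tostring(rss, encoding="unicode"))
```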
Abstract:
With the emergence of the "Semantic Web" there has been much discussion about the impact of technologies such as XML and RDF on the way we use the Web for developing e-learning applications and, perhaps more importantly, on how we can personalise these applications. Personalisation of e-learning is viewed by many authors (see amongst others Eklund & Brusilovsky, 1998; Kurzel, Slay, & Hagenus, 2003; Martinez, 2000; Sampson, Karagiannidis, & Kinshuk, 2002; Voigt & Swatman, 2003) as the key challenge for learning technologists. According to Kurzel (2004), the tailoring of e-learning applications can have an impact on content and how it is accessed, the media forms used, the method of instruction employed and the learning styles supported. This paper will report on a research project currently underway at the eCentre at the University of Greenwich which is exploring different approaches and methodologies to create an e-learning platform with personalisation built in. This personalisation is proposed to be set at different levels within the system, starting from being guided by the information that the user inputs into the system down to the lower level of being set using information inferred by the system's processing engine.
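To make the RDF angle concrete, the toy sketch below (using the rdflib library) tags learning resources with a media form and matches them against a learner preference; the namespace and properties are invented for illustration and are not the eCentre's actual schema.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/elearning#")   # hypothetical vocabulary
g = Graph()

# Two learning resources described with a (made-up) mediaForm property.
g.add((EX.intro_video, RDF.type, EX.LearningResource))
g.add((EX.intro_video, EX.mediaForm, Literal("video")))
g.add((EX.intro_text, RDF.type, EX.LearningResource))
g.add((EX.intro_text, EX.mediaForm, Literal("text")))

# A learner profile stating a preferred media form.
g.add((EX.learner42, EX.prefersMediaForm, Literal("video")))

preferred = next(g.objects(EX.learner42, EX.prefersMediaForm))
matches = [r for r in g.subjects(RDF.type, EX.LearningResource)
           if (r, EX.mediaForm, preferred) in g]
print("resources matching learner preference:", matches)
```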
Abstract:
The Student Experience of e-Learning Laboratory (SEEL) project at the University of Greenwich was designed to explore and then implement a number of approaches to investigate learners’ experiences of using technology to support their learning. In this paper members of the SEEL team present initial findings from a University-wide survey of nearly 1,000 students. A selection of 90 ‘cameos’, drawn from the survey data, offers further insights into personal perceptions of e-learning and illustrates the diversity of students’ experiences. The cameos provide a more coherent picture of individual student experience based on the totality of each person’s responses to the questionnaire. Finally, extracts from follow-up case studies, based on interviews with a small number of students, allow us to ‘hear’ the student voice more clearly. Issues arising from an analysis of the data include student preferences for communication and social networking tools, views on the ‘smartness’ of their tutors’ uses of technology and perceptions of the value of e-learning. A primary finding, and the focus of this paper, is that students effectively arrive at their own individualised selection, configuration and use of technologies and software that meets their perceived needs. This ‘personalisation’ does not imply that such configurations are the most efficient, nor does it automatically suggest that effective learning is occurring. SEEL reminds us that learners are individuals, who approach learning both with and without technology in their own distinctive ways. Hearing, understanding and responding to the student voice is fundamental in maximising learning effectiveness. Institutions should consider actively developing the capacity of academic staff to advise students on the usefulness of particular online tools and resources in support of learning, and consider the potential benefits of building on what students already use in their everyday lives. Given the widespread perception that students tend to be ‘digital natives’ and academic staff ‘digital immigrants’ (Prensky, 2001), this could represent a considerable cultural challenge.
Abstract:
The original article is available as an open access file on the Springer website at the following link: http://link.springer.com/article/10.1007/s10639-015-9388-2