973 results for synaesthesia for touch


Relevance:

20.00%

Publisher:

Abstract:

 Touch-screen devices have been enthusiastically adopted by schools across Australia and Canada. Their ease of use makes them accessible to very young children, and these children often have free access to the devices at home; in the school context, however, the devices tend to be ‘domesticated’ (O’Mara and Laidlaw, 2011). In the short period of their availability, a plethora of educational applications has been developed for these devices. This paper addresses emergent themes from our 2011-2013 Canadian/Australian project, Literacy learning in playful spaces: using multi-modal strategies to develop narrative with young learners, funded by the Canadian Social Sciences and Humanities Research Council (Insight Development Grant). In our analysis of the discourse around the introduction of portable touch-screen devices into school literacy classes (published texts, teacher interviews, classroom observations), we noted that much of the public discourse is slanted towards the idea of “teacher-proofing” the curriculum. Initially, the teachers we worked with saw the apps themselves as complete, as doing all the work, and the discourse around the devices centred on which apps were “best” and “is there an app for that?” Only with more experience and time were teachers able to harness the range of affordances of the devices (their capacity for recording audio, video, pictures, etc.) and to start categorising the apps themselves. In this paper we suggest ways in which current literacy models might be used to develop a repertoire of pedagogical discourse around these devices, providing language and framings for teachers to think about how these new tools might best be used to enhance literacy teaching and learning. O’Mara, J. & Laidlaw, L. (2011). Living in the iWorld: Two literacy researchers reflect on the changing texts and literacy practices of childhood. English Teaching: Practice and Critique 10 (4): 149-159. 
Available: http://edlinked.soe.waikato.ac.nz/research/journal/view.php?article=true&id=754&p=1

Relevance:

20.00%

Publisher:

Abstract:

This paper is concerned with the potential of mobile touch-screen devices and emerging socio-technological practices to support pedagogies of place that provide a means for young people to reflect critically on the social construction of place and to take actions that speak of and to their own locatedness. Drawing on de Certeau’s (1984) concept of space as a practiced place and Massey’s (2005) perspective of spatiality and interrelatedness, we examine two school-based examples of learning activities that bring together the virtual and the physical in experiences and representations of place. The first example is an Australian local history unit in which lower secondary school students participated in a series of field trips, planned and conducted under the guidance of an indigenous elder. They used smartphones and iPads to capture and create personalised audio-visual records of their knowledge of place, which were then used to create geo-location games. In the second example, upper primary school students worked with local authorities and environmental educators to select sites for two environmental monitoring posts, which were then installed and provided a locus for the students’ school-based environmental science learning as well as a vehicle for community engagement. Drawing on interview, video and photographic data, this paper examines the way mobile technologies were deployed for student knowledge production, engagement with place, reconstruction of place and engagement with community.

Relevance:

20.00%

Publisher:

Abstract:

In many creative and technical areas, professionals use paper sketches to develop and express concepts and models. Paper offers an almost constraint-free environment in which they have as much freedom of expression as they need. However, paper does have disadvantages, such as its fixed size and the inability to manipulate content (other than removing or scratching it out), which can be overcome by systems that offer the same freedom as paper without its limitations. Only in recent years has the technology that allows precisely that become widely available, with the development of touch-sensitive screens that can also interact with a stylus. In this project a prototype was created with the objective of finding a set of the most useful and usable interactions, composed of combinations of multi-touch and pen input. The project selected Computer Aided Software Engineering (CASE) tools as its application domain, because it is a solid, well-defined discipline with sufficient room for new developments; this choice resulted from research into possible application domains, which involved analysing sketching tools from several areas. User studies were conducted using Model Driven Inquiry (MDI) to gain a better understanding of human sketch-creation activities and the concepts devised. The prototype was then implemented, making it possible to run user evaluations of the interaction concepts. The results validated most of the interactions, although only limited testing was possible at the time. Users had more problems using the pen; however, handwriting and ink recognition were very effective, and users quickly learned the manipulations and gestures of the Natural User Interface (NUI).
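The abstract does not detail how the prototype separates pen and multi-touch input, but the division of labour it describes (pen for drawing, fingers for manipulating the canvas) can be sketched as a simple event router. The event model below is hypothetical, not taken from the thesis; real pen/touch APIs expose much richer data.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    """A simplified input event; pointer_type is 'pen' or 'touch'."""
    pointer_type: str
    x: float
    y: float
    touch_count: int = 1  # number of simultaneous touch points

def route_event(event: InputEvent) -> str:
    """Route input to a sketching action: the pen draws, while one- and
    two-finger touches manipulate the canvas, mirroring the pen + multi-touch
    split described in the abstract."""
    if event.pointer_type == "pen":
        return "draw-stroke"
    if event.pointer_type == "touch":
        return "pan-canvas" if event.touch_count == 1 else "zoom-canvas"
    return "ignore"
```

Keeping drawing exclusive to the pen avoids the classic palm-rejection problem: stray finger or palm contacts can never leave ink on the sketch.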

Relevance:

20.00%

Publisher:

Abstract:

This thesis argues for the possibility of supporting deictic gestures through handheld multi-touch devices in remote presentation scenarios. In [1], Clark distinguishes the indicative techniques of placing-for and directing-to, where placing-for refers to placing a referent into the addressee’s attention, and directing-to refers to directing the addressee’s attention towards a referent. Keynote, PowerPoint, FuzeMeeting and others support placing-for efficiently with slide transitions and animations, but provide little to no support for directing-to. The traditional “pointing feature” present in some presentation tools comes as a virtual laser pointer or mouse cursor. [12, 13] have shown that the mouse cursor and laser pointer offer very little informational expressiveness and do not do justice to human communicative gestures. In this project, a prototype application was implemented for the iPad in order to explore, develop, and test the concept of pointing in remote presentations. The prototype supports visualising and navigating the slides as well as “pointing” and zooming. To further investigate the problem and possible solutions, a theoretical framework was designed representing the relationships between the presenter’s intention and gesture and the resulting visual effect (cursor), which enables audience members to interpret the meaning of the effect and the presenter’s intention. Two studies were performed to investigate people’s appreciation of different ways of presenting remotely: an initial qualitative study performed in The Hague, followed by an online quantitative user experiment. The results indicate that subjects found pointing helpful for understanding and concentrating, while a detached video feed of the presenter was considered distracting. The positive qualities of the video feed were the emotion and social presence it adds to a presentation.
For a number of subjects, pointing displayed some of the same social and personal qualities [2] that video affords, though less intensely. The combination of pointing and video proved the most successful, with 10 of 19 subjects scoring it highest, while pointing alone came a close second at 8 of 19. Video alone was the least preferred, with only one subject favouring it. We suggest that the research performed here could provide a basis for future work and could be applied in a variety of distributed collaborative settings.
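For a remote pointing cursor to be meaningful, the presenter's touch position must land on the same spot of the slide for every audience member, regardless of screen size. The thesis does not publish its implementation; the sketch below shows one common approach, assumed here for illustration: normalising to slide-relative coordinates before transmission.

```python
def to_slide_coords(touch_x: float, touch_y: float,
                    view_w: float, view_h: float) -> tuple:
    """Normalise a presenter's touch position on their device to
    slide-relative coordinates in [0, 1] for transmission."""
    return (touch_x / view_w, touch_y / view_h)

def to_viewer_coords(norm: tuple, viewer_w: int, viewer_h: int) -> tuple:
    """Map normalised slide coordinates back to pixels on a viewer's
    screen, so the cursor appears at the same point of the slide."""
    nx, ny = norm
    return (round(nx * viewer_w), round(ny * viewer_h))
```

For example, a touch at the centre of a 1024×768 presenter view maps to the centre of a 1920×1080 viewer window, whatever the two resolutions are.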

Relevance:

20.00%

Publisher:

Abstract:

This project aimed to create a communication and interaction channel between Madeira Airport and its passengers. We used the pre-existing touch-enabled screens at the terminal, since their potential was not being used to full capacity. To achieve our goal, we followed an agile strategy: create a testable prototype and take advantage of its results. The developed prototype is based on a plugin architecture, making it a maintainable and highly customisable system. The collected usage data suggest that we achieved the initially defined goals. This new interaction channel is a clear improvement to the services provided and, supported by the usage data, there is an opportunity to explore further developments of the channel.
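The abstract credits the system's maintainability and customisability to its plugin architecture without describing it. A minimal sketch of the idea, with hypothetical plugin names, is a registry that decouples the kiosk shell from its features: new screens can be registered, replaced, or removed without touching the core application.

```python
class PluginRegistry:
    """Minimal plugin registry: plugins register under a name and the
    host application looks them up at runtime, so features can be added
    or swapped without modifying the core."""

    def __init__(self):
        self._plugins = {}

    def register(self, name: str, plugin) -> None:
        """Make a plugin (any callable or object) available under a name."""
        self._plugins[name] = plugin

    def get(self, name: str):
        """Return the plugin registered under name, or None if absent."""
        return self._plugins.get(name)

    def names(self) -> list:
        """List the registered plugin names, sorted for stable display."""
        return sorted(self._plugins)
```

A kiosk shell would iterate over `names()` to build its menu, so installing a new plugin is the only step needed to expose a new service to passengers.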

Relevance:

20.00%

Publisher:

Abstract:

Synaesthesia is a condition in which input to one sensory modality triggers extraordinary additional experiences. On an explicit level, subjects affected by this condition normally report unidirectional experiences. In grapheme-colour synaesthesia, for example, the letter A printed in black may trigger a red colour experience, but not vice versa. However, on an implicit level, bidirectional activation is possible for at least some types of synaesthesia. In this study we tested whether bidirectional implicit activation is mediated by the same brain areas as explicit synaesthetic experiences. Specifically, we demonstrated suppression of implicit bidirectional activation by applying transcranial magnetic stimulation over parieto-occipital brain areas. Our findings indicate that parieto-occipital regions are involved not only in explicit but also in implicit synaesthetic binding.

Relevance:

20.00%

Publisher:

Abstract:

With the increasing use of medical imaging in forensics, as well as technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry, in conjunction with segmentation techniques, to generate 3D polygon meshes. Based on these data sets, a 3D printer created coloured models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions and ruptured organs, as well as bitemark wounds. The final models are anatomically accurate, fully coloured representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume renderings or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes.
