Abstract:
This study examined the effects of performing music selection tasks with a touch screen interface on driving performance, usability and subjective workload, and explored whether providing visual and/or auditory feedback offers any performance or usability benefits. Thirty participants performed music selection tasks with a touch screen interface while driving. The interface provided four feedback conditions: no feedback, auditory feedback, visual feedback, and combined auditory and visual feedback. Performing the music selection tasks significantly increased subjective workload and degraded performance on a range of driving measures, including lane keeping variation and number of lane excursions. Providing any form of feedback on the touch screen interface did not significantly affect driving performance, usability or subjective workload, but users preferred it over no feedback. Overall, the results suggest that touch screens may not be a suitable input device for navigating scrollable lists while driving.
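The abstract does not give the formulas behind these driving measures, but both have standard definitions in driving research: lane keeping variation is usually reported as the standard deviation of lateral position (SDLP), and lane excursions as the number of times the vehicle crosses the lane boundary. A minimal Python sketch, with an illustrative half-lane width:

```python
# Minimal sketch: two common lane-keeping measures, computed from a time
# series of lateral lane positions (metres from lane centre). The paper does
# not publish its exact formulas; this follows the usual definitions.
import statistics

def lane_keeping_variation(lateral_positions):
    """Standard deviation of lateral position (SDLP), the usual
    lane-keeping variation measure."""
    return statistics.stdev(lateral_positions)

def count_lane_excursions(lateral_positions, half_lane_width=1.8):
    """Count transitions from inside to outside the lane boundary."""
    excursions = 0
    outside = False
    for y in lateral_positions:
        if abs(y) > half_lane_width and not outside:
            excursions += 1
        outside = abs(y) > half_lane_width
    return excursions

positions = [0.1, 0.3, 0.2, 1.9, 2.0, 0.4, 0.0, -1.9, -0.2]
print(lane_keeping_variation(positions))  # SDLP in metres
print(count_lane_excursions(positions))   # 2 excursions in this sample
```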
Abstract:
In many cities around the world, surveillance by a pervasive network of CCTV cameras is a common phenomenon, deployed in an attempt to uphold safety and security across the urban environment. Video footage is recorded and stored, and live feeds are sometimes watched in control rooms hidden from public access and view. In this study, we were inspired by Steve Mann’s original work on sousveillance (surveillance from below) to examine how a network of camera-equipped urban screens could allow the residents of Oulu, Finland, to collaborate on the safekeeping of their city. An agile, rapid prototyping process led to the design, implementation and ‘in the wild’ deployment of the UbiOpticon screen application. Live video streams captured by webcams integrated at the top of 12 distributed urban screens were broadcast and displayed in a matrix arrangement on all screens. The matrix also included live video streams from two roaming mobile phone cameras. In our field study we explored the reactions of passers-by and users of this screen application, which seeks to invert Bentham’s original panopticon by allowing the watched to be watchers at the same time. In addition to the original goal of participatory sousveillance, the system’s live video feature sparked fun and novel user-led appropriations.
Abstract:
While the number of traditional laptops and computers sold has dipped slightly year over year, manufacturers have developed new hybrid laptops with touch screens to build on the tactile trend. This market is moving quickly to make touch the rule rather than the exception, and sales of these devices have tripled since the launch of Windows 8 in 2012, reaching more than sixty million units sold in 2015. Unlike tablets, which benefit from easy-to-use applications specially designed for tactile interaction, hybrid laptops are intended to be used with regular user interfaces. Hence, one could ask whether tactile interactions are suited to every task and activity performed with such interfaces. Since hybrid laptops are increasingly used in educational settings, this study focuses on information search tasks, which are commonly performed for learning purposes. It is hypothesized that tasks requiring complex and/or less common gestures will increase users' cognitive load and impair task performance in terms of efficacy and efficiency. A study was carried out in a usability laboratory with 30 participants whose prior experience with tactile devices was controlled. They were asked to perform information search tasks on an online encyclopaedia using only the touch screen of a hybrid laptop. Tasks were selected with respect to their level of cognitive demand (the amount of information that had to be maintained in working memory) and the complexity of the gestures needed (left and/or right clicks, zoom, text selection and/or input), and were grouped into 4 sets accordingly. Task performance was measured by the number of tasks completed successfully (efficacy) and the time spent on each task (efficiency). Perceived cognitive load was assessed with a questionnaire given after each set of tasks. An eye-tracking device was used to monitor users' attention allocation and to provide objective cognitive load measures based on pupil dilation and the Index of Cognitive Activity. Each experimental run took approximately one hour. The results of this within-subjects design indicate that tasks involving complex gestures led to lower efficacy, especially when the tasks were cognitively demanding. Regarding efficiency, there were no significant differences between sets of tasks, except that tasks with low cognitive demand and complex gestures required more time to complete. Surprisingly, users who reported the most experience with tactile devices spent more time than less frequent users. Cognitive load measures indicate that participants reported devoting more mental effort to the interaction when they had to use complex gestures.
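The Index of Cognitive Activity used here is a proprietary wavelet-based pupillometry measure, so it cannot be reproduced exactly; as a hedged illustration, the sketch below computes a much simpler pupil-based load proxy, baseline-corrected relative dilation, which is one common alternative:

```python
# Illustrative sketch only: the Index of Cognitive Activity is proprietary,
# so this shows a much simpler pupil-based proxy instead: baseline-corrected
# relative pupil dilation averaged over a task window.
def relative_pupil_dilation(task_samples, baseline_samples):
    """Mean pupil diameter during a task, relative to a resting baseline.
    Larger values are commonly read as higher cognitive load."""
    baseline = sum(baseline_samples) / len(baseline_samples)
    task_mean = sum(task_samples) / len(task_samples)
    return (task_mean - baseline) / baseline

baseline = [3.1, 3.0, 3.2, 3.1]         # pupil diameter, mm, at rest
complex_gesture_task = [3.5, 3.6, 3.4]  # pupil diameter, mm, during task
print(f"{relative_pupil_dilation(complex_gesture_task, baseline):.1%}")
```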
Abstract:
Keyboards, mice, and touch screens are a potential source of infection or contamination in operating rooms, intensive care units, and autopsy suites. The authors present a low-cost prototype of a system that allows touch-free control of a medical image viewer. The touch-free navigation system consists of a computer (iMac, OS X 10.6, Apple, USA) running a medical image viewer (OsiriX, OsiriX Foundation, Switzerland) and a depth camera (Kinect, Microsoft, USA). The authors implemented software that translates the data delivered by the camera, together with voice recognition software, into keyboard and mouse commands, which are then passed to OsiriX. In this feasibility study, they introduced 10 medical professionals to the system and asked them to re-create 12 images from a CT data set, evaluating response times and the usability of the system compared with standard mouse/keyboard control. Users felt comfortable with the system after approximately 10 minutes. Response time was 120 ms. Users required 1.4 times more time to re-create an image with gesture control. Users with OsiriX experience were significantly faster with the mouse/keyboard and faster than users without prior experience. They rated the system 3.4 out of 5 for ease of use compared with the mouse/keyboard. The touch-free, gesture-controlled system performs favorably and removes a potential vector for infection, protecting both patients and staff. Because the camera can be quickly and easily integrated into existing systems, requires no calibration, and is low cost, the barriers to adopting this technology are low.
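The authors' translation software is not published in the abstract; the sketch below illustrates the general shape of such a layer, mapping recognized gesture or voice events to synthetic keyboard and mouse actions. The gesture names and key bindings are hypothetical, not taken from the paper:

```python
# Hedged sketch of the translation layer the authors describe: recognized
# gesture/voice events are mapped to synthetic keyboard or mouse actions that
# the image viewer already understands. Gesture names and bindings here are
# hypothetical, not taken from the paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    kind: str      # "key" or "mouse_drag"
    payload: str   # key chord or drag axis

# Hypothetical mapping; a real system would bind to actual OsiriX shortcuts.
BINDINGS: dict[str, Command] = {
    "swipe_left":   Command("key", "left"),    # previous image
    "swipe_right":  Command("key", "right"),   # next image
    "push":         Command("mouse_drag", "zoom"),
    "voice:window": Command("key", "cmd+w"),
}

def dispatch(event: str, send: Callable[[Command], None]) -> None:
    """Forward a recognized gesture/voice event as a synthetic input command."""
    command = BINDINGS.get(event)
    if command is not None:
        send(command)

dispatch("swipe_right", lambda c: print("sending", c))
```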
Abstract:
“The Cube” is a unique facility that combines 48 large multi-touch screens and very large-scale projection surfaces to form one of the world’s largest interactive learning and engagement spaces. The Cube facility is part of the Queensland University of Technology’s (QUT) newly established Science and Engineering Centre, designed to showcase QUT’s teaching and research capabilities in the STEM (Science, Technology, Engineering, and Mathematics) disciplines. In this application paper we describe the Cube, its technical capabilities, design rationale and practical day-to-day operations, which support up to 70,000 visitors per week. Essential to the Cube’s operation are five interactive applications designed and developed in tandem with the Cube’s technical infrastructure. Each of the Cube’s launch applications was designed and delivered by an independent team, while the overall vision of the Cube was shepherded by a small executive team. The diversity of design, implementation and integration approaches pursued by these five teams provides some insight into the challenges, and opportunities, presented when working with large distributed interaction technologies. We describe each of these applications in order to discuss the different challenges and user needs they address, the types of interactions they support and how they utilise the capabilities of the Cube facility.
Abstract:
CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology’s (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. CubIT was built to make the Cube facility accessible to QUT’s academic and student population. The system allows users to upload, interact with and share media content on the Cube’s very large display surfaces. CubIT implements a unique combination of features including RFID authentication, content management through multiple interfaces, multi-user shared workspace support, drag-and-drop upload and sharing, dynamic state control between different parts of the system, and execution and synchronisation of the system across multiple computing nodes.
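The abstract does not describe how CubIT's cross-node synchronisation works; a minimal sketch of one common approach for multi-node display walls, in which each node applies a local edit and then replicates it to its peers, looks like this (class and message names are hypothetical):

```python
# The paper does not describe CubIT's synchronisation protocol; this is a
# minimal sketch of one common approach for multi-node display walls:
# every node applies local edits, then broadcasts them so peers converge.
import json

class WorkspaceNode:
    def __init__(self, node_id, peers):
        self.node_id = node_id
        self.peers = peers          # other WorkspaceNode instances
        self.state = {}             # media item id -> position on the wall

    def move_item(self, item_id, x, y):
        """Apply a local drag, then replicate it to every peer node."""
        self.state[item_id] = (x, y)
        message = json.dumps({"item": item_id, "pos": [x, y]})
        for peer in self.peers:
            peer.receive(message)

    def receive(self, message):
        update = json.loads(message)
        self.state[update["item"]] = tuple(update["pos"])

a, b = WorkspaceNode("wall-1", []), WorkspaceNode("wall-2", [])
a.peers, b.peers = [b], [a]
a.move_item("photo-42", 120, 80)
print(b.state)  # {'photo-42': (120, 80)}: the peer has converged
```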
Abstract:
CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology’s (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. The CubIT system allows users to upload, interact with and share their own content on the Cube’s display surfaces. This paper outlines the collaborative features of CubIT, which are implemented via three user interfaces: a large-screen multi-touch interface, a mobile phone and tablet application, and a web-based content management system. Each of these applications plays a different role and offers different interaction mechanisms, together supporting a wide range of collaborative features including multi-user shared workspaces, drag-and-drop upload and sharing between users, session management and dynamic state control between different parts of the system.
Abstract:
In this paper we describe the use and evaluation of CubIT, a multi-user, very large-scale presentation and collaboration framework installed at the Queensland University of Technology’s (QUT) Cube facility. The “Cube” is an interactive visualisation facility made up of five very large-scale interactive multi-panel wall displays, each consisting of up to twelve 55-inch multi-touch screens (48 screens in total) and massive projected display screens situated above the display panels. The paper outlines the unique design challenges, features, use and evaluation of CubIT. The system was built to make the Cube facility accessible to QUT’s academic and student population. CubIT enables users to easily upload and share their own media content, and allows multiple users to simultaneously interact with the Cube’s wall displays. The features of CubIT are implemented via three user interfaces: a multi-touch interface running on the wall displays, a mobile phone and tablet application, and a web-based content management system. The evaluation reveals issues around the public use and functional scope of the system.
Abstract:
In this paper we describe CubIT, a multi-user presentation and collaboration system installed at the Queensland University of Technology’s (QUT) Cube facility. The ‘Cube’ is an interactive visualisation facility made up of five very large-scale interactive multi-panel wall displays, each consisting of up to twelve 55-inch multi-touch screens (48 screens in total) and massive projected display screens situated above the display panels. The paper outlines the unique design challenges, features, implementation and evaluation of CubIT. The system was built to make the Cube facility accessible to QUT’s academic and student population. CubIT enables users to easily upload and share their own media content, and allows multiple users to simultaneously interact with the Cube’s wall displays. The features of CubIT were implemented via three user interfaces: a multi-touch interface running on the wall displays, a mobile phone and tablet application, and a web-based content management system. Each of these interfaces plays a different role and offers different interaction mechanisms. Together they support a wide range of collaborative features including multi-user shared workspaces, drag-and-drop upload and sharing between users, session management and dynamic state control between different parts of the system. The results of our evaluation study showed that CubIT was successfully used for a variety of tasks, and highlighted challenges with regard to user expectations of functionality as well as issues arising from public use.
Abstract:
Project work can involve multiple people from varying disciplines coming together to solve problems as a group. Large-scale interactive displays present new opportunities to support such interactions with interactive and semantically enabled cooperative work tools such as intelligent mind maps. In this paper, we present a novel digital, touch-enabled mind-mapping tool as a first step towards achieving such a vision. This first prototype allows an evaluation of the benefits of a digital environment for a task that would otherwise be performed on paper or flat interactive surfaces. Observations and surveys of 12 participants in 3 groups allowed the formulation of several recommendations for further research into: new methods for capturing text input on touch screens; inclusion of complex structures; multi-user environments and how users make the shift from single-user applications; and how best to navigate large screen real estate in a touch-enabled, co-present multi-user setting.
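The paper does not publish the tool's data model; as a hedged illustration, a digital mind map is typically manipulated as a simple tree, with touch gestures (e.g., drag-to-link) triggering structural operations such as the add_child call in this sketch:

```python
# Minimal sketch of the tree structure a digital mind-mapping tool typically
# manipulates; not the authors' implementation. A touch gesture such as
# drag-to-link would trigger an operation like add_child below.
from dataclasses import dataclass, field

@dataclass
class MindMapNode:
    label: str
    children: list["MindMapNode"] = field(default_factory=list)

    def add_child(self, label: str) -> "MindMapNode":
        child = MindMapNode(label)
        self.children.append(child)
        return child

    def outline(self, depth: int = 0) -> str:
        """Render the map as an indented text outline."""
        lines = ["  " * depth + self.label]
        for child in self.children:
            lines.append(child.outline(depth + 1))
        return "\n".join(lines)

root = MindMapNode("Project kickoff")
ux = root.add_child("UX research")
ux.add_child("touch text input")
print(root.outline())
```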
Abstract:
Through ubiquitous computing and location-based social media, information is spreading outside the traditional domains of home and work into the urban environment. Digital technologies have changed the way people relate to the urban form, supporting discussion on multiple levels and allowing more citizens to be heard in new ways (Fredericks et al. 2013; Houghton et al. 2014; Caldwell et al. 2013). Face-to-face and digitally mediated discussions, facilitated by tangible and hybrid interaction such as multi-touch screens and media façades, are initiated through a telephone-booth-inspired portable structure: the InstaBooth. The InstaBooth prototype employs a multidisciplinary approach to engage local communities in a situated debate on the future of their urban environment. With it, we capture citizens’ past stories and opinions on the use and design of public places. The way public consultations are currently done often engages only a section of the population involved in a proposed development; the more vocal citizens are not necessarily the most representative of their communities (Jenkins 2006). Alternative ways to engage urban dwellers in the debate about the built environment are currently being explored, including the use of social media and online tools (Foth 2009). This project fosters innovation by providing pathways for communities to participate in the decision-making processes that inform the urban form. The InstaBooth promotes dialogue and mediation between bottom-up and top-down approaches to urban design, with the aim of promoting community connectedness with the urban environment. It provides an engagement and discussion platform that leverages a number of locally developed display and interaction technologies to facilitate a dialogue of ideas and commentary, combining multiple interaction techniques into a hybrid (digital and analogue) media space. Through the InstaBooth, urban design and architectural proposals are displayed to encourage commentary from visitors. Inside the InstaBooth, visitors can activate a multi-touch screen to browse media, write a note, or draw a picture to provide feedback. The purpose of the InstaBooth is to engage a broader section of society, including those who are often marginalised. The specific design of the internal and external interfaces, the mutual relationship between these interfaces with regard to information display and interaction, and the question of how visitors can engage with the system are part of the research agenda of the project.
Abstract:
This paper reports on the creation of an interface for 3D virtual environments, computer-aided design applications and computer games. Standard computer interfaces are bound to 2D surfaces, e.g., mice, keyboards, touch pads and touch screens. The Smart Object is intended to provide the user with a 3D interface by using sensors that register movement (an inertial measurement unit), touch (a touch screen) and voice (a microphone). The design and development process, as well as the tests and results, are presented in this paper. The Smart Object was developed over one semester by a team of four third-year engineering students from diverse scientific backgrounds and nationalities.
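The abstract does not detail the Smart Object's sensor processing; a common way to turn raw IMU readings into a usable 3D orientation is a complementary filter that blends integrated gyroscope rates (smooth but drifting) with accelerometer tilt (noisy but drift-free). A minimal single-axis sketch, with made-up sample values:

```python
# Hedged sketch: the paper does not describe its sensor fusion. A standard
# approach for IMU orientation is a complementary filter blending the
# integrated gyroscope rate (smooth, but drifts over time) with the
# accelerometer tilt angle (noisy, but drift-free).
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One pitch-angle update step; angles in degrees, rate in deg/s."""
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
samples = [(10.0, 0.17, 0.98), (12.0, 0.21, 0.97)]  # (gyro, ax, az) readings
for gyro, ax, az in samples:
    angle = complementary_filter(angle, gyro, ax, az, dt=0.01)
print(f"estimated pitch: {angle:.2f} deg")
```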
Abstract:
In recent years scientists have made rapid and significant advances in the field of semiconductor physics. One of the most important topics of current interest in materials science is the fundamental aspects and applications of transparent conducting oxide (TCO) thin films. The characteristic properties of such coatings are low electrical resistivity and high transparency in the visible region. The first semitransparent and electrically conducting CdO film was reported as early as 1907 [1]. Though early work on these films was performed out of purely scientific interest, substantial technological advances were made after 1940. The technological interest in transparent semiconducting films was generated mainly by the potential applications of these materials in both industry and research. Such films demonstrated their utility as transparent electrical heaters for windscreens in the aircraft industry. During the last decade, these transparent conducting films have been widely used in a variety of other applications such as gas sensors [2], solar cells [3], heat reflectors [4], light emitting devices [5] and laser damage resistant coatings in high power laser technology [6]. Just a few materials dominate the current TCO industry, and the two dominant markets for TCOs are architectural applications and flat panel displays. The architectural use of TCOs is in energy efficient windows. Fluorine-doped tin oxide (FTO), deposited by a pyrolysis process, is the TCO that finds the widest application here. SnO2 also finds application as a coating for windows that efficiently prevents radiative heat loss, owing to its low emissivity (0.16). Pyrolytic tin oxide is used in PV modules, touch screens and plasma displays. Indium tin oxide (ITO), however, is used in the majority of flat panel display (FPD) applications, where its basic function is as a transparent electrode. The volume of FPDs produced, and hence the volume of ITO coatings produced, continues to grow rapidly. But the recent increase in the cost of indium, and the scarcity of this material, have made it difficult to obtain low-cost TCOs. Hence the search for alternative TCO materials has been a topic of active research for the last few decades. This has resulted in the development of binary materials like ZnO, SnO2 and CdO, and ternary materials like Zn2SnO4, CdSb2O6:Y, ZnSnO3 and GaInO3. The use of multicomponent oxide materials makes it possible to produce TCO films suitable for specialized applications, because by altering their chemical composition one can control the electrical, optical, chemical and physical properties. The advantages of using binary materials, however, are the ease of controlling the chemical composition and deposition conditions. Recently, there have been reports claiming the deposition of CdO:In films with a resistivity of the order of 10⁻⁵ ohm cm for flat panel displays and solar cells; however, they find limited use because of the toxicity of Cd. In this regard, ZnO films, developed in the 1980s, are very useful as they use Zn, an abundant, inexpensive and nontoxic material. The resistivity of this material is still not very low, but it can be reduced through doping with group-III elements like In, Al or Ga, or with F [6]. Hence there is great interest in ZnO as an alternative to ITO. In the present study, we prepared and characterized transparent and conducting ZnO thin films using a cost-effective technique, viz. chemical spray pyrolysis (CSP). This technique is also suitable for large-area film deposition. It involves spraying a solution (usually aqueous) containing soluble salts of the constituents of the desired compound onto a heated substrate.
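To see why resistivities of the order of 10⁻⁵ ohm cm matter for display electrodes, one can convert bulk resistivity to sheet resistance via R_s = ρ/t; the worked example below (film thickness chosen for illustration, not taken from the abstract) gives about 1 ohm per square:

```python
# Worked example (not from the abstract): converting a film's bulk
# resistivity to sheet resistance, the figure of merit quoted for
# transparent electrodes, via R_s = resistivity / thickness.
resistivity = 1e-5              # ohm.cm, the order reported for CdO:In films
thickness_nm = 100              # illustrative film thickness
thickness_cm = thickness_nm * 1e-7
sheet_resistance = resistivity / thickness_cm
print(f"{sheet_resistance:.0f} ohm/sq for a {thickness_nm} nm film")  # 1 ohm/sq
```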
Abstract:
As applications and systems evolve, the way we interact with them changes as well. Until now, applications and systems have mostly been navigated and used by hand, via mouse and keyboard. More recently, navigation via touch screens and voice has become increasingly common. When an application is to be controlled by voice, it is important that anyone can control it, regardless of their dialect. To assess how accurately a speech recognition API (Application Programming Interface) recognizes Swedish dialects, this study began with document studies of the characteristics and sound combinations of the dialects. These characteristics and sound combinations formed the basis for the words we selected to test the API with. Each dialect was thus assigned a word constructed to be particularly difficult for the API to recognize when pronounced in that dialect. A prototype was then developed, specifically an Android application that served as a tool for data collection. Since the work comprises both a prototype and a study, Design and Creation Research was chosen as the research strategy, with document studies and observations as the data collection methods. Data were collected through observations, using the prototype as an aid, and through document studies. The empirical data recorded through the observations and the application showed that some dialects were easier for the API to recognize correctly. In some cases the results were as expected, since certain words were built from sound combinations that, according to theory, would be pronounced very distinctively in a particular dialect. Sometimes these words scored very low, but in other cases surprisingly high. Our conclusion was that the words selected with the expectation that they would score low for a particular dialect proved to do so on only two occasions. Instead, it was the word containing the sje and tje sounds, which according to theory are characteristics common to all dialects, that scored lowest overall.
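The thesis's Android prototype is not reproduced here; the scoring step it implies, tallying how often the API's transcript matches the target word for each dialect, can be sketched as follows (the dialect names and trial data are hypothetical illustrations):

```python
# Hedged sketch of the scoring step such a study implies: tally how often
# the speech API's transcript matches the target word, per dialect. The
# dialect names and trial data below are hypothetical illustrations.
from collections import defaultdict

def accuracy_by_dialect(trials):
    """trials: iterable of (dialect, target_word, api_transcript) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for dialect, target, transcript in trials:
        totals[dialect] += 1
        if transcript.strip().lower() == target.lower():
            hits[dialect] += 1
    return {dialect: hits[dialect] / totals[dialect] for dialect in totals}

trials = [
    ("Scanian", "sjuksköterska", "sjuksköterska"),
    ("Scanian", "sjuksköterska", "chefsköterska"),
    ("Gotlandic", "kärlek", "kärlek"),
]
print(accuracy_by_dialect(trials))  # {'Scanian': 0.5, 'Gotlandic': 1.0}
```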
Abstract:
A sizeable amount of the testing in eye care requires either the identification of targets such as letters to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images are unreliable. Recent advances in mobile computing hardware and computer-vision systems can be used to enhance clinical testing in optometry. High-resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer vision systems. Three novel apps for assessing reading speed, contrast sensitivity and amplitude of accommodation were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination and automate the testing procedure for reading speed assessment. The contrast sensitivity app made use of a bit-stealing technique and a swept-frequency target to rapidly assess a patient's full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone to allow free-space measurement of the amplitude of accommodation. A new geometrical model of the tear film and a ray-tracing simulation of a Placido disc topographer were produced to provide insights into the effect of tear film breakdown on ophthalmic images. Furthermore, a new computer vision system, using a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision systems for the clinical assessment of tear stability. Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison to their paper-based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. Questions still remain over the validity of using a swept-frequency sine-wave target to assess patients' contrast sensitivity functions, as no clinical test provides an equivalent range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer vision system found that the author's tear metric correlated better with existing subjective measures of tear film stability than that of a competing computer-vision system. However, repeatability was poor in comparison to the subjective measures, due to eyelash interference. The new mobile apps, computer vision system, and studies outlined in this thesis provide further insight into the potential of applying mobile and image-processing technology to enhance clinical testing by eye care professionals.
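The thesis does not include the app's rendering code; one way such a swept-frequency target can be generated, in the style of a Campbell-Robson chart with spatial frequency rising along one axis and contrast decaying along the other, is sketched below (all parameters are illustrative assumptions):

```python
# Hedged sketch (not the author's implementation): a Campbell-Robson style
# swept target in which spatial frequency increases along the horizontal
# axis and contrast decays along the vertical axis, one way to probe the
# full contrast sensitivity function with a single image.
import numpy as np

def swept_frequency_target(width=512, height=512):
    x = np.linspace(0, 1, width)
    y = np.linspace(0, 1, height)
    xx, yy = np.meshgrid(x, y)
    frequency = 2 * np.exp(4 * xx)   # cycles grow exponentially with x
    contrast = np.exp(-5 * yy)       # contrast decays along y
    grating = contrast * np.sin(2 * np.pi * frequency * xx)
    return ((grating + 1) / 2 * 255).astype(np.uint8)  # 8-bit grayscale

image = swept_frequency_target()
print(image.shape, image.min(), image.max())
```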