989 results for visual interfaces


Relevance:

30.00%

Publisher:

Abstract:

Background: Standard operating procedures state that police officers should not drive while interacting with their mobile data terminal (MDT), which provides in-vehicle information essential to police work. Such interactions do, however, occur in practice and represent a potential source of driver distraction. The MDT comprises visual output with manual input via touch screen and keyboard. This study investigated the potential for alternative input and output methods to mitigate driver distraction, with specific focus on eye movements.

Method: Nineteen experienced drivers of police vehicles (one female) from the NSW Police Force completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence-plate search using an emulated MDT. Three different interface methods were examined: Visual-Manual, Visual-Voice, and Audio-Voice (“Visual” and “Audio” = output modality; “Manual” and “Voice” = input modality). During each drive, eye movements were recorded using FaceLAB™ (Seeing Machines Ltd, Canberra, ACT), and gaze direction and glances on the MDT were assessed.

Results: Both the Visual-Manual and Visual-Voice interfaces resulted in significantly more glances towards the MDT than Audio-Voice or Baseline. For longer duration glances (>2s and 1-2s), the Visual-Manual interface resulted in significantly more fixations than Baseline or Audio-Voice. Short duration glances (<1s) were significantly more frequent for both Visual-Voice and Visual-Manual compared with Baseline and Audio-Voice. There were no significant differences between Baseline and Audio-Voice.

Conclusion: An Audio-Voice interface has the greatest potential to decrease visual distraction to police drivers. However, it is acknowledged that an audio output may have limitations for information presentation compared with visual output. The Visual-Voice interface offers an environment in which the capacity to present information is sustained, whilst distraction to the driver is reduced (compared to Visual-Manual) by enabling adaptation of fixation behaviour.
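As a concrete illustration of how the glance measures above are typically derived, the sketch below bins already-segmented glances toward an area of interest (here the MDT) into the duration bands reported in the Results. The Glance structure, the thresholds as coded and the example data are hypothetical, not the study's FaceLAB processing pipeline.

```python
from dataclasses import dataclass

@dataclass
class Glance:
    """One continuous glance toward the MDT, in seconds (hypothetical format)."""
    start: float
    end: float

    @property
    def duration(self) -> float:
        return self.end - self.start

def bin_glances(glances):
    """Count glances in the duration bands used in the study: <1 s, 1-2 s, >2 s."""
    bins = {"<1s": 0, "1-2s": 0, ">2s": 0}
    for g in glances:
        if g.duration < 1.0:
            bins["<1s"] += 1
        elif g.duration <= 2.0:
            bins["1-2s"] += 1
        else:
            bins[">2s"] += 1
    return bins

# Example with three hypothetical glances logged during a drive
print(bin_glances([Glance(0.0, 0.4), Glance(5.2, 6.8), Glance(12.0, 14.6)]))
# -> {'<1s': 1, '1-2s': 1, '>2s': 1}
```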

Relevance:

30.00%

Publisher:

Abstract:

This research proposes the development of interfaces to support collaborative, community-driven inquiry into data, which we refer to as Participatory Data Analytics. Since the investigation is led by local communities, it is not possible to anticipate which data will be relevant and what questions are going to be asked. Therefore, users have to be able to construct and tailor visualisations to their own needs. The poster presents early work towards defining a suitable compositional model, which will allow users to mix, match, and manipulate data sets to obtain visual representations with little-to-no programming knowledge. Following a user-centred design process, we are subsequently planning to identify appropriate interaction techniques and metaphors for generating such visual specifications on wall-sized, multi-touch displays.

Relevance:

30.00%

Publisher:

Abstract:

As technological capabilities for capturing, aggregating, and processing large quantities of data continue to improve, the question becomes how to effectively utilise these resources. Whenever automatic methods fail, it is necessary to rely on human background knowledge, intuition, and deliberation. This creates demand for data exploration interfaces that support the analytical process, allowing users to absorb and derive knowledge from data. Such interfaces have historically been designed for experts. However, existing research has shown promise in involving a broader range of users who act as citizen scientists, which places high demands on usability. Visualisation is one of the most effective analytical tools for humans to process abstract information. Our research focuses on the development of interfaces to support collaborative, community-led inquiry into data, which we refer to as Participatory Data Analytics. The development of data exploration interfaces to support independent investigations by local communities around topics of their interest presents a unique set of challenges, which we discuss in this paper. We present our preliminary work towards suitable high-level abstractions and interaction concepts that allow users to construct and tailor visualisations to their own needs.

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the effects of experience on the intuitiveness of physical and visual interactions performed by airport security screeners. Using portable eye-tracking glasses, 40 security screeners were observed in the field as they performed search, examination and interface interactions during airport security X-ray screening. Data from semi-structured interviews were used to further explore the nature of visual and physical interactions. Results show there are positive relationships between experience and the intuitiveness of visual and physical interactions performed by security screeners. As experience is gained, security screeners are found to perform search, examination and interface interactions more intuitively. In addition to experience, results suggest that intuitiveness is affected by the nature and modality of activities performed. This inference was made based on the dominant processing styles associated with search and examination activities. The paper concludes by discussing the implications that this research has for the design of visual and physical interfaces. We recommend designing interfaces that build on users’ already established intuitive processes, and that reduce the cognitive load incurred during transitions between visual and physical interactions.

Relevance:

30.00%

Publisher:

Abstract:

In visual search one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, together with the contribution of different stimulus properties such as luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability. It reflects the time used for visual processing, separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and perceptual span, i.e., the number of items that can be processed during a single fixation, were measured.

It was found that search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e., within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within object type, especially for very low contrasts. In addition, the size and shape of the perceptual span explained the changes in search performance for different stimulus layouts in word search. The perceptual span was scale invariant over a 16-fold range of stimulus sizes, i.e., the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance.

It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within the perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility and the resolution requirements for identifying different object types, are proposed to constrain the size of the perceptual span and thus limit visual search performance.

Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data on the effects of different stimulus properties on visual search in normal subjects were presented as simple practical guidelines, so that the limits of human visual perception can be taken into account in the design of user interfaces.
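The threshold procedure described above can be pictured with a minimal sketch of one common adaptive staircase (a 1-up/1-down rule with multiplicative steps) controlling presentation duration. The thesis abstract does not report its exact staircase rule, so the step size, the roughly 50% convergence level and the toy observer below are assumptions.

```python
import random

def run_staircase(trial_fn, start_ms=2000.0, step_factor=1.26, n_trials=60):
    """1-up/1-down staircase over stimulus presentation duration.

    trial_fn(duration_ms) -> True if the target was found on that trial.
    Duration shrinks after a hit and grows after a miss, so the track hovers
    around the duration yielding roughly 50% correct; the threshold is
    estimated from the last few reversal points.
    """
    duration, reversals, last_direction = start_ms, [], None
    for _ in range(n_trials):
        found = trial_fn(duration)
        direction = "down" if found else "up"
        if last_direction is not None and direction != last_direction:
            reversals.append(duration)
        last_direction = direction
        duration = duration / step_factor if found else duration * step_factor
    tail = reversals[-6:] or [duration]
    return sum(tail) / len(tail)

# Toy observer whose probability of finding the target rises with duration
# (50% point at 800 ms), standing in for a real search trial.
estimate = run_staircase(lambda d: random.random() < min(1.0, d / 1600.0))
print(f"estimated threshold search time: {estimate:.0f} ms")
```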

Relevance:

30.00%

Publisher:

Abstract:

We describe the design of a digital noticeboard to support communication within a remote Aboriginal community whose aspiration is to live in "both worlds", nurturing and extending their Aboriginal culture and actively participating in Western society and economy. Three bi-cultural aspects have emerged and are presented here: the need for a bi-lingual noticeboard to span both oral and written language traditions, the tension between perfunctory information exchange and the social, embodied protocols of telling in person, and the different ways in which time is represented in both cultures. The design approach, developed iteratively through consultation, demonstration and testing, led to an "unsurprising interface" aimed at maximizing use and appropriation across cultures by unifying visual, text and spoken contents in both passive and interactive displays in a modeless manner.

Relevance:

30.00%

Publisher:

Abstract:

A visual analysis of how certain physico-chemical quantities are present at the protein interface. The protein is displayed in three dimensions (3D). Each region of the protein chain is coloured with a different colour, representing the variation of physico-chemical characteristics along the chain.
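A minimal sketch of the kind of mapping such a tool applies: a per-region physico-chemical value is normalised and converted into a colour for rendering. The linear blue-to-red gradient, the value range and the example scores are assumptions, not the tool's actual palette.

```python
def property_to_rgb(value, vmin, vmax):
    """Map a physico-chemical value onto a blue (low) -> red (high) gradient."""
    t = 0.0 if vmax == vmin else (value - vmin) / (vmax - vmin)
    t = min(1.0, max(0.0, t))                    # clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))

# Example: colour three chain regions by a hypothetical hydrophobicity score
for region, score in [("A1-A40", -1.2), ("A41-A80", 0.3), ("A81-A120", 2.0)]:
    print(region, property_to_rgb(score, vmin=-2.0, vmax=2.0))
```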

Relevance:

30.00%

Publisher:

Abstract:

A rapidly increasing number of Web databases have become accessible via their HTML form-based query interfaces. Query result pages are dynamically generated in response to user queries; they encode structured data and are displayed for human use. Query result pages usually contain other types of information in addition to the query results, e.g., advertisements and navigation bars. The problem of extracting structured data from query result pages is critical for web data integration applications, such as comparison shopping and meta-search engines, and has been intensively studied, with a number of approaches proposed. As the structures of Web pages become more and more complex, existing approaches start to fail, and most of them do not remove irrelevant content that may affect the accuracy of data record extraction. We propose an automated approach for Web data extraction. First, it makes use of visual features and query terms to identify data sections and extracts data records in these sections. We also extract several content and visual features of the visual blocks in a data section and use them to filter out noisy blocks. Second, it measures similarity between data items in different data records based on their visual and content features, and aligns them into groups so that the data in the same group have the same semantics. The results of our experiments with a large set of Web query result pages in different domains show that our proposed approaches are highly effective.
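The similarity-and-alignment step can be illustrated with a toy sketch in which each extracted data item carries a couple of visual features (font size, horizontal position) plus its text, and items from two data records are paired greedily. The feature set, weights and threshold are illustrative assumptions, not the algorithm actually proposed in the work.

```python
from difflib import SequenceMatcher

def item_similarity(a, b):
    """Toy similarity between two data items, mixing 'visual' features
    (font size, left x-coordinate) with a content feature (text similarity).
    Items are plain dicts; the weights are arbitrary."""
    visual = 0.5 * (a["font_size"] == b["font_size"]) \
           + 0.5 * (abs(a["x"] - b["x"]) < 5)          # roughly the same column
    content = SequenceMatcher(None, a["text"], b["text"]).ratio()
    return 0.6 * visual + 0.4 * content

def align(record_a, record_b, threshold=0.5):
    """Greedily pair items from two data records whose similarity clears a
    threshold, so that paired items can be placed in the same semantic group."""
    pairs, used = [], set()
    for a in record_a:
        scored = [(item_similarity(a, b), i)
                  for i, b in enumerate(record_b) if i not in used]
        if not scored:
            break
        score, i = max(scored)
        if score >= threshold:
            pairs.append((a["text"], record_b[i]["text"]))
            used.add(i)
    return pairs

# Two hypothetical product records from a query result page
r1 = [{"text": "Canon EOS R8", "font_size": 14, "x": 10},
      {"text": "$1,299", "font_size": 12, "x": 220}]
r2 = [{"text": "Nikon Z5", "font_size": 14, "x": 10},
      {"text": "$996", "font_size": 12, "x": 220}]
print(align(r1, r2))   # titles pair with titles, prices with prices
```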

Relevance:

30.00%

Publisher:

Abstract:

Existing interactive television infrastructures allow a wide variety of resources and services to be integrated, offering users new experiences of interaction and participation. For most viewers, using interactive services poses no great difficulty; however, for audiences with special needs, for example people with visual impairment, the task becomes complex, making it difficult or even impossible for these users to benefit from such services. Portugal is no exception in this context: there is a significant number of users with visual impairment (UDV) who do not fully benefit from the potential of the current television paradigm. In this context, the research project supporting this thesis explores Universal Design applied to Interactive Television (iTV) and aims to conceptualise, prototype and validate an iTV service specifically adapted to UDV, with a view to promoting their digital inclusion. To meet these objectives, the research was divided into three distinct stages. In the first stage, using Grounded Theory, the difficulties and needs of UDV as consumers of television content and audio description services were identified; the most suitable technological platform to support the prototyped service was selected; and a set of guiding design principles (PODs) for interactive television interfaces specific to this target audience was defined. Initially, two interviews were conducted with 20 visually impaired participants to determine their difficulties and needs as consumers of television content and audio description services. Next, an interview was conducted with an expert responsible for the transition to digital terrestrial television (TDT) in Portugal (TDT was initially considered a promising platform that could support the prototype), and the literature on PODs for the development of iTV service interfaces aimed at visually impaired people was reviewed. Based on the results obtained at this stage it was possible to define the functional and technical requirements of the system, as well as its PODs, for both the graphical and the interaction components. In the second stage, the iTV prototype adapted to UDV, ‘meo ad+’, was conceptualised and developed on Portugal Telecom's IPTV platform, following the defined requirements and design principles. The third stage comprised the evaluation of the prototyped service by a group of visually impaired participants. This phase was conducted as an Evaluation Study: usability and accessibility tests, complemented by interviews, were used to determine whether the prototyped service actually met the needs of these users. The participants involved in the prototype tests were satisfied with the functionality offered by the system as well as with the design of its interface.

Relevance:

30.00%

Publisher:

Abstract:

Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of ‘familiar’ interface design, which builds upon users’ knowledge of real-world interactions and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects and are manipulated like their real-world counterparts, rather than with the buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions that are based upon familiar real-world tasks; the design supports multitouch interaction, involves few buttons and icons, and has no menus, no right-clicks or double-clicks and no dialogs. Using an email client as an example to test the principle of ‘familiarity’, we found the initial feedback very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway.

Relevance:

30.00%

Publisher:

Abstract:

Haptic computer interfaces provide users with feedback through the sense of touch, thereby allowing users to feel a graphical user interface. Force feedback gravity wells, i.e. attractive basins that can pull the cursor toward a target, are one type of haptic effect that have been shown to provide improvements in "point and click" tasks. For motion-impaired users, gravity wells could improve times by as much as 50%. It has been reported that the presentation of information to multiple sensory modalities, e.g. haptics and vision, can provide performance benefits. However, previous studies investigating the use of force feedback gravity wells have generally not provided visual representations of the haptic effect. Where force fields extend beyond clickable targets, the addition of visual cues may affect performance. This paper investigates how the performance of motion-impaired computer users is affected by having visual representations of force feedback gravity wells presented on-screen. Results indicate that the visual representation does not affect times and errors in a "point and click" task involving multiple targets.
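To make the notion of an attractive basin concrete, the sketch below implements a simple spring-like gravity well: inside a fixed radius the device applies a force pulling the cursor toward the target centre, and outside it applies none. The linear force profile, radius and units are assumptions rather than parameters from the studies cited.

```python
import math

def gravity_well_force(cursor, target, radius=80.0, k=0.02):
    """Force vector (arbitrary device units) pulling the cursor toward the
    target centre while it is inside the well; zero force outside the radius."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > radius:
        return (0.0, 0.0)
    magnitude = k * dist                  # spring-like: stronger further from centre
    return (magnitude * dx / dist, magnitude * dy / dist)

# Cursor 30 px to the right of a target at (100, 100): the force points left,
# back toward the target, i.e. approximately (-0.6, 0.0).
print(gravity_well_force((130.0, 100.0), (100.0, 100.0)))
```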

Relevance:

30.00%

Publisher:

Abstract:

Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during BCI training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCI skills.
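For readers unfamiliar with how ERD is quantified, the sketch below uses the classic band-power definition: the percentage change of band power during motor imagery relative to a rest baseline, with negative values indicating desynchronization. The mu-band limits, sampling rate and crude periodogram estimator are placeholders, not details of this study.

```python
import numpy as np

def band_power(signal, fs, band=(8.0, 13.0)):
    """Mean periodogram power of `signal` within a frequency band (mu band by default)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(baseline, imagery, fs=250.0, band=(8.0, 13.0)):
    """ERD% = 100 * (A - R) / R, with R the baseline band power and A the band
    power during motor imagery; negative values indicate desynchronization."""
    r = band_power(np.asarray(baseline, dtype=float), fs, band)
    a = band_power(np.asarray(imagery, dtype=float), fs, band)
    return 100.0 * (a - r) / r

# Synthetic example: a 10 Hz rhythm whose amplitude halves during imagery
t = np.arange(0, 2.0, 1.0 / 250.0)
rest = np.sin(2 * np.pi * 10 * t)
mi = 0.5 * np.sin(2 * np.pi * 10 * t)
print(round(erd_percent(rest, mi), 1))   # -> -75.0 (power scales with amplitude squared)
```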

Relevance:

30.00%

Publisher:

Abstract:

During the past decade, brain–computer interfaces (BCIs) have rapidly developed, both in technological and application domains. However, most of these interfaces rely on the visual modality. Only some research groups have been studying non-visual BCIs, primarily based on auditory and, sometimes, on somatosensory signals. These non-visual BCI approaches are especially useful for severely disabled patients with poor vision. From a broader perspective, multisensory BCIs may offer more versatile and user-friendly paradigms for control and feedback. This chapter describes current systems that are used within auditory and somatosensory BCI research. Four categories of noninvasive BCI paradigms are employed: (1) P300 evoked potentials, (2) steady-state evoked potentials, (3) slow cortical potentials, and (4) mental tasks. Comparing visual and non-visual BCIs, we propose and discuss different possible multisensory combinations, as well as their pros and cons. We conclude by discussing potential future research directions of multisensory BCIs and related research questions.

Relevance:

30.00%

Publisher:

Abstract:

This paper explores the comparative computational value of using a variety of visual cues in 3D environments. The authors reflect upon the possible repercussions of computationally less expensive visual cues on users' ability to efficiently and accurately interact with three-dimensional images. This study compares the effectiveness of expensive soft shadows against less expensive hard shadows, and of expensive partial occlusion (obtained with semi-transparent surfaces) against less expensive full occlusion, on users' ability to accurately position objects in imminent contact with other objects in a three-dimensional environment.

Relevance:

30.00%

Publisher:

Abstract:

Research over the last decade has shown that auditorily cuing the location of visual targets reduces the time taken to locate and identify targets for both free-field and virtually presented sounds. The first study conducted for this thesis confirmed these findings over an extensive region of free-field space. However, the number of sound locations that are measured and stored in the data library of most 3-D audio spatial systems is limited, so that there is often a discrepancy in position between the cued and physical location of the target. Sampling limitations in the systems also produce temporal delays in which the stored data can be conveyed to operators.

To investigate the effects of spatial and temporal disparities in audio cuing of visual search, and to provide evidence to alleviate concerns that psychological research lags behind the capabilities to design and implement synthetic interfaces, experiments were conducted to examine (a) the magnitude of spatial separation, and (b) the duration of temporal delay that intervened between auditory spatial cues and visual targets to alter response times to locate targets and discriminate their shape, relative to when the stimuli were spatially aligned and temporally synchronised, respectively. Participants listened to free-field sound localisation cues that were presented with a single, highly visible target that could appear anywhere across 360° of azimuthal space on the vertical mid-line (spatial separation), or extended to 45° above and below the vertical mid-line (temporal delay).

A vertical or horizontal spatial separation of 40° between the stimuli significantly increased response times, while separations of 30° or less did not reach significance. Response times were slowed at most target locations when auditory cues occurred 770 msecs prior to the appearance of targets, but not with shorter temporal delays (i.e., 440 msecs or less). When sounds followed the appearance of targets, the stimulus onset asynchrony that affected response times was dependent on target location, and ranged from 440 msecs at higher elevations and rearward of participants, to 1,100 msecs on the vertical mid-line. If targets appeared in the frontal field of view, no delay of acoustical stimulation affected performance. Finally, when conditions of spatial separation and temporal delay were combined, visual search times were degraded with a shorter stimulus onset asynchrony than when only the temporal relationship between the stimuli was varied, but responses to spatial separation were unaffected.

The implications of the results for the development of synthetic audio spatial systems to aid visual search tasks are discussed.
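The spatial separations reported above are angles between the cued and actual target directions; a small worked sketch of that computation (great-circle separation from azimuth/elevation pairs, in degrees) is given below. The coordinate convention and function are illustrative, not taken from the thesis.

```python
import math

def angular_separation(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two directions given as
    azimuth/elevation pairs in degrees."""
    az1, el1, az2, el2 = map(math.radians, (az1, el1, az2, el2))
    cos_sep = (math.sin(el1) * math.sin(el2)
               + math.cos(el1) * math.cos(el2) * math.cos(az1 - az2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# A cue 40 degrees of azimuth from a target on the horizontal plane reaches the
# separation at which response times were reliably slowed; 30 degrees does not.
print(round(angular_separation(0, 0, 40, 0)))   # -> 40
print(round(angular_separation(0, 0, 30, 0)))   # -> 30
```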