21 results for Visual Information

em Cambridge University Engineering Department Publications Database


Relevance: 70.00%

Abstract:

Visual information is difficult to search and interpret when the density of the displayed information is high or the layout is chaotic. Visual information that exhibits such properties is generally referred to as being "cluttered." Clutter should be avoided in information visualizations and interface design in general because it can severely degrade task performance. Although previous studies have identified computable correlates of clutter (such as local feature variance and edge density), understanding of why humans perceive some scenes as being more cluttered than others remains limited. Here, we explore an account of clutter that is inspired by findings from visual perception studies. Specifically, we test the hypothesis that the so-called "crowding" phenomenon is an important constituent of clutter. We constructed an algorithm to predict visual clutter in arbitrary images by estimating the perceptual impairment due to crowding. After verifying that this model can reproduce crowding data, we tested whether it can also predict clutter. We found that its predictions correlate well with both subjective clutter assessments and search performance in cluttered scenes. These results suggest that crowding and clutter may indeed be closely related concepts and point to avenues for further research.
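Edge density, one of the computable correlates mentioned above, can be sketched in a few lines. The images, threshold, and finite-difference gradient scheme below are illustrative assumptions; the paper's own model estimates crowding rather than edge density.

```python
import numpy as np

def edge_density(image, threshold=0.2):
    """Fraction of pixels whose gradient magnitude exceeds a threshold.

    A simple computable correlate of visual clutter: the denser the
    edges, the more cluttered an image tends to appear.
    """
    gy, gx = np.gradient(image.astype(float))  # finite-difference gradients
    magnitude = np.hypot(gx, gy)
    return float(np.mean(magnitude > threshold))

# A uniform image has no edges; an image with a vertical step has some.
flat = np.zeros((8, 8))
step = np.zeros((8, 8))
step[:, 4:] = 1.0
```

Here `edge_density(flat)` is 0, while `edge_density(step)` is positive, matching the intuition that more edges mean more clutter.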

Relevance: 70.00%

Abstract:

The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] translating visual information into representations suitable for motor control [5, 6]. For this to be accomplished, visual information pertaining to the target and the hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is more efficiently filtered out during the extraction of hand information than of target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in visual and motor systems.

Relevance: 60.00%

Abstract:

We present a gradient-based motion capture system that robustly tracks a human hand, based on abstracted visual information - silhouettes. Despite the ambiguity in the visual data and despite the vulnerability of gradient-based methods in the face of such ambiguity, we minimise problems related to misfit by using a model of the hand's physiology, which is entirely non-visual, subject-invariant, and assumed to be known a priori. By modelling seven distinct aspects of the hand's physiology we derive prior densities which are incorporated into the tracking system within a Bayesian framework. We demonstrate how the posterior is formed, and how our formulation leads to the extraction of the maximum a posteriori estimate using a gradient-based search. Our results demonstrate an enormous improvement in tracking precision and reliability, while also achieving near real-time performance. © 2009 IEEE.
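The MAP extraction described above can be sketched generically: gradient ascent on the log posterior, whose gradient is the sum of the log-likelihood and log-prior gradients. The one-dimensional Gaussian example below is a hypothetical stand-in for the hand's high-dimensional pose space, not the paper's model.

```python
def map_estimate(log_likelihood_grad, log_prior_grad, x0,
                 step=0.1, iters=200):
    """Gradient ascent on the log posterior.

    Up to a constant, log posterior = log likelihood + log prior, so
    its gradient is the sum of the two gradients. The paper's tracker
    searches the hand's pose space; x here is a single scalar for
    illustration.
    """
    x = x0
    for _ in range(iters):
        x += step * (log_likelihood_grad(x) + log_prior_grad(x))
    return x

# Gaussian likelihood centred at 2.0 and Gaussian prior centred at 0.0
# with equal variances: the MAP estimate is their midpoint, 1.0.
x_map = map_estimate(lambda x: -(x - 2.0), lambda x: -x, x0=5.0)
```

The prior term pulls the estimate away from the data alone, which is exactly how the physiological priors regularize ambiguous silhouette observations in the Bayesian framework described above.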

Relevance: 60.00%

Abstract:

Physical modelling of interesting geotechnical problems has helped clarify behaviours and failure mechanisms of many civil engineering systems. Interesting visual information from physical modelling can also be used in teaching to foster interest in geotechnical engineering and recruit young researchers to our field. With this intention, the Teaching Committee of TC2 developed a web-based teaching resources centre. In this paper, the development and organisation of the resource centre using WordPress are described. WordPress is an open-source content management system which allows user content to be edited and site administration to be controlled remotely via a built-in interface. Example data from a centrifuge test on shallow foundations, which could be used for undergraduate- or graduate-level courses, is presented and its use illustrated. A discussion on the development of a wiki-style addition to the resource centre for commonly used physical model terms is also presented. © 2010 Taylor & Francis Group, London.

Relevance: 60.00%

Abstract:

In this work we present a flexible Electrostatic Tactile (ET) surface/display realized using the emerging material graphene. Graphene is a transparent conductor that successfully replaces previous solutions based on indium tin oxide (ITO) and delivers a more reliable option for flexible and bendable displays. The electrostatic tactile surface is capable of delivering programmable, location-specific tactile textures. The ET device has an area of 25 cm², and consists of a 130 μm thick, optically transparent (>76%), and mechanically flexible structure overlaid unobtrusively on top of a display. The ET system exploits the electrovibration phenomenon to enable on-demand control of the frictional force between the user's fingertip and the device surface. The ET device is integrated through a controller on a mobile display platform to generate a fully programmable range of stimulating signals. The ET haptic feedback is formed in accordance with the visual information displayed underneath, with the magnitude and pattern of the frictional force correlated with both the images and the coordinates of the actual touch in real time, forming virtual textures on the display surface (haptic virtual silhouette). To quantify the rate of change in friction force, we performed a dynamic friction coefficient measurement with a system involving an artificial finger mimicking the actual touch. During operation, the dynamic friction between the ET surface and the artificial finger increases under stimulation by 26% when the load is 0.8 N and by 24% when the load is 1 N. © 2012 ACM.
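The reported friction changes follow from the standard definition of the dynamic friction coefficient. The force readings below are hypothetical numbers chosen to reproduce the quoted 26% figure at a 0.8 N load; they are not the paper's raw data.

```python
def friction_coefficient(tangential_force, normal_load):
    """Dynamic friction coefficient: measured tangential (friction)
    force divided by the normal load pressing the finger down."""
    return tangential_force / normal_load

def percent_increase(mu_off, mu_on):
    """Relative friction change when the ET stimulation is switched on."""
    return 100.0 * (mu_on - mu_off) / mu_off

# Hypothetical readings at a 0.8 N load (not the paper's raw data),
# chosen so the stimulated case shows the quoted 26% rise.
mu_off = friction_coefficient(0.400, 0.8)
mu_on = friction_coefficient(0.504, 0.8)
```

With these numbers, `percent_increase(mu_off, mu_on)` comes to 26%, illustrating how the measurement system turns force readings into the percentages quoted above.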

Relevance: 60.00%

Abstract:

The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations.
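One common way to formalise the two competing hypotheses is a power law on mean precision. The specific functional form and parameter values below are illustrative assumptions, not the paper's fitted model.

```python
def mean_precision(set_size, j1, alpha):
    """Mean encoding precision J as a function of set size N.

    A common parameterisation in this literature is the power law
    J(N) = J1 * N**(-alpha): alpha = 0 gives the constant-precision
    model, while alpha > 0 gives precision that decreases with set
    size. Form and values here are illustrative only.
    """
    return j1 * set_size ** (-alpha)

constant = [mean_precision(n, j1=10.0, alpha=0.0) for n in (1, 2, 4, 8)]
declining = [mean_precision(n, j1=10.0, alpha=1.0) for n in (1, 2, 4, 8)]
```

Fitting such a model per condition and testing whether alpha differs from zero is one way to operationalise the constant-versus-declining comparison described above.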

Relevance: 60.00%

Abstract:

This paper investigates how the efficiency and robustness of a skilled rhythmic task compete against each other in the control of a bimanual movement. Human subjects juggled a puck in 2D through impacts with two metallic arms, requiring rhythmic bimanual actuation. The arms' kinematics were constrained only by the position, velocity, and time of impacts, while the rest of the trajectory did not influence the movement of the puck. In order to expose the task robustness, we manipulated the task context in two distinct manners: the task tempo was assigned at four different values (hence manipulating the time available to plan and execute each impact movement individually), and vision was withdrawn during half of the trials (hence reducing the sensory inflows). We show that when the tempo was fast, the actuation was rhythmic (no pause in the trajectory), while at slow tempo, the actuation was discrete (with pause intervals between individual movements). Moreover, the withdrawal of visual information encouraged the rhythmic behavior at the four tested tempi. The discrete and rhythmic behaviors give different answers to the efficiency/robustness trade-off: discrete movements are energy-efficient, while rhythmic movements impact the puck with negative acceleration, a property that preserves robustness. Moreover, we report that in all conditions the impact velocity of the arms was negatively correlated with the energy of the puck. This correlation tended to stabilize the task and was influenced by vision, again revealing different control strategies. In conclusion, this task involves different modes of control that balance efficiency and robustness, depending on the context. © 2008 Springer-Verlag.
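The reported negative correlation is a standard Pearson coefficient. The trial values below are hypothetical and only demonstrate the sign of the relationship, not the paper's data.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical trials (not the paper's data): higher impact velocity
# paired with lower puck energy yields the reported negative sign.
impact_velocity = [1.2, 1.0, 0.8, 0.6]
puck_energy = [0.5, 0.7, 0.8, 1.1]
r = pearson_r(impact_velocity, puck_energy)
```

A value of r below zero reproduces the direction of the effect: the arms hit harder when the puck carries less energy, which stabilizes the task.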

Relevance: 60.00%

Abstract:

Temporal synchronization of multiple video recordings of the same dynamic event is a critical task in many computer vision applications, e.g., novel view synthesis and 3D reconstruction. Typically this information is implied, since recordings are made using the same timebase, or time-stamp information is embedded in the video streams. Recordings made using consumer-grade equipment do not contain this information; hence, there is a need to temporally synchronize signals using the visual information itself. Previous work in this area has either assumed good-quality data with relatively simple dynamic content or the availability of precise camera geometry. In this paper, we propose a technique which exploits feature trajectories across views in a novel way, and specifically targets the kind of complex content found in consumer-generated sports recordings, without assuming precise knowledge of fundamental matrices or homographies. Our method automatically selects the moving feature points in the two unsynchronized videos whose 2D trajectories can be best related, thereby helping to infer the synchronization index. We evaluate performance using a number of real recordings and show that synchronization can be achieved to within 1 second, which is better than previous approaches. Copyright 2013 ACM.
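A simple baseline for trajectory-based synchronization, assuming both videos share a frame rate, slides one 1-D trajectory signal over the other and keeps the lag with the highest correlation over the overlapping frames. This generic sketch is not the paper's method, which matches 2D trajectories across views.

```python
import numpy as np

def sync_offset(traj_a, traj_b, max_lag):
    """Frame offset that best aligns two feature-trajectory signals.

    Generic cross-correlation baseline: try every lag in
    [-max_lag, max_lag] and keep the one with the highest normalised
    correlation over the overlapping frames.
    """
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = traj_a[lag:], traj_b
        else:
            a, b = traj_a, traj_b[-lag:]
        n = min(len(a), len(b))
        score = np.corrcoef(a[:n], b[:n])[0, 1]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic trajectories: camera B started recording 3 frames after
# camera A, so the best alignment is found at lag 3.
t = np.arange(60, dtype=float)
signal = np.sin(t / 3.0)
offset = sync_offset(signal, signal[3:], max_lag=10)
```

Real consumer footage needs the paper's more robust trajectory selection because individual feature tracks are noisy, short, and seen from different viewpoints.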

Relevance: 60.00%

Abstract:

Temporal synchronization of multiple video recordings of the same dynamic event is a critical task in many computer vision applications, e.g., novel view synthesis and 3D reconstruction. Typically this information is implied through the time-stamp information embedded in the video streams. User-generated videos shot using consumer-grade equipment do not contain this information; hence, there is a need to temporally synchronize signals using the visual information itself. Previous work in this area has either assumed good-quality data with relatively simple dynamic content or the availability of precise camera geometry. Our first contribution is a synchronization technique which establishes correspondence between feature trajectories across views in a novel way, and specifically targets the kind of complex content found in consumer-generated sports recordings, without assuming precise knowledge of fundamental matrices or homographies. We evaluate performance using a number of real video recordings and show that our method is able to synchronize to within 1 second, which is significantly better than previous approaches. Our second contribution is a robust and unsupervised view-invariant activity recognition descriptor that exploits recurrence plot theory on spatial tiles. The descriptor is individually shown to better characterize activities from different views under occlusions than state-of-the-art approaches. We combine this descriptor with our proposed synchronization method and show that it can further refine the synchronization index. © 2013 ACM.
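The recurrence-plot idea underlying the descriptor can be illustrated on a scalar signal: mark every pair of time points whose states fall within a threshold of each other. The tiling and view-invariance machinery of the actual descriptor are omitted here.

```python
import numpy as np

def recurrence_plot(series, eps):
    """Binary recurrence matrix: R[i, j] = 1 iff |x_i - x_j| < eps.

    Recurrence plots mark when a signal revisits earlier states; the
    paper builds its view-invariant descriptor from such plots computed
    over spatial tiles. This scalar version is a minimal illustration,
    not the descriptor itself.
    """
    x = np.asarray(series, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

# A periodic signal recurs: samples one full period apart are close,
# so 1s appear off the diagonal at that lag.
r = recurrence_plot([0.0, 1.0, 0.0, 1.0], eps=0.5)
```

The texture of these 0/1 patterns is what makes recurrence plots useful for characterizing repetitive activities largely independently of viewpoint.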