973 results for Visual discrimination
Abstract:
In this paper, we present a new algorithm for boosting visual template recall performance through a process of visual expectation. Visual expectation dynamically modifies the recognition thresholds of learnt visual templates based on recently matched templates, improving the recall of sequences of familiar places while keeping precision high, without any feedback from a mapping backend. We demonstrate the performance benefits of visual expectation using two 17 kilometer datasets gathered in an outdoor environment at two times separated by three weeks. The visual expectation algorithm provides up to a 100% improvement in recall. We also combine the visual expectation algorithm with the RatSLAM SLAM system and show how the algorithm enables successful mapping.
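The core idea above, lowering recognition thresholds for templates near a recent match so that sequences of familiar places are recalled more readily, can be sketched as follows. This is an illustrative assumption of how such a scheme might look; the function and parameter names (`base_threshold`, `boost`, `window`) are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch of visual expectation: template matching in which
# templates close (in storage order) to the last matched template get a
# relaxed recognition threshold, anticipating familiar place sequences.

def match_templates(scores, last_match, base_threshold=0.8, boost=0.2, window=3):
    """Return indices of templates whose match score passes their
    (possibly lowered) threshold. Templates within `window` of the
    last matched template have their threshold lowered by `boost`."""
    matches = []
    for i, score in enumerate(scores):
        threshold = base_threshold
        if last_match is not None and abs(i - last_match) <= window:
            threshold = base_threshold - boost  # expectation lowers the bar
        if score >= threshold:
            matches.append(i)
    return matches
```

With no prior match, only strong matches pass; once template 0 has matched, a nearby template with a weaker score can also be recalled, which is how precision is preserved while recall improves along a familiar route.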
Abstract:
Power relations and small and medium-sized enterprise strategies for capturing value in global production networks: visual effects (VFX) service firms in the Hollywood film industry, Regional Studies. This paper provides insights into the way in which non-lead firms manoeuvre in global value chains in the pursuit of a larger share of revenue and how power relations affect these manoeuvres. It examines the nature of value capture and power relations in the global supply of visual effects (VFX) services and the range of strategies VFX firms adopt to capture higher value in the global value chain. The analysis is based on a total of thirty-six interviews with informants in the industry in Australia, the United Kingdom and Canada, and a database of VFX credits covering 3323 visual products from 640 VFX firms.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains amongst others. Currently, the burden of validating both the interactive functionality and visual consistency of virtual environment content is entirely carried out by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date, only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games heavily rely on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques.
Experiments were conducted on a game engine and other virtual worlds prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
Abstract:
This study aimed to examine the effects on driving, usability and subjective workload of performing music selection tasks using a touch screen interface. Additionally, it explored whether the provision of visual and/or auditory feedback offers any performance and usability benefits. Thirty participants performed music selection tasks with a touch screen interface while driving. The interface provided four forms of feedback: no feedback, auditory feedback, visual feedback, and a combination of auditory and visual feedback. Performing the music selection tasks significantly increased subjective workload and degraded performance on a range of driving measures including lane keeping variation and number of lane excursions. The provision of any form of feedback on the touch screen interface did not significantly affect driving performance, usability or subjective workload, but was preferred by users over no feedback. Overall, the results suggest that touch screens may not be a suitable input device for navigating scrollable lists.
Abstract:
The interactive effects of emotion and attention on attentional startle modulation were investigated in two experiments. Participants performed a discrimination and counting task with two visual stimuli during which acoustic eyeblink startle-eliciting probes were presented at long lead intervals. In Experiment 1, this task was combined with aversive Pavlovian conditioning. In Group Attend CS+, the attended stimulus was followed by an aversive unconditional stimulus (US) and the ignored stimulus was presented alone whereas the ignored stimulus was paired with the US in Group Attend CS−. In Experiment 2, a non-aversive reaction time task US replaced the aversive US. Regardless of the conditioning manipulation and consistent with a modality non-specific account of attentional startle modulation, startle magnitude was larger during attended than ignored stimuli in both experiments. Blink latency shortening was differentially affected by the conditioning manipulations suggesting additive effects of conditioning and discrimination and counting task on blink startle.
Abstract:
Abstract Purpose: To determine how high and low contrast visual acuities are affected by blur caused by crossed-cylinder lenses. Method: Crossed-cylinder lenses of power zero (no added lens), +0.12 DS/-0.25 DC, +0.25 DS/-0.50 DC and +0.37 DS/-0.75 DC were placed over the correcting lenses of the right eyes of eight subjects. Negative cylinder axes used were 15-180 degrees in 15 degree steps for the two higher crossed-cylinders and 30-180 degrees in 30 degree steps for the lowest crossed-cylinder. Targets were single lines of letters based on the Bailey-Lovie chart. Successively smaller lines were read until the subject could not read any of the letters correctly. Two contrasts were used: high (100%) and low (10%). The screen luminance of 100 cd/m2, together with the room lighting, gave pupil sizes of 4.5 to 6 mm. Results: High contrast visual acuities were better than low contrast visual acuities by 0.1 to 0.2 log unit (1 to 2 chart lines) for the no added lens condition. Based on comparing the average of visual acuities for the 0.75 D crossed-cylinder with the best visual acuity for a given contrast and subject, the rates of change of visual acuity per unit blur strength were similar for high contrast (0.34 ± 0.05 logMAR/D) and low contrast (0.37 ± 0.09 logMAR/D). There were considerable asymmetry effects, with the average loss in visual acuity across the two contrasts and the 0.50 D/0.75 D crossed-cylinders doubling between the 165° and 60° negative cylinder axes. The loss of visual acuity with 0.75 D crossed-cylinders was approximately twice that occurring for defocus of the same blur strength. Conclusion: Small levels of crossed-cylinder blur (≤0.75 D) produce losses in visual acuity that are dependent on the cylinder axis. 0.75 D crossed-cylinders produce losses in visual acuity that are twice those produced by defocus of the same blur strength.
Abstract:
Purpose: Investigations of foveal aberrations assume circular pupils. However, the pupil becomes increasingly elliptical as visual field eccentricity increases. We address this and other issues concerning peripheral aberration specification. Methods: One approach uses an elliptical pupil similar to the actual pupil shape, stretched along its minor axis to become a circle so that Zernike circular aberration polynomials may be used. Another approach uses a circular pupil whose diameter matches either the larger or smaller dimension of the elliptical pupil. Pictorial presentation of aberrations, influence of wavelength on aberrations, sign differences between aberrations for fellow eyes, and referencing position to either the visual field or the retina are considered. Results: Examples show differences between the two approaches. Each has its advantages and disadvantages, but there are ways to compensate for most disadvantages. Two representations of data are pupil aberration maps at each position in the visual field and maps showing the variation in individual aberration coefficients across the field. Conclusions: Based on simplicity of use, adequacy of approximation, possible departures of off-axis pupils from ellipticity, and ease of understanding by clinicians, the circular pupil approach is preferable to the stretched elliptical approach for studies involving field angles up to 30 deg.
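The stretched-pupil approach above can be illustrated with a small coordinate transform. Off-axis, a circular pupil projects to an ellipse whose minor axis is roughly the on-axis diameter scaled by the cosine of the field angle, so stretching the minor-axis coordinate by the reciprocal of that cosine recovers a unit circle on which circular Zernike polynomials can be evaluated. The cosine model is a common approximation used here for illustration, not necessarily the paper's exact pupil model.

```python
import math

# Illustrative sketch: map a point on the elliptical off-axis pupil
# (minor axis along y) back to the equivalent point on a circular pupil
# by stretching along the minor axis. Assumes the minor axis shrinks by
# cos(field angle), a standard first-order approximation.

def stretch_to_circle(x, y, field_angle_deg):
    """Return the (x, y) point on the circular pupil corresponding to a
    point on the elliptical pupil at the given field angle."""
    c = math.cos(math.radians(field_angle_deg))
    return x, y / c
```

For example, the edge of the minor axis at a 30 degree field angle (y = cos 30°) maps back to the rim of the unit circle (y = 1).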
Abstract:
This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
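The image-based class of systems discussed above is commonly driven by a control law of the form v = -λ L⁺ e, where e is the image-feature error and L the interaction matrix (image Jacobian). A minimal sketch for point features follows; the interaction matrix is the standard one for a normalized image point at depth Z, and the function names and gain are illustrative assumptions rather than the tutorial's own code.

```python
import numpy as np

# Minimal sketch of one image-based visual servoing (IBVS) control step:
# camera velocity v = -gain * pinv(L) @ e, driving image features toward
# their desired positions.

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix for a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_step(features, targets, depths, gain=0.5):
    """Compute a 6-DOF camera velocity (vx, vy, vz, wx, wy, wz) that
    reduces the stacked feature error for a set of tracked points."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

When the features coincide with their targets the error is zero and the commanded velocity vanishes, which is the equilibrium the servo loop converges to.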
Abstract:
Purpose: To determine the effect of moderate levels of refractive blur and simulated cataracts on nighttime pedestrian conspicuity in the presence and absence of headlamp glare. Methods: The ability to recognize pedestrians at night was measured in 28 young adults (M=27.6 years) under three visual conditions: normal vision, refractive blur and simulated cataracts; mean acuity was 20/40 or better in all conditions. Pedestrian recognition distances were recorded while participants drove an instrumented vehicle along a closed road course at night. Pedestrians wore one of three clothing conditions and oncoming headlamps were present for 16 participants and absent for 12 participants. Results: Simulated visual impairment and glare significantly reduced the frequency with which drivers recognized pedestrians and the distance at which the drivers first recognized them. Simulated cataracts were significantly more disruptive than blur even though photopic visual acuity levels were matched. With normal vision, drivers responded to pedestrians at 3.6x and 5.5x longer distances on average than for the blur or cataract conditions, respectively. Even in the presence of visual impairment and glare, pedestrians were recognized more often and at longer distances when they wore a “biological motion” reflective clothing configuration than when they wore a reflective vest or black clothing. Conclusions: Drivers’ ability to recognize pedestrians at night is degraded by common visual impairments even when the drivers’ mean visual acuity meets licensing requirements. To maximize drivers’ ability to see pedestrians, drivers should wear their optimum optical correction, and cataract surgery should be performed early enough to avoid potentially dangerous reductions in visual performance.
Abstract:
A whole tradition is said to be based on the hierarchical distinction between the perceptual and conceptual. In art, Niklas Luhmann argues, this schism is played out and repeated in conceptual art. This paper complicates this depiction by examining Ian Burn's last writings, in which I argue the artist-writer reviews the challenge of minimal-conceptual art in terms of its perceptual pre-occupations. Burn revisits his own work and the legacy of minimal-conceptual art by moving away from the kind of ideology critique he is best known for internationally in order to reassert the long overlooked visual-perceptual preoccupations of the conceptual in art.
Abstract:
Rapid prototyping environments can speed up the research of visual control algorithms. We have designed and implemented a software framework for fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAV). We have applied a combination of a proxy-based network communication architecture and a custom Application Programming Interface. This allows multiple experimental configurations, like drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV: the Parrot AR.Drone. Real tests have been performed on this platform and the results show that the extra communication delay introduced by the framework is comparatively low, while adding new functionalities and flexibility to the selected drone. This implementation is open-source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. 
Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
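The distance-based suggestion idea described above can be sketched in a few lines: each work item gets a position under some distance notion (geographical, organisational, social), and the handler recommends the item closest to the user, with urgency as a tie-breaker. The item structure, the Euclidean metric, and the tie-breaking rule are illustrative assumptions, not the paper's framework.

```python
import math

# Illustrative sketch of distance-based work-item suggestion under the
# "map metaphor": pick the item nearest to the user on the chosen map,
# preferring the more urgent item (earlier due date) when distances tie.

def suggest_next(work_items, user_pos):
    """Return the suggested next work item for a user at `user_pos`.
    Each item is a dict with a map position `pos` and a due date `due`."""
    def key(item):
        dx = item["pos"][0] - user_pos[0]
        dy = item["pos"][1] - user_pos[1]
        return (math.hypot(dx, dy), item["due"])
    return min(work_items, key=key)
```

Swapping the distance function is all that is needed to move from a geographical map to, say, a social-network or due-date-based notion of closeness, which is what makes the framework generic.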
Abstract:
This paper presents an image-based visual servoing system that was used to track the atmospheric Earth re-entry of Hayabusa. The primary aim of this ground-based tracking platform was to record the emission spectrum radiating from the superheated gas of the shock layer and the surface of the heat shield during re-entry. To the author's knowledge, this is the first time that a visual servoing system has successfully tracked a super-orbital re-entry of a spacecraft and recorded its spectral signature. Furthermore, we improved the system by including a simplified dynamic model for feed-forward control and demonstrate improved tracking performance on the International Space Station (ISS). We present comparisons between simulation and experimental results on different target trajectories, including tracking results from Hayabusa and the ISS. The required performance for tracking both spacecraft is demanding when combined with a narrow field of view (FOV). We also briefly discuss the preliminary results obtained from the spectroscopy of Hayabusa's heat shield during re-entry.
Abstract:
In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question of how much visual information is actually needed to conduct effective navigation. The algorithm actively searches for the best local image matches within a sliding window of short route segments or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, it requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7 bit Lego light sensors, as well as using 16 and 32 pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
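The sub-route matching idea above, accumulating whole-image differences over a short sequence of frames rather than matching frames individually, can be sketched as follows. The difference metric (summed absolute pixel difference) and the exhaustive scan over starting positions are illustrative assumptions; the published algorithm's search is more sophisticated.

```python
import numpy as np

# Hedged sketch of sequence-based localization: slide a short query
# sub-route along a stored route of tiny grayscale images and return the
# starting index whose accumulated image difference is lowest. Sequence
# coherence, not any single frame, decides the match.

def best_match(query_seq, route_imgs):
    """Match a query sub-route (list of 2-D arrays) against a stored
    route and return the best-matching starting index."""
    n, m = len(route_imgs), len(query_seq)
    costs = []
    for start in range(n - m + 1):
        cost = sum(np.abs(route_imgs[start + k].astype(float)
                          - query_seq[k].astype(float)).sum()
                   for k in range(m))
        costs.append(cost)
    return int(np.argmin(costs))
```

Because the score is summed over the whole sub-route, individual frames can be extremely low resolution (even a handful of pixels) and the correct route position can still dominate, which is the effect the paper exploits.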