993 results for Visual Pigments
Abstract:
Purpose. To investigate how temporal processing is altered in myopia and during myopic progression. Methods. In backward visual masking, a target's visibility is reduced by a mask presented quickly after the target. Thirty emmetropes, 40 low myopes, and 22 high myopes aged 18 to 26 years completed location and resolution masking tasks. The location task examined the ability to detect letters with low contrast and large stimulus size. The resolution task involved identifying a small letter and tested resolution and color discrimination. Target and mask stimuli were presented at nine short interstimulus intervals (12 to 259 ms) and at 1000 ms (long interstimulus interval condition). Results. In comparison with emmetropes, myopes had reduced ability in both locating and identifying briefly presented stimuli but were more affected by backward masking for a low contrast location task than for a resolution task. Performances of low and high myopes, as well as stable and progressing myopes, were similar for both masking tasks. Task performance was not correlated with myopia magnitude. Conclusions. Myopes were more affected than emmetropes by masking stimuli for the location task. This was not affected by magnitude or progression rate of myopia, suggesting that myopes have the propensity for poor performance in locating briefly presented low contrast objects at an early stage of myopia development.
Abstract:
Purpose: Investigations of foveal aberrations assume circular pupils. However, the pupil becomes increasingly elliptical with increasing visual field eccentricity. We address this and other issues concerning peripheral aberration specification. Methods: One approach uses an elliptical pupil similar to the actual pupil shape, stretched along its minor axis to become a circle so that Zernike circular aberration polynomials may be used. Another approach uses a circular pupil whose diameter matches either the larger or smaller dimension of the elliptical pupil. Pictorial presentation of aberrations, influence of wavelength on aberrations, sign differences between aberrations for fellow eyes, and referencing position to either the visual field or the retina are considered. Results: Examples show differences between the two approaches. Each has its advantages and disadvantages, but there are ways to compensate for most disadvantages. Two representations of data are pupil aberration maps at each position in the visual field and maps showing the variation in individual aberration coefficients across the field. Conclusions: Based on simplicity of use, adequacy of approximation, possible departures of off-axis pupils from ellipticity, and ease of understanding by clinicians, the circular pupil approach is preferable to the stretched elliptical approach for studies involving field angles up to 30 deg.
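The stretched-pupil approach can be sketched in a few lines. This is a minimal illustration, assuming the off-axis pupil's minor axis shrinks by roughly the cosine of the field angle; the paper's actual pupil model may differ:

```python
import numpy as np

def stretch_pupil(x, y, field_angle_deg):
    """Map points on an elliptical off-axis pupil onto a unit circle.

    The off-axis entrance pupil is approximated as an ellipse whose
    minor-axis dimension is the on-axis diameter scaled by
    cos(field angle). Stretching the minor-axis coordinate by
    1/cos(theta) restores a circle, so standard circular Zernike
    polynomials can then be fitted. (Illustrative sketch only.)
    """
    theta = np.radians(field_angle_deg)
    return x / np.cos(theta), y  # stretch only the minor-axis coordinate

# At 30 deg off axis, a point on the minor-axis edge of the ellipse
# maps back to unit radius on the stretched (circular) pupil.
xs, ys = stretch_pupil(np.cos(np.radians(30.0)), 0.0, 30.0)
```

The alternative circular-pupil approach favored in the conclusions would skip this stretching and simply fit over a circle matching one of the ellipse's dimensions.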
Abstract:
This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
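The image-based class of systems typically uses the classic control law v = -gain * L+ * e, mapping image-feature error to camera velocity through the pseudoinverse of the interaction matrix. A minimal sketch for normalized point features at an assumed known depth Z (values below are illustrative):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point,
    relating camera velocity (vx, vy, vz, wx, wy, wz) to image motion."""
    return np.array([
        [-1 / Z, 0.0, x / Z, x * y, -(1 + x**2), y],
        [0.0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, goals, Z, gain=0.5):
    """Image-based visual servoing law: v = -gain * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in points])
    e = (np.asarray(points) - np.asarray(goals)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# When the features are already at their goals, the commanded velocity is zero.
feats = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)]
v = ibvs_velocity(feats, feats, Z=1.0)
```

Position-based systems would instead reconstruct the target pose and servo in Cartesian space.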
Abstract:
Purpose: To determine the effect of moderate levels of refractive blur and simulated cataracts on nighttime pedestrian conspicuity in the presence and absence of headlamp glare. Methods: The ability to recognize pedestrians at night was measured in 28 young adults (M=27.6 years) under three visual conditions: normal vision, refractive blur and simulated cataracts; mean acuity was 20/40 or better in all conditions. Pedestrian recognition distances were recorded while participants drove an instrumented vehicle along a closed road course at night. Pedestrians wore one of three clothing conditions and oncoming headlamps were present for 16 participants and absent for 12 participants. Results: Simulated visual impairment and glare significantly reduced the frequency with which drivers recognized pedestrians and the distance at which the drivers first recognized them. Simulated cataracts were significantly more disruptive than blur even though photopic visual acuity levels were matched. With normal vision, drivers responded to pedestrians at 3.6x and 5.5x longer distances on average than for the blur or cataract conditions, respectively. Even in the presence of visual impairment and glare, pedestrians were recognized more often and at longer distances when they wore a “biological motion” reflective clothing configuration than when they wore a reflective vest or black clothing. Conclusions: Drivers’ ability to recognize pedestrians at night is degraded by common visual impairments even when the drivers’ mean visual acuity meets licensing requirements. To maximize drivers’ ability to see pedestrians, drivers should wear their optimum optical correction, and cataract surgery should be performed early enough to avoid potentially dangerous reductions in visual performance.
Abstract:
A whole tradition is said to be based on the hierarchical distinction between the perceptual and conceptual. In art, Niklas Luhmann argues, this schism is played out and repeated in conceptual art. This paper complicates this depiction by examining Ian Burn's last writings, in which I argue the artist-writer reviews the challenge of minimal-conceptual art in terms of its perceptual preoccupations. Burn revisits his own work and the legacy of minimal-conceptual art by moving away from the kind of ideology critique he is best known for internationally in order to reassert the long overlooked visual-perceptual preoccupations of the conceptual in art.
Abstract:
Rapid prototyping environments can speed up the research of visual control algorithms. We have designed and implemented a software framework for fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAV). We have applied a combination of a proxy-based network communication architecture and a custom Application Programming Interface. This allows multiple experimental configurations, like drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV: the Parrot AR.Drone. Real tests have been performed on this platform, and the results show that the framework introduces comparatively little extra communication delay while adding new functionalities and flexibility to the selected drone. This implementation is open-source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. 
Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
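The distance-based suggestion idea can be sketched with a hypothetical metric combining a due-date distance and a role-suitability distance. The fields and weights below are invented for illustration and are not YAWL's actual resource model:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    hours_to_deadline: float  # "due date" distance notion
    role_distance: int        # 0 = matches the user's role exactly

def suggest(items, w_deadline=1.0, w_role=10.0):
    """Suggest the work item 'closest' to the user under a weighted
    combination of urgency and role suitability (illustrative metric)."""
    return min(items, key=lambda i: w_deadline * i.hours_to_deadline
                                    + w_role * i.role_distance)

items = [WorkItem("approve invoice", 2.0, 1),
         WorkItem("review claim", 8.0, 0),
         WorkItem("sign contract", 1.0, 0)]
best = suggest(items)  # urgent and role-matched items win
```

In the paper's framework, the same idea generalizes to geographic maps, process designs, social networks, and other distance notions.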
Abstract:
This paper presents an image-based visual servoing system that was used to track the atmospheric Earth re-entry of Hayabusa. The primary aim of this ground-based tracking platform was to record the emission spectrum radiating from the superheated gas of the shock layer and the surface of the heat shield during re-entry. To the author's knowledge, this is the first time that a visual servoing system has successfully tracked a super-orbital re-entry of a spacecraft and recorded its spectral signature. Furthermore, we improved the system by including a simplified dynamic model for feed-forward control and demonstrate improved tracking performance on the International Space Station (ISS). We present comparisons between simulation and experimental results on different target trajectories, including tracking results from Hayabusa and the ISS. The required performance for tracking both spacecraft is demanding when combined with a narrow field of view (FOV). We also briefly discuss the preliminary results obtained from the spectroscopy of Hayabusa's heat shield during re-entry.
Abstract:
In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question: how much visual information is actually needed to conduct effective navigation? The algorithm actively searches for the best local image matches within a sliding window of short route segments or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, the technique requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7 bit Lego light sensors, as well as using 16 and 32 pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
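The 'low fidelity' preprocessing amounts to reducing each image to a tiny, low-bit-depth thumbnail and comparing thumbnails directly, for example with a sum of absolute differences. A minimal sketch, with illustrative sizes and an assumed 8-bit grayscale input:

```python
import numpy as np

def to_thumbnail(img, size=2, bits=2):
    """Reduce an 8-bit grayscale image to a tiny low-bit-depth thumbnail
    (e.g. 2x2 pixels at 2 bits) by block-averaging then quantizing."""
    h, w = img.shape
    small = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    levels = 2 ** bits - 1
    return np.round(small / 255.0 * levels)

def sad(a, b):
    """Sum of absolute differences between two thumbnails."""
    return np.abs(a - b).sum()

# A white and a black image give maximally different thumbnails.
t_a = to_thumbnail(np.full((4, 4), 255.0))
t_b = to_thumbnail(np.zeros((4, 4)))
d = sad(t_a, t_b)
```

Such per-image differences alone are ambiguous at this resolution; it is the coherent-sequence search over many consecutive images that disambiguates location.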
Abstract:
Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter, is a challenging task for state-of-the-art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme: repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
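The core sequence-matching step can be sketched as a search over diagonals of an image-difference matrix. This simplified version assumes constant vehicle speed and omits SeqSLAM's local contrast normalization and velocity search:

```python
import numpy as np

def best_match(D, seq_len):
    """Sequence-based matching in the spirit of SeqSLAM: rather than
    taking the single best frame match, score each candidate database
    offset by summing image differences along a constant-velocity
    sequence (a diagonal of D), and return the lowest-cost offset.

    D[i, j] = difference between query image i and database image j.
    """
    n_q, n_d = D.shape
    scores = [np.trace(D[:seq_len, j:j + seq_len])
              for j in range(n_d - seq_len + 1)]
    return int(np.argmin(scores))

# A coherent low-difference sequence starting at database offset 2
# wins even though every single-frame match is ambiguous.
D = np.ones((3, 6))
for i in range(3):
    D[i, 2 + i] = 0.0
match = best_match(D, seq_len=3)
```

The full algorithm also searches over a range of trajectory slopes to tolerate speed differences between traverses.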
Abstract:
Audio-visual speech recognition, or the combination of visual lip-reading with traditional acoustic speech recognition, has been previously shown to provide a considerable improvement over acoustic-only approaches in noisy environments, such as that present in an automotive cabin. The research presented in this paper extends the established audio-visual speech recognition literature to show that further improvements in speech recognition accuracy can be obtained when multiple frontal or near-frontal views of a speaker's face are available. A series of visual speech recognition experiments using a four-stream visual synchronous hidden Markov model (SHMM) are conducted on the four-camera AVICAR automotive audio-visual speech database. We study the relative contribution between the side and centrally orientated cameras in improving visual speech recognition accuracy. Finally, combination of the four visual streams with a single audio stream in a five-stream SHMM demonstrates a relative improvement of over 56% in word recognition accuracy when compared to the acoustic-only approach in the noisiest conditions of the AVICAR database.
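The multi-stream combination at the heart of a synchronous HMM can be illustrated by fusing per-stream class scores with stream weights. The scores and weights below are invented for illustration, not AVICAR results:

```python
import numpy as np

def fuse_streams(log_likelihoods, weights):
    """Weighted multi-stream fusion, as in multi-stream (synchronous)
    HMM decoding: the fused score per class is a weighted sum of each
    stream's log-likelihood for that class."""
    ll = np.asarray(log_likelihoods)  # shape (n_streams, n_classes)
    w = np.asarray(weights)           # one weight per stream
    return w @ ll

# One noisy audio stream plus four visual streams scoring two words.
# The visual streams outvote the noise-corrupted audio stream.
ll = [[-1.0, -2.0],   # audio (degraded by cabin noise)
      [-3.0, -1.0],   # visual, camera 1
      [-3.0, -1.0],   # visual, camera 2
      [-3.0, -1.0],   # visual, camera 3
      [-3.0, -1.0]]   # visual, camera 4
scores = fuse_streams(ll, [0.4, 0.15, 0.15, 0.15, 0.15])
best_word = int(np.argmax(scores))
```

In a full SHMM the weights apply per state at each synchronized frame; this sketch collapses that to a single decision for clarity.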
Abstract:
Visual sea-floor mapping is a rapidly growing application for Autonomous Underwater Vehicles (AUVs). AUVs are well-suited to the task as they remove humans from a potentially dangerous environment, can reach depths human divers cannot, and are capable of long-term operation in adverse conditions. Sea-floor maps generated by AUVs have a number of applications in scientific monitoring: from classifying coral in high biological value sites to surveying sea sponges to evaluate marine environment health.
Abstract:
Purpose: To investigate the correlations of the global flash multifocal electroretinogram (MOFO mfERG) with common clinical visual assessments – Humphrey perimetry and Stratus circumpapillary retinal nerve fiber layer (RNFL) thickness measurement in type II diabetic patients. Methods: Forty-two diabetic patients participated in the study: ten were free from diabetic retinopathy (DR), while the remainder suffered from mild to moderate non-proliferative diabetic retinopathy (NPDR). Fourteen age-matched controls were recruited for comparison. MOFO mfERG measurements were made under high and low contrast conditions. Humphrey central 30-2 perimetry and Stratus OCT circumpapillary RNFL thickness measurements were also performed. Correlations between local values of implicit time and amplitude of the mfERG components (direct component (DC) and induced component (IC)), and perimetric sensitivity and RNFL thickness were evaluated by mapping the localized responses for the three subject groups. Results: MOFO mfERG was superior to perimetry and RNFL assessments in showing differences between the diabetic groups (with and without DR) and the controls. All the MOFO mfERG amplitudes (except IC amplitude at high contrast) correlated better with perimetry findings (Pearson’s r ranged from 0.23 to 0.36, p<0.01) than did the mfERG implicit time at both high and low contrasts across all subject groups. No consistent correlation was found between the mfERG and RNFL assessments for any group or contrast condition. The responses of the local MOFO mfERG correlated with local perimetric sensitivity but not with RNFL thickness. Conclusion: Early functional changes in the diabetic retina seem to occur before morphological changes in the RNFL.
Abstract:
Aims/hypothesis: Impaired central vision has been shown to predict diabetic peripheral neuropathy (DPN). Several studies have demonstrated diffuse retinal neurodegenerative changes in diabetic patients prior to retinopathy development, raising the prospect that non-central vision may also be compromised by primary neural damage. We hypothesise that type 2 diabetic patients with DPN exhibit visual sensitivity loss in a distinctive pattern across the visual field, compared with a control group of type 2 diabetic patients without DPN. Methods: Increment light sensitivity was measured by standard perimetry in the central 30° of the visual field for two age-matched groups of type 2 diabetic patients, with and without neuropathy (n=40/30). Neuropathy status was assigned using the neuropathy disability score. Mean visual sensitivity values were calculated globally, for each quadrant and for three eccentricities (0-10°, 11-20°, and 21-30°). Data were analysed using a generalised additive mixed model (GAMM). Results: Global and quadrant between-group visual sensitivity mean differences were marginally but consistently lower (by about 1 dB) in the neuropathy cohort compared with controls. Between-group mean differences increased from 0.36 to 1.81 dB with increasing eccentricity. GAMM analysis, after adjustment for age, showed these differences to be significant beyond 15° eccentricity and monotonically increasing. Retinopathy levels and disease duration were not significant factors within the model (p=0.90). Conclusions/interpretation: Visual sensitivity reduces disproportionately with increasing eccentricity in type 2 diabetic patients with peripheral neuropathy. This sensitivity reduction within the central 30° of the visual field may be indicative of more consequential loss in the far periphery.