44 results for Visual Attention
Abstract:
Human visual function declines with age. Much of this decline is mediated by changes in the central visual pathways. In this study we compared the spatial and temporal sensitivities of striate cortical cells in young and old paralysed macaque monkeys. Ext
Abstract:
Whether mice perceive the depth of space based on the visual size of object targets was explored when visual cues such as perspective and partial occlusion were excluded. A mouse was placed on a platform of adjustable height. The platform was located inside a box whose walls were all dark except the bottom, through which light was projected as the sole visual cue. The visual object cue was composed of 4x4 grids, allowing a mouse to estimate the distance of the platform relative to the grids. Three grid sizes, each reduced by a proportion of 2/3, and seven equally spaced distances between the platform and the grids at the bottom were used in the experiments. The time a mouse stayed on the platform at each height was recorded while the different grid sizes were presented randomly, to test whether the mouse's judgment of the depth of the platform from the bottom was affected by the size information of the visual target. Under all three object-size conditions, the time mice stayed on the platform grew longer as height increased. At distances of 20-30 cm, the mice did not use the size information of the target to judge depth, relying mainly on binocular disparity. At distances below 20 cm or above 30 cm, however, and especially at the greater distances of 50, 60, and 70 cm, the mice were able to use the size information to compensate for the lack of binocular disparity information, since only about 1/3 of the mouse visual field is binocular. The behavioral paradigm established in the current study is a useful model that can be applied to experiments using transgenic mice to investigate the relationships between behaviors and gene functions.
Abstract:
Special thanks to Christopher Blair and Mumtaz Baig for their suggestions. This work was supported by National Basic Research Program of China (973 Program, 2007CB411600), National Natural Science Foundation of China (30621092), and Bureau of Science and Technology of Yunnan Province.
Abstract:
The detection performance regarding stationary acoustic monitoring of Yangtze finless porpoises Neophocaena phocaenoides asiaeorientalis was compared to visual observations. Three stereo acoustic data loggers (A-tag) were placed at different locations near the confluence of Poyang Lake and the Yangtze River, China. The presence and number of porpoises were determined acoustically and visually during each 1-min time bin. On average, porpoises were acoustically detected 81.7 +/- 9.7% of the entire effective observation time, while the presence of animals was confirmed visually 12.7 +/- 11.0% of the entire time. Acoustic monitoring indicated areas of high and low porpoise densities that were consistent with visual observations. The direction of porpoise movement was monitored using stereo beams, which agreed with visual observations at all monitoring locations. Acoustic and visual methods could determine group sizes up to five and ten individuals, respectively. While the acoustic monitoring method had the advantage of high detection probability, it tended to underestimate group size due to the limited resolution of sound source bearing angles. The stationary acoustic monitoring method proved to be a practical and useful alternative to visual observations, especially in areas of low porpoise density for long-term monitoring.
Abstract:
Recently, sonar signals and other sounds produced by cetaceans have been used for acoustic detection of individuals and groups in the wild. However, the detection probability ascertained by concomitant visual survey has not been demonstrated extensively. Finless porpoises (Neophocaena phocaenoides) produce narrow-band, high-frequency sonar signals, which are distinctive from background noise. Underwater sound monitoring with hydrophones (B&K8103) placed along the sides of a research vessel, concurrent with visual observations, was conducted in the Yangtze River from Wuhan to Poyang Lake in 1998 in China. The peak-to-peak detection threshold was set at 133 dB re 1 µPa. With this threshold level, porpoises could be detected reliably within 300 m of the hydrophone. During a cruise totaling 774 km, 588 finless porpoises were sighted by visual observation and 44 864 ultrasonic pulses were recorded by the acoustical observation system. The acoustic monitoring system could detect the presence of the finless porpoises 82% of the time. False alarms occurred in the system with a frequency of 0.9%. High-frequency acoustical observation is suggested as an effective method for field surveys of small cetaceans that produce high-frequency sonar signals. (C) 2001 Acoustical Society of America.
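The detection scheme above reduces to thresholding recorded pulse levels within each time bin. A minimal sketch, assuming per-bin lists of candidate pulse levels in dB re 1 µPa (the 133 dB threshold is from the abstract; the signal values are invented):

```python
# Hypothetical sketch of peak-to-peak threshold detection per time bin.
# Only the 133 dB threshold comes from the study; the data are toy values.

THRESHOLD_DB = 133.0  # peak-to-peak detection threshold, dB re 1 uPa

def detect_bins(click_levels_per_bin):
    """For each 1-min bin of recorded pulse levels (dB re 1 uPa),
    report whether at least one pulse exceeds the threshold."""
    return [any(level >= THRESHOLD_DB for level in bin_levels)
            for bin_levels in click_levels_per_bin]

def detection_rate(detections):
    """Fraction of time bins with an acoustic detection."""
    return sum(detections) / len(detections)

# Toy example: five one-minute bins of candidate pulse levels
bins = [[120.0, 140.2], [128.1], [135.5, 133.0], [110.0], [150.3]]
flags = detect_bins(bins)
print(flags)                  # [True, False, True, False, True]
print(detection_rate(flags))  # 0.6
```

The same per-bin presence/absence bookkeeping underlies the detection percentages reported in both acoustic-monitoring abstracts above.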
Abstract:
The distinction between object appearance and background provides useful cues for visual tracking, to which discriminant analysis is widely applied. However, owing to the diversity of the background observation, there are not adequate negative samples from the background, which usually leads the discriminant method to tracking failure. A natural solution is therefore to construct an object-background pair constrained by the spatial structure, which not only reduces the number of negative samples but also makes full use of the background information surrounding the object. However, this idea is threatened by the variation of both the object appearance and the spatially constrained background observation, especially when the background shifts as the object moves. Thus an incremental pairwise discriminant subspace is constructed in this paper to delineate this variation. To maintain the subspace's ability to describe the data correctly, we enforce two novel constraints for optimal adaptation: (1) a pairwise data discriminant constraint and (2) subspace smoothness. The experimental results demonstrate that the proposed approach can alleviate adaptation drift and achieve better visual tracking results for a large variety of nonstationary scenes. (C) 2010 Elsevier B.V. All rights reserved.
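The core idea, separating object-appearance features from surrounding-background features with a discriminant criterion, can be sketched in one dimension. This is a plain Fisher criterion with nearest-mean classification, not the paper's incremental pairwise subspace; the feature values are invented:

```python
# Minimal sketch of discriminant separation between object and background
# feature samples, as used in discriminant-based tracking. A 1-D Fisher
# criterion stands in for the paper's incremental pairwise subspace.

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fisher_score(obj, bg):
    """Between-class separation over within-class scatter (1-D features)."""
    return (mean(obj) - mean(bg)) ** 2 / (var(obj) + var(bg))

def classify(x, obj, bg):
    """Assign a new sample to the nearer class mean."""
    return "object" if abs(x - mean(obj)) < abs(x - mean(bg)) else "background"

obj = [0.9, 1.0, 1.1, 1.05]  # toy object-appearance features
bg = [0.1, 0.2, 0.15, 0.05]  # toy spatially neighbouring background features
print(classify(1.02, obj, bg))  # object
```

The pairing of each object sample with its spatially adjacent background, as the abstract describes, is what keeps the negative set small while the discriminant direction stays informative.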
Abstract:
For practical applications it is important to design an effective and efficient video quality metric. The most reliable way is subjective evaluation; thus, designing an objective metric that simulates the human visual system (HVS) is both reasonable and feasible. In this paper, a video quality assessment metric based on visual perception is proposed. A three-dimensional wavelet transform is used to decompose the video and extract features, mimicking the multichannel structure of the HVS. A spatio-temporal contrast sensitivity function (S-T CSF) is employed to weight the coefficients obtained by the three-dimensional wavelet transform, simulating the nonlinearity of the human eye. A perceptual threshold is applied after S-T CSF filtering to obtain visually sensitive coefficients. These coefficients are normalized, and visual sensitivity errors are then calculated between the reference and distorted video. Finally, a temporal perceptual mechanism is applied to compute video quality scores while reducing computational cost. Experimental results show that the proposed method outperforms most existing methods and is comparable to LHS and PVQM.
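The CSF-weighting and error-pooling steps can be illustrated on toy data: subband coefficients of the reference and distorted video are compared, with each subband scaled by a contrast-sensitivity weight. The subband weights and coefficients below are invented; the paper uses a full 3-D wavelet decomposition and an S-T CSF:

```python
# Toy sketch of CSF-weighted error pooling between reference and distorted
# subband coefficients. Weights and coefficients are invented for illustration.

def weighted_error(ref, dist, csf_weights):
    """Pool per-subband squared coefficient differences, each subband
    scaled by its contrast-sensitivity weight."""
    total = 0.0
    for r_band, d_band, w in zip(ref, dist, csf_weights):
        total += w * sum((r - d) ** 2 for r, d in zip(r_band, d_band))
    return total

ref = [[1.0, 2.0], [0.5, 0.5]]   # two toy subbands of the reference video
dist = [[1.1, 1.8], [0.5, 0.7]]  # the same subbands of the distorted video
csf = [1.0, 0.25]                # larger weight = more visually sensitive band
print(weighted_error(ref, dist, csf))
```

Distortion in the visually sensitive band dominates the pooled error, which is the behaviour an HVS-based metric is after.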
Abstract:
Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the publication cost of printing color images or to help color-blind people perceive the visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to achieve a reasonable transformation. To solve or at least reduce these problems, we propose a new algorithm based on a probabilistic graphical model, with the assumption that the image is defined over a Markov random field. The color-to-gray procedure can thus be regarded as a labeling process that preserves the newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that a perceiver can extract from a color image; they indicate the state of the image properties the perceiver is interested in. Different people may perceive different cues from the same color image, and three cues are defined in this paper: color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model based on an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.
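Casting color-to-gray as labeling can be sketched with a tiny energy: a unary term pulling each pixel's gray level toward its luminance, and a pairwise term asking neighbouring gray differences to match the color contrast. The energy form, weights, and data below are invented; the paper optimizes a richer MRF model over its three defined cues:

```python
# Toy sketch of color-to-gray as an MRF-style labeling problem.
# Energy terms and weights are invented for illustration only.
import itertools

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # standard Rec. 601 weights

def color_contrast(c1, c2):
    return sum(abs(a - b) for a, b in zip(c1, c2)) / 3.0

def energy(grays, colors, lam=1.0):
    unary = sum((g - luminance(c)) ** 2 for g, c in zip(grays, colors))
    pairwise = sum(
        (abs(grays[i] - grays[i + 1]) - color_contrast(colors[i], colors[i + 1])) ** 2
        for i in range(len(grays) - 1))
    return unary + lam * pairwise

# Brute-force the best labeling for a tiny 1x3 "image" over a coarse gray set
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # R, G, B pixels
levels = [i / 10 for i in range(11)]
best = min(itertools.product(levels, repeat=3), key=lambda g: energy(g, colors))
print(best)
```

Real labelers replace the brute-force search with graph-based or variational optimization, but the trade-off between matching luminance and preserving neighbouring color contrast is the same.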
Abstract:
Eye detection plays an important role in many practical applications. This paper presents a novel two-step scheme for eye detection. The first step models an eye by a newly defined visual-context pattern (VCP), and the second step applies semisupervised boosting for precise detection. A VCP describes both the spatial and appearance relations between an eye region (region of eye) and a reference region (region of reference). The context feature of a VCP is extracted using the integral image. To reduce human labeling effort, we apply semisupervised boosting, which integrates the context feature and Haar-like features for precise eye detection. Experimental results on several standard face data sets demonstrate that the proposed approach is effective, robust, and efficient. We finally show that this approach is ready for practical applications.
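The integral image the VCP context feature relies on is a standard trick: after one pass over the image, the sum of any rectangular region takes four lookups. A minimal sketch with toy data:

```python
# Sketch of the integral-image (summed-area table) technique: constant-time
# rectangle sums after one linear pass. Image values are toy data.

def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0..y-1] x [0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top..bottom-1][left..right-1] in O(1): four lookups."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # 45 (whole image)
print(rect_sum(ii, 1, 1, 3, 3))  # 28 (5+6+8+9)
```

The same table also powers the Haar-like features used in the boosting step, since each Haar feature is a difference of a few rectangle sums.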
Abstract:
Watermarking aims to hide particular information in a carrier without changing the visual cognition of the carrier itself. Local features are good candidates for addressing the watermark synchronization error caused by geometric distortions and have attracted great attention in content-based image watermarking. This paper presents a novel feature point-based image watermarking scheme robust against geometric distortions. The scale invariant feature transform (SIFT) is first adopted to extract feature points and to generate a disk for each feature point that is invariant to translation and scaling. For each disk, orientation alignment is then performed to achieve rotation invariance. Finally, the watermark is embedded in middle-frequency discrete Fourier transform (DFT) coefficients of each disk to improve robustness against common image processing operations. Extensive experimental results and comparisons with representative image watermarking methods confirm the excellent performance of the proposed method in robustness against various geometric distortions as well as common image processing operations.
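The final embedding step, hiding a bit in a mid-frequency DFT magnitude, can be sketched with quantization index modulation on a 1-D signal standing in for a feature-point disk. The step size, frequency bin, and signal are invented; the paper embeds in 2-D DFT coefficients of each SIFT disk:

```python
# Toy sketch of embedding one watermark bit in a mid-frequency DFT magnitude
# via quantization index modulation. Step size and bin choice are invented.
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real 1-D signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def embed_bit(mag, bit, step=2.0):
    """Snap a magnitude to an even or odd multiple of step/2 per the bit."""
    q = round(mag / step) * step
    return q + (step / 2 if bit else 0.0)

def extract_bit(mag, step=2.0):
    return int(round(mag / (step / 2))) % 2

signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
mag = abs(dft(signal)[2])        # one mid-frequency coefficient's magnitude
marked = embed_bit(mag, bit=1)
print(extract_bit(marked))       # 1
```

Working on DFT magnitudes is what buys robustness: magnitudes are unchanged by translation of the disk content, and the SIFT-based disk construction handles scaling and rotation before embedding.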