921 results for Visual Attention
Abstract:
Strategic planning can be an arduous and complex task; and once a plan has been devised, it is often quite a challenge to effectively communicate the principal missions and key priorities to the array of different stakeholders. This communication challenge can be addressed through a clearly and concisely designed visualisation of the strategic plan; to that end, this paper proposes the use of a roadmapping framework to structure a visual canvas. The canvas provides a template in the form of a single composite visual output that essentially allows a 'plan-on-a-page' to be generated. Such a visual representation provides a high-level depiction of the future context, end-state capabilities and the system-wide transitions needed to realise the strategic vision. To demonstrate this approach, an illustrative case study based on the Australian Government's Defence White Paper and the Royal Australian Navy's fleet plan is presented. The visual plan plots the in-service upgrades for addressing the capability shortfalls and gaps in the Navy's fleet as it transitions from its current configuration to its future end-state vision. It also visualises project timings in terms of decision gates (approval, service release) and specific phases (proposal, contract, delivery), together with how these projects are rated against the key performance indicators relating to the technology acquisition process and associated management activities. © 2013 Taylor & Francis.
Abstract:
The detection performance of stationary acoustic monitoring of Yangtze finless porpoises Neophocaena phocaenoides asiaeorientalis was compared to visual observations. Three stereo acoustic data loggers (A-tags) were placed at different locations near the confluence of Poyang Lake and the Yangtze River, China. The presence and number of porpoises were determined acoustically and visually during each 1-min time bin. On average, porpoises were acoustically detected during 81.7 ± 9.7% of the effective observation time, while the presence of animals was confirmed visually during only 12.7 ± 11.0% of that time. Acoustic monitoring indicated areas of high and low porpoise density that were consistent with visual observations. The direction of porpoise movement was monitored using the stereo beams and agreed with visual observations at all monitoring locations. The acoustic and visual methods could determine group sizes of up to five and ten individuals, respectively. While the acoustic monitoring method had the advantage of high detection probability, it tended to underestimate group size due to the limited resolution of sound-source bearing angles. Stationary acoustic monitoring proved to be a practical and useful alternative to visual observation, especially for long-term monitoring in areas of low porpoise density.
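As a rough illustration of the per-bin comparison described above, the sketch below (not the study's actual analysis code) computes the fraction of 1-min bins with detections for each method from hypothetical presence logs and summarizes them as mean ± s.d.; the station counts, bin counts, and rates are made up to mimic the reported numbers.

```python
# A minimal sketch of the per-minute detection comparison: presence/absence
# is logged in 1-min bins for each method, and the fraction of effective
# observation time with detections is computed per station.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical presence logs for 3 stations, 480 one-minute bins each:
# True = porpoise detected in that bin. Rates roughly mimic the reported
# acoustic (~0.8) vs. visual (~0.13) detection fractions.
acoustic = [rng.random(480) < 0.8 for _ in range(3)]
visual = [rng.random(480) < 0.13 for _ in range(3)]

def detection_fraction(logs):
    """Per-station fraction of 1-min bins containing a detection."""
    return np.array([bins.mean() for bins in logs])

for name, logs in [("acoustic", acoustic), ("visual", visual)]:
    f = detection_fraction(logs) * 100
    print(f"{name}: {f.mean():.1f} +/- {f.std(ddof=1):.1f}% of bins")
```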
Abstract:
Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data from both a leading slot model and a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, more than a million trials in total were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials sufficed when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
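The following is a minimal sketch, not the authors' analysis code, of scoring raw recall errors under a slot model and a slotless (variable-precision-style) model. For brevity it compares fixed-parameter log-likelihoods as a crude stand-in for the log Bayes factor (a real analysis would integrate or maximize over parameters); the error distributions and all parameter values are illustrative assumptions.

```python
# Compare slot vs. slotless accounts of raw recall errors via a
# fixed-parameter log-likelihood ratio (a simplified Bayes-factor proxy).
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(1)
N = 6            # set size
n_trials = 1000

def slot_loglik(err, K=3, kappa=8.0):
    """Slot model: item held with prob. min(K/N, 1) and recalled with
    fixed precision; otherwise the response is a uniform guess."""
    p_mem = min(K / N, 1.0)
    like = p_mem * vonmises.pdf(err, kappa) + (1 - p_mem) / (2 * np.pi)
    return np.log(like).sum()

def slotless_loglik(err, kappa1=20.0, alpha=1.2):
    """Slotless model: every item encoded, precision declining as a
    power law of set size, with no guessing component."""
    kappa = kappa1 * N ** (-alpha)
    return np.log(vonmises.pdf(err, kappa)).sum()

# Generate synthetic errors from the slot model and score both models.
is_mem = rng.random(n_trials) < min(3 / N, 1.0)
err = np.where(is_mem,
               vonmises.rvs(8.0, size=n_trials, random_state=rng),
               rng.uniform(-np.pi, np.pi, n_trials))

log_lr = slot_loglik(err) - slotless_loglik(err)
print(f"log likelihood ratio (slot vs. slotless): {log_lr:.1f}")
```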
Abstract:
Relative (comparative) attributes are promising for thematic ranking of visual entities, which also aids recognition tasks. However, attribute rank learning often requires a substantial amount of relational supervision, which is highly tedious and impractical for real-world applications. In this paper, we introduce the Semantic Transform, which, under minimal supervision, adaptively finds a semantic feature space along with a class ordering that relates the classes in the best possible way. Such a semantic space is found for every attribute category. To relate the classes under weak supervision, the class ordering is refined according to a cost function in an iterative procedure. This problem is NP-hard in general, and we therefore propose a constrained search-tree formulation for it. Driven by the adaptive semantic feature space representation, our model achieves the best results to date on the tasks of relative, absolute and zero-shot classification on two popular datasets. © 2013 IEEE.
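A toy sketch of the pruned search-tree idea for refining a class ordering against a cost function appears below; the pairwise cost matrix is synthetic, whereas the paper's actual cost couples the ordering to the learned semantic feature space.

```python
# Branch-and-bound search over class orderings: branches whose partial
# cost already exceeds the best complete ordering are pruned.
import numpy as np

rng = np.random.default_rng(2)
n_classes = 6
# cost[i, j]: penalty for placing class j immediately after class i.
cost = rng.random((n_classes, n_classes))

best = {"order": None, "cost": np.inf}

def search(prefix, remaining, partial_cost):
    """Depth-first search over orderings with cost-based pruning."""
    if partial_cost >= best["cost"]:
        return  # prune: this branch cannot improve on the incumbent
    if not remaining:
        best["order"], best["cost"] = prefix, partial_cost
        return
    for c in remaining:
        step = cost[prefix[-1], c] if prefix else 0.0
        search(prefix + [c], remaining - {c}, partial_cost + step)

search([], set(range(n_classes)), 0.0)
print("best ordering:", best["order"], "cost:", round(best["cost"], 3))
```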
Abstract:
Experimental research in biology has uncovered a number of different ways in which flying insects use cues derived from optical flow for navigational purposes, such as safe landing, obstacle avoidance and dead reckoning. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of bees. Specifically, we focus on the mechanisms of course stabilization and the visually mediated odometer, using a biologically inspired model of a motion detector for long-range goal-directed navigation in a 3D environment. Performance tests of the proposed navigation method are conducted using a blimp-type flying robot platform in uncontrolled indoor environments. The results show that the proposed mechanism can be used for goal-directed navigation. Further analysis is also conducted in order to enhance the navigation performance of autonomous aerial vehicles. © 2003 Elsevier B.V. All rights reserved.
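A minimal sketch of a Hassenstein-Reichardt elementary motion detector (EMD), the classic correlation-based insect motion model to whose family such detectors belong, is given below; the filter time constant and stimulus are illustrative assumptions, not the paper's settings.

```python
# Hassenstein-Reichardt EMD: each photoreceptor signal is correlated
# with a delayed copy of its neighbor; the difference of the two
# mirror-symmetric half-detectors is signed by motion direction.
import numpy as np

def lowpass(signal, tau=5.0):
    """First-order low-pass filter acting as the EMD delay line."""
    out = np.zeros_like(signal)
    a = 1.0 / tau
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + a * (signal[t] - out[t - 1])
    return out

def emd_response(left, right):
    """Opponent EMD output: positive for left-to-right motion."""
    return lowpass(left) * right - left * lowpass(right)

# Drifting sinusoid sampled at two neighboring photoreceptors:
t = np.arange(500)
phase_shift = 0.3                       # spatial offset between receptors
left = np.sin(0.1 * t)
right = np.sin(0.1 * t - phase_shift)   # rightward motion

print("mean EMD output:", emd_response(left, right).mean())
# Reversing the stimulus direction flips the sign. Integrating EMD
# outputs over time yields a simple visually mediated odometer of the
# kind exploited for bee navigation.
```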
Abstract:
Recently, sonar signals and other sounds produced by cetaceans have been used for acoustic detection of individuals and groups in the wild. However, the detection probability, as ascertained by concomitant visual survey, has not been demonstrated extensively. Finless porpoises (Neophocaena phocaenoides) produce narrow-band, high-frequency sonar signals that are distinctive from background noise. Underwater sound monitoring with hydrophones (B&K 8103) placed along the sides of a research vessel, concurrent with visual observations, was conducted in the Yangtze River from Wuhan to Poyang Lake, China, in 1998. The peak-to-peak detection threshold was set at 133 dB re 1 µPa. With this threshold level, porpoises could be detected reliably within 300 m of the hydrophone. Over a total cruise of 774 km, 588 finless porpoises were sighted by visual observation and 44,864 ultrasonic pulses were recorded by the acoustic observation system. The acoustic monitoring system could detect the presence of the finless porpoises 82% of the time, with a false-alarm rate of 0.9%. High-frequency acoustic observation is suggested as an effective method for field surveys of small cetaceans that produce high-frequency sonar signals. © 2001 Acoustical Society of America.
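The sketch below illustrates peak-to-peak threshold detection of short high-frequency pulses in a hydrophone recording, in the spirit of the system described above; it is not the survey's software, and amplitudes are in arbitrary linear units standing in for the 133 dB re 1 µPa threshold.

```python
# Peak-to-peak threshold detection of narrow-band sonar pulses in noise.
import numpy as np

rng = np.random.default_rng(3)
fs = 200_000                         # sample rate (Hz), illustrative
x = 0.01 * rng.standard_normal(fs)   # 1 s of background noise

# Inject three short narrow-band pulses resembling porpoise clicks.
for start in (20_000, 90_000, 150_000):
    t = np.arange(200)
    x[start:start + 200] += 0.5 * np.sin(2 * np.pi * 0.35 * t) * np.hanning(200)

def detect_pulses(x, win=256, threshold=0.4):
    """Return start indices of windows whose peak-to-peak amplitude
    exceeds the detection threshold."""
    hits = []
    for i in range(0, len(x) - win, win):
        w = x[i:i + win]
        if w.max() - w.min() > threshold:
            hits.append(i)
    return hits

print("pulse windows detected at samples:", detect_pulses(x))
```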
Abstract:
The distinction between the object appearance and the background provides useful cues for visual tracking, and discriminant analysis is widely applied to exploit it. However, owing to the diversity of background observations, there are rarely enough negative samples from the background, which often leads discriminative methods to tracking failure. A natural solution is to construct an object-background pair constrained by the spatial structure, which not only reduces the number of negative samples but also makes full use of the background information surrounding the object. However, this idea is threatened by variation in both the object appearance and the spatially constrained background observation, especially when the background shifts as the object moves. An incremental pairwise discriminant subspace is therefore constructed in this paper to delineate this variation. In order to maintain the subspace's ability to describe the data correctly, we enforce two novel constraints for the optimal adaptation: (1) a pairwise data discriminant constraint and (2) subspace smoothness. Experimental results demonstrate that the proposed approach can alleviate adaptation drift and achieve better visual tracking results for a large variety of nonstationary scenes. © 2010 Elsevier B.V. All rights reserved.
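To make the object-background pairing concrete, here is a heavily simplified sketch: one object feature and one spatially constrained background feature form a pair per frame, class statistics are adapted with a forgetting factor (a crude stand-in for the subspace smoothness constraint), and a Fisher discriminant direction is recomputed; the features and update rule are assumptions, not the paper's formulation.

```python
# Incremental object-vs.-background Fisher discriminant with a
# forgetting factor that smooths the adaptation over frames.
import numpy as np

rng = np.random.default_rng(4)
d, lam = 20, 0.9    # feature dim; lam = forgetting factor (smoothness)

mu_o = np.zeros(d)  # running object mean
mu_b = np.zeros(d)  # running background mean
Sw = np.eye(d)      # running within-class scatter

for frame in range(100):
    # Hypothetical appearance features: the object patch and the
    # spatially constrained background ring around it (one pair/frame).
    obj = rng.normal(1.0, 1.0, d)
    bg = rng.normal(-1.0, 1.0, d)
    # Smoothly adapt the statistics; the old subspace is not discarded.
    mu_o = lam * mu_o + (1 - lam) * obj
    mu_b = lam * mu_b + (1 - lam) * bg
    Sw = lam * Sw + (1 - lam) * (np.outer(obj - mu_o, obj - mu_o)
                                 + np.outer(bg - mu_b, bg - mu_b))

# Fisher discriminant direction separating object from background.
w = np.linalg.solve(Sw, mu_o - mu_b)

def score(x):
    return float(x @ w)

print("object score:", round(score(rng.normal(1.0, 1.0, d)), 2),
      "| background score:", round(score(rng.normal(-1.0, 1.0, d)), 2))
```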
Abstract:
It is important for practical applications to design an effective and efficient metric for video quality. The most reliable assessment is subjective evaluation, so designing an objective metric that simulates the human visual system (HVS) is a reasonable and viable approach. In this paper, a video quality assessment metric based on visual perception is proposed. A three-dimensional wavelet transform is used to decompose the video and extract features, mimicking the multichannel structure of the HVS. A spatio-temporal contrast sensitivity function (S-T CSF) is employed to weight the wavelet coefficients, simulating the nonlinearity of human vision. A perceptual threshold is then applied to the S-T CSF-filtered coefficients to obtain the visually sensitive coefficients. These coefficients are normalized, and visual sensitivity errors are computed between the reference and distorted videos. Finally, a temporal perceptual mechanism is applied to pool the video quality values while reducing computational cost. Experimental results show that the proposed method outperforms most existing methods and is comparable to LHS and PVQM.
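A minimal sketch of such a pipeline is shown below, with simplified stand-ins: a 3-D Haar wavelet (via PyWavelets) decomposes each video into spatio-temporal subbands, an assumed CSF gain weights each subband, sub-threshold coefficients are discarded, and the weighted errors are pooled across subbands; the weights, threshold, and pooling are illustrative, not the paper's fitted values.

```python
# Simplified perceptual VQA pipeline: 3-D wavelet subbands, assumed CSF
# weighting, perceptual thresholding, and error pooling.
import numpy as np
import pywt

def vqa_score(ref, dist, threshold=0.02):
    """Lower score = better quality. ref/dist: (T, H, W) float arrays."""
    ref_c = pywt.dwtn(ref, "haar")     # 3-D wavelet subbands
    dist_c = pywt.dwtn(dist, "haar")
    # Assumed S-T CSF gains per subband: lower spatio-temporal
    # frequencies ('a' = approximation axes) weighted more heavily.
    csf_weights = {k: 1.0 / (1 + k.count("d")) for k in ref_c}
    errs = []
    for k in ref_c:
        r, d = ref_c[k], dist_c[k]
        visible = np.abs(r) > threshold          # perceptual threshold
        e = csf_weights[k] * np.abs(r - d)[visible]
        if e.size:
            errs.append(e.mean())
    return float(np.mean(errs))

rng = np.random.default_rng(5)
ref = rng.random((16, 32, 32))
dist = ref + 0.05 * rng.standard_normal(ref.shape)
print("VQA score (ref vs. distorted):", round(vqa_score(ref, dist), 4))
```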
Abstract:
Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the cost of printing color images or to help color-blind people perceive the visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to achieve a reasonable transformation. To solve or at least reduce these problems, we propose a new algorithm based on a probabilistic graphical model, with the assumption that the image is defined over a Markov random field. The color-to-gray procedure can thus be regarded as a labeling process that preserves newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that can be extracted from a color image by a perceiver; they indicate the state of some properties of the image that the perceiver is interested in perceiving. Different people may perceive different cues from the same color image, and three cues are defined in this paper: color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray conversion as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model via an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.
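As a toy illustration of casting color-to-gray as MRF labeling, the sketch below assigns each pixel a gray level by minimizing a unary term (stay near luminance) plus a pairwise term (neighboring gray differences should match color differences) with simple ICM sweeps; the energy terms are crude stand-ins for the paper's three cues, and the optimizer is not the paper's.

```python
# Color-to-gray as MRF labeling, optimized with iterated conditional
# modes (ICM) over a small candidate set of gray levels.
import numpy as np

rng = np.random.default_rng(6)
H, W = 16, 16
img = rng.random((H, W, 3))                      # toy color image
lum = img @ np.array([0.299, 0.587, 0.114])      # luminance cue
levels = np.linspace(0, 1, 32)                   # candidate gray labels

def color_dist(p, q):
    return np.linalg.norm(p - q)

g = lum.copy()                                   # initialize at luminance
for _ in range(5):                               # ICM sweeps
    for y in range(H):
        for x in range(W):
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0))
                    if 0 <= y + dy < H and 0 <= x + dx < W]
            unary = (levels - lum[y, x]) ** 2
            # Pairwise: gray contrast should track color contrast.
            pair = sum((np.abs(levels - g[ny, nx])
                        - color_dist(img[y, x], img[ny, nx])) ** 2
                       for ny, nx in nbrs)
            g[y, x] = levels[np.argmin(unary + 0.5 * pair)]

print("gray image range:", g.min().round(3), "to", g.max().round(3))
```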
Abstract:
Eye detection plays an important role in many practical applications. This paper presents a novel two-step scheme for eye detection. The first step models an eye by a newly defined visual-context pattern (VCP), and the second step applies semi-supervised boosting for precise detection. A VCP describes both the spatial and appearance relations between an eye region and a reference region. The context feature of a VCP is extracted using the integral image. To reduce human labeling effort, we apply semi-supervised boosting, which integrates the context feature and Haar-like features for precise eye detection. Experimental results on several standard face data sets demonstrate that the proposed approach is effective, robust, and efficient. We finally show that this approach is ready for practical applications.
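The integral-image trick behind the context feature is standard and easy to sketch: the mean intensity of any rectangle comes from four lookups in a summed-area table, so a relation between an eye region and a reference region can be computed in constant time. The region geometry and the feature's exact form below are illustrative assumptions.

```python
# Integral image (summed-area table) and an O(1) rectangle-mean query,
# used to relate a hypothetical eye region to a reference region.
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column pad for easy lookup."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_mean(ii, y, x, h, w):
    """Mean over img[y:y+h, x:x+w] via four corner lookups."""
    s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return s / (h * w)

rng = np.random.default_rng(7)
face = rng.random((64, 64))                # stand-in face patch

ii = integral_image(face)
eye = rect_mean(ii, 16, 12, 8, 16)         # hypothetical eye region
ref = rect_mean(ii, 32, 12, 8, 16)         # reference region below it
context_feature = eye - ref                # appearance relation of the pair
print("VCP context feature:", round(context_feature, 4))
```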
Abstract:
Watermarking aims to hide particular information in a carrier without changing the visual perception of the carrier itself. Local features are good candidates for addressing the watermark synchronization error caused by geometric distortions and have attracted great attention in content-based image watermarking. This paper presents a novel feature-point-based image watermarking scheme that is robust against geometric distortions. The scale-invariant feature transform (SIFT) is first adopted to extract feature points and to generate, for each feature point, a disk that is invariant to translation and scaling. For each disk, orientation alignment is then performed to achieve rotation invariance. Finally, the watermark is embedded in the middle-frequency discrete Fourier transform (DFT) coefficients of each disk to improve robustness against common image processing operations. Extensive experimental results and comparisons with representative image watermarking methods confirm the excellent performance of the proposed method in robustness against various geometric distortions as well as common image processing operations.
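An illustrative sketch of the embedding path is given below, using OpenCV's SIFT: keypoints define translation- and scale-covariant disks, each patch is rotated to its dominant orientation, and watermark bits perturb mid-frequency DFT magnitudes. The radii, band positions, and embedding strength are hypothetical choices, conjugate symmetry of the spectrum is ignored for brevity, and extraction is omitted.

```python
# SIFT-disk watermark embedding sketch: rotation-normalize each keypoint
# patch, then add signed perturbations to assumed mid-band DFT slots.
import cv2
import numpy as np

def embed_in_disk(img, kp, bits, strength=2.0):
    """Embed bits into mid-frequency DFT coefficients of one disk."""
    x, y = map(int, kp.pt)
    r = max(int(3 * kp.size), 8)         # disk radius from SIFT scale
    # Rotate about the keypoint to its dominant orientation, so the
    # embedding happens in a rotation-normalized frame.
    M = cv2.getRotationMatrix2D((x, y), kp.angle, 1.0)
    rot = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    patch = rot[max(y - r, 0):y + r, max(x - r, 0):x + r].astype(np.float64)
    F = np.fft.fftshift(np.fft.fft2(patch))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    for i, b in enumerate(bits):
        # Assumed mid-band slots: offset from DC by ~1/3 of the band,
        # stepping along one axis per bit.
        fy, fx = cy + cy // 3, cx - len(bits) // 2 + i
        F[fy, fx] += strength * (1 if b else -1) * (abs(F[fy, fx]) + 1.0)
    # Symmetry is not restored, so the patch is re-realized with np.real.
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic test image with enough structure for SIFT to find keypoints.
img = np.zeros((256, 256), np.uint8)
cv2.circle(img, (90, 90), 35, 255, -1)
cv2.rectangle(img, (150, 130), (215, 205), 170, -1)
img = cv2.GaussianBlur(img, (9, 9), 2)

sift = cv2.SIFT_create()
kps = sift.detect(img, None)
if kps:
    marked = embed_in_disk(img, kps[0], bits=[1, 0, 1, 1])
    print("embedded 4 bits in disk at", tuple(map(int, kps[0].pt)))
```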