24 results for "visual method"


Relevance: 30.00%

Abstract:

Purpose: To report any differences in the visual acuity (VA) recording methods used in peer-reviewed ophthalmology clinical studies over the past decade. Methods: We reviewed the method of assessing and reporting VA in 160 clinical studies from 2 UK and 2 US peer-reviewed journals, published in 1994 and 2004. Results: The method used to assess VA was specified in 62.5% of UK-published and 60% of US-published papers. In the results sections of the UK publications, the VA measurements presented were Snellen acuity (n = 58), logMAR acuity (n = 20) and symbol acuity (n = 1). Similarly, in the US publications, VA was recorded in the results section using Snellen acuity (n = 60) and logMAR acuity (n = 14). Overall, 10% of the authors appeared to convert Snellen acuity measurements to logMAR format. Five studies (3%) chose to express Snellen-type acuities in decimal form, a method which can easily lead to confusion given the increased use of logMAR scoring systems. Conclusion: The authors recommend that, to ensure comparable visual results between studies and different study populations, clinical scientists should work to standardized VA testing protocols and report results in a manner consistent with the way in which they are measured. Copyright © 2008 S. Karger AG.
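The Snellen-to-logMAR conversion the abstract alludes to follows the standard relation logMAR = log10(MAR), where MAR is the inverse of the Snellen fraction. A minimal sketch (function names are illustrative, not from the study):

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 6/12 or 20/40) to logMAR.

    MAR = denominator / numerator, and logMAR = log10(MAR).
    """
    return math.log10(denominator / numerator)

def snellen_to_decimal(numerator: float, denominator: float) -> float:
    """Decimal acuity is simply the Snellen fraction evaluated as a number."""
    return numerator / denominator

# 6/6 (20/20) vision: MAR = 1, so logMAR = 0
print(snellen_to_logmar(6, 6))    # 0.0
# 6/60 (20/200): MAR = 10, so logMAR = 1
print(snellen_to_logmar(6, 60))   # 1.0
# The decimal form of 6/12 is 0.5, which is easily confused with a logMAR score
print(snellen_to_decimal(6, 12))  # 0.5
```

The last line illustrates the confusion the authors warn about: a decimal acuity of 0.5 and a logMAR score of 0.5 describe very different levels of vision.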

Abstract:

In this paper, we propose a novel visual tracking framework based on a decision-theoretic online learning algorithm, NormalHedge. To make NormalHedge more robust against noise, we propose an adaptive NormalHedge algorithm, which exploits the historical information of each expert to make more accurate predictions than the standard NormalHedge. Technically, we use a set of weighted experts to predict the state of the target to be tracked over time. The weight of each expert is learned online by pushing the cumulative regret of the learner towards that of the expert. Our simulation experiments demonstrate the effectiveness of the proposed adaptive NormalHedge compared to the standard NormalHedge. Furthermore, experimental results on several challenging video sequences show that the proposed tracking method outperforms several state-of-the-art methods.
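For readers unfamiliar with NormalHedge, the weight update of the standard algorithm (Chaudhuri, Freund and Hsu, 2009) that the paper builds on can be sketched as follows. This is the baseline update only, not the paper's adaptive variant, and the bisection tolerance is an implementation choice:

```python
import numpy as np

def normalhedge_weights(regrets: np.ndarray) -> np.ndarray:
    """One NormalHedge weight update over cumulative regrets R_i.

    Find the scale c solving  mean_i exp([R_i]_+^2 / (2c)) = e,
    then weight expert i by  ([R_i]_+ / c) * exp([R_i]_+^2 / (2c)).
    Experts with non-positive regret get zero weight.
    """
    r = np.maximum(regrets, 0.0)
    if not np.any(r > 0):                       # no expert beats the learner yet
        return np.full(len(regrets), 1.0 / len(regrets))
    potential = lambda c: np.mean(np.exp(r ** 2 / (2 * c)))
    # The mean potential decreases in c; bracket the root and bisect.
    lo, hi = 1e-12, r.max() ** 2
    while potential(hi) > np.e:
        hi *= 2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if potential(mid) > np.e:
            lo = mid
        else:
            hi = mid
    c = hi
    w = (r / c) * np.exp(r ** 2 / (2 * c))
    return w / w.sum()

# Experts with higher cumulative regret (i.e. the learner regrets not
# following them more) receive larger weight.
print(normalhedge_weights(np.array([2.0, -1.0, 0.5])))
```

In the tracking setting of the paper, each expert corresponds to a candidate target state, and "pushing the learner's regret towards the expert's" is exactly what this weighting does.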

Abstract:

Sparse representation based visual tracking approaches have attracted increasing interest in the community in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization methods, the candidate with the lowest error, when reconstructed using only the target templates and the associated coefficients, is taken as the tracking result. Despite the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates into the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint using a weighted least squares (WLS) method to obtain the representation coefficients. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between the data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
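The two ingredients named above have simple closed forms. A minimal sketch, using a dense Gaussian projection as a stand-in for the structured variant the paper uses, and ridge regularisation as an implementation choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X: np.ndarray, k: int) -> np.ndarray:
    """Project rows of X down to k dimensions with a Gaussian random matrix.

    By the Johnson-Lindenstrauss lemma this approximately preserves pairwise
    distances; the paper uses a *structurally* random matrix for speed, a
    dense Gaussian one is shown here only for illustration.
    """
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R

def wls_coefficients(A: np.ndarray, y: np.ndarray,
                     w: np.ndarray, lam: float = 1e-3) -> np.ndarray:
    """Weighted least squares with a small ridge term:

        x = argmin ||W^(1/2)(Ax - y)||^2 + lam ||x||^2
          = (A^T W A + lam I)^{-1} A^T W y

    A holds the target/background templates as columns, y is the candidate,
    and w weights the residual of each feature dimension.
    """
    W = np.diag(w)
    G = A.T @ W @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(G, A.T @ W @ y)
```

Replacing the L1-constrained solve with this WLS step is what makes the per-candidate cost a single small linear solve, which is where the real-time claim comes from.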

Abstract:

Background: Spatially localized duration compression of a briefly presented moving stimulus following adaptation in the same location is taken as evidence for modality-specific neural timing mechanisms.

Aims: The present study used random dot motion stimuli to investigate where these mechanisms may be located.

Method: Experiment 1 measured duration compression of the test stimulus as a function of adaptor speed and revealed that duration compression is speed tuned. These data were then used to predict duration compression responses for various models, which were tested in Experiment 2. Here, a mixed-speed adaptor stimulus was used, with duration compression measured as a function of the adaptor's 'speed notch' (the removal of a central band from the speed range).

Results: The results were consistent with a local-mean model.

Conclusions: Local-motion mechanisms are involved in duration perception of brief events.
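The logic of the notched-adaptor test can be illustrated numerically. The speed range and notch below are hypothetical, not the study's values; the point is that a local-mean model predicts no change from a symmetric notch, because the mean of the remaining speeds is unchanged:

```python
import numpy as np

# Hypothetical mixed-speed adaptor: evenly spaced speed samples with a
# central "speed notch" removed, as in Experiment 2.
speeds = np.linspace(2.0, 18.0, 101)        # deg/s, illustrative range
notch = (speeds > 8.0) & (speeds < 12.0)    # central band removed
adaptor = speeds[~notch]

# A local-mean model predicts duration compression from the mean of the
# remaining speeds; a symmetric notch leaves that mean, and hence the
# predicted compression, unchanged.
print(adaptor.mean())   # still 10.0 deg/s, the mean of the full range
```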

Abstract:

OBJECTIVES:

To compare methods to estimate the incidence of visual field progression used by 3 large randomized trials of glaucoma treatment by applying these methods to a common data set of annually obtained visual field measurements of patients with glaucoma followed up for an average of 6 years.

METHODS:

The methods used by the Advanced Glaucoma Intervention Study (AGIS), the Collaborative Initial Glaucoma Treatment Study (CIGTS), and the Early Manifest Glaucoma Trial (EMGT) were applied to 67 eyes of 56 patients with glaucoma enrolled in a 10-year natural history study of glaucoma using Program 30-2 of the Humphrey Field Analyzer (Humphrey Instruments, San Leandro, Calif). The incidence of apparent visual field progression was estimated for each method. The extent of agreement between the methods was calculated, and time to apparent progression was compared.

RESULTS:

The proportion of patients progressing was 11%, 22%, and 23% with AGIS, CIGTS, and EMGT methods, respectively. Clinical assessment identified 23% of patients who progressed, but only half of these were also identified by CIGTS or EMGT methods. The CIGTS and the EMGT had comparable incidence rates, but only half of those identified by 1 method were also identified by the other.

CONCLUSIONS:

The EMGT and CIGTS methods produced rates of apparent progression that were twice those of the AGIS method. Although EMGT, CIGTS, and clinical assessment rates were comparable, they did not identify the same patients as having had field progression.
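The distinction between "comparable incidence" and "identifying the same patients" is worth making concrete. A sketch with hypothetical eye IDs (not the study's data): two methods can each flag 20% of eyes yet agree on only a third of the flagged set:

```python
# Hypothetical sets of eye IDs flagged as progressing by two methods.
cigts = {1, 2, 3, 4, 5, 6}
emgt = {4, 5, 6, 7, 8, 9}

n_eyes = 30
incidence_cigts = len(cigts) / n_eyes
incidence_emgt = len(emgt) / n_eyes

# Jaccard agreement: eyes flagged by both, out of eyes flagged by either.
overlap = len(cigts & emgt) / len(cigts | emgt)

print(incidence_cigts, incidence_emgt, overlap)  # 0.2 0.2 0.333...
```

Equal incidence rates therefore say nothing about whether the methods concur on individual eyes, which is the pattern the study reports for CIGTS and EMGT.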

Abstract:

In this work, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding and pooling. The first three layers stem from models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with a spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method has higher tracking accuracy than several state-of-the-art trackers.
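The five-layer pipeline can be sketched end to end. This is a toy version under stated simplifications: DC removal stands in for whitening, soft-thresholded dictionary responses stand in for the discriminative sparse coding, and max pooling over atom groups stands in for spatial pyramid pooling:

```python
import numpy as np

rng = np.random.default_rng(0)

def v1_features(patch: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Toy five-layer pipeline: whitening -> rectification ->
    normalization -> coding -> pooling.

    `patch` is a flattened image patch; `D` is an (n_atoms, dim)
    dictionary whose row count is divisible by 4.
    """
    x = patch - patch.mean()                    # 1. crude whitening (DC removal)
    x = np.abs(x)                               # 2. rectification
    x = x / (np.linalg.norm(x) + 1e-8)          # 3. normalization
    codes = D @ x                               # 4. coding: dictionary responses,
    codes = np.sign(codes) * np.maximum(np.abs(codes) - 0.1, 0)  # soft-thresholded
    return codes.reshape(4, -1).max(axis=1)     # 5. pooling: max within 4 groups

D = rng.standard_normal((16, 64))               # random dictionary, illustration only
feat = v1_features(rng.standard_normal(64), D)
print(feat.shape)                               # (4,)
```

In the tracker itself the pooled feature vector, not the raw patch, is what gets compared against the target model, which is why appearance variations that survive rectification and pooling matter less.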

Abstract:

Objective
Pedestrian detection under video surveillance systems has long been a hot topic in computer vision research. These systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. Given its development in recent years, the visual attention mechanism has attracted increasing attention in object detection and tracking research, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: the static visual attention model and the motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection. The regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attentions are linearly combined using the proper weights obtained from experiments to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via the motion features in the temporal domain. Based on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors. Filtering is applied to the field of motion vectors. The saliency of motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
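The linear combination and motion-entropy steps above can be sketched as follows. The weights are placeholders (the paper obtains them experimentally), and the bin count is an implementation choice:

```python
import numpy as np

def combined_saliency(bottom_up: np.ndarray, top_down: np.ndarray,
                      w_bu: float = 0.6, w_td: float = 0.4) -> np.ndarray:
    """Linearly combine the bottom-up and top-down (skin-colour) saliency
    maps into the static saliency map; weights here are placeholders."""
    return w_bu * bottom_up + w_td * top_down

def motion_entropy(magnitudes: np.ndarray, n_bins: int = 16) -> float:
    """Shannon entropy of the motion-vector magnitude histogram, used to
    score how salient a region's motion field is: uniform motion scores
    zero, diverse motion scores high."""
    hist, _ = np.histogram(magnitudes, bins=n_bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A static region (all magnitudes equal) yields zero entropy, so only regions with structured, varied motion contribute to the temporal part of the saliency map.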
Result
Standard datasets and practical videos are selected for the experiments. The experiments are performed on a MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model demonstrates favorable robustness under various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. Our proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection. The proposed model achieves a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through shifts in the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.