949 results for Counting, binocular


Relevance: 20.00%

Abstract:

To decouple interocular suppression and binocular summation we varied the relative phase of mask and target in a 2IFC contrast-masking paradigm. In Experiment I, dichoptic mask gratings had the same orientation and spatial frequency as the target. For in-phase masking, suppression was strong (a log-log slope of ∼1) and there was weak facilitation at low mask contrasts. Anti-phase masking was weaker (a log-log slope of ∼0.7) and there was no facilitation. A two-stage model of contrast gain control [Meese, T.S., Georgeson, M.A. and Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision, 6: 1224-1243] provided a good fit to the in-phase results and fixed its free parameters. It made successful predictions (with no free parameters) for the anti-phase results when (A) interocular suppression was phase-indifferent but (B) binocular summation was phase sensitive. Experiments II and III showed that interocular suppression comprised two components: (i) a tuned effect with an orientation bandwidth of ∼±33° and a spatial frequency bandwidth of >3 octaves, and (ii) an untuned effect that elevated threshold by a factor of between 2 and 4. Operationally, binocular summation was more tightly tuned, having an orientation bandwidth of ∼±8°, and a spatial frequency bandwidth of ∼0.5 octaves. Our results replicate the unusual shapes of the in-phase dichoptic tuning functions reported by Legge [Legge, G.E. (1979). Spatial frequency masking in human vision: Binocular interactions. Journal of the Optical Society of America, 69: 838-847]. These can now be seen as the envelope of the direct effects from interocular suppression and the indirect effect from binocular summation, which contaminates the signal channel with a mask that has been suppressed by the target. © 2007 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

We assessed summation of contrast across eyes and area at detection threshold (C_t). Stimuli were sine-wave gratings (2.5 c/deg) spatially modulated by cosine- and anticosine-phase raised plaids (0.5 c/deg components oriented at ±45°). When presented dichoptically the signal regions were interdigitated across eyes but produced a smooth continuous grating following their linear binocular sum. The average summation ratio (C_t1/C_t1+2) for this stimulus pair was 1.64 (4.3 dB). This was only slightly less than the binocular summation found for the same patch type presented to both eyes, and the area summation found for the two different patch types presented to the same eye. We considered 192 model architectures containing each of the following four elements in all possible orders: (i) linear summation or a MAX operator across eyes, (ii) linear summation or a MAX operator across area, (iii) linear or accelerating contrast transduction, and (iv) additive Gaussian, stochastic noise. Formal equivalences reduced this to 62 different models. The most successful four-element model was: linear summation across eyes followed by nonlinear contrast transduction, linear summation across area, and late noise. Model performance was enhanced when additional nonlinearities were placed before binocular summation and after area summation. The implications for models of probability summation and uncertainty are discussed.
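The most successful four-element architecture can be sketched in a few lines. This is a hedged illustration only: the function name `model_response`, the exponent `p`, and the noise level are hypothetical placeholders, not the fitted parameters from the study.

```python
import random

def model_response(left_patches, right_patches, p=2.0, noise_sd=1.0, rng=None):
    """Illustrative sketch of the best four-element model from the abstract."""
    rng = rng or random.Random(0)
    # (1) linear summation across eyes
    binocular = [l + r for l, r in zip(left_patches, right_patches)]
    # (2) accelerating contrast transduction, c**p with p > 1 (p is assumed)
    transduced = [c ** p for c in binocular]
    # (3) linear summation across area
    pooled = sum(transduced)
    # (4) late additive Gaussian noise
    return pooled + rng.gauss(0.0, noise_sd)
```

With the noise stage silenced, two eyes each seeing unit contrast in two signal patches give (1 + 1)^2 × 2 = 8 arbitrary units, illustrating how binocular summation precedes the accelerating nonlinearity in this ordering.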

Relevance: 20.00%

Abstract:

A new behavioural technique solves a long-standing puzzle of binocular suppression, demonstrating that adapting reciprocal inhibition governs visual sensitivity, and raising key questions about visual awareness.

Relevance: 20.00%

Abstract:

Purpose: (1) To devise a model-based method for estimating the probabilities of binocular fusion, interocular suppression and diplopia from psychophysical judgements; (2) to map out the way fusion, suppression and diplopia vary with the binocular disparity and blur of single edges shown to each eye; (3) to compare the binocular interactions found for edges of the same vs opposite contrast polarity. Methods: Test images were single, horizontal, Gaussian-blurred edges, with blur B = 1-32 min arc and vertical disparity 0-8 × B, shown for 200 ms. In the main experiment, observers reported whether they saw one central edge, one offset edge, or two edges. We argue that the relation between these three response categories and the three perceptual states (fusion, suppression, diplopia) is indirect and likely to be distorted by positional noise and criterion effects, so we developed a descriptive, probabilistic model to estimate both the perceptual states and the noise/criterion parameters from the data. Results: (1) Using simulated data, we validated the model-based method by showing that it recovered fairly accurately the disparity ranges for fusion and suppression. (2) The disparity range for fusion (Panum's limit) increased greatly with blur, in line with previous studies. The disparity range for suppression was similar to the fusion limit at large blurs, but two or three times the fusion limit at small blurs; this meant that diplopia was much more prevalent at larger blurs. (3) Diplopia was much more frequent when the two edges had opposite contrast polarity. A formal comparison of models indicated that fusion occurs for same, but not opposite, polarities. The probability of suppression was greater for unequal contrasts, and it was always the lower-contrast edge that was suppressed. Conclusions: Our model-based data analysis offers a useful tool for probing binocular fusion and suppression psychophysically. The disparity range for fusion increased with edge blur but fell short of complete scale-invariance; the disparity range for suppression also increased with blur but was not close to scale-invariance. Single vision occurs through fusion, but also beyond the fusion range, through suppression. Thus suppression can serve as a mechanism for extending single vision to larger disparities, but mainly for sharper edges, where the fusion range is small (5-10 min arc). For large blurs the fusion range is so much larger that no such extension may be needed. © 2014 The College of Optometrists.

Relevance: 20.00%

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance: 20.00%

Abstract:

Background/aim: The technique of photoretinoscopy is unique in being able to measure the dynamics of the oculomotor system (ocular accommodation, vergence, and pupil size) remotely (working distance typically 1 metre) and objectively in both eyes simultaneously. The aim of this study was to evaluate clinically the measurement of refractive error by a recent commercial photoretinoscopic device, the PowerRefractor (PlusOptiX, Germany). Method: The validity and repeatability of the PowerRefractor was compared to subjective (non-cycloplegic) refraction on 100 adult subjects (mean age 23.8 (SD 5.7) years) and to objective autorefraction (Shin-Nippon SRW-5000, Japan) on 150 subjects (20.1 (4.2) years). Repeatability was assessed by examining the differences between autorefractor readings taken from each eye and by re-measuring the objective prescription of 100 eyes at a subsequent session. Results: On average the PowerRefractor prescription was not significantly different from the subjective refraction, although quite variable (difference -0.05 (0.63) D, p = 0.41), and was more negative than the SRW-5000 prescription (by -0.20 (0.72) D, p < 0.001). There was no significant bias in the accuracy of the instrument with regard to the type or magnitude of refractive error. The PowerRefractor was found to be repeatable over the range of prescriptions examined (-8.75 D to +4.00 D mean spherical equivalent). Conclusion: The PowerRefractor is a useful objective screening instrument and, because of its remote and rapid measurement of both eyes simultaneously, is able to assess the oculomotor response in a variety of unrestricted viewing conditions and patient types.

Relevance: 20.00%

Abstract:

This paper presents the results of designing a device that forms part of an automated information system for counting, reporting and documenting the quantity of bottles produced in a glass-processing factory. The block diagram of the device is given. The system can also be applied in other discrete production processes for counting bottled output.

Relevance: 20.00%

Abstract:

Binocular combination for first-order (luminance-defined) stimuli has been widely studied, but we know rather little about this binocular process for spatial modulations of contrast (second-order stimuli). We used phase-matching and amplitude-matching tasks to assess binocular combination of second-order phase and modulation depth simultaneously. With fixed modulation in one eye, we found that binocularly perceived phase was shifted, and perceived amplitude increased almost linearly, as modulation depth in the other eye increased. At larger disparities, the phase shift was larger and the amplitude change was smaller. The degree of interocular correlation of the carriers had no influence. These results can be explained by an initial extraction of the contrast envelopes before binocular combination (consistent with the lack of dependence on carrier correlation), followed by a weighted linear summation of second-order modulations in which the weights (gains) for each eye are driven by the first-order carrier contrasts, as previously found for first-order binocular combination. Perceived modulation depth fell markedly with increasing phase disparity, unlike previous findings that perceived first-order contrast was almost independent of phase disparity. We present a simple revision to a widely used interocular gain-control theory that unifies first- and second-order binocular summation with a single principle (contrast-weighted summation), and we further elaborate the model for first-order combination. Conclusion: second-order combination is controlled by first-order contrast.
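The weighted linear summation stage can be sketched by treating each eye's second-order modulation as a phasor (depth at an envelope phase). This is a minimal sketch under stated assumptions: the function name and signature are illustrative, and the gain-control stage that would set the weights from first-order carrier contrast is omitted.

```python
import cmath

def binocular_modulation(m_left, ph_left, m_right, ph_right,
                         w_left=1.0, w_right=1.0):
    # Weighted linear sum of the two eyes' modulations, each a phasor of
    # depth m at envelope phase ph (radians). Per the abstract, the weights
    # w would be driven by first-order carrier contrast (not modelled here).
    combined = (w_left * m_left * cmath.exp(1j * ph_left)
                + w_right * m_right * cmath.exp(1j * ph_right))
    return abs(combined), cmath.phase(combined)
```

Equal depths at phases ±θ combine to phase zero; raising one eye's depth pulls the combined phase toward that eye, qualitatively matching the phase shifts reported in the matching task.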

Relevance: 20.00%

Abstract:

In this paper we present an innovative topic segmentation system based on a new informative similarity measure that takes word co-occurrence into account in order to avoid dependence on existing linguistic resources, such as electronic dictionaries or lexico-semantic databases (thesauri, ontologies). Topic segmentation is the task of breaking documents into topically coherent multi-paragraph subparts, and has been used extensively in information retrieval and text summarization. In particular, our architecture proposes a language-independent topic segmentation system that addresses three main problems evidenced by previous research: systems based solely on lexical repetition, which show reliability problems; systems based on lexical cohesion using existing linguistic resources, which are usually available only for dominant languages and consequently do not apply to less-favoured languages; and systems that require previously harvested training data. For that purpose, we use only statistics on words and sequences of words computed over a set of texts. This provides a flexible solution that may narrow the gap between dominant and less-favoured languages, allowing equivalent access to information.

Relevance: 20.00%

Abstract:

In global policy documents, the language of Technology-Enhanced Learning (TEL) now firmly structures a perception of educational technology which ‘subsumes’ terms like Networked Learning and e-Learning. Embedded in these three words though is a deterministic, economic assumption that technology has now enhanced learning, and will continue to do so. In a market-driven, capitalist society this is a ‘trouble free’, economically focused discourse which suggests there is no need for further debate about what the use of technology achieves in learning. Yet this raises a problem too: if technology achieves goals for human beings, then in education we are now simply counting on ‘use of technology’ to enhance learning. This closes the door on a necessary and ongoing critical pedagogical conversation that reminds us it is people that design learning, not technology. Furthermore, such discourse provides a vehicle for those with either strong hierarchical, or neoliberal agendas to make simplified claims politically, in the name of technology. This chapter is a reflection on our use of language in the educational technology community through a corpus-based Critical Discourse Analysis (CDA). In analytical examples that are ‘loaded’ with economic expectation, we can notice how the policy discourse of TEL narrows conversational space for learning so that people may struggle to recognise their own subjective being in this language. Through the lens of Lieras’s externality, desubjectivisation and closure (Lieras, 1996) we might examine possible effects of this discourse and seek a more emancipatory approach. A return to discussing Networked Learning is suggested, as a first step towards a more multi-directional conversation than TEL, that acknowledges the interrelatedness of technology, language and learning in people’s practice. 
Secondly, a reconsideration of how we write policy for educational technology is recommended, with a critical focus on how people learn, rather than on what technology is assumed to enhance.

Relevance: 20.00%

Abstract:

We develop a simplified implementation of the Hoshen-Kopelman cluster counting algorithm adapted for honeycomb networks. In our implementation of the algorithm we assume that all nodes in the network are occupied and links between nodes can be intact or broken. The algorithm counts how many clusters there are in the network and determines which nodes belong to each cluster. The network information is stored into two sets of data. The first one is related to the connectivity of the nodes and the second one to the state of links. The algorithm finds all clusters in only one scan across the network and thereafter cluster relabeling operates on a vector whose size is much smaller than the size of the network. Counting the number of clusters of each size, the algorithm determines the cluster size probability distribution from which the mean cluster size parameter can be estimated. Although our implementation of the Hoshen-Kopelman algorithm works only for networks with a honeycomb (hexagonal) structure, it can be easily changed to be applied for networks with arbitrary connectivity between the nodes (triangular, square, etc.). The proposed adaptation of the Hoshen-Kopelman cluster counting algorithm is applied to studying the thermal degradation of a graphene-like honeycomb membrane by means of Molecular Dynamics simulation with a Langevin thermostat. ACM Computing Classification System (1998): F.2.2, I.5.3.
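The cluster counting described above can be reproduced with a short union-find pass over the link list. This is a generic sketch of the same labelling idea, not the authors' honeycomb-specific Hoshen-Kopelman implementation; the function name and data layout are assumptions.

```python
def count_clusters(n_nodes, links):
    """Count clusters over intact links. `links` is an iterable of
    (i, j, intact) tuples; all n_nodes nodes are assumed occupied."""
    parent = list(range(n_nodes))

    def find(x):
        # follow labels to the root, compressing the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # one scan across the network: merge the endpoints of intact links
    for i, j, intact in links:
        if intact:
            root_i, root_j = find(i), find(j)
            if root_i != root_j:
                parent[root_i] = root_j

    # tally cluster sizes, from which the size distribution follows
    sizes = {}
    for x in range(n_nodes):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return len(sizes), sorted(sizes.values(), reverse=True)
```

For example, four nodes joined by two intact links and one broken link form two clusters of two nodes each; the mean cluster size follows directly from the returned size list.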

Relevance: 20.00%

Abstract:

The visual system combines spatial signals from the two eyes to achieve single vision. But if binocular disparity is too large, this perceptual fusion gives way to diplopia. We studied and modelled the processes underlying fusion and the transition to diplopia. The likely basis for fusion is linear summation of inputs onto binocular cortical cells. Previous studies of perceived position, contrast matching and contrast discrimination imply the computation of a dynamically weighted sum, where the weights vary with relative contrast. For gratings, perceived contrast was almost constant across all disparities, and this can be modelled by allowing the ocular weights to increase with disparity (Zhou, Georgeson & Hess, 2014). However, when a single Gaussian-blurred edge was shown to each eye, perceived blur was invariant with disparity (Georgeson & Wallis, ECVP 2012), which is not consistent with linear summation (which predicts that perceived blur increases with disparity). This blur constancy is consistent with a multiplicative form of combination (the contrast-weighted geometric mean), but that is hard to reconcile with the evidence favouring linear combination. We describe a 2-stage spatial filtering model with linear binocular combination and suggest that nonlinear output transduction (e.g. 'half-squaring') at each stage may account for the blur constancy.
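The linear-summation prediction for blur can be made explicit with a one-line calculation: the equal-weight sum of two Gaussian-blurred edges (blur B) offset by disparity d has a derivative profile that is an equal-weight mixture of two Gaussians, N(-d/2, B²) and N(+d/2, B²), whose standard deviation (the effective blur) is √(B² + d²/4). The function name below is illustrative, not from the paper.

```python
import math

def linear_sum_blur(b, d):
    # Effective blur of the equal-weight linear sum of two Gaussian-blurred
    # edges of blur b at disparity d: the two-Gaussian mixture of the
    # derivative profile has variance b**2 + d**2 / 4.
    return math.sqrt(b ** 2 + (d / 2.0) ** 2)
```

So linear summation predicts perceived blur rising with disparity (e.g. about 10.8 min arc for b = 10, d = 8), contrary to the blur constancy reported in the abstract.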

Relevance: 20.00%

Abstract:

This work presents an analysis of the behaviour of some algorithms commonly found in the stereo-correspondence literature, applied to full-HD images (1920x1080 pixels), in order to establish, within the precision-versus-runtime trade-off, in which applications each method is best used. The images are obtained by a system composed of a stereo camera coupled to a computer via a capture board. The OpenCV library is used for the computer-vision and image-processing operations involved. The algorithms discussed are a general block-matching search using the Sum of Absolute Differences (SAD), a global technique based on energy minimization via graph cuts, and a so-called semi-global matching technique. The criteria for analysis are processing time, heap-memory consumption, and the mean absolute error of the generated disparity maps.
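Of the three algorithms compared, the SAD block matcher is simple enough to sketch directly. This toy version (pure NumPy, hypothetical names, no OpenCV) just keeps, for each pixel, the horizontal shift that minimizes the Sum of Absolute Differences over a small block.

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=8):
    # For each pixel of the left image, slide a (block x block) window
    # along the same row of the right image and keep the shift with the
    # smallest Sum of Absolute Differences.
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int64)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The brute-force cost, roughly width × height × block² × max_disp operations, makes clear why runtime becomes a concern at full-HD resolution and why optimized or semi-global implementations are compared in the study.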

Relevance: 20.00%

Abstract:

In order to reconstruct regional vegetation changes and local conditions during the fen-bog transition in the Borsteler Moor (northwestern Germany), a sediment core covering the period between 7.1 and 4.5 cal kyrs BP was palynologically investigated. The pollen diagram demonstrates the dominance of oak forests and a gradual replacement of trees by raised bog vegetation under the wetter conditions of the Late Atlantic. At ~6 cal kyrs BP, the non-pollen palynomorphs (NPP) demonstrate the succession from mesotrophic conditions, clearly indicated by a number of fungal spore types, to oligotrophic conditions, indicated by Sphagnum spores, Bryophytomyces sphagni, and the testate amoebae Amphitrema, Assulina, Arcella, etc. Four relatively dry phases during the transition from fen to bog are clearly indicated by the dominance of Calluna and associated fungi, as well as by the increase of microcharcoal. Several new NPP types are described, and known NPP types are identified. All NPP are discussed in the context of their palaeoecological indicator values.