113 results for psychophysics
Abstract:
The 2015 FRVT gender classification (GC) report highlights the difficulties that current approaches face in situations with large variations in pose, illumination, background, and facial expression. The report suggests that both commercial and research solutions are hardly able to reach an accuracy above 90% on The Images of Groups dataset, a well-established scenario exhibiting unconstrained, in-the-wild conditions. In this paper, we focus on this challenging dataset and advance GC performance by drawing on: 1) recent literature results that combine multiple local descriptors, and 2) psychophysical evidence of the greater importance of the ocular and mouth areas for solving this task...
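The abstract is truncated and does not name its descriptors or classifier. As a rough, hypothetical illustration of combining local descriptors from the ocular and mouth areas only, the sketch below computes uniform-LBP histograms over two assumed face-region boxes and concatenates them for a linear SVM; the region boxes, variable names, and classifier choice are assumptions, not the paper's pipeline.

```python
import numpy as np
from skimage.feature import local_binary_pattern   # one of many possible local descriptors
from sklearn.svm import LinearSVC

def region_lbp_histogram(gray_face, box, n_points=8, radius=1):
    """Uniform-LBP histogram of one facial region; box = (top, bottom, left, right)."""
    top, bottom, left, right = box
    codes = local_binary_pattern(gray_face[top:bottom, left:right],
                                 n_points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return hist

def describe_face(gray_face, ocular_box, mouth_box):
    """Concatenate local descriptors taken from the ocular and mouth regions only."""
    return np.concatenate([region_lbp_histogram(gray_face, ocular_box),
                           region_lbp_histogram(gray_face, mouth_box)])

# Hypothetical usage, given pre-aligned grayscale face crops and 0/1 gender labels:
# X = np.stack([describe_face(f, OCULAR_BOX, MOUTH_BOX) for f in faces])
# clf = LinearSVC().fit(X, labels)
```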
Abstract:
Four Ss were run in a visual span of apprehension experiment to determine whether second choices made following incorrect first responses are at the chance level, as implied by various high-threshold models proposed for this situation. The relationships between response biases on first and second choices, and between first-choice biases on trials with two or three possible responses, were also examined in terms of Luce's (1959) choice theory. The results were: (a) second-choice performance in this task appears to be determined by response bias alone, i.e., second choices were at the chance level; (b) first- and second-choice response biases were not related according to Luce's choice axiom; and (c) the choice axiom predicted with reasonable accuracy the relationships between first-choice response biases corresponding to trials with different numbers of possible response alternatives. © 1967 Psychonomic Society, Inc.
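To make the role of Luce's (1959) choice axiom concrete, here is a minimal sketch of how first-choice probabilities follow from response strengths, and how the axiom implies second-choice probabilities by renormalizing over the remaining alternatives; the numerical strengths below are hypothetical.

```python
import numpy as np

def luce_choice_probs(strengths):
    """Luce (1959) choice axiom: P(i | S) = v_i / sum over j in S of v_j."""
    v = np.asarray(strengths, dtype=float)
    return v / v.sum()

def second_choice_probs(strengths, first_choice):
    """Under the axiom, second-choice probabilities are the first-choice strengths
    renormalized over the alternatives that remain after the first choice."""
    v = np.asarray(strengths, dtype=float).copy()
    v[first_choice] = 0.0
    return v / v.sum()

# Hypothetical response-bias strengths for three response alternatives
v = [0.5, 0.3, 0.2]
print(luce_choice_probs(v))        # first-choice biases
print(second_choice_probs(v, 0))   # biases given alternative 0 was (incorrectly) chosen first
```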
Abstract:
Common computational principles underlie the processing of various visual features in the cortex. These principles are thought to produce similar patterns of contextual modulation in behavioral studies for different features, such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these observed similarities. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and strong signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities between the two domains, while differences in feature population characteristics accounted for the differences in modulation on both experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across cortical areas.
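The abstract names only the ingredients of the model (purely inhibitory interactions and a standard vector-average readout), so the toy sketch below is one plausible instantiation rather than the published model; the tuning shape, inhibition rule, and parameter values are assumptions. Switching between motion direction and orientation amounts to changing the circular period from 2π to π.

```python
import numpy as np

def population_response(stim_angle, pref_angles, kappa=2.0, period=2 * np.pi):
    """Von-Mises-like tuning on a circle (assumed tuning shape, not from the paper)."""
    d = (stim_angle - pref_angles) * (2 * np.pi / period)
    return np.exp(kappa * (np.cos(d) - 1.0))

def lateral_inhibition(rates, w_inhib=0.3):
    """Purely inhibitory interaction: every unit is suppressed by the mean activity."""
    return np.clip(rates - w_inhib * rates.mean(), 0.0, None)

def vector_average_decode(rates, pref_angles, period=2 * np.pi):
    """Standard vector-average (population vector) readout on the appropriate circle."""
    phase = pref_angles * (2 * np.pi / period)
    ang = np.arctan2((rates * np.sin(phase)).sum(), (rates * np.cos(phase)).sum())
    return (ang % (2 * np.pi)) * period / (2 * np.pi)

# Motion direction lives on a 2*pi circle; orientation would use period=np.pi instead.
prefs = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
rates = lateral_inhibition(population_response(np.deg2rad(90), prefs))
print(np.rad2deg(vector_average_decode(rates, prefs)))   # ~90 degrees
```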
Abstract:
Erratum to: A single theoretical framework for circular features processing in humans: orientation and direction of motion compared. In: Frontiers in Computational Neuroscience 6 (2012), 28
Abstract:
Visual inputs to artificial and biological visual systems are often quantized: cameras accumulate photons from the visual world, and the brain receives action potentials from visual sensory neurons. Collecting more information quanta leads to a longer acquisition time and better performance. In many visual tasks, collecting a small number of quanta is sufficient to solve the task well. The ability to determine the right number of quanta is pivotal in situations where visual information is costly to obtain, such as photon-starved or time-critical environments. In these situations, conventional vision systems that always collect a fixed and large amount of information are infeasible. I develop a framework that judiciously determines the number of information quanta to observe based on the cost of observation and the requirement for accuracy. The framework implements the optimal speed versus accuracy tradeoff when two assumptions are met, namely that the task is fully specified probabilistically and constant over time. I also extend the framework to address scenarios that violate the assumptions. I deploy the framework to three recognition tasks: visual search (where both assumptions are satisfied), scotopic visual recognition (where the model is not specified), and visual discrimination with unknown stimulus onset (where the model is dynamic over time). Scotopic classification experiments suggest that the framework leads to dramatic improvement in photon-efficiency compared to conventional computer vision algorithms. Human psychophysics experiments confirmed that the framework provides a parsimonious and versatile explanation for human behavior under time pressure in both static and dynamic environments.
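The abstract does not spell out the algorithm, but under its two assumptions (a probabilistically specified, time-constant task) the textbook realization of the optimal speed versus accuracy tradeoff is a sequential probability ratio test. The sketch below illustrates that idea for a hypothetical two-hypothesis task with Poisson-distributed photon counts; the thresholds and rates are illustrative, not taken from the thesis.

```python
import numpy as np

def sequential_observer(sample_quantum, log_lr, upper=3.0, lower=-3.0, max_obs=10_000):
    """Collect information quanta one at a time and stop as soon as the accumulated
    log-likelihood ratio crosses a threshold set by the desired accuracy and the
    cost of observation (classic SPRT stopping rule)."""
    llr, n = 0.0, 0
    while lower < llr < upper and n < max_obs:
        q = sample_quantum()   # draw one quantum (e.g., a photon count in one frame)
        llr += log_lr(q)       # evidence this quantum carries for hypothesis A over B
        n += 1
    return llr >= upper, n     # decision (A if True, B if False) and quanta consumed

# Hypothetical example: photon counts are Poisson with rate 5 under A and 3 under B.
rng = np.random.default_rng(0)
lam_a, lam_b = 5.0, 3.0
decision, n_used = sequential_observer(
    sample_quantum=lambda: rng.poisson(lam_a),                      # world is actually A
    log_lr=lambda k: k * np.log(lam_a / lam_b) - (lam_a - lam_b))   # Poisson log-LR
print(decision, n_used)
```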
Abstract:
Recent developments in interactive technologies have brought major changes in the manner in which artists, performers, and creative individuals interact with digital music technology, owing to the increasing variety of interactive technologies readily available today. Digital Musical Instruments (DMIs) present musicians with performance challenges that are unique to this form of computer music. One of the most significant deviations from conventional acoustic musical instruments is the level of physical feedback conveyed by the instrument to the user. Currently, new interfaces for musical expression are not designed to be as physically communicative as acoustic instruments. Specifically, DMIs are often devoid of haptic feedback and therefore lack the ability to impart important performance information to the user. Moreover, there is currently no standardised way to measure the effect of this lack of physical feedback. Best practice would suggest that there should be a set of methods to effectively, repeatably, and quantifiably evaluate the functionality, usability, and user experience of DMIs. Earlier theoretical and technological applications of haptics have tried to address device performance issues associated with the lack of feedback in DMI designs, and it has been argued that the level of haptic feedback presented to a user can significantly affect the user’s overall emotive feeling towards a musical device. The outcomes of the investigations contained within this thesis are intended to inform new haptic interface design.
Abstract:
A main prediction of the zoom lens model of visual attention is that performance is an inverse function of the size of the attended area. The "attention shift paradigm" developed by Sperling and Reeves (1980) was adapted here to test predictions of the zoom lens model. In two experiments, two lists of items were presented simultaneously using the rapid serial visual presentation technique. Subjects were to report, after seeing the target (the letter T), the first item they were able to identify in the series that did not include the target. In one condition, subjects knew in which list the target would appear; in another condition, they did not have this knowledge and had to attend to both positions in order to detect the target. The zoom lens model predicts an interaction between this variable and the distance separating the two positions at which the lists are presented. In both experiments, this interaction was observed. The results are also discussed as a resolution of the apparently contradictory findings regarding the analog movement model.