16 results for visual literacy
at Indian Institute of Science - Bangalore - India
Abstract:
Most bees are diurnal, with behaviour that is largely visually mediated, but several groups have made evolutionary shifts to nocturnality, despite having apposition compound eyes unsuited to vision in dim light. We compared the anatomy and optics of the apposition eyes and the ocelli of the nocturnal carpenter bee, Xylocopa tranquebarica, with two sympatric species, the strictly diurnal X. leucothorax and the occasionally crepuscular X. tenuiscapa. The ocelli of the nocturnal X. tranquebarica are unusually large (diameter ca. 1 mm) and poorly focussed. Moreover, their apposition eyes show specific visual adaptations for vision in dim light, including large size, large facets and very wide rhabdoms, which together make these eyes 9 times more sensitive than those of X. tenuiscapa and 27 times more sensitive than those of X. leucothorax. These differences in optical sensitivity are surprisingly small considering that X. tranquebarica can fly on moonless nights when background luminance is as low as 10⁻⁵ cd m⁻², implying that this bee must employ additional visual strategies to forage and find its way back to the nest. These strategies may include photoreceptors with longer integration times and higher contrast gains as well as higher neural summation mechanisms for increasing visual reliability in dim light.
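For context, sensitivity comparisons of this kind are conventionally made with the standard optical sensitivity equation for compound eyes; the abstract reports only the ratios, so the formula and symbols below follow the usual convention and are an assumption here:

$$ S = \left(\frac{\pi}{4}\right)^{2} A^{2} \left(\frac{d}{f}\right)^{2} \frac{kl}{2.3 + kl} $$

where A is the facet (aperture) diameter, d and l are the rhabdom diameter and length, f is the focal length, and k is the absorption coefficient of the photoreceptor. Larger facets and wider rhabdoms, as in X. tranquebarica, raise S directly.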
Abstract:
In this paper we propose a hypothetical scheme for recognizing alphanumeric characters. The scheme is based on the known physiological structure of the visual cortex and the concept of a short line extractor neuron (SLEN). We assume four basic types of such units, extracting vertical, horizontal, right- and left-inclined straight line segments. The patterns reconstructed from the scheme show perfect agreement with the test patterns. The model indicates that the recognition of the letters T and H requires extraction of the largest number of features.
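A minimal sketch of the four-orientation line-extraction idea, assuming hypothetical 3x3 kernels for the four SLEN types (the paper's actual receptive fields are not given in the abstract):

    import numpy as np
    from scipy.ndimage import convolve

    # Hypothetical 3x3 templates for the four SLEN orientations.
    KERNELS = {
        "vertical":   np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
        "horizontal": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),
        "right":      np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]),
        "left":       np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),
    }

    def slen_features(img, thresh=3):
        """Binary response map per orientation for a 0/1 letter image."""
        return {name: (convolve(img, k, mode="constant") >= thresh).astype(int)
                for name, k in KERNELS.items()}

A letter such as T would then activate the horizontal and vertical maps, consistent with the feature-count comparison the abstract describes.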
Abstract:
Surface models of biomolecules have become crucially important for the study and understanding of interaction between biomolecules and their environment. We argue for the need for a detailed understanding of biomolecular surfaces by describing several applications in computational and structural biology. We review methods used to model, represent, characterize, and visualize biomolecular surfaces focusing on the role that geometry and topology play in identifying features on the surface. These methods enable the development of efficient computational and visualization tools for studying the function of biomolecules.
Abstract:
Several pi-electron-rich fluorescent aromatic compounds containing trimethylsilylethynyl functionality have been synthesized by employing the Sonogashira coupling reaction and were fully characterized by NMR (H-1, C-13) and IR spectroscopy. Incorporation of bulky trimethylsilylethynyl groups on the periphery of the fluorophores prevents self-quenching of the initial intensity through pi-pi interaction and thereby maintains spectroscopic stability in solution. These compounds showed fluorescence behavior in chloroform solution and were used as selective fluorescence sensors for the detection of electron-deficient nitroaromatics. All these fluorophores showed the largest quenching response, with high selectivity, for nitroaromatics among the various electron-deficient aromatic compounds tested. Quantitative analysis of the fluorescence titration profile of 9,10-bis(trimethylsilylethynyl)anthracene with picric acid provided evidence that this particular fluorophore detects picric acid even at the ppb level. A sharp visual detection of 2,4,6-trinitrotoluene was observed upon subjecting the 1,3,6,8-tetrakis(trimethylsilylethynyl)pyrene fluorophore to increasing quantities of 2,4,6-trinitrotoluene in chloroform. Furthermore, thin films of the fluorophores were made by spin-coating a 1.0 x 10⁻³ M solution in chloroform or dichloromethane onto a quartz plate and were used for the detection of nitroaromatic vapors at room temperature. The vapor-phase sensing experiments suggested that the sensing process is reproducible and quite selective for nitroaromatic compounds. The selective fluorescence quenching response, including a sharp visual color change for nitroaromatics, makes these fluorophores promising fluorescence sensory materials for nitroaromatic compounds (NACs), with a detection limit at even the ppb level as judged with picric acid.
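Fluorescence titration profiles of this kind are commonly analyzed with the Stern-Volmer relation; the abstract does not name the model used, so this is contextual rather than the authors' stated analysis:

$$ \frac{I_0}{I} = 1 + K_{SV}\,[Q] $$

where I_0 and I are the emission intensities before and after adding the quencher, [Q] is the quencher (nitroaromatic) concentration, and K_SV is the Stern-Volmer quenching constant; a large K_SV is what permits ppb-level detection.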
Abstract:
We introduce a multifield comparison measure for scalar fields that helps in studying relations between them. The comparison measure is insensitive to noise in the scalar fields and to noise in their gradients. Further, it can be computed robustly and efficiently. Results from the visual analysis of various data sets from climate science and combustion applications demonstrate the effective use of the measure.
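A simplified, hypothetical stand-in for such a gradient-based comparison of two 2-D scalar fields (the paper's actual definition and its noise-robustness guarantees are not reproduced here):

    import numpy as np

    def gradient_alignment(f, g, eps=1e-12):
        """Pointwise comparison of two 2-D scalar fields: 0 where the
        gradients are parallel, approaching 1 where they are orthogonal."""
        fy, fx = np.gradient(f)
        gy, gx = np.gradient(g)
        cross = np.abs(fx * gy - fy * gx)            # |grad f x grad g|
        norm = np.hypot(fx, fy) * np.hypot(gx, gy)
        return cross / (norm + eps)

Averaging such a pointwise quantity over the domain yields a single comparison score between the fields.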
Abstract:
How the brain maintains perceptual continuity across eye movements that yield discontinuous snapshots of the world is still poorly understood. In this study, we adapted a framework from the dual-task paradigm, well suited to reveal bottlenecks in mental processing, to study how information is processed across sequential saccades. The pattern of reaction times (RTs) allowed us to distinguish among three forms of trans-saccadic processing: no trans-saccadic processing; trans-saccadic visual processing; and trans-saccadic visual processing plus saccade planning. Using a cued double-step saccade task, we show that even though saccade execution is a processing bottleneck, limiting access to incoming visual information, partial visual and motor processing that occurs prior to saccade execution is used to guide the next eye movement. These results provide insights into how the oculomotor system is designed to process information across the multiple fixations that occur during natural scanning.
Abstract:
A new phenanthrene-based chemosensor has been synthesized and shown to act as a highly selective fluorescent and visual sensor for the Cu2+ ion, with a very low detection limit of 1.58 nM; it has also been used to image Cu2+ in human cervical HeLa cancer cells.
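Detection limits of this kind are typically estimated from calibration data as follows (an assumption about methodology; the abstract states only the value):

$$ \mathrm{DL} = \frac{3\sigma}{m} $$

where σ is the standard deviation of blank measurements and m is the slope of the fluorescence-response calibration curve.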
Abstract:
Our everyday visual experience frequently involves searching for objects in clutter. Why are some searches easy and others hard? It is generally believed that the time taken to find a target increases as it becomes similar to its surrounding distractors. Here, I show that while this is qualitatively true, the exact relationship is in fact not linear. In a simple search experiment, when subjects searched for a bar differing in orientation from its distractors, search time was inversely proportional to the angular difference in orientation. Thus, rather than taking search reaction time (RT) to be a measure of target-distractor similarity, we can literally turn search time on its head (i.e. take its reciprocal 1/RT) to obtain a measure of search dissimilarity that varies linearly over a large range of target-distractor differences. I show that this dissimilarity measure has the properties of a distance metric, and report two interesting insights that come from it. First, across a large number of searches, search asymmetries are relatively rare and, when they do occur, differ by a fixed distance. Second, search distances can be used to elucidate the object representations that underlie search; for example, these representations are roughly invariant to three-dimensional view. Finally, search distance has a straightforward interpretation in the context of accumulator models of search, where it is proportional to the discriminative signal that is integrated to produce a response. This is consistent with recent studies that have linked this distance to neuronal discriminability in visual cortex. Thus, while search time remains the more direct measure of visual search, its reciprocal also has the potential to yield interesting and novel insights.
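A minimal sketch of the reciprocal transform described in the abstract, with illustrative numbers (not the paper's data):

    import numpy as np

    def search_dissimilarity(rt_seconds):
        """Reciprocal of search reaction time: larger values mean an
        easier search, i.e. a more dissimilar target-distractor pair."""
        return 1.0 / np.asarray(rt_seconds, dtype=float)

    # Illustrative orientation search: 1/RT should grow roughly linearly
    # with the angular difference between target and distractors.
    angles = np.array([5.0, 10.0, 20.0, 40.0])   # degrees (made up)
    rts = np.array([2.0, 1.0, 0.5, 0.25])        # seconds (made up)
    slope, intercept = np.polyfit(angles, search_dissimilarity(rts), 1)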
Abstract:
Bhutani N, Ray S, Murthy A. Is saccade averaging determined by visual processing or movement planning? J Neurophysiol 108: 3161-3171, 2012. First published September 26, 2012; doi:10.1152/jn.00344.2012.
Saccadic averaging, which causes subjects' gaze to land between the locations of two targets when faced with simultaneously or sequentially presented stimuli, has often been used as a probe to investigate the nature of the computations that transform sensory representations into an oculomotor plan. Since saccadic movements involve at least two processing stages, a visual stage that selects a target and a movement stage that prepares the response, saccade averaging can occur due to interference in either visual processing or movement planning. By having human subjects perform two versions of a saccadic double-step task, in which the stimuli remained the same but different instructions were provided (REDIRECT gaze to the later-appearing target vs. FOLLOW the sequence of targets in their order of appearance), we tested the two alternative hypotheses. If saccade averaging were due to visual processing alone, the pattern of saccade averaging would be expected to remain the same across task conditions. However, whereas subjects produced averaged saccades between the two targets in the FOLLOW condition, they produced hypometric saccades in the direction of the initial target in the REDIRECT condition, suggesting that the interaction between competing movement plans produces saccade averaging.
Abstract:
We consider a visual search problem studied by Sripati and Olson, in which the objective is to identify an oddball image embedded among multiple distractor images as quickly as possible. We model this visual search task as an active sequential hypothesis testing problem (ASHT problem). In 1959, Chernoff proposed a policy whose expected delay to decision is asymptotically optimal as error probabilities vanish. We first prove a stronger property on the moments of the delay until a decision, under the same asymptotics. Applying the result to the visual search problem, we then propose a "neuronal metric" on the measured neuronal responses that captures the discriminability between images. From an empirical study we obtain a remarkable correlation (r = 0.90) between the proposed neuronal metric and the speed of discrimination between the images. Although this correlation is lower than that with the L-1 metric used by Sripati and Olson, this metric has the advantage of being firmly grounded in formal decision theory.
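One decision-theoretic way to ground such a neuronal metric, sketched here under the assumption of Poisson spiking (the paper's exact construction is given in the text, not the abstract):

    import numpy as np

    def poisson_kl(lam_a, lam_b, eps=1e-9):
        """Per-neuron KL divergence between Poisson firing rates."""
        a = np.asarray(lam_a, float) + eps
        b = np.asarray(lam_b, float) + eps
        return a * np.log(a / b) - a + b

    def neuronal_distance(rates_a, rates_b):
        """Symmetrized Poisson-KL summed over a recorded population: a
        hypothetical discriminability score between two images."""
        return float(np.sum(poisson_kl(rates_a, rates_b)
                            + poisson_kl(rates_b, rates_a)))

Under Chernoff-style sequential testing, quantities of this form set the rate at which evidence accumulates, which is why such a metric can plausibly track discrimination speed.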
Abstract:
How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation.
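One simple way to state the abstract's predictive claim (the weights below are hypothetical fit parameters, not values from the paper):

$$ \frac{1}{\mathrm{RT}} \approx w_{\mathrm{out}}\,\bar{d}_{\mathrm{out}} - w_{\mathrm{in}}\,\bar{d}_{\mathrm{in}} + c $$

where \bar{d}_{out} and \bar{d}_{in} are the object's mean visual-search dissimilarities to items outside and within the category. An atypical animal, for instance, is more dissimilar to other animals (large \bar{d}_{in}), lowering 1/RT and hence slowing categorization, consistent with phenomenon (b).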
Abstract:
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search.
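A hedged sketch of how such a prediction can be assembled from simple searches (the functional form and unit weights are assumptions; the paper fits its model to data):

    import numpy as np

    def complex_search_rate(td_rates, dd_rates, w_td=1.0, w_dd=1.0, c=0.0):
        """Predict complex-search speed (1/RT) from simple searches:
        td_rates are target-distracter 1/RTs (the similarity term),
        dd_rates are distracter-distracter 1/RTs (the heterogeneity term,
        which slows search when distracters differ from each other)."""
        return (w_td * np.mean(td_rates)
                - w_dd * np.mean(dd_rates) + c)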
Abstract:
Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and coactivation models (based on reciprocals of reaction times) for their ability to predict multiple-feature searches. Multiple-feature searches were best accounted for by a coactivation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., their subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features; in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly, and this model outperformed all other models. Thus, the length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search coactivate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search.
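A minimal sketch of the linear coactivation rule the abstract reports, assuming unit weights (the paper estimates weights from data):

    import numpy as np

    def coactivation_rate(single_rates, weights=None):
        """Multi-feature search rate (1/RT) as a weighted sum of
        single-feature search rates, per the linear coactivation model."""
        r = np.asarray(single_rates, dtype=float)
        w = np.ones_like(r) if weights is None else np.asarray(weights, float)
        return float(np.dot(w, r))

    # Illustrative: a target differing in intensity, length and orientation.
    rate = coactivation_rate([1 / 1.2, 1 / 0.9, 1 / 1.5])  # rates in 1/s
    predicted_rt = 1.0 / rate    # faster than any single-feature search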
Abstract:
Regions in video streams that attract human interest contribute significantly to human understanding of the video. Being able to predict salient and informative Regions of Interest (ROIs) through a sequence of eye movements is a challenging problem. Applications such as content-aware retargeting of videos to different aspect ratios while preserving informative regions, and smart insertion of dialog (closed-caption text) into the video stream, can be significantly improved using the predicted ROIs. We propose an interactive human-in-the-loop framework to model eye movements and predict visual saliency in yet-unseen frames. Eye tracking and video content are used to model visual attention in a manner that accounts for important eye-gaze characteristics such as temporal discontinuities due to sudden eye movements, noise, and behavioral artifacts. A novel statistics- and algorithm-based method, gaze buffering, is proposed for eye-gaze analysis and its fusion with content-based features. Our robust saliency prediction is instantiated for two challenging and exciting applications. The first application alters video aspect ratios on-the-fly using content-aware video retargeting, thus making them suitable for a variety of display sizes. The second application dynamically localizes active speakers and places dialog captions on-the-fly in the video stream. Our method ensures that dialogs are faithful to active-speaker locations and do not interfere with salient content in the video stream. Our framework naturally accommodates personalisation of the application to suit the biases and preferences of individual users.
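Since the abstract does not detail the gaze-buffering algorithm, here is a minimal hypothetical sketch of the underlying idea of buffering gaze samples while handling sudden discontinuities (the window size and jump threshold are made up):

    import numpy as np

    def gaze_buffer(samples, win=9, jump_px=80):
        """Median-filter recent (x, y) gaze samples; reset the buffer on
        abrupt jumps (putative saccades or tracker noise)."""
        out, buf, last = [], [], None
        for x, y in samples:
            if last is not None and np.hypot(x - last[0], y - last[1]) > jump_px:
                buf.clear()                      # temporal discontinuity
            buf.append((x, y))
            last = (x, y)
            out.append(tuple(np.median(np.array(buf[-win:]), axis=0)))
        return out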
Abstract:
Designing a robust algorithm for visual object tracking has been a challenging task for many years. There are trackers in the literature that are reasonably accurate for many tracking scenarios, but most of them are computationally expensive. This narrows down their applicability, as many tracking applications demand real-time response. In this paper, we present a tracker based on random ferns. Tracking is posed as a classification problem, and classification is done using ferns. We used ferns because they rely on binary features and are extremely fast at both training and classification compared with other classification algorithms. Our experiments show that the proposed tracker performs well on some of the most challenging tracking datasets and executes much faster than one of the state-of-the-art trackers, without much difference in tracking accuracy.
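For readers unfamiliar with ferns, a minimal sketch of a single random fern classifier in the spirit of Ozuysal et al. (the test locations, depth, and prior here are illustrative; the authors' tracker adds detection and update machinery around this):

    import numpy as np

    class RandomFern:
        """One fern: a fixed set of random binary pixel comparisons
        indexes a leaf; each leaf stores smoothed class counts."""
        def __init__(self, patch_shape, depth=8, n_classes=2, seed=0):
            h, w = patch_shape
            rng = np.random.default_rng(seed)
            self.pairs = rng.integers(0, h * w, size=(depth, 2))
            self.counts = np.ones((2 ** depth, n_classes))  # Laplace prior

        def _leaf(self, patch):
            p = patch.ravel()
            bits = p[self.pairs[:, 0]] > p[self.pairs[:, 1]]
            return int(bits @ (1 << np.arange(bits.size)))

        def train(self, patch, label):
            self.counts[self._leaf(patch), label] += 1

        def posterior(self, patch):
            c = self.counts[self._leaf(patch)]
            return c / c.sum()

In practice several independent ferns are trained and their leaf posteriors are multiplied (or their logs summed), which is what makes both training and classification so cheap.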