985 results for visual objects
Abstract:
The basal ganglia are known to receive inputs from widespread regions of the cerebral cortex, such as the frontal, parietal, and temporal lobes. Of these cortical areas, only the frontal lobe is thought to be the target of basal ganglia output. One of the cortical regions that is a source of input to the basal ganglia is area TE, in inferotemporal cortex. This cortical area is thought to be critically involved in the recognition and discrimination of visual objects. Using retrograde transneuronal transport of herpes simplex virus type 1, we have found that one of the output nuclei of the basal ganglia, the substantia nigra pars reticulata, projects via the thalamus to TE. Thus, TE is not only a source of input to the basal ganglia, but also is a target of basal ganglia output. This result implies that the output of the basal ganglia influences higher order aspects of visual processing. In addition, we propose that dysfunction of the basal ganglia loop with TE leads to alterations in visual perception, including visual hallucinations.
Abstract:
Behavioural studies on normal and brain-damaged individuals provide convincing evidence that the perception of objects results in the generation of both visual and motor signals in the brain, irrespective of whether or not there is an intention to act upon the object. In this paper we sought to determine the basis of the motor signals generated by visual objects. By examining how the properties of an object affect an observer's reaction time for judging its orientation, we provide evidence to indicate that directed visual attention is responsible for the automatic generation of motor signals associated with the spatial characteristics of perceived objects.
Abstract:
A novel approach to normal ECG recognition based on scale-space signal representation is proposed. The approach utilizes the curvature scale-space (CSS) signal representation, previously used to match the shapes of visual objects, and a dynamic programming algorithm for matching the CSS representations of ECG signals. The extraction and matching processes are fast, and experimental results show that the approach is quite robust for preliminary normal ECG recognition.
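The CSS idea behind this matching scheme can be illustrated with a minimal sketch for a 1-D signal: smooth the signal with Gaussians of increasing width and record where the curvature changes sign at each scale. The function names, border handling, and scale choices below are illustrative assumptions, not taken from the paper:

```python
import math

def gaussian_kernel(sigma, radius=None):
    # Discrete Gaussian kernel, normalized to sum to 1.
    if radius is None:
        radius = max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    # Convolve with a Gaussian, clamping indices at the borders.
    kernel = gaussian_kernel(sigma)
    r = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def curvature_zero_crossings(signal, sigma):
    # Curvature of a 1-D graph y(x) is y'' / (1 + y'^2)^(3/2); since the
    # denominator is positive, its sign changes exactly where y'' does.
    y = smooth(signal, sigma)
    d2 = [y[i - 1] - 2 * y[i] + y[i + 1] for i in range(1, len(y) - 1)]
    return [i for i in range(1, len(d2)) if d2[i - 1] * d2[i] < 0]

def css_representation(signal, sigmas=(1.0, 2.0, 4.0, 8.0)):
    # The CSS "image": zero-crossing positions at each smoothing scale.
    # Crossings vanish as sigma grows, giving a multi-scale shape code
    # that two signals can then be compared on (e.g. by dynamic programming).
    return {s: curvature_zero_crossings(signal, s) for s in sigmas}
```

A dynamic programming matcher would then align the per-scale crossing lists of two signals; that step is omitted here.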
Abstract:
This work presents the design of a real-time system to model visual objects with the use of self-organising networks. The architecture of the system addresses multiple computer vision tasks such as image segmentation, optimal parameter estimation and object representation. We first develop a framework for building non-rigid shapes using the growth mechanism of the self-organising maps, and then we define an optimal number of nodes without overfitting or underfitting the network based on the knowledge obtained from information-theoretic considerations. We present experimental results for hands and faces, and we quantitatively evaluate the matching capabilities of the proposed method with the topographic product. The proposed method is easily extensible to 3D objects, as it offers similar features for efficient mesh reconstruction.
Abstract:
Growing models have been widely used for clustering or topology learning. Traditionally these models work on stationary environments, grow incrementally and adapt their nodes to a given distribution based on global parameters. In this paper, we present an enhanced unsupervised self-organising network for the modelling of visual objects. We first develop a framework for building non-rigid shapes using the growth mechanism of the self-organising maps, and then we define an optimal number of nodes without overfitting or underfitting the network based on the knowledge obtained from information-theoretic considerations. We present experimental results for hands and we quantitatively evaluate the matching capabilities of the proposed method with the topographic product.
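As a rough illustration of the growth mechanism these self-organising models rely on, the sketch below implements a stripped-down Growing-Neural-Gas-style loop in 2-D: nodes move toward samples, quantization error accumulates, and a new node is periodically inserted next to the node with the largest error. All parameter values and the simplified neighbour handling (no edge ageing, no topology graph) are assumptions for illustration, not the papers' method:

```python
import math
import random

def gng_fit(points, max_nodes=20, steps=2000, eps_b=0.1, eps_n=0.01,
            insert_every=100, seed=0):
    # Stripped-down growing-network sketch: adapt nodes toward the data
    # and grow the network where quantization error concentrates.
    rng = random.Random(seed)
    nodes = [list(rng.choice(points)) for _ in range(2)]
    error = [0.0, 0.0]
    for t in range(1, steps + 1):
        x = rng.choice(points)
        # Rank nodes by squared distance to the sample; s1 is the winner.
        order = sorted(range(len(nodes)),
                       key=lambda i: (nodes[i][0] - x[0]) ** 2 +
                                     (nodes[i][1] - x[1]) ** 2)
        s1, s2 = order[0], order[1]
        error[s1] += ((nodes[s1][0] - x[0]) ** 2 +
                      (nodes[s1][1] - x[1]) ** 2)
        # Move the winner (and, more weakly, the runner-up) toward the sample.
        for s, eps in ((s1, eps_b), (s2, eps_n)):
            nodes[s][0] += eps * (x[0] - nodes[s][0])
            nodes[s][1] += eps * (x[1] - nodes[s][1])
        # Growth step: insert a node halfway between the worst node and
        # the current winner (or runner-up, if the worst node won).
        if t % insert_every == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=lambda i: error[i])
            f = order[1] if q == order[0] else order[0]
            nodes.append([(nodes[q][0] + nodes[f][0]) / 2,
                          (nodes[q][1] + nodes[f][1]) / 2])
            error[q] /= 2
            error.append(error[q])
        # Decay accumulated errors so old mistakes fade.
        error = [e * 0.995 for e in error]
    return nodes
```

Choosing the node count this way (grow until a budget or an error criterion is met) is what the information-theoretic stopping rule in the abstracts replaces with a principled test against over- and underfitting.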
Abstract:
Previous studies have demonstrated that a region in the left ventral occipito-temporal (LvOT) cortex is highly selective to the visual forms of written words and objects relative to closely matched visual stimuli. Here, we investigated why LvOT activation is not higher for reading than picture naming even though written words and pictures of objects have grossly different visual forms. To compare neuronal responses for words and pictures within the same LvOT area, we used functional magnetic resonance imaging adaptation and instructed participants to name target stimuli that followed briefly presented masked primes that were either presented in the same stimulus type as the target (word-word, picture-picture) or a different stimulus type (picture-word, word-picture). We found that activation throughout posterior and anterior parts of LvOT was reduced when the prime had the same name/response as the target irrespective of whether the prime-target relationship was within or between stimulus type. As posterior LvOT is a visual form processing area, and there was no visual form similarity between different stimulus types, we suggest that our results indicate automatic top-down influences from pictures to words and words to pictures. This novel perspective motivates further investigation of the functional properties of this intriguing region.
Abstract:
A method is presented for the visual analysis of objects by computer. It is particularly well suited for opaque objects with smoothly curved surfaces. The method extracts information about the object's surface properties, including measures of its specularity, texture, and regularity. It also aids in determining the object's shape. The application of this method to a simple recognition task, the recognition of fruit, is discussed. The results on a more complex smoothly curved object, a human face, are also considered.
Abstract:
We use a detailed study of the knowledge work around visual representations to draw attention to the multidimensional nature of `objects'. Objects are variously described in the literatures as relatively stable or in flux; as abstract or concrete; and as used within or across practices. We clarify these dimensions, drawing on and extending the literature on boundary objects, and connecting it with work on epistemic and technical objects. In particular, we highlight the epistemic role of objects, using our observations of knowledge work on an architectural design project to show how, in this setting, visual representations are characterized by a `lack' or incompleteness that precipitates unfolding. The conceptual design of a building involves a wide range of technical, social and aesthetic forms of knowledge that need to be developed and aligned. We explore how visual representations are used, and how these are meaningful to different stakeholders, eliciting their distinct contributions. As the project evolves and the drawings change, new issues and needs for knowledge work arise. These objects have an `unfolding ontology' and are constantly in flux, rather than fully formed. We discuss the implications for wider understandings of objects in organizations and for how knowledge work is achieved in practice.
Abstract:
Recent interest in material objects - the things of everyday interaction - has led to articulations of their role in the literature on organizational knowledge and learning. What is missing is a sense of how the use of these 'things' is patterned across both industrial settings and time. This research addresses this gap with a particular emphasis on visual materials. Practices are analysed in two contrasting design settings: a capital goods manufacturer and an architectural firm. Materials are observed to be treated both as frozen, and hence unavailable for change; and as fluid, open and dynamic. In each setting temporal patterns of unfreezing and refreezing are associated with the different types of materials used. The research suggests that these differing patterns or rhythms of visual practice are important in the evolution of knowledge and in structuring social relations for delivery. Hence, to improve their performance practitioners should not only consider the types of media they use, but also reflect on the pace and style of their interactions.
Abstract:
Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the appearance of the distractor ball, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in their responding hand. MEP amplitude was reduced when the ball approached near the responding hand, both when the hand was on the left and the right of the midline. Strikingly, this suppression occurred very early, at 70-80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.
Abstract:
The authors assessed rats' encoding of the appearance or egocentric position of objects within visual scenes containing 3 objects (Experiment 1) or 1 object (Experiment 2A). Experiment 2B assessed encoding of the shape and fill pattern of single objects, and encoding of configurations (object + position, shape + fill). All were assessed by testing rats' ability to discriminate changes from familiar scenes (constant-negative paradigm). Perirhinal cortex lesions impaired encoding of objects and their shape; postrhinal cortex lesions impaired encoding of egocentric position, but the effect may have been partly due to entorhinal involvement. Neither lesioned group was impaired in detecting configural change. In Experiment 1, both lesion groups were impaired in detecting small changes in relative position of the 3 objects, suggesting that more sensitive tests might reveal configural encoding deficits.
Abstract:
We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures. © 2012 Psychonomic Society, Inc.
Abstract:
In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated with the semantic category of the scene, but also with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames in conjunction with the annotation scheme make this dataset different from existing ones. The ViDRILO dataset is released for use as a benchmark for different problems such as multimodal place classification and object recognition, 3D reconstruction or point cloud data compression.
Abstract:
To understand how bees, birds, and fish may use colour vision for food selection and mate choice, we reconstructed views of biologically important objects taking into account the receptor spectral sensitivities. Reflectance spectra of flowers, bird plumage, and fish skin were used to calculate receptor quantum catches. The quantum catches were then coded as the red, green, and blue values of a computer monitor, and flowers, birds, and fish were visualized in animal colours. Calculations were performed for different illumination conditions. To simulate colour constancy, we used a von Kries algorithm, i.e., the receptor quantum catches were scaled so that the colour of the illumination remained invariant. We show that on land this algorithm compensates reasonably well for changes of object appearance caused by natural changes of illumination, while in water failures of von Kries colour constancy are prominent. (C) 2000 John Wiley & Sons, Inc.