841 results for object representations
Abstract:
This paper describes the novel use of agent and cellular neural Hopfield network techniques in the design of a self-contained, object-detecting retina. The agents, which are used to detect features within an image, are trained using the Hebbian method, which has been modified for the cellular architecture. The success of each agent is communicated to adjacent agents in order to verify the detection of an object. Initial work used the method to process bipolar images; this has now been extended to handle grey-scale images. Simulations have demonstrated the success of the method, and further work is planned in which the device is to be implemented in hardware.
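The abstract does not give the modified training rule itself, but the standard Hebbian/Hopfield machinery it builds on can be sketched. The Python example below is a minimal illustration only: the bipolar patterns, network size, and update loop are assumptions, and the cellular modification and inter-agent communication described in the abstract are not reproduced.

```python
import numpy as np

# Illustrative bipolar (+1/-1) feature patterns that a single "agent" might store.
# The patterns and network size are assumptions for this sketch only.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])
n = patterns.shape[1]

# Hebbian rule: W = (1/n) * sum over patterns of outer(p, p), with zero diagonal.
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
W /= n
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Synchronous Hopfield update: the state settles toward a stored pattern."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties toward +1
    return s

# A noisy version of the first pattern is cleaned up by the network.
noisy = patterns[0].copy()
noisy[2] *= -1
print(np.array_equal(recall(noisy), patterns[0]))  # expected: True
```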
Abstract:
This paper presents an enhanced hypothesis verification strategy for 3D object recognition. A new learning methodology is presented which integrates the traditionally dichotomous object-centred and appearance-based representations in computer vision, giving improved hypothesis verification under iconic matching. The "appearance" of a 3D object is learnt using an eigenspace representation obtained as the object is tracked through a scene. The feature representation implicitly models the background and the objects observed, enabling the segmentation of the objects from the background. The method is shown to enhance model-based tracking, particularly in the presence of clutter and occlusion, and to provide a basis for identification. The unified approach is discussed in the context of the traffic surveillance domain. The approach is demonstrated on real-world image sequences and compared to previous (edge-based) iconic evaluation techniques.
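An eigenspace ("appearance") representation of this kind is typically built by applying principal component analysis to image patches of the object collected while it is tracked. The sketch below assumes that standard construction; the patch size, number of components, and reconstruction-error test are illustrative and not the authors' implementation.

```python
import numpy as np

def build_eigenspace(patches, n_components=8):
    """Build an eigenspace (PCA basis) from flattened object patches.

    patches: array of shape (num_views, H, W), e.g. crops of the object
    gathered while it is tracked through the scene.
    """
    X = patches.reshape(patches.shape[0], -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Singular value decomposition gives the principal directions (eigen-images).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def project(patch, mean, basis):
    """Coefficients of a patch in the learnt eigenspace."""
    return basis @ (patch.reshape(-1).astype(float) - mean)

def reconstruction_error(patch, mean, basis):
    """Low error suggests the patch 'looks like' the learnt object."""
    recon = mean + basis.T @ project(patch, mean, basis)
    return np.linalg.norm(patch.reshape(-1) - recon)

# Toy usage with random data standing in for tracked object crops.
rng = np.random.default_rng(0)
views = rng.random((20, 16, 16))
mean, basis = build_eigenspace(views, n_components=5)
print(reconstruction_error(views[0], mean, basis))
```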
Abstract:
A form of three-dimensional X-ray imaging, called Object 3-D, is introduced, where the relevant subject material is represented as discrete ‘objects’. The surface of each such object is derived accurately from the projections of its outline, and of its other discontinuities, in about ten conventional X-ray views, distributed in solid angle. This technique is suitable for many applications, and permits dramatic savings in radiation exposure and in data acquisition and manipulation. It is well matched to user-friendly interactive displays.
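The abstract does not detail the reconstruction step, but deriving a surface from object outlines observed in several views is, in general terms, a shape-from-silhouette problem. The sketch below carves a candidate voxel set against silhouettes under an assumed pinhole projection model; the voxel grid, projection matrices, and silhouettes are all hypothetical, and this is not presented as the Object 3-D algorithm itself.

```python
import numpy as np

def carve(voxel_centers, views):
    """Keep only voxels whose projection falls inside every silhouette.

    voxel_centers: (N, 3) array of candidate 3-D points.
    views: list of (P, silhouette) pairs, where P is a 3x4 projection
           matrix and silhouette is a boolean image (True inside the outline).
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for P, sil in views:
        proj = homog @ P.T                   # project to the image plane
        uv = proj[:, :2] / proj[:, 2:3]      # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (
            (u >= 0) & (u < sil.shape[1]) &
            (v >= 0) & (v < sil.shape[0])
        )
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit                          # carve away voxels outside any outline
    return voxel_centers[keep]
```

In practice one would start from a dense voxel grid covering the subject and calibrated projection matrices for each of the (roughly ten) views, and extract the surface from the surviving voxels.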
Abstract:
We studied how the integration of seen and felt tactile stimulation modulates somatosensory processing, and investigated whether visuotactile integration depends on temporal contiguity of stimulation, and its coherence with a pre-existing body representation. During training, participants viewed a rubber hand or a rubber object that was tapped either synchronously with stimulation of their own hand, or in an uncorrelated fashion. In a subsequent test phase, somatosensory event-related potentials (ERPs) were recorded to tactile stimulation of the left or right hand, to assess how tactile processing was affected by previous visuotactile experience during training. An enhanced somatosensory N140 component was elicited after synchronous, compared with uncorrelated, visuotactile training, irrespective of whether participants viewed a rubber hand or rubber object. This early effect of visuotactile integration on somatosensory processing is interpreted as a candidate electrophysiological correlate of the rubber hand illusion that is determined by temporal contiguity, but not by pre-existing body representations. ERP modulations were observed beyond 200 msec post-stimulus, suggesting an attentional bias induced by visuotactile training. These late modulations were absent when the stimulation of a rubber hand and the participant's own hand was uncorrelated during training, suggesting that pre-existing body representations may affect later stages of tactile processing.
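For readers unfamiliar with how a component such as the N140 is quantified, the sketch below averages stimulus-locked epochs from one electrode and takes the mean voltage in a window around 140 msec. The sampling rate, epoch limits, baseline correction, and synthetic data are assumptions for illustration, not the study's actual analysis pipeline.

```python
import numpy as np

def erp_component_amplitude(eeg, stim_samples, fs=500.0,
                            epoch=(-0.1, 0.4), window=(0.12, 0.16)):
    """Mean amplitude of a somatosensory component (e.g. around the N140).

    eeg: 1-D voltage trace from one electrode (microvolts).
    stim_samples: sample indices of tactile stimulus onsets.
    fs: sampling rate in Hz (assumed value).
    epoch: epoch limits in seconds relative to stimulus onset.
    window: measurement window in seconds (here, around 140 msec).
    """
    pre, post = int(epoch[0] * fs), int(epoch[1] * fs)
    epochs = []
    for s in stim_samples:
        if s + pre >= 0 and s + post <= len(eeg):
            seg = eeg[s + pre:s + post]
            seg = seg - seg[:abs(pre)].mean()   # baseline-correct to the pre-stimulus interval
            epochs.append(seg)
    erp = np.mean(epochs, axis=0)               # average over trials
    t = np.arange(pre, post) / fs
    mask = (t >= window[0]) & (t <= window[1])
    return erp[mask].mean()                     # mean voltage in the component window

# Synthetic usage; amplitudes would be compared across training conditions.
rng = np.random.default_rng(1)
fake_eeg = rng.normal(size=60_000)
onsets = np.arange(1_000, 55_000, 600)
print(erp_component_amplitude(fake_eeg, onsets))
```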