7 results for Object length perception
in CentAUR: Central Archive University of Reading - UK
Abstract:
Classical computer vision methods can only weakly emulate the multi-level parallelism in signal processing and information sharing that takes place across different parts of the primate visual system and enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system, compared to what machine vision systems have achieved to date, motivates scientists and researchers to explore this area further in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for a hierarchical, efficient object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes visual data in a non-linear way, focusing only on regions of interest and hence reducing the time needed to achieve real-time performance. Further, it is suggested to modify the visual cortex model for recognising objects by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
Abstract:
We present a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to simultaneously take pictures of a tracked object and of features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre, and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates of the HMD geometry.
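The abstract above does not give the authors' actual pipeline, but the core idea — recovering intrinsics such as focal length and optic centre from matched 3D points and their 2D positions in a display — can be sketched with a standard Direct Linear Transform on synthetic data. Everything below (the intrinsic values, the identity-rotation camera model, the point cloud) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P from 3D points X (n,3) and
    their 2D projections x (n,2) via the Direct Linear Transform."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]          # homogeneous 3D point
        # Two rows of the DLT system per correspondence (x cross PX = 0).
        A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)            # null-space vector, up to scale
    return P / P[2, 3]                  # fix scale/sign (P[2,3] != 0 here)

# Hypothetical "display camera": made-up intrinsics, identity rotation,
# translation t = (0, 0, 5), so P = K [I | t].
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
t = np.array([0.0, 0.0, 5.0])
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(12, 3))   # synthetic 3D target points
x_h = (K @ (X + t).T).T                    # project into the display
x = x_h[:, :2] / x_h[:, 2:]

P = dlt_projection(X, x)
K_est = P[:, :3]                # with R = I, the left 3x3 block is K (scaled)
K_est = K_est / K_est[2, 2]     # normalise so K_est[2,2] = 1
```

With noise-free synthetic correspondences, `K_est` recovers the assumed focal length (800) and optic centre (320, 240) to numerical precision; a real calibration would instead use many views and a non-linear refinement with distortion terms.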
Abstract:
We report two studies of the distinct effects that a word's age of acquisition (AoA) and frequency have on the mental lexicon. In the first study, a purely statistical analysis, we show that AoA and frequency are related in different ways to the phonological form and imageability of different words. In the second study, three groups of participants (34 seven-year-olds, 30 ten-year-olds, and 17 adults) took part in an auditory lexical decision task, with stimuli varying in AoA, frequency, length, neighbourhood density, and imageability. The principal result is that the influence of these different variables changes as a function of AoA: Neighbourhood density effects are apparent for early and late AoA words, but not for intermediate AoA, whereas imageability effects are apparent for intermediate AoA words but not for early or late AoA. These results are discussed from the perspective that AoA affects a word's representation, but frequency affects processing biases.
Abstract:
Background: The cognitive bases of language impairment in specific language impairment (SLI) and autism spectrum disorders (ASD) were investigated in a novel non-word comparison task which manipulated phonological short-term memory (PSTM) and speech perception, both implicated in poor non-word repetition. Aims: This study aimed to investigate the contributions of PSTM and speech perception to non-word processing, and whether individuals with SLI and with ASD plus language impairment (ALI) show similar or different patterns of deficit in these cognitive processes. Method & Procedures: Three groups of adolescents (aged 14–17 years), 14 with SLI, 16 with ALI, and 17 age- and non-verbal-IQ-matched typically developing (TD) controls, made speeded discriminations between non-word pairs. Stimuli varied in PSTM load (two or four syllables) and speech perception load (mismatches on a word-initial or word-medial segment). Outcomes & Results: Reaction times showed effects of both non-word length and mismatch position, and these factors interacted: four-syllable and word-initial mismatch stimuli resulted in the slowest decisions. In the reaction time data, individuals with language impairment showed the same pattern of performance as those with typical development. A marginal interaction between group and item length was driven by the SLI and ALI groups being less accurate with long items than with short ones, a difference not found in the TD group. Conclusions & Implications: Non-word discrimination suggests that there are both similarities and differences between adolescents with SLI and ALI and their TD peers. Reaction times appear to be affected by increasing PSTM and speech perception loads in a similar way. However, there was some, albeit weaker, evidence that adolescents with SLI and ALI are less accurate than TD individuals, with both groups showing an effect of PSTM load. This may indicate that, at some level, the processing substrate supporting both PSTM and speech perception is intact in adolescents with SLI and ALI, but that both groups may have impaired access to PSTM resources.
Abstract:
Perception and action are tightly linked: objects may be perceived not only in terms of visual features, but also in terms of possibilities for action. Previous studies showed that when a centrally located object has a salient graspable feature (e.g., a handle), it facilitates motor responses corresponding with the feature's position. However, such so-called affordance effects have been criticized as resulting from spatial compatibility effects, due to the visual asymmetry created by the graspable feature, irrespective of any affordances. To dissociate affordance from spatial compatibility effects, we asked participants to perform a simple reaction-time task with typically graspable and non-graspable objects that share similar visual features (e.g., lollipop and stop sign). Responses were measured either using electromyography (EMG) on proximal arm muscles during reaching-like movements, or with finger key-presses. In both EMG and button-press measurements, participants responded faster when the object was either presented in the same location as the responding hand or was graspable, yielding significant and independent spatial compatibility and affordance effects, but no interaction. Furthermore, while the spatial compatibility effect was present from the earliest stages of movement preparation and throughout the different stages of movement execution, the affordance effect was restricted to the early stages of movement execution. Finally, we tested a small group of unilateral arm amputees using EMG and found a residual spatial compatibility effect but no affordance effect, suggesting that spatial compatibility effects do not necessarily rely on individuals' available affordances. Our results show a dissociation between affordance and spatial compatibility effects, and suggest that rather than evoking the specific motor action most suitable for interacting with the viewed object, graspable objects prompt the motor system in a general, body-part-independent fashion.
Abstract:
Does language modulate perception and categorisation of everyday objects? Here, we approach this question from the perspective of grammatical gender in bilinguals. We tested Spanish–English bilinguals and control native speakers of English in a semantic categorisation task on triplets of pictures in an all-in-English context while measuring event-related brain potentials (ERPs). Participants were asked to press one button when the third picture of a triplet belonged to the same semantic category as the first two, and another button when it belonged to a different category. Unbeknownst to them, in half of the trials the Spanish name of the third picture had the same grammatical gender as those of the first two, and the opposite gender in the other half. We found no behavioural priming effect of either semantic relatedness or gender consistency. In contrast, ERPs revealed not only the expected semantic priming effect in both groups, but also a negative modulation by gender inconsistency in the Spanish–English bilinguals exclusively. These results provide evidence for spontaneous and unconscious access to grammatical gender in participants functioning in a context requiring no access to such information, thereby providing support for linguistic relativity effects in the grammatical domain.
Abstract:
Perception is linked to action via two routes: a direct route based on affordance information in the environment and an indirect route based on semantic knowledge about objects. The present study explored the factors modulating the recruitment of the two routes, in particular which factors affect the selection of paired objects. In Experiment 1, we presented real objects among semantically related or unrelated distracters. Participants had to select two objects that can interact. The presence of distracters affected selection times, but the semantic relation between the objects and the distracters did not. Furthermore, participants first selected the active object (e.g. teaspoon) with their right hand, followed by the passive object (e.g. mug), often with their left hand. In Experiment 2, we presented pictures of the same objects with no hand grip, a congruent hand grip, or an incongruent hand grip. Participants had to decide whether the two objects can interact. Action decisions were faster when the presentation of the active object preceded that of the passive object, and when the grip was congruent. Interestingly, participants were slower when the objects were semantically but not functionally related; this effect increased with congruently gripped objects. Our data showed that action decisions in the presence of strong affordance cues (real objects, pictures of congruently gripped objects) relied on sensory-motor representations, supporting the direct route from perception to action that bypasses semantic knowledge. However, in the case of weak affordance cues (pictures), semantic information interfered with action decisions, indicating that semantic knowledge impacts action decisions. The data support the dual-route account of perception-to-action.