59 results for Object-Z
in CentAUR: Central Archive University of Reading - UK
Abstract:
This workshop paper reports recent developments to a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data, and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, to allow the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.
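For readers unfamiliar with how a PCA-derived deformable model of this kind is constructed, the following minimal sketch shows the general idea: sample parameter vectors are centred, a principal-component basis is extracted, and a new vehicle shape is instantiated from six deformation parameters. The 20-dimensional sample vectors and all numbers are hypothetical stand-ins, not the paper's actual vehicle parameterisation.

```python
# Minimal sketch: building a 6-parameter deformable vehicle model via PCA.
# The 20-dimensional "vehicle parameter" vectors are hypothetical stand-ins
# for the parameterised vehicle model fitted to sample data.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=(50, 20))   # 50 example vehicles, 20 raw shape parameters each

mean_shape = samples.mean(axis=0)
centred = samples - mean_shape

# Principal Component Analysis via SVD of the centred data matrix.
_, _, components = np.linalg.svd(centred, full_matrices=False)

n_modes = 6                           # keep the 6 strongest modes of variation
basis = components[:n_modes]          # (6, 20) deformation basis

def instantiate(deformation_params):
    """Reconstruct a vehicle shape from 6 deformation parameters."""
    return mean_shape + deformation_params @ basis

# An initial hypothesis can start from the mean shape (all-zero parameter vector)
# and be adapted by adjusting the 6 parameters to fit the image evidence.
hypothesis = instantiate(np.zeros(n_modes))
```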
Abstract:
Classical computer vision methods can only weakly emulate the multi-level parallelism in signal processing and information sharing that takes place in different parts of the primate visual system and enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system, compared with what machine vision systems have achieved to date, motivates researchers to explore this area further in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for an efficient hierarchical object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes visual data in a non-linear way, focusing only on regions of interest and thereby reducing processing time towards real-time performance. Further, it is suggested that the visual cortex model for object recognition be modified by adding non-linearities in the ventral path, consistent with earlier findings reported in the neurophysiology of vision.
Abstract:
Recent work has suggested that for some tasks, graphical displays which visually integrate information from more than one source offer an advantage over more traditional displays which present the same information in a separated format. Three experiments are described which investigate this claim using a task which requires subjects to control a dynamic system. In the first experiment, the integrated display is compared to two separated displays, one an animated mimic diagram, the other an alphanumeric display. The integrated display is shown to support better performance in a control task, but experiment 2 shows that part of this advantage may be due to its analogue nature. Experiment 3 considers performance on a fault detection task, and shows no difference between the integrated and separated displays. The paper concludes that previous claims made for integrated displays may not generalize from monitoring to control tasks.
Abstract:
Halberda (2003) demonstrated that 17-month-old infants, but not 14- or 16-month-olds, use a strategy known as mutual exclusivity (ME) to identify the meanings of new words. When 17-month-olds were presented with a novel word in an intermodal preferential looking task, they preferentially fixated a novel object over an object for which they already had a name. We explored whether the development of this word-learning strategy is driven by children's experience of hearing only one name for each referent in their environment by comparing the behavior of infants from monolingual and bilingual homes. Monolingual infants aged 17–22 months showed clear evidence of using an ME strategy, in that they preferentially fixated the novel object when they were asked to "look at the dax." Bilingual infants of the same age and vocabulary size failed to show a similar pattern of behavior. We suggest that children who are raised with more than one language fail to develop an ME strategy in parallel with monolingual infants because development of the bias is a consequence of the monolingual child's everyday experiences with words.
Abstract:
Ten mothers were observed prospectively, interacting with their infants aged 0;10 in two contexts (picture description and noun description). Maternal communicative behaviours were coded for volubility, gestural production and labelling style. Verbal labelling events were categorized into three exclusive categories: label only; label plus deictic gesture; label plus iconic gesture. We evaluated the predictive relations between maternal communicative style and children's subsequent acquisition of ten target nouns. Strong relations were observed between maternal communicative style and children's acquisition of the target nouns. Further, even controlling for maternal volubility and maternal labelling, maternal use of iconic gestures predicted the timing of acquisition of nouns in comprehension. These results support the proposition that maternal gestural input facilitates linguistic development, and suggest that such facilitation may be a function of gesture type.
Abstract:
In recent years, a large number of papers have reported the response of the cusp to solar wind variations under conditions of northward or southward Interplanetary Magnetic Field (IMF) Z-component (BZ). These studies have shown the importance of both temporal and spatial factors in determining the extent and morphology of the cusp and the changes in its location, connected to variations in the reconnection geometry. Here we present a comparative study of the cusp, focusing on an interval characterised by a series of rapid reversals in the BZ-dominated IMF, based on observations from space-borne and ground-based instrumentation. During this interval, from 08:00 to 12:00 UT on 12 February 2003, the IMF BZ component underwent four reversals, remaining for around 30 min in each orientation. The Cluster spacecraft were, at the time, on an outbound trajectory through the Northern Hemisphere magnetosphere, whilst the mainland VHF and Svalbard (ESR) radars of the EISCAT facility were operating in support of the Cluster mission. Both Cluster and the EISCAT radars were, on occasion during the interval, observing the cusp region. The series of IMF reversals resulted in a sequence of poleward and equatorward motions of the cusp; consequently Cluster crossed the high altitude cusp twice before finally exiting the dayside magnetopause, both times under conditions of northward IMF BZ. The first magnetospheric cusp encounter, by all four Cluster spacecraft, showed reverse ion dispersion typical of lobe reconnection; subsequently, Cluster spacecraft 1 and 3 (only) crossed the cusp for a second time. We suggest that, during this second cusp crossing, these two spacecraft were likely to have been on newly closed field lines, which were first reconnected (opened) at low latitudes and later reconnected again (re-closed) poleward of the northern cusp.
Abstract:
A common method for testing preference for objects is to determine which of a pair of objects is approached first in a paired-choice paradigm. In comparison, many studies of preference for environmental enrichment (EE) devices have used paradigms in which total time spent with each of a pair of objects is used to determine preference. While each of these paradigms gives a specific measure of the preference for one object in comparison to another, neither method allows comparisons between multiple objects simultaneously. Since it is possible that several EE objects would be placed in a cage together to improve animal welfare, it is important to determine measures for rats' preferences in conditions that mimic this potential home cage environment. While it would be predicted that each type of measure would produce similar rankings of objects, this has never been tested empirically. In this study, we compared two paradigms: EE objects were either presented in pairs (paired-choice comparison) or four objects were presented simultaneously (simultaneous presentation comparison). We used frequency of first interaction and time spent with each object to rank the objects in the paired-choice experiment, and time spent with each object to rank the objects in the simultaneous presentation experiment. We also considered the behaviours elicited by the objects to determine if these might be contributing to object preference. We demonstrated that object ranking based on time spent with objects from the paired-choice experiment predicted object ranking in the simultaneous presentation experiment. Additionally, we confirmed that behaviours elicited were an important determinant of time spent with an object. This provides convergent evidence that both paired choice and simultaneous comparisons provide valid measures of preference for EE objects in rats.
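As a rough illustration of how the two paradigms' rankings might be compared, the sketch below ranks four hypothetical objects by time spent with them under each paradigm and quantifies their agreement with a Spearman rank correlation; the object names, durations and use of scipy are illustrative assumptions, not the study's data or analysis.

```python
# Minimal sketch: ranking enrichment objects by time spent with them and
# comparing paired-choice and simultaneous-presentation rankings.
# All durations are hypothetical illustrations, not the study's data.
import numpy as np
from scipy.stats import spearmanr

objects = ["rope", "ball", "tube", "block"]

# Mean time (s) spent with each object, aggregated over paired-choice trials.
paired_choice_time = np.array([120.0, 45.0, 80.0, 30.0])

# Mean time (s) spent with each object when all four are presented together.
simultaneous_time = np.array([150.0, 60.0, 90.0, 25.0])

# Rank objects from most to least preferred under each paradigm.
paired_rank = np.argsort(-paired_choice_time)
simul_rank = np.argsort(-simultaneous_time)
print("paired-choice ranking:", [objects[i] for i in paired_rank])
print("simultaneous ranking: ", [objects[i] for i in simul_rank])

# Agreement between the two paradigms as a rank correlation.
rho, p_value = spearmanr(paired_choice_time, simultaneous_time)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```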
Abstract:
Previous functional imaging studies have shown that facilitated processing of a visual object on repeated, relative to initial, presentation (i.e., repetition priming) is associated with reductions in neural activity in multiple regions, including fusiform/lateral occipital cortex. Moreover, activity reductions have been found, at diminished levels, when a different exemplar of an object is presented on repetition. In one previous study, the magnitude of diminished priming across exemplars was greater in the right relative to the left fusiform, suggesting greater exemplar specificity in the right. Another previous study, however, observed fusiform lateralization modulated by object viewpoint, but not object exemplar. The present fMRI study sought to determine whether the result of differential fusiform responses for perceptually different exemplars could be replicated. Furthermore, the role of the left fusiform cortex in object recognition was investigated via the inclusion of a lexical/semantic manipulation. Right fusiform cortex showed a significantly greater effect of exemplar change than left fusiform, replicating the previous result of exemplar-specific fusiform lateralization. Right fusiform and lateral occipital cortex were not differentially engaged by the lexical/semantic manipulation, suggesting that their role in visual object recognition is predominantly in the visual discrimination of specific objects. Activation in left fusiform cortex, but not left lateral occipital cortex, was modulated by both exemplar change and lexical/semantic manipulation, with further analysis suggesting a posterior-to-anterior progression between regions involved in processing visuoperceptual and lexical/semantic information about objects. The results are consistent with the view that the right fusiform plays a greater role in processing specific visual form information about objects, whereas the left fusiform is also involved in lexical/semantic processing.
Abstract:
This paper addresses the requirements for a Workflow Management System that is intended to automate the production and distribution chain for cross-media content, which is by nature multi-partner and multi-site. It advocates requirements for ontology-based object lifecycle tracking within workflow integration by identifying the various types of interfaces, object life cycles and workflow interaction environments within the AXMEDIS Framework.
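As a loose illustration of object lifecycle tracking in a multi-partner workflow, the sketch below models a content object whose state changes are recorded as it moves between partners; the states, transitions and identifiers are illustrative assumptions, not the actual AXMEDIS object lifecycle.

```python
# Minimal sketch: tracking a content object's lifecycle across a multi-partner
# workflow as a simple state machine. The states and transitions are
# illustrative assumptions, not the actual AXMEDIS object lifecycle.
from dataclasses import dataclass, field

ALLOWED_TRANSITIONS = {
    "created":       {"in_production"},
    "in_production": {"under_review"},
    "under_review":  {"in_production", "approved"},
    "approved":      {"distributed"},
    "distributed":   set(),
}

@dataclass
class ContentObject:
    object_id: str
    state: str = "created"
    history: list = field(default_factory=list)

    def transition(self, new_state: str, partner: str) -> None:
        """Record a lifecycle transition performed by a workflow partner."""
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not permitted")
        self.history.append((self.state, new_state, partner))
        self.state = new_state

obj = ContentObject("demo-object")
obj.transition("in_production", partner="site-A")
obj.transition("under_review", partner="site-B")
print(obj.state, obj.history)
```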
Abstract:
There has been a clear lack of common data exchange semantics for inter-organisational workflow management systems, where research has mainly focused on technical issues rather than language constructs. This paper presents the neutral data exchange semantics required for workflow integration within the AXMEDIS framework and presents the mechanism for object discovery from the object repository where little or no knowledge about the object is available. The paper also presents a workflow-independent integration architecture within the AXMEDIS Framework.
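A minimal sketch of the kind of object discovery described here, where only fragmentary knowledge of the target is available, might score repository entries by metadata overlap with a partial query; the metadata fields and scoring rule below are assumptions for illustration, not the AXMEDIS discovery mechanism.

```python
# Minimal sketch: discovering candidate objects in a repository when little is
# known about the target, by scoring metadata overlap with a partial query.
# The metadata fields and scoring rule are illustrative assumptions.
repository = [
    {"id": "obj-001", "type": "video", "keywords": {"news", "sports", "2006"}},
    {"id": "obj-002", "type": "audio", "keywords": {"interview", "news"}},
    {"id": "obj-003", "type": "video", "keywords": {"documentary", "nature"}},
]

def discover(partial_query, objects):
    """Rank objects by how many known attributes of the query they match."""
    def score(obj):
        s = 0
        if "type" in partial_query and obj["type"] == partial_query["type"]:
            s += 1
        s += len(obj["keywords"] & partial_query.get("keywords", set()))
        return s
    return sorted(objects, key=score, reverse=True)

# Only fragmentary knowledge of the target object is available.
candidates = discover({"type": "video", "keywords": {"news"}}, repository)
print([c["id"] for c in candidates])
```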
Abstract:
A technique is presented for locating and tracking objects in cluttered environments. Agents are randomly distributed across the image, and subsequently grouped around targets. Each agent uses a weightless neural network and a histogram intersection technique to score its location. The system has been used to locate and track a head in 320×240 resolution video at up to 15 fps.
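As a minimal sketch of the histogram intersection scoring an agent might perform, the code below builds a normalised colour histogram of a target region and scores candidate patches by the sum of element-wise minima of the two histograms; the patch extraction, bin counts and random frame are simplified assumptions, and the weightless neural network component is not shown.

```python
# Minimal sketch: scoring an agent's location by histogram intersection between
# a normalised colour histogram of the patch under the agent and a target model.
# Patch extraction and bin counts are simplified assumptions.
import numpy as np

def colour_histogram(patch, bins=8):
    """Normalised joint histogram over the three colour channels of a patch."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist / hist.sum()

def intersection_score(hist_a, hist_b):
    """Histogram intersection: sum of element-wise minima (1.0 = identical)."""
    return np.minimum(hist_a, hist_b).sum()

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(240, 320, 3))          # stand-in for a 320x240 video frame

target_model = colour_histogram(frame[100:140, 150:190])  # model built from a known head region

def agent_score(frame, row, col, half=20):
    """Score the patch centred on an agent's current (row, col) position."""
    patch = frame[row - half:row + half, col - half:col + half]
    return intersection_score(colour_histogram(patch), target_model)

print(agent_score(frame, 120, 170))   # patch over the target region: high score expected
print(agent_score(frame, 60, 60))     # patch elsewhere: typically a lower score
```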