347 results for object recognition
Abstract:
This paper presents Sequence Matching Across Route Traversals (SMART), a generally applicable sequence-based place recognition algorithm. SMART provides invariance to changes in illumination and vehicle speed while also providing moderate pose invariance and robustness to environmental aliasing. We evaluate SMART on vehicles travelling at highly variable speeds in two challenging environments: first, on an all-terrain vehicle on an off-road forest track, and second, using a passenger car traversing an urban environment across day and night. We provide comparative results to the current state-of-the-art SeqSLAM algorithm and investigate the effects of altering SMART’s image matching parameters. Additionally, we conduct an extensive study of the relationship between image sequence length and SMART’s matching performance. Our results show viable place recognition performance in both environments with short 10-metre sequences, and up to 96% recall at 100% precision across extreme day-night cycles when longer image sequences are used.
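As a rough illustration of the sequence-matching idea behind SMART and SeqSLAM, the sketch below accumulates image differences along candidate constant-velocity alignments through a reference/query difference matrix and keeps the lowest-cost match. The descriptor choice, the assumption that frames are already sampled at constant spatial intervals (SMART's route to speed invariance), and the velocity search range are illustrative assumptions, not the published parameters.

    import numpy as np

    def match_sequence(ref_desc, query_desc, seq_len=10, v_range=(0.8, 1.25), v_steps=5):
        """Sequence-based place matching sketch. ref_desc (N, D) and
        query_desc (M, D) are per-place image descriptors, assumed sampled
        at constant spatial intervals. Returns the best-matching reference
        index for the final query frame and its accumulated difference."""
        # Pairwise difference matrix between every reference and query descriptor.
        diff = np.linalg.norm(ref_desc[:, None, :] - query_desc[None, :, :], axis=2)

        M = query_desc.shape[0]
        q_idx = np.arange(M - seq_len, M)          # last seq_len query frames
        best_score, best_ref = np.inf, -1
        for r_end in range(seq_len, ref_desc.shape[0]):
            # Try several constant "velocities" (slopes) through the matrix.
            for v in np.linspace(v_range[0], v_range[1], v_steps):
                r_idx = np.round(r_end - v * (M - 1 - q_idx)).astype(int)
                r_idx = np.clip(r_idx, 0, ref_desc.shape[0] - 1)
                score = diff[r_idx, q_idx].sum()   # cost along this alignment
                if score < best_score:
                    best_score, best_ref = score, r_end
        return best_ref, best_score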
Abstract:
Recognising that charitable behaviour can be motivated by public recognition and emotional satisfaction, not-for-profit organisations have developed strategies that leverage self-interest over altruism by enabling individuals to donate conspicuously. Initially developed as novel marketing programs to increase donation income, such conspicuous tokens of recognition are being recognised as important value propositions to nurture donor relationships. Despite this, there is little empirical evidence identifying when donations can be increased through conspicuous recognition. Furthermore, social media’s growing popularity for self-expression, as well as the increasing use of technology in donor relationship management strategies, makes an examination of virtual conspicuous tokens of recognition in relation to what value donors seek particularly insightful. Therefore, this research examined the impact of experiential donor value and virtual conspicuous tokens of recognition on blood donor intentions. Using online survey data from 186 Australian blood donors, results show that emotional value is in fact a stronger predictor of intentions to donate blood than altruistic value, while social value is the strongest predictor of intentions if recognition is provided. Clear linkages between dimensions of donor value (altruistic, emotional and social) and conspicuous donation behaviour (CDB) were identified. The findings provide valuable insights into the use of conspicuous donation tokens of recognition on social media, and contribute to our understanding of the under-researched areas of donor value and CDB.
Abstract:
We investigated memories of room-sized spatial layouts learned by sequentially or simultaneously viewing objects from a stationary position. In three experiments, sequential viewing (one or two objects at a time) yielded subsequent memory performance that was equivalent or superior to simultaneous viewing of all objects, even though sequential viewing lacked direct access to the entire layout. This finding was replicated by replacing sequential viewing with directed viewing in which all objects were presented simultaneously and participants’ attention was externally focused on each object sequentially, indicating that the advantage of sequential viewing over simultaneous viewing may have originated from focal attention to individual object locations. These results suggest that memory representation of object-to-object relations can be constructed efficiently by encoding each object location separately, when those locations are defined within a single spatial reference system. These findings highlight the importance of considering object presentation procedures when studying spatial learning mechanisms.
Abstract:
The present study was conducted to investigate whether observers are equally prone to overlooking all kinds of visual events in change blindness. Capitalizing on the finding from visual search studies that the abrupt appearance of an object effectively captures observers' attention, the onset of a new object and the offset of an existing object were contrasted regarding their detectability when they occurred in a naturalistic scene. In an experiment, participants viewed a series of photograph pairs in which layouts of seven or eight objects were depicted. One object either appeared in or disappeared from the layout, and participants tried to detect this change. Results showed that onsets were detected more quickly than offsets, while both were detected with equivalent accuracy. This suggests that the primacy of onset over offset is a robust phenomenon that likely makes onsets more resistant to change blindness under natural viewing conditions.
Abstract:
The present study investigated how object locations learned separately are integrated and represented as a single spatial layout in memory. Two experiments were conducted in which participants learned a room-sized spatial layout that was divided into two sets of five objects. Results suggested that integration across sets was performed efficiently when it was done during initial encoding of the environment, but entailed a cost in accuracy when it was attempted at the time of memory retrieval. These findings suggest that, once formed, spatial representations in memory generally remain independent, and integrating them into a single representation requires additional cognitive processes.
Abstract:
language (such as C++ and Java). The model used allows watermarks to be inserted at three “orthogonal” levels. At the first level, watermarks are injected into objects. The second-level watermarking is used to select proper variants of the source code. The third level uses a transition function that can be used to generate copies with different functionalities. Generic watermarking schemes are presented and their security is discussed.
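A minimal sketch of the second ("variant selection") level described above: each watermark bit chooses between semantically equivalent source fragments, so the emitted combination of variants encodes the mark. The variant table and the extraction convention are hypothetical, not taken from the paper.

    # Each site offers two semantically equivalent forms; bit b selects pair[b].
    EQUIVALENT_VARIANTS = [
        ("i = i + 1", "i += 1"),
        ("x = x * 2", "x = x << 1"),
        ("if not done:", "if done == False:"),
    ]

    def embed(bits):
        """Pick one variant per site according to the watermark bits."""
        return [pair[b] for pair, b in zip(EQUIVALENT_VARIANTS, bits)]

    def extract(lines):
        """Recover the watermark by checking which variant appears at each site."""
        return [pair.index(line) for pair, line in zip(EQUIVALENT_VARIANTS, lines)]

    assert extract(embed([1, 0, 1])) == [1, 0, 1]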
Abstract:
This paper presents a novel place recognition algorithm inspired by the recent discovery of overlapping and multi-scale spatial maps in the rodent brain. We mimic this hierarchical framework by training arrays of Support Vector Machines to recognize places at multiple spatial scales. Place match hypotheses are then cross-validated across all spatial scales, a process which combines the spatial specificity of the finest spatial map with the consensus provided by broader mapping scales. Experiments on three real-world datasets including a large robotics benchmark demonstrate that mapping over multiple scales uniformly improves place recognition performance over a single scale approach without sacrificing localization accuracy. We present analysis that illustrates how matching over multiple scales leads to better place recognition performance and discuss several promising areas for future investigation.
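A minimal sketch of the cross-scale validation idea described above, assuming fine place labels are sequential integers along a route so that coarser regions can be formed by integer division: one classifier is trained per spatial scale, and a fine-scale hypothesis is kept only when every coarser scale agrees. The grouping rule and scale factors are illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVC

    def train_scales(X, fine_labels, scales=(1, 4, 16)):
        """Train one SVM per scale; scale s groups every s consecutive fine
        places into one coarse place (fine_labels: numpy integer array)."""
        return {s: SVC().fit(X, fine_labels // s) for s in scales}

    def localize(models, x, scales=(1, 4, 16)):
        """Return the fine-scale place if all coarser scales are consistent."""
        fine = models[scales[0]].predict(x.reshape(1, -1))[0]
        for s in scales[1:]:
            if models[s].predict(x.reshape(1, -1))[0] != fine // s:
                return None  # scales disagree: reject the hypothesis
        return fine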
Abstract:
It is well established that the time to name target objects can be influenced by the presence of categorically related versus unrelated distractor items. A variety of paradigms have been developed to determine the level at which this semantic interference effect occurs in the speech production system. In this study, we investigated one of these tasks, the postcue naming paradigm, for the first time with fMRI. Previous behavioural studies using this paradigm have produced conflicting interpretations of the processing level at which the semantic interference effect takes place, ranging from pre- to post-lexical. Here we used fMRI with a sparse, event-related design to adjudicate between these competing explanations. We replicated the behavioural postcue naming effect for categorically related target/distractor pairs, and observed a corresponding increase in neuronal activation in the right lingual and fusiform gyri, regions previously associated with visual object processing and colour-form integration. We interpret these findings as being consistent with an account that places the semantic interference effect in the postcue paradigm at a processing level involving integration of object attributes in short-term memory.
Abstract:
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
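For reference, the effective connectivity method mentioned above, dynamic causal modelling, is conventionally formulated with the bilinear neuronal state equation below; the notation follows the standard DCM literature (Friston et al., 2003) rather than anything specific to this study.

    % x - neuronal states of the modelled regions
    % u - experimental inputs (e.g., stimulus conditions)
    % A - intrinsic (fixed) connectivity between regions
    % B^{(j)} - modulation of connections by input j
    % C - direct driving influence of inputs on regions
    \[
      \dot{x} = \Bigl(A + \sum_{j} u_j \, B^{(j)}\Bigr)\, x + C\,u
    \]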
Abstract:
RNA polymerase II (pol II) transcription termination requires co‐transcriptional recognition of a functional polyadenylation signal, but the molecular mechanisms that transduce this signal to pol II remain unclear. We show that Yhh1p/Cft1p, the yeast homologue of the mammalian AAUAAA interacting protein CPSF 160, is an RNA‐binding protein and provide evidence that it participates in poly(A) site recognition. Interestingly, RNA binding is mediated by a central domain composed of predicted β‐propeller‐forming repeats, which occurs in proteins of diverse cellular functions. We also found that Yhh1p/Cft1p bound specifically to the phosphorylated C‐terminal domain (CTD) of pol II in vitro and in a two‐hybrid test in vivo. Furthermore, transcriptional run‐on analysis demonstrated that yhh1 mutants were defective in transcription termination, suggesting that Yhh1p/Cft1p functions in the coupling of transcription and 3′‐end formation. We propose that direct interactions of Yhh1p/Cft1p with both the RNA transcript and the CTD are required to communicate poly(A) site recognition to elongating pol II to initiate transcription termination.
Abstract:
Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D location of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.
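A minimal sketch of a modal-memory-style object store in the spirit described above: sightings enter short-term memory, repeated sightings at one place consolidate into long-term memory, and queries return the most likely location of an object. The consolidation rule, lifetimes, and interface are illustrative assumptions, not the paper's actual model.

    import time
    from collections import defaultdict

    class ObjectMemory:
        """Short-term observations consolidate into long-term location counts."""

        def __init__(self, consolidation_count=3, st_lifetime=300.0):
            self.consolidation_count = consolidation_count
            self.st_lifetime = st_lifetime        # seconds before short-term entries expire
            self.short_term = defaultdict(list)   # object -> [(location, timestamp), ...]
            self.long_term = defaultdict(lambda: defaultdict(int))  # object -> {location: count}

        def observe(self, obj, location, now=None):
            now = now if now is not None else time.time()
            # Forget short-term sightings that have expired, then record the new one.
            st = [(loc, t) for loc, t in self.short_term[obj] if now - t < self.st_lifetime]
            st.append((location, now))
            # Consolidate into long-term memory after repeated sightings at one place.
            if sum(1 for loc, _ in st if loc == location) >= self.consolidation_count:
                self.long_term[obj][location] += 1
            self.short_term[obj] = st

        def suggest(self, obj):
            """Most likely location: prefer long-term counts, else the latest sighting."""
            if self.long_term[obj]:
                return max(self.long_term[obj], key=self.long_term[obj].get)
            return self.short_term[obj][-1][0] if self.short_term[obj] else None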
Abstract:
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.
Abstract:
The solutions proposed in this thesis improve gait recognition performance in practical scenarios, furthering the adoption of gait recognition in real-world security and forensic applications that require identifying humans at a distance. Pioneering work has been conducted on frontal gait recognition using depth images to allow gait to be integrated with biometric walkthrough portals. The effects of challenging conditions for gait, including clothing, carrying goods, and viewpoint, have been explored. Enhanced approaches are proposed for the segmentation, feature extraction, feature optimisation and classification elements, and state-of-the-art recognition performance has been achieved. A frontal depth gait database has been developed and made available to the research community for further investigation. Solutions are explored in 2D and 3D domains using multiple image sources, and both domain-specific and modality-independent gait features are proposed.
Abstract:
This thesis investigates face recognition in video under the presence of large pose variations. It proposes a solution that performs simultaneous detection of facial landmarks and head poses across large pose variations, employs discriminative modelling of feature distributions of faces with varying poses, and applies fusion of multiple classifiers to pose-mismatch recognition. Experiments on several benchmark datasets have demonstrated that improved performance is achieved using the proposed solution.
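A minimal sketch of score-level fusion of multiple pose-specific classifiers, one common way to realise the classifier-fusion idea mentioned above; the classifier interface, weights, and score convention are assumptions, not the thesis's actual design.

    import numpy as np

    def fuse_and_identify(probe, classifiers, weights):
        """classifiers: callables mapping a probe image to an (N,) similarity
        vector over N enrolled identities; weights: per-classifier reliabilities
        (e.g., estimated on a validation set). Returns the fused best identity."""
        # Weighted sum of per-classifier score vectors, then pick the top identity.
        scores = sum(w * clf(probe) for clf, w in zip(classifiers, weights))
        return int(np.argmax(scores)), scores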