150 results for Recontextualised found object
Abstract:
This practice-led research project explores the possibilities for restaging and reconfiguring contemporary art installations in multiple, different locations. By exploring ideas and art that demonstrate a kaleidoscopic approach to creative practice, this project examines how analysing artists' particular processes can achieve new understandings and experiences of installation art. It does so through reflection on, and analysis of, creative works made throughout the research, and through a critical examination of contemporary art practices.
Abstract:
A robust visual tracking system requires an object appearance model that is able to handle occlusion, pose, and illumination variations in the video stream. This can be difficult to accomplish when the model is trained using only a single image. In this paper, we first propose a tracking approach based on affine subspaces (constructed from several images) which are able to accommodate the abovementioned variations. We use affine subspaces not only to represent the object, but also the candidate areas that the object may occupy. We furthermore propose a novel approach to measure affine subspace-to-subspace distance via the use of non-Euclidean geometry of Grassmann manifolds. The tracking problem is then considered as an inference task in a Markov Chain Monte Carlo framework via particle filtering. Quantitative evaluation on challenging video sequences indicates that the proposed approach obtains considerably better performance than several recent state-of-the-art methods such as Tracking-Learning-Detection and MILtrack.
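The abstract does not give the distance computation itself; a minimal sketch of one standard way to measure a Grassmann (subspace-to-subspace) distance, via principal angles obtained from an SVD, might look like the following (the function names and the image-stacking step are assumptions, not the paper's code):

```python
import numpy as np

def subspace_basis(images):
    """Stack flattened images as columns and orthonormalise them (QR),
    giving a basis for the subspace the images span."""
    M = np.column_stack([np.asarray(im, dtype=float).ravel() for im in images])
    Q, _ = np.linalg.qr(M)
    return Q

def grassmann_distance(A, B):
    """Geodesic distance between the subspaces spanned by the orthonormal
    columns of A and B, via principal angles theta_i = arccos(sigma_i)."""
    sigma = np.linalg.svd(A.T @ B, compute_uv=False)
    theta = np.arccos(np.clip(sigma, -1.0, 1.0))
    return float(np.linalg.norm(theta))
```

A tracker along these lines would score each candidate region by the distance between its subspace and the object's subspace inside the particle filter; the paper's affine-offset handling and MCMC machinery are omitted here.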
Abstract:
Robots currently recognise and use objects through algorithms that are hand-coded or specifically trained. Such robots can operate in known, structured environments but cannot learn to recognise or use novel objects as they appear. This thesis demonstrates that a robot can develop meaningful object representations by learning the fundamental relationship between action and change in sensory state; the robot learns sensorimotor coordination. Methods based on Markov Decision Processes are experimentally validated on a mobile robot capable of gripping objects, and it is found that object recognition and manipulation can be learnt as an emergent property of sensorimotor coordination.
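The thesis's specific sensorimotor methods are not detailed in the abstract; as a generic, hypothetical illustration of the Markov Decision Process machinery it builds on, a standard value-iteration sketch in NumPy could read:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve a small MDP by value iteration.
    P: transition probabilities, shape (A, S, S); R: rewards, shape (A, S).
    Returns the optimal state values and a greedy policy (both arrays over S)."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)    # Q[a, s] = R[a, s] + gamma * E[V(next state)]
        V_new = Q.max(axis=0)      # greedy backup over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

In a sensorimotor setting the states would be sensory readings and the actions motor commands; the transition model itself would be learnt from experience, which is the part this sketch leaves out.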
Abstract:
This paper presents an object-oriented world model for the road traffic environment of autonomous (driver-less) city vehicles. The developed World Model is a software component of the autonomous vehicle's control system, which represents the vehicle's view of its road environment. Regardless of whether the information is known a priori, obtained through on-board sensors, or obtained through communication, the World Model stores and updates information in real-time, notifies the decision-making subsystem about relevant events, and provides access to its stored information. The design is based on software design patterns, and its application programming interface provides both asynchronous and synchronous access to its information. Experimental results from both a 3D simulation and real-world experiments show that the approach is applicable and real-time capable.
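The World Model's actual API is not given here; a minimal, hypothetical sketch of a store that offers synchronous reads and writes plus listener notification of relevant events, as the abstract describes, might look like:

```python
import threading
from typing import Any, Callable, Dict, List

class WorldModel:
    """Thread-safe store of tracked road objects with change notification.
    (Hypothetical sketch; the paper's actual classes and events differ.)"""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._objects: Dict[str, Any] = {}
        self._listeners: List[Callable[[str, Any], None]] = []

    def subscribe(self, callback: Callable[[str, Any], None]) -> None:
        """Register a decision-making subsystem callback for update events."""
        self._listeners.append(callback)

    def update(self, obj_id: str, state: Any) -> None:
        """Write new sensor or communication data, then notify listeners
        (a real system would dispatch these callbacks asynchronously)."""
        with self._lock:
            self._objects[obj_id] = state
        for cb in self._listeners:
            cb(obj_id, state)

    def get(self, obj_id: str) -> Any:
        """Synchronous read access to the stored state."""
        with self._lock:
            return self._objects.get(obj_id)
```

The subscribe/notify split mirrors the observer design pattern the abstract alludes to: the decision-making subsystem reacts to pushed events rather than polling the store.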
Abstract:
We investigated memories of room-sized spatial layouts learned by sequentially or simultaneously viewing objects from a stationary position. In three experiments, sequential viewing (one or two objects at a time) yielded subsequent memory performance that was equivalent or superior to simultaneous viewing of all objects, even though sequential viewing lacked direct access to the entire layout. This finding was replicated by replacing sequential viewing with directed viewing in which all objects were presented simultaneously and participants’ attention was externally focused on each object sequentially, indicating that the advantage of sequential viewing over simultaneous viewing may have originated from focal attention to individual object locations. These results suggest that memory representation of object-to-object relations can be constructed efficiently by encoding each object location separately, when those locations are defined within a single spatial reference system. These findings highlight the importance of considering object presentation procedures when studying spatial learning mechanisms.
Abstract:
The present study was conducted to investigate whether observers are equally prone to overlooking all kinds of visual events in change blindness. Capitalizing on the finding from visual search studies that the abrupt appearance of an object effectively captures observers' attention, the onset of a new object and the offset of an existing object were contrasted regarding their detectability when they occurred in a naturalistic scene. In an experiment, participants viewed a series of photograph pairs in which layouts of seven or eight objects were depicted. One object either appeared in or disappeared from the layout, and participants tried to detect this change. Results showed that onsets were detected more quickly than offsets, while they were detected with equivalent accuracy. This suggests that the primacy of onset over offset is a robust phenomenon that likely makes onsets more resistant to change blindness under natural viewing conditions.
Abstract:
The present study investigated how object locations learned separately are integrated and represented as a single spatial layout in memory. Two experiments were conducted in which participants learned a room-sized spatial layout that was divided into two sets of five objects. Results suggested that integration across sets was performed efficiently when it was done during initial encoding of the environment but entailed cost in accuracy when it was attempted at the time of memory retrieval. These findings suggest that, once formed, spatial representations in memory generally remain independent and integrating them into a single representation requires additional cognitive processes.
Abstract:
language (such as C++ and Java). The model used allows watermarks to be inserted on three “orthogonal” levels. At the first level, watermarks are injected into objects. Second-level watermarking is used to select proper variants of the source code. The third level uses a transition function that can be used to generate copies with different functionalities. Generic watermarking schemes are presented and their security discussed.
Abstract:
It is well established that the time to name target objects can be influenced by the presence of categorically related versus unrelated distractor items. A variety of paradigms have been developed to determine the level at which this semantic interference effect occurs in the speech production system. In this study, we investigated one of these tasks, the postcue naming paradigm, for the first time with fMRI. Previous behavioural studies using this paradigm have produced conflicting interpretations of the processing level at which the semantic interference effect takes place, ranging from pre- to post-lexical. Here we used fMRI with a sparse, event-related design to adjudicate between these competing explanations. We replicated the behavioural postcue naming effect for categorically related target/distractor pairs, and observed a corresponding increase in neuronal activation in the right lingual and fusiform gyri, regions previously associated with visual object processing and colour-form integration. We interpret these findings as being consistent with an account that places the semantic interference effect in the postcue paradigm at a processing level involving integration of object attributes in short-term memory.
Abstract:
Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked within a semantic category, compared to items blocked between categories. This semantic context effect has been attributed to various mechanisms including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results provide support for the view that the semantic context effect in object naming occurs due to an incremental learning mechanism, and involves increased demands on verbal self-monitoring.
Abstract:
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Abstract:
By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal, visual-tactile, integration and, thus, indicate a functional homology between human and monkey perirhinal cortices.
Abstract:
This chapter focuses on the physicality of the iPad as an object, and how that physicality affects the interactions children have with the device generally, and with the apps specifically. Thinking about the physicality of the iPad is important because the materials, size, weight and appearance make the iPad quite unlike most other toys and equipment in the kindergarten space. Most strikingly, this physicality does not ‘represent’ the vast virtual dimensions of the iPad brought about through the diverse functions and contents of the apps contained in it. While the iPad is small enough and functional enough to be easily handled and operated even by young children, it is capable of performing highly complex, highly technological tasks that take it beyond its diminutive dimensions. This virtual-actual contrast is interesting to consider in relation to the other resources more commonly found in a kindergarten space. While objects such as toys, bricks and building materials often prompt the child to imagine and invent beyond the physical boundaries of the toy, they do not have the same virtual-actual contrasts as a digital device such as the iPad. How, then, might children be drawn to the iPad because of its physical, technological and virtual difference? In particular, how might this virtual-actual difference impact the physical skills associated with writing and drawing: skills usually learnt through the use of a pencil and paper? While the research project did not set out to compare how digital and paper-based resources affect writing and drawing skills, there was great interest in seeing how young children negotiated drawing and writing on the shiny glass surface of the iPad.
Abstract:
Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D location of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.