870 results for foreground object removal
Abstract:
This paper provides a solution for predicting moving/moving and moving/static collisions of objects within a virtual environment. Feasible prediction in real-time virtual worlds can be obtained by encompassing moving objects within a sphere and static objects within a convex polygon. Fast solutions are then attainable by describing the movement of objects parametrically in time as a polynomial.
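The abstract's approach (bounding moving objects with spheres and expressing motion as a polynomial in time) reduces collision prediction to polynomial root-finding. As a minimal sketch of that idea, the following assumes two spheres on linear (degree-1) trajectories, so contact time is the smallest non-negative root of a quadratic; the function name and time-window parameter are invented for illustration:

```python
import numpy as np

def first_collision_time(p1, v1, r1, p2, v2, r2, t_max=10.0):
    """Earliest t in [0, t_max] at which two spheres with linear
    trajectories come into contact, or None if they never touch.

    p1, v1: initial position and velocity of sphere 1 (3-vectors)
    r1, r2: sphere radii
    """
    dp = np.asarray(p1, float) - np.asarray(p2, float)   # relative position
    dv = np.asarray(v1, float) - np.asarray(v2, float)   # relative velocity
    r = r1 + r2
    # |dp + t*dv|^2 = r^2  ->  a*t^2 + b*t + c = 0
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - r * r
    if a > 0:
        roots = np.roots([a, b, c])
    elif b != 0:
        roots = np.array([-c / b])
    else:
        roots = np.array([])                             # no relative motion
    real = roots[np.isreal(roots)].real
    valid = real[(real >= 0.0) & (real <= t_max)]
    return float(valid.min()) if valid.size else None
```

Higher-degree parametric trajectories fit the same pattern: the distance condition still yields a polynomial whose smallest valid root is the collision time.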
Abstract:
The existence of hand-centred visual processing has long been established in the macaque premotor cortex. These hand-centred mechanisms have been thought to play some general role in the sensory guidance of movements towards objects, or, more recently, in the sensory guidance of object avoidance movements. We suggest that these hand-centred mechanisms play a specific and prominent role in the rapid selection and control of manual actions following sudden changes in the properties of the objects relevant for hand-object interactions. We discuss recent anatomical and physiological evidence from human and non-human primates, which indicates the existence of rapid processing of visual information for hand-object interactions. This new evidence demonstrates how several stages of the hierarchical visual processing system may be bypassed, feeding the motor system with hand-related visual inputs within just 70 ms following a sudden event. This time window is early enough, and this processing rapid enough, to allow the generation and control of rapid hand-centred avoidance and acquisitive actions, for aversive and desired objects, respectively.
Abstract:
This paper presents a video surveillance framework that robustly and efficiently detects abandoned objects in surveillance scenes. The framework is based on a novel threat assessment algorithm which combines the concept of ownership with automatic understanding of social relations in order to infer abandonment of objects. Implementation is achieved through development of a logic-based inference engine based on Prolog. Threat detection performance is evaluated by testing against a range of datasets describing realistic situations, demonstrating a reduction in the number of false alarms generated. The proposed system represents the approach employed in the EU SUBITO project (Surveillance of Unattended Baggage and the Identification and Tracking of the Owner).
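The paper's engine is Prolog-based and not reproduced here; purely to illustrate the kind of ownership-plus-social-relations rule the abstract describes, here is a toy Python sketch. All names, thresholds, and the rule itself are invented assumptions, not the SUBITO algorithm:

```python
from dataclasses import dataclass

@dataclass
class Track:
    pos: tuple          # (x, y) scene position, metres
    is_person: bool

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def is_abandoned(bag, owner, related_people, static_secs,
                 owner_radius=3.0, static_thresh=30.0):
    """Toy abandonment rule in the spirit of ownership + social relations:
    a static bag raises an alarm only if its owner AND everyone socially
    related to the owner have left its vicinity. Thresholds are invented."""
    if static_secs < static_thresh:
        return False                     # bag not static long enough
    nearby = [p for p in [owner] + related_people
              if dist(p.pos, bag.pos) <= owner_radius]
    return len(nearby) == 0
```

The "related people" clause is what suppresses false alarms: a bag watched by a companion of the owner is not flagged, which mirrors the abstract's claim that modelling social relations reduces false positives.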
Abstract:
This study investigates biomass, density, photosynthetic activity, and accumulation of nitrogen (N) and phosphorus (P) in three wetland plants (Canna indica, Typha angustifolia, and Phragmites australis) in response to the introduction of the earthworm Eisenia fetida into a constructed wetland. The removal efficiency of N and P in constructed wetlands was also investigated. Results showed that the photosynthetic rate (P n), transpiration rate (T r), and stomatal conductance (S cond) of C. indica and P. australis were significantly higher (p < 0.05) when earthworms were present. The addition of E. fetida increased the above-ground N uptake of C. indica, T. angustifolia, and P. australis by 185, 216, and 108 %, respectively, and their P uptake by 300, 355, and 211 %, respectively. Earthworms could enhance the photosynthetic activity, density, and biomass of wetland plants in constructed wetlands, resulting in higher N and P uptake. The addition of E. fetida into the constructed wetland increased the removal efficiency of TN and TP by 10 and 7 %, respectively. The addition of earthworms into the vertical flow constructed wetland increased the removal efficiency of TN and TP, which was related to higher photosynthetic activity and N and P uptake. Combining the addition of earthworms into vertical flow constructed wetlands with plant harvesting could be a sustainable N and P removal strategy.
Abstract:
Perception and action are tightly linked: objects may be perceived not only in terms of visual features, but also in terms of possibilities for action. Previous studies showed that when a centrally located object has a salient graspable feature (e.g., a handle), it facilitates motor responses corresponding with the feature's position. However, such so-called affordance effects have been criticized as resulting from spatial compatibility effects, due to the visual asymmetry created by the graspable feature, irrespective of any affordances. In order to dissociate between affordance and spatial compatibility effects, we asked participants to perform a simple reaction-time task on typically graspable and non-graspable objects with similar visual features (e.g., lollipop and stop sign). Responses were measured using either electromyography (EMG) on proximal arm muscles during reaching-like movements, or with finger key-presses. In both EMG and button-press measurements, participants responded faster when the object was either presented in the same location as the responding hand, or was graspable, resulting in significant and independent spatial compatibility and affordance effects, but no interaction. Furthermore, while the spatial compatibility effect was present from the earliest stages of movement preparation and throughout the different stages of movement execution, the affordance effect was restricted to the early stages of movement execution. Finally, we tested a small group of unilateral arm amputees using EMG, and found a residual spatial compatibility effect but no affordance effect, suggesting that spatial compatibility effects do not necessarily rely on individuals' available affordances. Our results show a dissociation between affordance and spatial compatibility effects, and suggest that rather than evoking the specific motor action most suitable for interaction with the viewed object, graspable objects prompt the motor system in a general, body-part-independent fashion.
Abstract:
Does language modulate perception and categorisation of everyday objects? Here, we approach this question from the perspective of grammatical gender in bilinguals. We tested Spanish–English bilinguals and control native speakers of English in a semantic categorisation task on triplets of pictures in an all-in-English context while measuring event-related brain potentials (ERPs). Participants were asked to press a button when the third picture of a triplet belonged to the same semantic category as the first two, and another button when it belonged to a different category. Unbeknownst to them, in half of the trials, the Spanish name of the third picture had the same grammatical gender as those of the first two, and the opposite gender in the other half. We found no priming in behavioural results of either semantic relatedness or gender consistency. In contrast, ERPs revealed not only the expected semantic priming effect in both groups, but also a negative modulation by gender inconsistency in Spanish–English bilinguals, exclusively. These results provide evidence for spontaneous and unconscious access to grammatical gender in participants functioning in a context requiring no access to such information, thereby providing support for linguistic relativity effects in the grammatical domain.
Abstract:
This work presents a method of information fusion involving data captured by both a standard CCD camera and a ToF camera to be used in the detection of the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system and, at the same time, their colour information. Considering that ToF information given by the range camera contains inaccuracies including distance error, border error, and pixel saturation, some corrections over the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the previously detected foreground objects. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising the object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
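The reprojection step this abstract describes (mapping 3D ToF points into the 2D colour image using the cameras' calibration parameters) can be sketched with a standard pinhole model. This is a generic illustration under assumed intrinsics/extrinsics, not the paper's specific calibration pipeline:

```python
import numpy as np

def reproject(point_w, K, R, t):
    """Project a 3D point in the common world frame into the colour
    camera's pixel plane via the pinhole model.

    K    : 3x3 intrinsic matrix of the colour camera
    R, t : rotation (3x3) and translation (3,) mapping world -> camera
    Returns (u, v) pixel coordinates.
    """
    p_cam = R @ np.asarray(point_w, float) + np.asarray(t, float)
    if p_cam[2] <= 0:
        raise ValueError("point is behind the camera")
    uvw = K @ p_cam                      # homogeneous image coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Applying this to every corrected ToF point yields, for each 3D point, the colour pixel it falls on, which is exactly the colour/3D linkage matrix the abstract refers to.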
Abstract:
The field of museum geography is taking on new significance as geographers and museum-studies scholars make sense of the spatial relations between the people, things, practices and buildings that make and remake museums. In order to strengthen this spatial interest in museums, this paper makes important connections between recent work in cultural geography and museum studies on love, materiality and the museum effect. This paper marks a departure from the preoccupation with the public spaces of museums to go behind the scenes of the Science Museum in London to explore its rarely visited, but nonetheless lively, small-to-medium-sized object storerooms at Blythe House. Incorporating field diary entries and interview extracts from two research projects based upon the museum storerooms at Blythe House, this paper brings to life the social interactions that take place between museum curators and conservators and the objects they care for. This focus on object-love enables scholars to consider anew what museums are and what they are for, the life of the museum object in the storeroom, and the emotional practices of professional curatorship and conservation. This journey into the storeroom at Blythe House makes explicit how object-love shapes museum space.
Abstract:
Perception is linked to action via two routes: a direct route based on affordance information in the environment and an indirect route based on semantic knowledge about objects. The present study explored the factors modulating the recruitment of the two routes, in particular the factors affecting the selection of paired objects. In Experiment 1, we presented real objects among semantically related or unrelated distracters. Participants had to select two objects that can interact. The presence of distracters affected selection times, but the semantic relation between the objects and the distracters did not. Furthermore, participants first selected the active object (e.g. teaspoon) with their right hand, followed by the passive object (e.g. mug), often with their left hand. In Experiment 2, we presented pictures of the same objects with no hand grip, a congruent hand grip, or an incongruent hand grip. Participants had to decide whether the two objects can interact. Action decisions were faster when the presentation of the active object preceded the presentation of the passive object, and when the grip was congruent. Interestingly, participants were slower when the objects were semantically but not functionally related; this effect increased with congruently gripped objects. Our data showed that action decisions in the presence of strong affordance cues (real objects, pictures of congruently gripped objects) relied on sensory-motor representations, supporting the direct route from perception to action that bypasses semantic knowledge. However, in the case of weak affordance cues (pictures), semantic information interfered with action decisions, indicating that semantic knowledge impacts action decisions. The data support the dual-route account from perception to action.
Abstract:
Contamination of the electroencephalogram (EEG) by artifacts greatly reduces the quality of the recorded signals. There is a need for automated artifact removal methods. However, such methods are rarely evaluated against one another via rigorous criteria, with results often presented based upon visual inspection alone. This work presents a comparative study of automatic methods for removing blink, electrocardiographic, and electromyographic artifacts from the EEG. Three methods are considered: wavelet-, blind source separation (BSS)-, and multivariate singular spectrum analysis (MSSA)-based correction. These are applied to data sets containing mixtures of artifacts. Metrics are devised to measure the performance of each method. The BSS method is seen to be the best approach for artifacts of high signal-to-noise ratio (SNR). By contrast, MSSA performs well at low SNRs, but at the expense of a large number of false positive corrections.
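A comparison like this needs a quantitative metric rather than visual inspection. One common choice (shown here as a generic sketch, not necessarily the metric devised in the paper) is the output SNR in decibels: the power of the ground-truth clean EEG over the power of the residual error after correction:

```python
import numpy as np

def snr_db(clean, corrected):
    """Output SNR of an artifact-removal method, in dB: power of the
    ground-truth clean signal over power of the residual error.
    Requires simulated/semi-simulated data where the clean EEG is known."""
    clean = np.asarray(clean, float)
    err = clean - np.asarray(corrected, float)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
```

Scoring each method's output against the same known clean signal makes the wavelet/BSS/MSSA comparison reproducible across SNR levels.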
Abstract:
A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing. The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g. electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI participants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of-the-art automated artifact removal methods lagged auto-mutual information clustering (LAMIC) and fully automated statistical thresholding (FASTER), and is able to remove a wide range of artifact types including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts.
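To make the wavelet + thresholding ingredients of such a pipeline concrete, here is a deliberately crude sketch: a single-level Haar transform with hard thresholding of large detail coefficients. This is not FORCe (which also uses ICA and operates across channels); it only illustrates the principle that transient artifacts concentrate in large wavelet coefficients that can be suppressed before reconstruction:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform (length must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def inv_haar_step(a, d):
    """Inverse of haar_step: perfect reconstruction from (a, d)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wavelet_threshold(x, k=3.0):
    """Zero out large-amplitude Haar detail coefficients, a crude stand-in
    for the wavelet + thresholding stages of a FORCe-like pipeline."""
    a, d = haar_step(np.asarray(x, float))
    sigma = np.median(np.abs(d)) / 0.6745 + 1e-12   # robust noise estimate
    d[np.abs(d) > k * sigma] = 0.0                  # hard threshold
    return inv_haar_step(a, d)
```

In a real online system this would run per epoch on each channel, with multiple decomposition levels and ICA applied to the retained components; library transforms (e.g. PyWavelets) would replace the hand-rolled Haar step.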