947 results for Semantic interference
Abstract:
Two experiments examined the extent to which erroneous recall blocks veridical recall, using as a vehicle for study the disruptive impact of distractors that are semantically similar to a list of words presented for free recall. Instructing participants to avoid erroneous recall of to-be-ignored spoken distractors attenuated recall of those distractors, but did not influence their disruptive effect on veridical recall (Experiment 1). Using an externalised output-editing procedure, whereby participants recalled all items that came to mind and identified those that were erroneous, the usual between-sequence semantic similarity effect on erroneous and veridical recall was replicated, but the relationship between the rates of erroneous and veridical recall was weak (Experiment 2). The results suggest that forgetting is not due to veridical recall being blocked by similar events.
Abstract:
This paper addresses the issue of activity understanding from video and its semantics-rich description. A novel approach is presented in which activities are characterised and analysed at different resolutions, and semantic information is delivered according to the resolution at which the activity is observed. Furthermore, the multiresolution activity characterisation is exploited to detect abnormal activity. To achieve these capabilities, the focus is on context modelling: a soft-computing clustering algorithm automatically determines the main activity zones of the observed scene, taking as input the trajectories of detected mobile objects. These zones are learnt at different resolutions (or granularities). In a second stage, the learned zones are used to extract people's activities by relating mobile-object trajectories to the zones; in this way, the activity of a person can be summarised as the series of zones that the person has visited. By exploiting the inherent soft-relation properties, the reported activities can be labelled with meaningful semantics. Depending on the granularity at which activity zones and trajectories are considered, the semantic meaning of the activity shifts from broad interpretation to detailed description. Activity information at different resolutions is also employed to perform abnormal activity detection.
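The zone-learning step described here can be pictured with a minimal sketch, assuming fuzzy c-means as the soft-computing clustering algorithm (the abstract does not name one) and using the number of clusters as the resolution knob; all function names and toy data below are illustrative:

```python
import numpy as np

def fuzzy_c_means(points, n_zones, m=2.0, n_iter=100, seed=0):
    """Soft-cluster 2-D trajectory points into activity zones.

    Each point gets a membership degree in every zone, mirroring the
    soft relation between positions and zones described in the paper.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_zones))
    u /= u.sum(axis=1, keepdims=True)  # memberships sum to 1 per point
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Toy scene: the same trajectory points clustered at two granularities
points = np.vstack([np.random.randn(50, 2) + c for c in [(0, 0), (5, 0), (5, 5)]])
coarse_centers, coarse_u = fuzzy_c_means(points, n_zones=2)  # broad zones
fine_centers, fine_u = fuzzy_c_means(points, n_zones=6)      # detailed zones
# A person's activity can then be read off as the sequence of
# most-likely zones along their trajectory:
zone_sequence = np.argmax(fine_u, axis=1)
```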
Abstract:
Threat detection is a challenging problem, because threats appear in many variations and their differences from normal behaviour can be very subtle. In this paper, we consider threats on a parking lot where theft of a truck's cargo occurs. The threats range from explicit, e.g. a person attacking the truck driver, to implicit, e.g. somebody loitering and then fiddling with the exterior of the truck in order to open it. Our goal is a system that is able to recognize threats instantaneously as they develop. Typical observables of the threats are a person's activity, presence in a particular zone, and trajectory. The novelty of this paper is an encoding of these threat observables in a semantic, intermediate-level representation based on low-level visual features that have no intrinsic semantic meaning themselves. The aim of this representation is to bridge the semantic gap between the low-level tracks and motion and the higher-level notion of threats. In our experiments, we demonstrate that our semantic representation is more descriptive for threat detection than directly using low-level features. We find that a person's activities are the most important elements of this semantic representation, followed by the person's trajectory. The proposed threat detection system is very accurate: 96.6% of the tracks are correctly interpreted when considering the temporal context.
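As an illustration of the intermediate semantic layer, the sketch below maps raw tracks to hand-named semantic observables and trains a classifier on them rather than on raw features. The predicate names, thresholds, zone coordinates, and classifier choice are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

TRUCK_ZONE = np.array([5.0, 0.0])  # assumed zone centre in scene coordinates

def semantic_encoding(track):
    """Translate a raw track (T x 2 positions) into semantic observables."""
    speeds = np.linalg.norm(np.diff(track, axis=0), axis=1)
    dist_to_truck = np.linalg.norm(track - TRUCK_ZONE, axis=1)
    return np.array([
        float(speeds.mean() < 0.2),        # "loitering" activity
        float(dist_to_truck.min() < 1.0),  # presence in the truck zone
        float(np.ptp(track[:, 0]) < 2.0),  # trajectory confined to a small area
    ])

# Toy training data: tracks near the truck zone labelled as threats
rng = np.random.default_rng(1)
tracks = [rng.random((30, 2)) * s + o
          for s, o in [(1.0, TRUCK_ZONE), (10.0, 0.0)] * 10]
labels = [1, 0] * 10
X = np.array([semantic_encoding(t) for t in tracks])
clf = LogisticRegression().fit(X, labels)  # classify in the semantic domain
```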
Abstract:
Objective. Interference from spatially adjacent non-target stimuli is known to evoke event-related potentials (ERPs) during non-target flashes and therefore lead to false positives. This phenomenon is commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern that evokes larger ERPs than the face pattern, but one that reduces adjacent interference, annoyance and fatigue while evoking ERPs as good as those observed with the face pattern. Approach. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image; although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern alternating between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate, as well as user-supplied subjective measures. Main results. Interference from adjacent stimuli, annoyance and the fatigue experienced by the subjects were all reduced significantly (p < 0.05) by the facial expression change pattern in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and significantly decreased the fatigue and annoyance experienced by BCI users (p < 0.05) compared to the face pattern.
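The information transfer rate referred to here is conventionally computed with the Wolpaw formula, B = log2(N) + P log2(P) + (1 - P) log2((1 - P)/(N - 1)) bits per selection. A small worked example follows; the target count, accuracy, and trial time are assumed values, not results from the study:

```python
from math import log2

def wolpaw_itr(accuracy, n_targets, trial_seconds):
    """Bits per minute for an N-target BCI (Wolpaw definition)."""
    p, n = accuracy, n_targets
    if p >= 1.0:
        bits = log2(n)  # perfect accuracy: full log2(N) bits per selection
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# e.g. a 12-target speller at 90% accuracy, one selection every 10 s
print(round(wolpaw_itr(0.9, 12, 10.0), 2), "bits/min")  # about 16.6
```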
Abstract:
Interference from spatially adjacent non-target stimuli evokes ERPs during non-target sub-trials and leads to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study is to reduce the adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image; although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, two conditions (Pattern_1, Pattern_2) were compared on objective measures, such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a "flash-only" pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change patterns, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the "flash-only" patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.
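A P300-style system of this kind typically scores each stimulus event by classifying the post-stimulus EEG epoch. Below is a minimal sketch using synthetic epochs and linear discriminant analysis, a common choice for such BCIs; the epoch shapes, component timing, and classifier are assumptions, not the authors' method:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 8, 50  # assumed epoch layout

# Synthetic epochs: target epochs get a small added deflection (~300 ms)
y = rng.integers(0, 2, n_epochs)
X = rng.standard_normal((n_epochs, n_channels, n_samples))
X[y == 1, :, 25:35] += 0.8  # ERP-like component on target epochs only

# Flatten channel x time features and classify target vs non-target
X_flat = X.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis().fit(X_flat[:150], y[:150])
print(f"held-out accuracy: {clf.score(X_flat[150:], y[150:]):.2f}")
```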
Abstract:
Higher levels of well-being are associated with longer life expectancy and better physical health. Previous studies suggest that processes involving the self and autobiographical memory are related to well-being, yet these relationships are poorly understood. The present study tested 32 older and 32 younger adults using scales measuring well-being and the affective valence of two types of autobiographical memory: episodic autobiographical memories and semantic self-images. Results showed that the valence of semantic self-images, but not of episodic autobiographical memories, was highly correlated with well-being, particularly in older adults. In contrast, well-being in older adults was unrelated to performance across a range of standardised memory tasks. These results highlight the role of semantic self-images in well-being and have implications for the development of therapeutic interventions for well-being in aging.
Abstract:
There is something peculiar about aesthetic testimony. It seems more difficult to gain knowledge of aesthetic properties based solely upon testimony than it is in the case of other types of property. In this paper, I argue that we can provide an adequate explanation at the level of the semantics of aesthetic language, without defending any substantive thesis in epistemology or about aesthetic value/judgement. If aesthetic predicates are given a non-invariantist semantics, we can explain the supposed peculiar difficulty with aesthetic testimony.
Abstract:
In this paper we present a novel approach to detecting meetings between people. The proposed approach works by translating people's behaviour from trajectory information into semantic terms. Given a semantic model of meeting behaviour, event detection is then performed in the semantic domain. The model is learnt with a soft-computing clustering algorithm that combines trajectory information and motion-related semantic terms, and a stable representation can be obtained from a series of examples. Results obtained on a series of videos with different types of meeting situations show that the proposed approach can learn a generic model that can be applied effectively to recognising meeting behaviour.
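A toy sketch of the trajectory-to-semantics translation, assuming hand-picked terms and thresholds (the paper learns its model from examples instead):

```python
import numpy as np

def semantic_terms(track_a, track_b, close=1.5, slow=0.2):
    """Rewrite a pair of tracks (T x 2 each) as per-frame semantic terms."""
    dist = np.linalg.norm(track_a - track_b, axis=1)
    speed_a = np.r_[0, np.linalg.norm(np.diff(track_a, axis=0), axis=1)]
    speed_b = np.r_[0, np.linalg.norm(np.diff(track_b, axis=0), axis=1)]
    terms = []
    for d, sa, sb in zip(dist, speed_a, speed_b):
        if d < close and sa < slow and sb < slow:
            terms.append("together_still")
        elif d < close:
            terms.append("together_moving")
        else:
            terms.append("apart")
    return terms

def is_meeting(terms, min_frames=5):
    """Flag a meeting when the pair stays 'together_still' for several frames."""
    run = longest = 0
    for t in terms:
        run = run + 1 if t == "together_still" else 0
        longest = max(longest, run)
    return longest >= min_frames
```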
Abstract:
In this paper we propose an innovative approach to behaviour recognition in a multi-camera environment, based on translating video activity into semantics. First, we fuse tracks from individual cameras through clustering, employing soft-computing techniques. Then, we introduce a higher-level module that translates the fused tracks into semantic information. With this approach we address the challenge set in PETS 2014 on recognising behaviours of interest around a parked vehicle, namely the abnormal behaviour of someone walking around the vehicle.
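The track-fusion step can be sketched as grouping per-camera detections that coincide in ground-plane coordinates and time; the greedy rule, detection layout, and radius below are illustrative assumptions rather than the clustering the paper employs:

```python
import numpy as np

def fuse_tracks(detections, radius=1.0):
    """Greedily merge detections (camera_id, t, x, y) that coincide in
    time across cameras, or follow on consecutively, within `radius`."""
    fused = []  # each entry: list of detections judged to be one person
    for det in sorted(detections, key=lambda d: d[1]):
        for group in fused:
            last = group[-1]
            same_time = det[1] == last[1] and det[0] != last[0]
            near = np.hypot(det[2] - last[2], det[3] - last[3]) < radius
            if near and (same_time or det[1] == last[1] + 1):
                group.append(det)
                break
        else:
            fused.append([det])
    return fused

# Two cameras seeing the same person, plus a distant second person
dets = [(0, 0, 2.0, 2.0), (1, 0, 2.2, 1.9), (0, 1, 2.1, 2.1), (0, 0, 9.0, 9.0)]
print(len(fuse_tracks(dets)), "fused tracks")  # -> 2
```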
Abstract:
Complete information dispositional metasemantics says that our expressions get their meaning in virtue of what our dispositions to apply those terms would be given complete information. The view has recently been advanced and argued to have a number of attractive features. I argue that it threatens to make the meanings of our words indeterminate, and that it does not deliver what made a dispositional view attractive in the first place.
Abstract:
We present an account of semantic representation that focuses on the distinct types of information from which word meanings can be learned. In particular, we argue that there are at least two major types of such information. The first is what we call experiential information: data derived both from our sensory-motor interactions with the outside world and from our experience of our own inner states, particularly our emotions. The second type of information is language-based: in particular, it is derived from the general linguistic context in which words appear. The paper spells out this proposal, summarizes research supporting this view, and presents new predictions emerging from the framework.
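The language-based route can be illustrated with a minimal distributional sketch: word vectors are built from co-occurrence counts in a toy corpus, so words that appear in similar linguistic contexts receive similar vectors. The corpus, sentence-level window, and similarity measure are assumptions chosen for brevity:

```python
import numpy as np
from itertools import combinations

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "joy is a warm feeling",
    "sadness is a cold feeling",
]

# Count co-occurrences within each sentence (the "linguistic context")
vocab = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    for a, b in combinations(s.split(), 2):
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

def similarity(w1, w2):
    """Cosine similarity of distributional word vectors."""
    v1, v2 = counts[index[w1]], counts[index[w2]]
    return v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)

print(similarity("joy", "sadness"))  # shared contexts -> high similarity
print(similarity("joy", "mouse"))    # disjoint contexts -> near zero
```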