60 results for Semantic annotations
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation of one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task combined with visually or auditory-visually presented animate and inanimate objects in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites in synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
In 1972, episodic and semantic memories were considered to reflect different types of knowledge (Tulving, 1972). However, these early definitions encountered many difficulties. Episodic and semantic memories are now discussed in terms of the awareness associated with retrieval (Wheeler, Stuss, & Tulving, 1997): autonoetic consciousness (i.e., the feeling of remembering) is considered to accompany retrieval from the episodic memory system, while noetic consciousness (i.e., the feeling of knowing) is considered to characterize retrieval from the semantic memory system. The present article investigated determinants of autonoetic consciousness and suggests that the richer the sensory-perceptual knowledge that is being recalled, the more strongly the individual feels autonoetic consciousness during retrieval, and that autonoetic consciousness is therefore based on rich sensory-perceptual knowledge. Furthermore, we suggested that the parietal and frontal lobes mediate the process of generating autonoetic consciousness. This suggests that sensory-perceptual knowledge, the parietal lobe, and the frontal lobe are important factors for discriminating episodic memory from semantic memory.
Abstract:
Building Information Modeling (BIM) is the process of structuring, capturing, creating, and managing a digital representation of the physical and/or functional characteristics of a built space [1]. Current BIM has limited ability to represent dynamic semantics and social information, often failing to consider building activity, behavior, and context, thus limiting integration with intelligent built-environment management systems. Research such as the development of Semantic Exchange Modules, and the linking of IFC with semantic web structures, demonstrates the need for building models to better support complex semantic functionality. To implement model semantics effectively, however, it is critical that model designers consider semantic information constructs. This paper discusses semantic models in relation to determining the most suitable information structure. We demonstrate how semantic rigidity can lead to significant long-term problems that can contribute to model failure. A sufficiently detailed feasibility study is advised to maximize the value of the semantic model. In addition, we propose a set of questions, to be used during a model's feasibility study, and guidelines to help assess the most suitable method for managing semantics in a built environment.
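The rigidity problem discussed in this abstract can be illustrated with a toy sketch. The class and attribute names below are our own and are not drawn from IFC or any BIM schema; the point is only the contrast between a schema fixed at design time and an open key-value semantic store that can absorb dynamic activity and context data later.

```python
# Hypothetical sketch: rigid vs. flexible semantic storage for a building element.
# Names are illustrative, not taken from IFC or any BIM standard.

class RigidRoom:
    """Schema fixed at design time: new semantics require changing the model."""
    def __init__(self, name, area_m2):
        self.name = name
        self.area_m2 = area_m2  # adding e.g. occupancy later means editing the class

class FlexibleRoom:
    """Open key-value semantics: dynamic/contextual data can be attached later."""
    def __init__(self, name, **semantics):
        self.name = name
        self.semantics = dict(semantics)

    def annotate(self, key, value):
        self.semantics[key] = value  # e.g. live occupancy from a sensor feed

room = FlexibleRoom("Meeting Room 1", area_m2=24.0)
room.annotate("current_occupancy", 6)        # dynamic semantics added post-design
room.annotate("activity", "stand-up meeting")
print(sorted(room.semantics))  # ['activity', 'area_m2', 'current_occupancy']
```

The flexible variant is exactly what a feasibility study would weigh against a rigid schema: it avoids long-term lock-in at the cost of weaker validation.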
Abstract:
Two experiments examined the extent to which erroneous recall blocks veridical recall using, as a vehicle for study, the disruptive impact of distractors that are semantically similar to a list of words presented for free recall. Instructing participants to avoid erroneous recall of to-be-ignored spoken distractors attenuated their recall, but this did not influence the disruptive effect of those distractors on veridical recall (Experiment 1). Using an externalised output-editing procedure, whereby participants recalled all items that came to mind and identified those that were erroneous, the usual between-sequence semantic similarity effect on erroneous and veridical recall was replicated, but the relationship between the rates of erroneous and veridical recall was weak (Experiment 2). The results suggest that forgetting is not due to veridical recall being blocked by similar events.
Abstract:
The past years have shown enormous advancement in sequencing and array-based technologies, producing supplementary or alternative views of the genome stored in various formats and databases. Their sheer volume and differing data scope pose a challenge to jointly visualizing and integrating diverse data types. We present AmalgamScope, a new interactive software tool focused on assisting scientists with the annotation of the human genome, and particularly with the integration of annotation files from multiple data types, using gene identifiers and genomic coordinates. Supported platforms include next-generation sequencing and microarray technologies. The available features of AmalgamScope range from the annotation of diverse data types across the human genome to integration of the data based on the annotation information, and visualization of the merged files within chromosomal regions or the whole genome. Additionally, users can define custom transcriptome library files for any species and use the tool's remote file-exchange server options.
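The core integration step described here, joining per-gene annotation records from different data types on a shared gene identifier, can be sketched minimally as below. The function and field names are our own illustration, not AmalgamScope's actual API, and the records are fabricated toy values.

```python
# Minimal sketch of integrating annotations from two data types by gene
# identifier. Names and values are illustrative, not AmalgamScope's API.

def merge_by_gene(seq_annotations, array_annotations):
    """Join per-gene records from sequencing and microarray sources.

    Genes present in only one source keep None for the missing side,
    so no annotation is silently dropped during integration.
    """
    merged = {}
    for gene_id in set(seq_annotations) | set(array_annotations):
        merged[gene_id] = {
            "sequencing": seq_annotations.get(gene_id),
            "microarray": array_annotations.get(gene_id),
        }
    return merged

seq = {"BRCA1": {"chrom": "17", "coverage": 92}}
arr = {"BRCA1": {"probe": "A_23_P1", "expr": 7.4},
       "TP53": {"probe": "A_24_P2", "expr": 5.1}}
merged = merge_by_gene(seq, arr)
print(sorted(merged))  # ['BRCA1', 'TP53']
```

An outer join like this is the natural choice when the two platforms cover overlapping but non-identical gene sets.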
Abstract:
This paper addresses the issue of activity understanding from video and its semantics-rich description. A novel approach is presented in which activities are characterised and analysed at different resolutions. Semantic information is delivered according to the resolution at which the activity is observed. Furthermore, the multiresolution activity characterisation is exploited to detect abnormal activity. To achieve these system capabilities, the focus is on context modelling, employing a soft computing-based algorithm that automatically determines the main activity zones of the observed scene by taking as input the trajectories of detected mobiles. Such areas are learnt at different resolutions (or granularities). In a second stage, the learned zones are employed to extract people's activities by relating mobile trajectories to the learned zones. In this way, the activity of a person can be summarised as the series of zones that the person has visited. Employing the inherent soft relation properties, the reported activities can be labelled with meaningful semantics. Depending on the granularity at which activity zones and mobile trajectories are considered, the semantic meaning of the activity shifts from broad interpretation to detailed description. Activity information at different resolutions is also employed to perform abnormal activity detection.
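The summarisation idea, reducing a trajectory to the ordered series of zones visited, can be sketched as follows. Zones are modelled here as simple circles and given invented names; this is our own simplification of the paper's soft-computing activity zones.

```python
import math

# Toy sketch of the zone-based activity summary: zones are circles
# (centre, radius) and a trajectory collapses to the ordered sequence of
# distinct zones it passes through. Zone names and the circle model are
# our own simplification, not the learned zones of the paper.

ZONES = {
    "entrance": ((0.0, 0.0), 2.0),
    "desk":     ((10.0, 0.0), 2.0),
    "exit":     ((20.0, 0.0), 2.0),
}

def zone_of(point):
    """Return the name of the zone containing the point, or None."""
    for name, (centre, radius) in ZONES.items():
        if math.dist(point, centre) <= radius:
            return name
    return None

def summarise(trajectory):
    """Collapse a point trajectory into the sequence of visited zones."""
    visits = []
    for p in trajectory:
        z = zone_of(p)
        if z is not None and (not visits or visits[-1] != z):
            visits.append(z)
    return visits

track = [(0, 0), (5, 0), (10, 0), (15, 0), (20, 0)]
print(summarise(track))  # ['entrance', 'desk', 'exit']
```

Coarser granularity corresponds to larger zones, which yields shorter, broader summaries; finer zones yield more detailed ones.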
Abstract:
Threat detection is a challenging problem, because threats appear in many variations and differences from normal behaviour can be very subtle. In this paper, we consider threats on a parking lot, where theft of a truck's cargo occurs. The threats range from explicit, e.g. a person attacking the truck driver, to implicit, e.g. somebody loitering and then fiddling with the exterior of the truck in order to open it. Our goal is a system that is able to recognize a threat instantaneously as it develops. Typical observables of the threats are a person's activity, presence in a particular zone, and trajectory. The novelty of this paper is an encoding of these threat observables in a semantic, intermediate-level representation, based on low-level visual features that have no intrinsic semantic meaning themselves. The aim of this representation is to bridge the semantic gap between the low-level tracks and motion and the higher-level notion of threats. In our experiments, we demonstrate that our semantic representation is more descriptive for threat detection than directly using low-level features. We find that a person's activities are the most important elements of this semantic representation, followed by the person's trajectory. The proposed threat detection system is very accurate: 96.6% of the tracks are correctly interpreted when considering the temporal context.
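The role of the intermediate representation can be made concrete with a hypothetical sketch: low-level track measurements are first mapped to semantic observables (activity label, zone presence), and a rule then interprets those observables. The thresholds, zone position, and labels below are invented for illustration; they are not the paper's learned model or values.

```python
# Hypothetical sketch of the semantic, intermediate-level encoding.
# Thresholds, the truck zone position, and labels are illustrative only.

def semantic_encoding(track):
    """track: list of (x, y, t). Map low-level motion to semantic observables."""
    steps = [abs(track[i + 1][0] - track[i][0]) + abs(track[i + 1][1] - track[i][1])
             for i in range(len(track) - 1)]
    mean_step = sum(steps) / max(len(steps), 1)
    near_truck = any(abs(x - 50) < 3 and abs(y) < 3 for x, y, _ in track)
    activity = "loitering" if mean_step < 0.5 else "walking"
    return {"activity": activity, "in_truck_zone": near_truck}

def is_threat(obs):
    # Loitering combined with presence at the truck is flagged as a threat;
    # neither observable alone is enough.
    return obs["activity"] == "loitering" and obs["in_truck_zone"]

suspect = [(49, 0, t) for t in range(10)]          # barely moving, next to the truck
passerby = [(x, 10, x) for x in range(0, 100, 5)]  # walking past at a distance
print(is_threat(semantic_encoding(suspect)), is_threat(semantic_encoding(passerby)))
# True False
```

The classifier never sees raw coordinates, only the semantic observables, which is the sense in which the representation bridges the semantic gap.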
Abstract:
Higher levels of well-being are associated with longer life expectancies and better physical health. Previous studies suggest that processes involving the self and autobiographical memory are related to well-being, yet these relationships are poorly understood. The present study tested 32 older and 32 younger adults using scales measuring well-being and the affective valence of two types of autobiographical memory: episodic autobiographical memories and semantic self-images. Results showed that the valence of semantic self-images, but not of episodic autobiographical memories, was highly correlated with well-being, particularly in older adults. In contrast, well-being in older adults was unrelated to performance across a range of standardised memory tasks. These results highlight the role of semantic self-images in well-being, and have implications for the development of therapeutic interventions for well-being in aging.
Abstract:
There is something peculiar about aesthetic testimony. It seems more difficult to gain knowledge of aesthetic properties based solely upon testimony than it is in the case of other types of property. In this paper, I argue that we can provide an adequate explanation at the level of the semantics of aesthetic language, without defending any substantive thesis in epistemology or about aesthetic value/judgement. If aesthetic predicates are given a non-invariantist semantics, we can explain the supposed peculiar difficulty with aesthetic testimony.
Abstract:
Comprehension deficits are common in stroke aphasia, including in cases with (i) semantic aphasia (SA), characterised by poor executive control of semantic processing across verbal and nonverbal modalities, and (ii) Wernicke’s aphasia (WA), associated with poor auditory-verbal comprehension and repetition, plus fluent speech with jargon. However, the varieties of these comprehension problems, and their underlying causes, are not well-understood. Both patient groups exhibit some type of semantic ‘access’ deficit, as opposed to the ‘storage’ deficits observed in semantic dementia. Nevertheless, existing descriptions suggest these patients might have different varieties of ‘access’ impairment – related to difficulty resolving competition (in SA) vs. initial activation of concepts from sensory inputs (in WA). We used a case-series design to compare WA and SA patients on Warrington’s paradigmatic assessment of semantic ‘access’ deficits. In these verbal and non-verbal matching tasks, a small set of semantically-related items are repeatedly presented over several cycles so that the target on one trial becomes a distractor on another (building up interference and eliciting semantic ‘blocking’ effects). WA and SA patients were distinguished according to lesion location in the temporal cortex, but in each group, some individuals had additional prefrontal damage. Both of these aspects of lesion variability – one that mapped onto classical ‘syndromes’ and one that did not – predicted aspects of the semantic ‘access’ deficit. Both SA and WA cases showed multimodal semantic impairment, although as expected the WA group showed greater deficits on auditory-verbal than picture judgements. 
Distribution of damage in the temporal lobe was crucial for predicting the initially beneficial effects of stimulus repetition: WA cases showed initial improvement with repetition of words and pictures, while in SA, semantic access was initially good but declined in the face of competition from previous targets. Prefrontal damage predicted the harmful effects of repetition: the ability to re-select both word and picture targets in the face of mounting competition was linked to left prefrontal damage in both groups. Therefore, SA and WA patients have partially distinct impairment of semantic ‘access’ but, across these syndromes, prefrontal lesions produce declining comprehension with repetition in both verbal and non-verbal tasks.
Abstract:
In this paper we present a novel approach for detecting meetings between people. The proposed approach works by translating people's behaviour from trajectory information into semantic terms. Given a semantic model of the meeting behaviour, event detection is performed in the semantic domain. The model is learnt using a soft-computing clustering algorithm that combines trajectory information and motion semantic terms. A stable representation can be obtained from a series of examples. Results obtained on a series of videos with different types of meeting situations show that the proposed approach can learn a generic model that can be effectively applied to recognising meeting behaviour.
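The translation from trajectories to semantic terms, and detection in that semantic domain, can be sketched with a toy rule: each track step becomes a motion term ("moving" or "still"), and a meeting is reported when two people are simultaneously close and still. The thresholds and the rule itself are our own illustration, not the learned soft-computing model.

```python
import math

# Toy sketch of meeting detection in the semantic domain. Each frame of a
# track is translated into a motion term, and a "meeting" is reported when
# two tracks are close together while both are labelled "still".
# Thresholds are illustrative only.

def motion_terms(track):
    """Label each step of a track as 'moving' or 'still'."""
    return ["still" if math.dist(track[i], track[i + 1]) < 0.5 else "moving"
            for i in range(len(track) - 1)]

def detect_meeting(track_a, track_b, proximity=2.0):
    """True if the two people are ever simultaneously close and still."""
    terms_a, terms_b = motion_terms(track_a), motion_terms(track_b)
    for i, (ta, tb) in enumerate(zip(terms_a, terms_b)):
        close = math.dist(track_a[i], track_b[i]) <= proximity
        if close and ta == "still" and tb == "still":
            return True
    return False

a = [(0, 0), (2, 0), (4, 0), (4.1, 0), (4.2, 0)]  # walks, then stops
b = [(8, 0), (6, 0), (5, 0), (5.0, 0), (5.1, 0)]  # approaches, then stops
print(detect_meeting(a, b))  # True
```

Because the rule operates on semantic terms rather than raw coordinates, the same model generalises across scenes with different geometry.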
Abstract:
In this paper we propose an innovative approach for behaviour recognition, from a multicamera environment, based on translating video activity into semantics. First, we fuse tracks from individual cameras through clustering employing soft computing techniques. Then, we introduce a higher-level module able to translate fused tracks into semantic information. With our proposed approach, we address the challenge set in PETS 2014 on recognising behaviours of interest around a parked vehicle, namely the abnormal behaviour of someone walking around the vehicle.
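The first stage, fusing tracks from individual cameras, can be sketched as follows. A simple greedy distance-threshold grouping with centroid averaging stands in for the soft-computing clustering; the positions are assumed to be already projected into a shared ground plane, and all values are invented for illustration.

```python
import math

# Minimal sketch of fusing per-camera detections at one time step: detections
# whose ground-plane positions fall within a distance threshold are merged
# into one target. A greedy average stands in for the paper's soft-computing
# clustering; coordinates and camera ids are illustrative.

def fuse(detections, threshold=1.0):
    """detections: list of (camera_id, (x, y)). Returns fused positions."""
    fused = []  # each group is [sum_x, sum_y, count]
    for _, (x, y) in detections:
        for group in fused:
            cx, cy = group[0] / group[2], group[1] / group[2]
            if math.dist((x, y), (cx, cy)) <= threshold:
                group[0] += x
                group[1] += y
                group[2] += 1
                break
        else:
            fused.append([x, y, 1])  # no nearby group: start a new target
    return [(g[0] / g[2], g[1] / g[2]) for g in fused]

frame = [("cam1", (4.0, 4.0)), ("cam2", (4.2, 3.9)), ("cam3", (10.0, 1.0))]
print(fuse(frame))  # two fused targets: one near (4.1, 3.95), one at (10.0, 1.0)
```

The fused tracks, not the per-camera ones, are what the higher-level module translates into semantics, so duplicate observations of one person do not produce duplicate behaviours.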
Abstract:
Complete information dispositional metasemantics says that our expressions get their meaning in virtue of what our dispositions to apply those terms would be given complete information. The view has recently been advanced and argued to have a number of attractive features. I argue that it threatens to make the meanings of our words indeterminate, and that it fails to deliver what made a dispositional view attractive in the first place.
Abstract:
We present an account of semantic representation that focuses on distinct types of information from which word meanings can be learned. In particular, we argue that there are at least two major types of information from which we learn word meanings. The first is what we call experiential information. This is data derived both from our sensory-motor interactions with the outside world and from our experience of our own inner states, particularly our emotions. The second type of information is language-based. In particular, it is derived from the general linguistic context in which words appear. The paper spells out this proposal, summarizes research supporting this view, and presents new predictions emerging from this framework.