975 results for Semantic processing
Abstract:
The storage and processing capacity realised by computing has led to an explosion of data retention. We have now reached the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but has yet to be realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method, most ontologies will have to be encoded by hand. In this paper we discuss the current methods for semi- and fully automatic construction and their current shortcomings. In particular, we pay attention to the application of ontologies to products and the practical application of the ontologies.
Abstract:
Currently many ontologies are available for addressing different domains. However, it is not always possible to deploy such ontologies to support collaborative working, so that their full potential can be exploited to implement intelligent cooperative applications capable of reasoning over a network of context-specific ontologies. The main problem arises from the fact that presently ontologies are created in an isolated way to address specific needs. However, we foresee the need for a network of ontologies which will support the next generation of intelligent applications/devices and the vision of Ambient Intelligence. The main objective of this paper is to motivate the design of a networked ontology (Meta) model which formalises ways of connecting available ontologies so that they are easy to search, to characterise and to maintain. The aim is to make explicit the virtual and implicit network of ontologies serving the Semantic Web.
Abstract:
Numerous linguistic operations have been assigned to cortical brain areas, but the contributions of subcortical structures to human language processing are still being discussed. Using simultaneous EEG recordings directly from deep brain structures and the scalp, we show that the human thalamus systematically reacts to syntactic and semantic parameters of auditorily presented language in a temporally interleaved manner in coordination with cortical regions. In contrast, two key structures of the basal ganglia, the globus pallidus internus and the subthalamic nucleus, were not found to be engaged in these processes. We therefore propose that syntactic and semantic language analysis is primarily realized within cortico-thalamic networks, whereas a cohesive basal ganglia network is not involved in these essential operations of language analysis.
Abstract:
McDaniel, Robinson-Riegler, and Einstein (1998) recently reported findings in support of the proposal that prospective remembering is largely conceptually driven. In each of the three experiments they reported, however, the task in which the prospective memory target was encountered at test had a predominantly conceptual focus, thereby potentially facilitating retrieval of conceptually encoded features of the studied target event. We report two experiments in which we manipulated the dimension (perceptual or conceptual) along which a target event varied between study and test while using a processing task, at both study and test, compatible with the relevant dimension of target change. When the target was encountered in a sentence validity task at study and test, and the semantic context in which a target was encountered was changed between these two occasions, prospective remembering declined (Experiment 1). A similar decline occurred, using a readability rating task, when the perceptual context (font in which the word was printed) was altered (Experiment 2). These results indicate that both perceptual and conceptual processes can support prospective remembering.
Abstract:
A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
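The collateral context described in this abstract is a co-occurrence matrix over visual keywords. A minimal sketch of how such a matrix might be counted from per-image region labels is given below; the keyword names and image annotations are hypothetical illustrations, not data or code from the paper:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_matrix(images):
    """Count how often two visual keywords label regions of the same image."""
    counts = defaultdict(int)
    for keywords in images:
        # Each unordered keyword pair within one image contributes one count
        for a, b in combinations(sorted(set(keywords)), 2):
            counts[(a, b)] += 1
    return dict(counts)

# Hypothetical region labels for three images
regions = [
    ["sky", "water", "boat"],
    ["sky", "grass"],
    ["sky", "water"],
]
ctx = cooccurrence_matrix(regions)
```

In the CCL setting such counts would come from the keyword labelling of image regions; the matrix then biases the mapping from region-level visual primitives to vocabulary concepts.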
Abstract:
This paper addresses the nature and cause of Specific Language Impairment (SLI) by reviewing recent research in sentence processing of children with SLI compared to typically developing (TD) children and research in infant speech perception. These studies have revealed that children with SLI are sensitive to syntactic, semantic, and real-world information, but do not show sensitivity to grammatical morphemes with low phonetic saliency, and they show longer reaction times than age-matched controls. TD children from the age of 4 show trace reactivation, but some children with SLI fail to show this effect, which resembles the pattern of adults and TD children with low working memory. Finally, findings from the German Language Development (GLAD) Project have revealed that a group of children at risk for SLI had a history of an auditory delay and impaired processing of prosodic information in the first months of their life, which is not detectable later in life. Although this is a single project that needs to be replicated with a larger group of children, it provides preliminary support for accounts of SLI which make an explicit link between an early deficit in the processing of phonology and later language deficits, and the Computational Complexity Hypothesis that argues that the language deficit in children with SLI lies in difficulties integrating different types of information at the interfaces.
Abstract:
An ongoing debate on second language (L2) processing revolves around whether or not L2 learners process syntactic information similarly to monolinguals (L1), and what factors lead to native-like processing. According to the Shallow Structure Hypothesis (Clahsen & Felser, 2006a), L2 learners’ processing does not include abstract syntactic features, such as intermediate gaps of wh-movement, but relies more on lexical/semantic information. Other researchers have suggested that naturalistic L2 exposure can lead to native-like processing (Dussias, 2003). This study investigates the effect of naturalistic exposure in processing wh-dependencies. Twenty-six advanced Greek learners of L2 English with an average of nine years of naturalistic exposure, 30 with classroom exposure, and 30 native speakers of English completed a self-paced reading task with sentences involving intermediate gaps. L2 learners with naturalistic exposure showed evidence of native-like processing of the intermediate gaps, suggesting that linguistic immersion can lead to native-like abstract syntactic processing in the L2.
Abstract:
The Retrieval-Induced Forgetting (RIF) paradigm includes three phases: (a) study/encoding of category exemplars, (b) practicing retrieval of a sub-set of those category exemplars, and (c) recall of all exemplars. At the final recall phase, recall of items that belong to the same categories as those items that undergo retrieval-practice, but that do not undergo retrieval-practice, is impaired. The received view is that this is because retrieval of target category-exemplars (e.g., ‘Tiger’ in the category Four-legged animal) requires inhibition of non-target category-exemplars (e.g., ‘Dog’ and ‘Lion’) that compete for retrieval. Here, we used the RIF paradigm to investigate whether ignoring auditory items during the retrieval-practice phase modulates the inhibitory process. In two experiments, RIF was present when retrieval-practice was conducted in quiet and when conducted in the presence of spoken words that belonged to a category other than that of the items that were targets for retrieval-practice. In contrast, RIF was abolished when words that either were identical to the retrieval-practice words or were only semantically related to the retrieval-practice words were presented as background speech. The results suggest that the act of ignoring speech can reduce inhibition of the non-practiced category-exemplars, thereby eliminating RIF, but only when the spoken words are competitors for retrieval (i.e., belong to the same semantic category as the to-be-retrieved items).
Abstract:
Using the eye-movement monitoring technique in two reading comprehension experiments, we investigated the timing of constraints on wh-dependencies (so-called ‘island’ constraints) in native and nonnative sentence processing. Our results show that both native and nonnative speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect native and nonnative comprehenders in essentially the same way. Furthermore, our results show that the timing of island effects in native compared to nonnative sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signalling whether dependency formation is possible at a potential gap site. Whereas English native speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of L2 English showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in nonnative processing is based on semantic feature-matching rather than being structurally mediated as in native comprehension.
Abstract:
Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with visually or auditory-visually presented animate and inanimate objects in an audio-visually congruent and incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
Threat detection is a challenging problem, because threats appear in many variations and differences from normal behaviour can be very subtle. In this paper, we consider threats on a parking lot, where theft of a truck’s cargo occurs. The threats range from explicit, e.g. a person attacking the truck driver, to implicit, e.g. somebody loitering and then fiddling with the exterior of the truck in order to open it. Our goal is a system that is able to recognize threats instantaneously as they develop. Typical observables of the threats are a person’s activity, presence in a particular zone, and trajectory. The novelty of this paper is an encoding of these threat observables in a semantic, intermediate-level representation, based on low-level visual features that have no intrinsic semantic meaning themselves. The aim of this representation was to bridge the semantic gap between the low-level tracks and motion and the higher-level notion of threats. In our experiments, we demonstrate that our semantic representation is more descriptive for threat detection than directly using low-level features. We find that a person’s activities are the most important elements of this semantic representation, followed by the person’s trajectory. The proposed threat detection system is very accurate: 96.6% of the tracks are correctly interpreted when considering the temporal context.
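The intermediate-level representation described in this abstract maps low-level tracks onto semantic observables (activity, zone presence, trajectory). A minimal sketch of that idea follows; the zone geometry, speed threshold, and activity labels are invented for illustration and are not the paper's actual features:

```python
# Hypothetical zones as axis-aligned boxes (x0, y0, x1, y1)
ZONES = {"truck_side": (0, 0, 5, 10), "exit": (20, 0, 25, 10)}

def in_zone(point, box):
    x, y = point
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def semantic_observables(track):
    """Summarise a track (list of (x, y) points) as zone presence and activity."""
    zones_visited = {name for p in track
                     for name, box in ZONES.items() if in_zone(p, box)}
    # Crude activity label from mean Manhattan displacement per step
    speed = sum(abs(a - c) + abs(b - d)
                for (a, b), (c, d) in zip(track, track[1:])) / max(len(track) - 1, 1)
    activity = "loitering" if speed < 0.5 else "walking"
    return {"zones": zones_visited, "activity": activity}

# A nearly stationary track beside the truck
obs = semantic_observables([(1, 1), (1.1, 1.0), (1.2, 1.1)])
```

A downstream classifier would then consume such observables rather than the raw track coordinates, which is the semantic-gap argument the abstract makes.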
Abstract:
Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. “moo”, “splash”). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage may be due to a stronger connection between semantic information and phonological forms.
Abstract:
OWL-S is an application of OWL, the Web Ontology Language, that describes the semantics of Web Services so that their discovery, selection, invocation and composition can be automated. The research literature reports the use of UML diagrams for the automatic generation of Semantic Web Service descriptions in OWL-S. This paper demonstrates a higher level of automation by generating complete Web applications from OWL-S descriptions that have themselves been generated from UML. Previously, we proposed an approach for processing OWL-S descriptions in order to produce MVC-based skeletons for Web applications. The OWL-S ontology undergoes a series of transformations in order to generate a Model-View-Controller application implemented by a combination of Java Beans, JSP, and Servlets code, respectively. In this paper, we show in detail the documents produced at each processing step. We highlight the connections between OWL-S specifications and executable code in the various Java dialects and show the Web interfaces that result from this process.
Abstract:
This paper presents the overall methodology that has been used to encode both the Brazilian Portuguese WordNet (WordNet.Br) standard language-independent conceptual-semantic relations (hyponymy, co-hyponymy, meronymy, cause, and entailment) and the so-called cross-lingual conceptual-semantic relations between different wordnets. Accordingly, after contextualizing the project and outlining the current lexical database structure and statistics, it describes the WordNet.Br editing GUI that was designed to aid the linguist in carrying out the tasks of building synsets, selecting sample sentences from corpora, writing synset concept glosses, and encoding both language-independent conceptual-semantic relations and cross-lingual conceptual-semantic relations between WordNet.Br and Princeton WordNet. © Springer-Verlag Berlin Heidelberg 2006.
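The relations named in this abstract (hyponymy, meronymy, etc.) link synsets to synsets. A minimal sketch of such a structure is shown below; the words, glosses, and relation instances are illustrative inventions, not WordNet.Br entries, and the class is not the project's actual data model:

```python
class Synset:
    """A set of synonymous words linked to other synsets by named relations."""
    def __init__(self, words, gloss=""):
        self.words = words
        self.gloss = gloss
        self.relations = {}  # relation name -> list of target synsets

    def add_relation(self, name, target):
        self.relations.setdefault(name, []).append(target)

animal = Synset(["animal"], "a living organism that feeds on organic matter")
dog = Synset(["dog", "cão"], "a domesticated carnivorous mammal")
dog.add_relation("hyponymy", animal)   # dog is a kind of animal
paw = Synset(["paw", "pata"])
paw.add_relation("meronymy", dog)      # a paw is a part of a dog

hypernyms = dog.relations["hyponymy"]
```

Cross-lingual relations of the kind the abstract mentions could be represented the same way, with the target synset belonging to another wordnet.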
Abstract:
We present a general model of brain function (the calcium wave model), distinguishing three processing modes in the perception-action cycle. The model provides an interpretation of the data from experiments on semantic memory conducted by the authors. © 2013 Pereira Jr, Santos and Barros.