946 results for Syntactic And Semantic Comprehension Tasks
Abstract:
Geospatio-temporal conceptual models provide a mechanism to explicitly represent geospatial and temporal aspects of applications. Such models, which focus on both what and when/where, need to be more expressive than conventional conceptual models (e.g., the ER model), which primarily focus on what is important for a given application. In this study, we view conceptual schema comprehension of geospatio-temporal data semantics in terms of matching the external problem representation (that is, the conceptual schema) to the problem-solving task (that is, syntactic and semantic comprehension tasks), an argument based on the theory of cognitive fit. Our theory suggests that an external problem representation that matches the problem solver's internal task representation will enhance performance, for example, in comprehending such schemas. To assess performance on geospatio-temporal schema comprehension tasks, we conducted a laboratory experiment using two semantically identical conceptual schemas, one of which mapped closely to the internal task representation while the other did not. As expected, we found that the geospatio-temporal conceptual schema that corresponded to the internal representation of the task enhanced the accuracy of schema comprehension; comprehension time was equivalent for both. Cognitive fit between the internal representation of the task and conceptual schemas with geospatio-temporal annotations was, therefore, manifested in accuracy of schema comprehension and not in time for problem solution. Our findings suggest that the annotated schemas facilitate understanding of data semantics represented on the schema.
Abstract:
Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities as well as carefully matched low-ambiguity sentences. Results showed ambiguity-related responses in the posterior left inferior frontal gyrus (pLIFG) and posterior left middle temporal regions. The pLIFG activations were present for both syntactic and semantic ambiguities suggesting that this region is not specialised for processing either semantic or syntactic information, but instead performs cognitive operations that are required to resolve different types of ambiguity irrespective of their linguistic nature, for example by selecting between possible interpretations or reinterpreting misparsed sentences. Syntactic ambiguities also produced activation in the posterior middle temporal gyrus. These data confirm the functional relationship between these two brain regions and their importance in constructing grammatical representations of spoken language.
Abstract:
Numerous linguistic operations have been assigned to cortical brain areas, but the contributions of subcortical structures to human language processing are still being discussed. Using simultaneous EEG recordings directly from deep brain structures and the scalp, we show that the human thalamus systematically reacts to syntactic and semantic parameters of auditorily presented language in a temporally interleaved manner in coordination with cortical regions. In contrast, two key structures of the basal ganglia, the globus pallidus internus and the subthalamic nucleus, were not found to be engaged in these processes. We therefore propose that syntactic and semantic language analysis is primarily realized within cortico-thalamic networks, whereas a cohesive basal ganglia network is not involved in these essential operations of language analysis.
Abstract:
In this paper, the authors extend and generalize the system-dynamics methodology that uses differential equations as state equations, allowing first-order transformed functions to apply not only to the primitive or original variables but also to more complex expressions derived from them, and extending the rules that govern the generation of transforms of order higher than zero (variable or primitive). It is also shown that, for every model of a complex reality, there exists a model that is complex from both the syntactic and the semantic point of view. The theory is illustrated with a concrete model: the MARIOLA model.
Abstract:
Although information systems (IS) problem solving involves knowledge of both the IS and application domains, little attention has been paid to the role of application domain knowledge. In this study, which is set in the context of conceptual modeling, we examine the effects of both IS and application domain knowledge on different types of schema understanding tasks: syntactic and semantic comprehension tasks and schema-based problem-solving tasks. Our thesis was that while IS domain knowledge is important in solving all such tasks, the role of application domain knowledge is contingent upon the type of understanding task under investigation. We use the theory of cognitive fit to establish theoretical differences in the role of application domain knowledge among the different types of schema understanding tasks. We hypothesize that application domain knowledge does not influence the solution of syntactic and semantic comprehension tasks for which cognitive fit exists, but does influence the solution of schema-based problem-solving tasks for which cognitive fit does not exist. To assess performance on different types of conceptual schema understanding tasks, we conducted a laboratory experiment in which participants with high- and low-IS domain knowledge responded to two equivalent conceptual schemas that represented high and low levels of application knowledge (familiar and unfamiliar application domains). As expected, we found that IS domain knowledge is important in the solution of all types of conceptual schema understanding tasks in both familiar and unfamiliar application domains, and that the effect of application domain knowledge is contingent on task type. Our findings for the EER model were similar to those for the ER model. Given the differential effects of application domain knowledge on different types of tasks, this study highlights the importance of considering more than one application domain in designing future studies on conceptual modeling.
Abstract:
This paper presents a study in a field little explored for the Portuguese language: modality and its automatic tagging. Our main goal was to find a set of attributes for building automatic taggers with improved performance over the bag-of-words (bow) approach. Performance was measured using precision, recall and F1. Because it is a relatively unexplored field, the study covers the creation of the corpus (composed of eleven verbs), the use of a parser to extract syntactic and semantic information from the sentences, and a machine learning approach to identify modality values. Based on three different sets of attributes, drawn from the trigger itself, the trigger's path in the parse tree, and the context, the system creates a tagger for each verb, achieving (for almost every verb) an improvement in F1 when compared to the traditional bow approach.
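The evaluation measures used in the study above follow their standard definitions; a minimal Python sketch (the counts below are hypothetical, not taken from the paper):

```python
def precision(tp, fp):
    # fraction of tags the system proposed that were correct
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of gold-standard tags the system recovered
    return tp / (tp + fn)

def f1(p, r):
    # harmonic mean of precision and recall
    return 2 * p * r / (p + r)

# Hypothetical counts for one verb's modality tagger:
p = precision(40, 10)   # 0.8
r = recall(40, 20)      # about 0.67
print(round(f1(p, r), 3))  # prints 0.727
```

F1 rewards taggers that balance the two measures, which is why it is preferred over raw accuracy when modality values are unevenly distributed across the corpus.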
Abstract:
Comprehension deficits are common in stroke aphasia, including in cases with (i) semantic aphasia (SA), characterised by poor executive control of semantic processing across verbal and nonverbal modalities, and (ii) Wernicke’s aphasia (WA), associated with poor auditory-verbal comprehension and repetition, plus fluent speech with jargon. However, the varieties of these comprehension problems, and their underlying causes, are not well-understood. Both patient groups exhibit some type of semantic ‘access’ deficit, as opposed to the ‘storage’ deficits observed in semantic dementia. Nevertheless, existing descriptions suggest these patients might have different varieties of ‘access’ impairment – related to difficulty resolving competition (in SA) vs. initial activation of concepts from sensory inputs (in WA). We used a case-series design to compare WA and SA patients on Warrington’s paradigmatic assessment of semantic ‘access’ deficits. In these verbal and non-verbal matching tasks, a small set of semantically-related items are repeatedly presented over several cycles so that the target on one trial becomes a distractor on another (building up interference and eliciting semantic ‘blocking’ effects). WA and SA patients were distinguished according to lesion location in the temporal cortex, but in each group, some individuals had additional prefrontal damage. Both of these aspects of lesion variability – one that mapped onto classical ‘syndromes’ and one that did not – predicted aspects of the semantic ‘access’ deficit. Both SA and WA cases showed multimodal semantic impairment, although as expected the WA group showed greater deficits on auditory-verbal than picture judgements. 
Distribution of damage in the temporal lobe was crucial for predicting the initially beneficial effects of stimulus repetition: WA cases showed initial improvement with repetition of words and pictures, while in SA, semantic access was initially good but declined in the face of competition from previous targets. Prefrontal damage predicted the harmful effects of repetition: the ability to re-select both word and picture targets in the face of mounting competition was linked to left prefrontal damage in both groups. Therefore, SA and WA patients have partially distinct impairment of semantic ‘access’ but, across these syndromes, prefrontal lesions produce declining comprehension with repetition in both verbal and non-verbal tasks.
Abstract:
Metalinguistic skill is the ability to reflect upon language as an object of thought. Among metalinguistic skills, two seem to be associated with reading and spelling: morphological awareness and phonological awareness. Phonological awareness is the ability to reflect upon the phonemes that compose words, and morphological awareness is the ability to reflect upon the morphemes that compose words. The latter seems to be particularly important for reading comprehension and contextual reading, since syntactic and semantic information are required beyond phonological information. This study investigates, with a longitudinal design, the relation between those abilities and contextual reading as measured by the Cloze test. The first part of the study explores the relationship between morphological awareness tasks and Cloze scores through simple correlations; in the second part, the specificity of that relationship is examined using multiple regressions. The results give some support to the hypothesis that morphological awareness contributes to contextual reading in Brazilian Portuguese independently of phonological awareness.
Abstract:
Ontologies and taxonomies are widely used to organize concepts, providing the basis for activities such as indexing and serving as background knowledge for NLP tasks. Translation of these resources would therefore prove useful for adapting such systems to new languages. However, we show that the nature of these resources is significantly different from the "free-text" paradigm used to train most statistical machine translation systems. In particular, these resources differ linguistically from free text and carry rich additional semantics. We demonstrate that, as a result of these linguistic differences, standard SMT methods, and in particular evaluation metrics, can perform poorly. We then turn to the task of leveraging these semantics for translation, which we approach in three ways: by adapting the translation system to the domain of the resource; by examining whether semantics can help to predict the syntactic structure used in translation; and by evaluating whether we can use existing translated taxonomies to disambiguate translations. We present some early results from these experiments, which shed light on the degree of success we may expect with each approach.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the various tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, i.e. morphosyntactic, level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
In this paper we explore the use of semantic classes in an existing information retrieval system in order to improve its results. We use two different ontologies of semantic classes (WordNet Domains and Basic Level Concepts) to re-rank the retrieved documents and obtain better recall and precision. Finally, we implement a new method for weighting the expanded terms that takes into account the weights of the original query terms and their WordNet relations to the new ones, an approach that has been shown to improve the results. These approaches were evaluated in the CLEF Robust-WSD Task, obtaining an improvement of 1.8% in GMAP for the semantic classes approach and of 10% in MAP for the WordNet term weighting approach.
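The abstract above does not spell out the weighting formula, but the general idea of scaling an expansion term's weight by the original query term's weight and its WordNet relation can be sketched as follows; the relation factors and function names here are hypothetical, purely for illustration:

```python
# Hypothetical discount factors per WordNet relation type;
# the paper's actual weighting scheme is not given in the abstract.
RELATION_FACTOR = {
    "synonym": 1.0,
    "hypernym": 0.8,
    "hyponym": 0.6,
}

def expanded_weight(original_weight, relation):
    # Scale the original query term's weight by a factor that
    # reflects how closely the expansion term is related to it;
    # unlisted relations get a conservative default discount.
    return original_weight * RELATION_FACTOR.get(relation, 0.5)

print(expanded_weight(2.0, "hypernym"))  # prints 1.6
```

The design intuition is that closer WordNet relations (e.g. synonyms) should inherit more of the original term's weight than looser ones, so expansion cannot swamp the original query.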
Abstract:
Research on semantic processing has focused mainly on isolated units of language, which does not reflect the complexity of language. In order to understand how semantic information is processed in a wider context, the first goal of this thesis was to determine whether Swedish pre-school children are able to comprehend semantic context and whether that context is built up semantically over time. The second goal was to investigate how the brain distributes attentional resources, in terms of brain activation amplitude and processing type. Swedish pre-school children were tested in a dichotic listening task with longer children's narratives. The development of the event-related potential N400 component and its amplitude was used to investigate both goals. The decrease of the N400 in both the attended and the unattended channel indicated semantic comprehension and that semantic context was built up over time. The attended stimulus received more resources, was processed in a more top-down manner, and displayed a prominent N400 amplitude in contrast to the unattended stimulus. The N400 and the late positivity were more complex than expected, since endings of utterances longer than nine words were not accounted for. More research on wider linguistic context is needed in order to understand how the human brain comprehends natural language.
Abstract:
Deduction allows us to draw consequences from previous knowledge. Deductive reasoning can be applied to several types of problem, for example, conditional, syllogistic, and relational. It has been assumed that the same cognitive operations underlie solutions to them all; however, this hypothesis remains to be tested empirically. We used event-related fMRI, in the same group of subjects, to compare reasoning-related activity associated with conditional and syllogistic deductive problems. Furthermore, we assessed reasoning-related activity for the two main stages of deduction, namely encoding of premises and their integration. Encoding syllogistic premises for reasoning was associated with greater activation of BA 44/45 than encoding them for literal recall. During integration, left fronto-lateral cortex (BA 44/45, 6) and basal ganglia activated with both conditional and syllogistic reasoning. In addition, integration of syllogistic problems was associated with activation of left parietal (BA 7) and left ventro-lateral frontal cortex (BA 47). This difference suggests a dissociation between conditional and syllogistic reasoning at the integration stage. Our findings indicate that the integration of conditional and syllogistic reasoning is carried out by different, but partly overlapping, sets of anatomical regions and, by inference, cognitive processes. The involvement of BA 44/45 during both encoding (syllogisms) and premise integration (syllogisms and conditionals) suggests a central role in deductive reasoning for syntactic manipulations and formal/linguistic representations.