873 results for Language and languages -- Computer-assisted instruction


Relevance:

100.00%

Publisher:

Abstract:

This dissertation is part of the Language Toolkit project, a collaboration between the School of Foreign Languages and Literature, Interpreting and Translation of the University of Bologna, Forlì campus, and the Chamber of Commerce of Forlì-Cesena. The project aims to create an exchange between translation students and companies that want to pursue a process of internationalization. The purpose of this dissertation is to demonstrate the benefits that translation systems can bring to businesses. In particular, it consists of the translation into English of documents supplied by the Italian company Technologica S.r.l. and the creation of linguistic resources that can be integrated into computer-assisted translation (CAT) software in order to optimize the translation process. The latter is claimed to be a priority with respect to the actual translation products (the target texts), since the analysis conducted on the source texts highlighted that the company could streamline and optimize its English-language communication through the use of open-source CAT tools such as OmegaT. The work consists of five chapters. The first introduces the Language Toolkit project, the company (Technologica S.r.l.) and its products. The second chapter provides some considerations about technical translation, its features and some misconceptions about it. The difference between technical translation and scientific translation is then clarified, and an overview is offered of translation aids such as computer-assisted translation tools, machine translation, termbases and translation memories. The third chapter contains the analysis of the texts commissioned by Technologica S.r.l. and their categorization. The fourth chapter describes the translation process, with particular attention to terminology extraction and the creation of a bilingual glossary based on a specialized corpus. The glossary was integrated into the OmegaT software in order to facilitate the translation process, both for the present task and for future applications. The memory deriving from the translation represents a sort of hybrid resource between a translation memory and a glossary. This was found to be the most appropriate format, given the specific nature of the texts to be translated. Finally, chapter five offers conclusions about the importance of language training within a company environment, the potential of translation aids and the benefits that they would bring to a company wishing to internationalize.
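
To make the glossary format concrete: OmegaT reads plain tab-separated glossary files (source term, target term, optional comment) placed in a project's glossary folder. The sketch below, with invented Italian-English term pairs rather than terminology from the Technologica S.r.l. documents, shows how such a file could be generated.

```python
# Minimal sketch: write an OmegaT-style glossary file (tab-separated:
# source term, target term, optional comment). The term pairs below are
# invented placeholders, not the company's actual terminology.
import csv

term_pairs = [
    ("valvola a sfera", "ball valve", "hypothetical example entry"),
    ("scheda tecnica", "technical data sheet", "hypothetical example entry"),
]

with open("glossary.utf8", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for source, target, note in term_pairs:
        writer.writerow([source, target, note])
```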

Relevance:

100.00%

Publisher:

Abstract:

Background In the 19th century, eminent French sociologist Emile Durkheim found suicide rates to be higher in the Protestant compared with the Catholic cantons of Switzerland. We examined religious affiliation and suicide in modern Switzerland, where assisted suicide is legal. Methods The 2000 census records of 1 722 456 (46.0%) Catholics, 1 565 452 (41.8%) Protestants and 454 397 (12.2%) individuals with no affiliation were linked to mortality records up to December 2005. The association between religious affiliation and suicide, with the Protestant faith serving as the reference category, was examined in Cox regression models. Hazard ratios (HRs) with 95% confidence intervals (CIs) were adjusted for age, marital status, education, type of household, language and degree of urbanization. Results Suicide rates per 100 000 inhabitants were 19.7 in Catholics (1664 suicides), 28.5 in Protestants (2158 suicides) and 39.0 in those with no affiliation (882 suicides). Associations with religion were modified by age and gender (P < 0.0001). Compared with Protestant men aged 35–64 years, HRs (95% CI) for all suicides were 0.80 (0.73–0.88) in Catholic men and 1.09 (0.98–1.22) in men with no affiliation; and 0.60 (0.53–0.67) and 1.96 (1.69–2.27), respectively, in men aged 65–94 years. Corresponding HRs in women aged 35–64 years were 0.90 (0.80–1.03) and 1.46 (1.25–1.72); and 0.67 (0.59–0.77) and 2.63 (2.22–3.12) in women aged 65–94 years. The association was strongest for suicides by poisoning in the 65–94-year-old age group, the majority of which was assisted: HRs were 0.45 (0.35–0.59) for Catholic men and 3.01 (2.37–3.82) for men with no affiliation; 0.44 (0.36–0.55) for Catholic women and 3.14 (2.51–3.94) for women with no affiliation. Conclusions In Switzerland, the protective effect of a religious affiliation appears to be stronger in Catholics than in Protestants, stronger in older than in younger people, stronger in women than in men, and particularly strong for assisted suicides.
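
As a sketch of the modelling approach described (not the study's data or full covariate set), a Cox proportional hazards model of this shape can be fitted in Python with the lifelines library; all column names and values below are invented placeholders, with Protestant affiliation as the omitted reference category.

```python
# Minimal sketch of a Cox proportional hazards model of the kind described
# above. Column names and data are invented placeholders; the study used
# linked census/mortality records and additional covariates.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years": [5.0, 2.1, 5.0, 3.7, 5.0, 1.2, 5.0, 4.4, 2.8, 5.0, 3.3, 5.0],
    "suicide":        [0,   1,   0,   1,   0,   1,   0,   1,   1,   0,   1,   0  ],
    "catholic":       [1,   0,   0,   1,   0,   0,   1,   1,   0,   0,   1,   0  ],  # reference: Protestant
    "no_affiliation": [0,   0,   1,   0,   1,   1,   0,   0,   0,   1,   0,   0  ],
    "age":            [44,  71,  58,  80,  37,  66,  52,  61,  73,  49,  68,  55 ],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="suicide")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals
```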

Relevance:

100.00%

Publisher:

Abstract:

An accurate assessment of the computer skills of students is a prerequisite for the success of any e-learning intervention. The aim of the present study was to objectively assess computer literacy and attitudes in a group of Greek post-graduate students, using a task-oriented questionnaire developed and validated at the University of Malmö, Sweden. Fifty post-graduate students at the Athens University School of Dentistry took part in the study in April 2005. A total competence score (range 0-49) was calculated. Socio-demographic characteristics were recorded, and attitudes towards computer use were assessed. Descriptive statistics and linear regression modeling were employed for data analysis. The total competence score was normally distributed (Shapiro-Wilk test: W = 0.99, V = 0.40, P = 0.97) and ranged from 5 to 42.5, with a mean of 22.6 (+/-8.4). Multivariate analysis revealed 'gender', 'e-mail ownership' and 'enrollment in non-clinical programs' as significant predictors of computer literacy. In conclusion, computer literacy of Greek post-graduate dental students was higher amongst males, students in non-clinical programs and those with more positive attitudes towards the implementation of computer-assisted learning.
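
As an illustration of the kind of linear regression model described (the column names and values below are invented toy data, not the study's variables or records), a sketch with statsmodels:

```python
# Minimal sketch of a linear regression on a computer-literacy score with
# categorical predictors, as described above. Column names and data are
# invented placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "competence_score": [22.5, 30.0, 15.5, 35.0, 18.0, 27.5, 24.0, 31.5],
    "gender":           ["m", "m", "f", "m", "f", "f", "m", "f"],
    "has_email":        [1, 1, 0, 1, 0, 1, 1, 1],
    "program":          ["clinical", "nonclinical", "clinical", "nonclinical",
                         "clinical", "nonclinical", "clinical", "nonclinical"],
})

model = smf.ols("competence_score ~ C(gender) + has_email + C(program)", data=df).fit()
print(model.summary())
```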

Relevance:

100.00%

Publisher:

Abstract:

As domain-specific modeling begins to attract widespread acceptance, pressure is increasing for the development of new domain-specific languages. Unfortunately these DSLs typically conflict with the grammar of the host language, making it difficult to compose hybrid code except at the level of strings; few mechanisms (if any) exist to control the scope of usage of multiple DSLs; and, most seriously, existing host language tools are typically unaware of the DSL extensions, thus hampering the development process. Language boxes address these issues by offering a simple, modular mechanism to encapsulate (i) compositional changes to the host language, (ii) transformations to address various concerns such as compilation and highlighting, and (iii) scoping rules to control visibility of language extensions. We describe the design and implementation of language boxes, and show with the help of several examples how modular extensions can be introduced to a host language and environment.

Relevance:

100.00%

Publisher:

Abstract:

Object-oriented modelling languages such as EMOF are often used to specify domain-specific meta-models. However, these modelling languages lack the ability to describe behavior or operational semantics. Several approaches have used a subset of Java mixed with OCL as executable meta-languages. In this experience report we show how we use Smalltalk as an executable meta-language in the context of the Moose reengineering environment. We present how we implemented EMOF and its behavioral aspects. Over the last decade we validated this approach through incrementally building a meta-described reengineering environment. Such an approach bridges the gap between a code-oriented view and a meta-model driven one. It avoids the creation of yet another language and reuses the infrastructure and run-time of the underlying implementation language. It offers a uniform way of letting developers focus on their tasks while at the same time allowing them to meta-describe their domain model. The advantage of our approach is that developers use the same tools and environment they use for their regular tasks. Still, the approach is not Smalltalk-specific but can be applied to any language offering an introspective API, such as Ruby, Python, CLOS, Java and C#.
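
The approach is described as transferable to any language with an introspective API; the Python sketch below is an invented, much-simplified illustration of the general idea (a domain class carrying its own meta-description that generic tools can query at run time), not the Moose/EMOF implementation itself.

```python
# Invented illustration of using a language's own introspection to carry a
# meta-description alongside a domain class (in the spirit of, but much
# simpler than, the EMOF-based approach described above).
class Attribute:
    """Meta-level description of one attribute of a domain class."""
    def __init__(self, name, type_, optional=False):
        self.name, self.type, self.optional = name, type_, optional

class MetaDescribed:
    """Domain classes list their meta-description in a class attribute."""
    meta_attributes: list = []

    @classmethod
    def meta_description(cls):
        return {a.name: a for a in cls.meta_attributes}

class Person(MetaDescribed):
    meta_attributes = [
        Attribute("name", str),
        Attribute("age", int, optional=True),
    ]
    def __init__(self, name, age=None):
        self.name, self.age = name, age

# A generic tool can inspect any MetaDescribed class without knowing it:
for attr in Person.meta_description().values():
    print(attr.name, attr.type.__name__, "optional" if attr.optional else "required")
```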

Relevance:

100.00%

Publisher:

Abstract:

The political philosophy underpinning the Indian Constitution is that of a socialist economy in a multilingual political landscape. The Constitution grants some fundamental rights to all citizens regarding language, and to linguistic and other minorities regarding education. It also obligates states to use many languages in school education. Restructuring the economy with the free market as its pivot, and the growing dominance of English in the information-driven global economy, give rise to policy changes in language use in education which undermine the Constitutional provisions relating to language, though these changes reflect the manufactured consent of the citizens. This is made possible by the way the Constitution is interpreted by courts with regard to the fundamental rights of equality and non-discrimination when they apply to language. The unique property of language, that it can be acquired, unlike other primordial attributes such as ethnicity or caste, comes into play in this interpretation. The result is that the law of the market takes over the law of the land.

Relevance:

100.00%

Publisher:

Abstract:

New Zealand English first emerged at the beginning of the 19th century as a result of the dialect contact of British (51%), Scottish (27.3%) and Irish (22%) migrants (Hay and Gordon 2008:6). This variety has subsequently developed into an autonomous and legitimised national variety and enjoys a distinct socio-political status, recognition and codification. In fact, a number of dictionaries of New Zealand English have been published and the variety is routinely used as the official medium on TV, radio and other media. This, however, has not always been the case, as for a long time only British standard norms were deemed suitable for media broadcasting. While there is already some work on lay commentary about New Zealand English (see for example Gordon 1983, 1994; Hundt 1998), much more remains to be done, especially concerning more recent periods of the history of this variety and the ideologies underlying its development and legitimisation. Consequently, the current project aims at investigating the metalinguistic discourses during the period of transition from a British norm to a New Zealand norm in the media context; this will be done by focusing on debates about language in light of the advent of radio and television. The main purpose of this investigation is thus to examine the (language) ideologies that have shaped and underlain these discourses (e.g. discussions about the appropriateness of New Zealand English vis-à-vis external, British models of language) and their related practices in these media (e.g. broadcasting norms). The sociolinguistic and pragmatic effects of these ideologies will also be taken into account. Furthermore, a comparison will be carried out, at a later stage in the project, between New Zealand English and a more problematic and less legitimised variety: Estuary English. Despite plenty of evidence of media and other public discourses on Estuary English, there has been very little metalinguistic analysis of this evidence, nor examination of the underlying ideologies in these discourses. The comparison will seek to discover whether similar themes emerge in the ideologies played out in public discourses about these varieties, themes which serve to legitimise one variety whilst denying such legitimacy to the other.

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE The aim of this paper is to identify, review, analyze, and summarize available evidence in three areas on the use of cross-sectional imaging, specifically maxillofacial cone beam computed tomography (CBCT), in pre- and postoperative dental implant therapy: (1) available clinical use guidelines, (2) indications and contraindications for use, and (3) assessment of associated radiation dose risk. MATERIALS AND METHODS Three focused questions were developed to address these aims. A systematic literature review was performed using a PICO-based search strategy built on MeSH key words specific to each focused question, covering English-language publications indexed in the MEDLINE database retrospectively from October 31, 2012. These results were supplemented by a hand search and a gray literature search. RESULTS Twelve publications were identified providing guidelines for the use of cross-sectional radiography, particularly CBCT imaging, for the pre- and/or postoperative assessment of potential dental implant sites. The publications discovered by the PICO strategy (43 articles), hand search (12), and gray literature search (1) for the second focused question, regarding indications and contraindications for CBCT use in implant dentistry, were either cohort or case-control studies. For the third question, on the assessment of associated radiation dose risk, a total of 22 articles were included. Publication characteristics and themes were summarized in tabular format. CONCLUSIONS The reported indications for CBCT use in implant dentistry vary from preoperative analysis regarding specific anatomic considerations, site development using grafts, and computer-assisted treatment planning to postoperative evaluation focusing on complications due to damage of neurovascular structures. Effective doses for different CBCT devices exhibit a wide range, with the lowest dose being almost 100 times less than the highest. Significant dose reduction can be achieved by adjusting operating parameters, including exposure factors, and by reducing the field of view (FOV) to the actual region of interest.

Relevance:

100.00%

Publisher:

Abstract:

Purpose Gender-fair language use in job advertisements has been shown to impact the outcome of personnel selection. It is thus important to assess to what extent gender-fair language is used in job advertisements and with which factors it is associated, e.g., language, culture, status, and gender typicality of the profession. Design/Methodology In the present research we investigated gender-fair language use in job advertisements published online in four European countries with different socio-economic rankings of gender equality (World Economic Forum, 2011), namely Austria (rank 34), Czech Republic (75), Poland (42), and Switzerland (10). From four lines of business with different percentages of female employees – steels/metals, science, restaurants/food services, and health care – we randomly selected 100 job advertisements per line of business and country, summing up to 1600 job advertisements in total. Results A first analysis of the Swiss data indicates that the phrasing of job advertisements is closely related to a profession's gender typicality (e.g., masculine-only forms are used in steels/metals, gender-fair forms in health care). Feminine forms, however, are almost never used. Cross-cultural comparisons will be presented. Limitations We analyzed job advertisements from four specific lines of business in four European countries. To what extent the results can be generalized remains an open question. Research/Practical Implications The present data provide a sound basis for future studies on gender-fair language use in job advertisements. Furthermore, they shed light on how companies comply with national guidelines on gender equality. Originality/Value This is the first time that gender-fair language use in job advertisements is investigated (a) across different countries and languages and (b) considering status and gender typicality of professions.

Relevance:

100.00%

Publisher:

Abstract:

The lexical items like and well can serve as discourse markers (DMs), but can also play numerous other roles, such as verb or adverb. Identifying the occurrences that function as DMs is an important step for language understanding by computers. In this study, automatic classifiers using lexical, prosodic/positional and sociolinguistic features are trained over transcribed dialogues, manually annotated with DM information. The resulting classifiers improve state-of-the-art performance of DM identification, at about 90% recall and 79% precision for like (84.5% accuracy, κ = 0.69), and 99% recall and 98% precision for well (97.5% accuracy, κ = 0.88). Automatic feature analysis shows that lexical collocations are the most reliable indicators, followed by prosodic/positional features, while sociolinguistic features are marginally useful for the identification of DM like and not useful for well. The differentiated processing of each type of DM improves classification accuracy, suggesting that these types should be treated individually.
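
As a rough sketch of this kind of classification setup (the features and training examples below are invented placeholders, not the study's features or data), a discourse-marker classifier could be assembled in Python with scikit-learn:

```python
# Rough sketch of a discourse-marker classifier of the kind described above.
# Features and examples are invented; the study used lexical, prosodic/
# positional and sociolinguistic features from annotated dialogue transcripts.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One feature dict per occurrence of "like"; label 1 = discourse marker.
X = [
    {"prev_word": "<s>",   "next_word": "i",    "position_in_utt": 0.0},
    {"prev_word": "was",   "next_word": "you",  "position_in_utt": 0.4},
    {"prev_word": "it's",  "next_word": "um",   "position_in_utt": 0.6},
    {"prev_word": "feels", "next_word": "home", "position_in_utt": 0.8},
]
y = [1, 1, 1, 0]

clf = make_pipeline(DictVectorizer(), LogisticRegression())
clf.fit(X, y)
print(clf.predict([{"prev_word": "sounds", "next_word": "fun", "position_in_utt": 0.5}]))
```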

Relevance:

100.00%

Publisher:

Abstract:

With over 700 million illiterate adults in the world, many governments have implemented adult literacy programs, although typically with low rates of success, partly because the quality of teaching is low. One solution may lie in the standardization of teaching provided by computer-aided instruction. We present the first rigorous evidence of the effectiveness of a computer-based adult literacy program. A randomized controlled trial of TARA Akshar Plus, an Indian adult literacy program, was implemented in the state of Uttar Pradesh in India. We find large, significant impacts of this computer-aided program on literacy and numeracy outcomes. We compare the improvement in learning to that of other, traditional adult literacy programs and conclude that TARA Akshar Plus is effective in increasing literacy and numeracy for illiterate adult women.

Relevance:

100.00%

Publisher:

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web 1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS Computational Linguistics is already a consolidated research area. It builds upon the results of other two major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools have still some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own ones. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by other higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool. 
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should help solve also the problems and limitations of linguistic annotation tools aforementioned. Thus, to summarise, the main aim of the present work was to combine somehow these separated approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) in a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section. 2. GOALS OF THE PRESENT WORK As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (id. 
standardisation) of their tags (both their representation and their meaning), and their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives could be re-stated more clearly as follows: Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating linguistic annotation. Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that helps respect the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web). Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which deals also with linguistic resources and annotations). Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet. Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies). Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies). Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontological-based) annotation of texts. Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft. Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft. Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme. Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft. 
Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme. Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels. Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation. Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation. Sub-goal 3.4: Specification of the merge processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels. Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely • Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp), • LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), • Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and • EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels. Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section). Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well). Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared level results. In particular, all the tools selected perform morphosyntactic annotations and they had to be conveniently combined by means of these processes. Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating the morphosyntactic level, as in the previous sub-goal). 
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating all the levels considered. Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the named entities annotated to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics). 3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed. H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. • CONFIRMED by the development of: o OntoTag’s annotation scheme, o OntoTag’s annotation architecture, o OntoTagger’s (XML, RDF, OWL) annotation schemas, o OntoTagger’s configuration. H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. • CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools. H.3 Standardisation should ease: H.3.1: The interoperation of linguistic tools. H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. • H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration: o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger); o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level. o Integration of morphosyntactic, syntactic and semantic annotations. H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). • CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas. H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools, operating at the same level. However, these other tools might be built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency vs. HPS-grammar-based, for instance) approach. • CONFIRMED by the results yielded by the evaluation of OntoTagger. H.6 Each linguistic level can be managed and annotated independently. • REJECTED: OntoTagger’s experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. 
In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, belonging thereby to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation. 4. OTHER MAIN RESULTS AND CONTRIBUTIONS First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice. Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies). • On the one hand, OntoTag’s network of ontologies consists of − The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text; − The Linguistic Attribute Ontology (LAO), which includes also a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO; − The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take; − The OIO (OntoTag’s Integration Ontology), which  Includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO;  Can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation. • On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee: − As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard; − As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft; − Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead; − The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and also of the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies. Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular, 1. 
OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. 2. As an immediate result, this implies that a) this type of combination-architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems. Fourth, Semantic Web annotations are usually performed by humans or else by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to automatically annotate Semantic Web pages using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and, then, were used to populate this domain ontology. • The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. • These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should determine how our approach works in a different domain or a different language, such as French, English, or German. • In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging in order for these two areas to collaborate and complement each other in the area of semantic annotation. Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. 
Clearly, all these approaches and models should be integrated in order to yield a coherent, joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
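
To give a concrete flavour of ontology-based, triple-structured annotation of the kind OntoTag proposes, the sketch below encodes one morphosyntactic annotation as RDF triples with Python's rdflib; the namespace and property names are invented placeholders, not terms from OntoTag's actual ontologies (LUO, LAO, LVO, OIO).

```python
# Invented, minimal illustration of representing one morphosyntactic
# annotation as ontology-based RDF triples; the namespace and terms are
# placeholders, not OntoTag's actual vocabulary.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/annotation#")

g = Graph()
g.bind("ex", EX)

word = EX["token_1"]
g.add((word, RDF.type, EX.Word))
g.add((word, EX.hasForm, Literal("annotations")))
g.add((word, EX.hasLemma, Literal("annotation")))
g.add((word, EX.hasPartOfSpeech, EX.CommonNoun))
g.add((word, EX.hasNumber, EX.Plural))

print(g.serialize(format="turtle"))
```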

Relevance:

100.00%

Publisher:

Abstract:

Reading strategies vary across languages according to orthographic depth - the complexity of the grapheme-to-phoneme conversion rules - notably at the level of eye movement patterns. We recently demonstrated that a group of early bilinguals, who learned both languages equally under the age of seven, presented a first fixation location (FFL) closer to the beginning of words when reading in German as compared with French. Since German is known to be orthographically more transparent than French, this suggested that different strategies were being engaged depending on the orthographic depth of the language used: opaque languages induce a global reading strategy, and transparent languages force a local/serial strategy. Moreover, pseudo-words were processed using a local strategy in both languages, suggesting that the link between word forms and their lexical representation may also play a role in selecting a specific strategy. In order to test whether corresponding effects appear in late bilinguals with low proficiency in their second language (L2), we present a new study in which we recorded eye movements while two groups of late German-French and French-German bilinguals read aloud isolated French and German words and pseudo-words. Since a transparent reading strategy is local and serial, with a high number of fixations per stimulus, and the level of the bilingual participants' L2 is low, the impact of language opacity should be observed in the L1. We therefore predicted a global reading strategy if the bilinguals' L1 was French (FFL close to the middle of the stimulus with fewer fixations per stimulus) and a local, serial reading strategy if it was German. Thus, the L2 of each group, as well as pseudo-words, should also require a local and serial reading strategy. Our results confirmed these hypotheses, suggesting that global word processing is only achieved by bilinguals with an opaque L1 when reading in an opaque language; the low level in the L2 gives way to a local and serial reading strategy. These findings stress the fact that reading behavior is influenced not only by the linguistic mode but also by top-down factors, such as readers' proficiency.

Relevance:

100.00%

Publisher:

Abstract:

"UILU-ENG 77 1719."