870 results for: Language change. Linguistic assessment. Pronominal usage. Grammar teaching
Abstract:
This paper presents a study of the role of grammar in online interactions conducted in Portuguese and in English between Brazilian and English-speaking participants, with the aim of teaching Portuguese as a foreign language (PFL). The interactions took place via text chat and MSN Messenger and generated audio and video data for language analysis. Grammar is addressed from two perspectives, an inductive and a deductive approach, in order to investigate the relevance of systematizing grammar rules in the process of learning PFL in teletandem interactions.
Abstract:
This study presents and discusses some results of research involving the use of English subtitles for news videos from the website Reuters.com (http://www.reuters.com) for pedagogical purposes in a Brazilian context (Academic English for Journalism). The research was carried out over two semesters at UNESP (Universidade Estadual Paulista Júlio de Mesquita Filho). The professor in charge of the study chose Journalism students as the audience to whom the videos would be presented. The assumptions of several theorists and experts in Audiovisual Translation were adopted as the theoretical framework. The first step of the study was an assessment of the syllabus of each course, which guided the choice of the most relevant and interesting videos for the students. After evaluating the students' academic and professional interests, we selected a set of videos and inserted appropriate subtitles, following strategies suggested by Panayota Georgakopoulou and Henrik Gottlieb. Finally, we presented the videos during the English classes. They were first shown without subtitles, in order to gauge the students' level of comprehension, and then shown again with English subtitles. As we had initially assumed, the students did not fully grasp specific details during the first presentation: they relied only on their previous knowledge and the visual cues to reach a superficial understanding of the news. Once the subtitles were added, the process of communication was finally accomplished.
Abstract:
The use of patient-orientated questionnaires is of utmost importance in assessing the outcome of spine surgery. Standardisation, using a common set of outcome measures, is essential to aid comparisons across studies and registries. The Core Outcome Measures Index (COMI) is a short, multidimensional outcome instrument validated for patients with spinal disorders. This study aimed to produce a Brazilian-Portuguese version of the COMI. A cross-cultural adaptation of the COMI into Brazilian-Portuguese was carried out using established guidelines. 104 outpatients with chronic low back pain (LBP, > 3 months) were recruited from a Public Health Spine Medical Care Centre. They completed a questionnaire booklet containing the newly translated COMI and other validated symptom-specific questionnaires: the Oswestry Disability Index (ODI), the Roland Morris disability scale (RM), and a pain visual analogue scale. All patients completed a second questionnaire within 7-10 days to assess reproducibility. The COMI summary score displayed minimal floor and ceiling effects. On re-test, the responses for each individual domain of the COMI were within one category in 98% of patients for the domain 'function', 96% for 'symptom-specific well-being', 97% for 'general quality of life', 99% for 'social disability' and 100% for 'work disability'. The intraclass correlation coefficients (ICC2,1) for the COMI pain and COMI summary scores were 0.91-0.96, which compared favourably with the corresponding values for the RM (ICC, 0.99) and ODI (ICC, 0.98). The standard error of measurement (SEM) for the COMI was 0.6, giving a "minimum detectable change" (MDC95%) of approximately 1.7 points, i.e. the minimum change that can be considered "real change" beyond measurement error. The COMI scores correlated as hypothesised (Rho, 0.4-0.8) with the other symptom-specific questionnaires. The reproducibility of the Brazilian-Portuguese version of the COMI was comparable to that of other language versions. The COMI scores correlated in the expected manner with existing, longer symptom-specific questionnaires, suggesting good convergent validity for the COMI. The Brazilian-Portuguese COMI represents a valuable tool for Brazilian study centres in future multicentre clinical studies and surgical registries.
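For reference, the "minimum detectable change" quoted above follows the standard formulas SEM = SD x sqrt(1 - ICC) and MDC95 = 1.96 x sqrt(2) x SEM; the short Python sketch below is illustrative only (it is not part of the study) and simply reproduces the approximately 1.7-point figure from the reported SEM of 0.6.

```python
import math

def sem_from_icc(sd: float, icc: float) -> float:
    """Standard error of measurement from the score SD and the test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem: float) -> float:
    """Minimum detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem

# Using the SEM of 0.6 reported for the COMI summary score:
print(round(mdc95(0.6), 2))  # -> 1.66, i.e. roughly 1.7 points
```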
Abstract:
Most accounts of child language acquisition use as analytic tools adult-like syntactic categories and schemas (formal grammars) with little concern for whether they are psychologically real for young children. Recent research has demonstrated, however, that children do not operate initially with such abstract linguistic entities, but instead operate on the basis of concrete, item-based constructions. Children construct more abstract linguistic constructions only gradually – on the basis of linguistic experience in which frequency plays a key role – and they constrain these constructions to their appropriate ranges of use only gradually as well – again on the basis of linguistic experience in which frequency plays a key role. The best account of first language acquisition is provided by a construction-based, usage-based model in which children process the language they experience in discourse interactions with other persons, relying explicitly and exclusively on social and cognitive skills that children of this age are known to possess.
Abstract:
During the last decade, medical education in the German-speaking world has been striving to become more practice-oriented. This is currently being achieved in many schools through the implementation of simulation-based instruction in Skills Labs. Simulators are thus an essential part of this type of medical training, and their acquisition and operation by a Skills Lab require a large outlay of resources. Therefore, the Practical Skills Committee of the Medical Education Society (GMA) introduced a new project, which aims to improve the flow of information between the Skills Labs and enable a transparent assessment of the simulators via an online database (the Simulator Network).
Abstract:
The European Union has been promoting linguistic diversity for many years as one of its main educational goals. This is an element that facilitates student mobility and student exchanges between different universities and countries and enriches the education of young undergraduates. In particular, a higher degree of competence in the English language is becoming essential for engineers, architects and researchers in general, as English has become the lingua franca that opens up horizons to internationalisation and the transfer of knowledge in today’s world. Many experts point to the Integrated Approach to Contents and Foreign Languages System as being an option that has certain benefits over the traditional method of teaching a second language that is exclusively based on specific subjects. This system advocates teaching the different subjects in the syllabus in a language other than one’s mother tongue, without prioritising knowledge of the language over the subject. This was the idea that in the 2009/10 academic year gave rise to the Second Language Integration Programme (SLI Programme) at the Escuela Arquitectura Técnica in the Universidad Politécnica Madrid (EUATM-UPM), just at the beginning of the tuition of the new Building Engineering Degree, which had been adapted to the European Higher Education Area (EHEA) model. This programme is an interdisciplinary initiative for the set of subjects taught during the semester and is coordinated through the Assistant Director Office for Educational Innovation. The SLI Programme has a dual goal; to familiarise students with the specific English terminology of the subject being taught, and at the same time improve their communication skills in English. A total of thirty lecturers are taking part in the teaching of eleven first year subjects and twelve in the second year, with around 120 students who have voluntarily enrolled in a special group in each semester. During the 2010/2011 academic year the degree of acceptance and the results of the SLI Programme have been monitored. Tools have been designed to aid interdisciplinary coordination and to analyse satisfaction, such as coordination records and surveys. The results currently available refer to the first and second year and are divided into specific aspects of the different subjects involved and into general aspects of the ongoing experience.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the various tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
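As an aside, a minimal sketch of such a POS-tagging module is shown below, using the NLTK library purely for illustration (NLTK is an assumption here, not a tool discussed in this work, and its resource names may vary slightly across versions):

```python
import nltk

# Download the tokenizer and tagger models (resource names may differ in newer NLTK releases).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "Linguistic annotation tools are important assets."
tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'), ('are', 'VBP'), ...]
```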
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
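By way of illustration only, one simple way to combine the annotations that several tools produce for a common level is token-level majority voting; the sketch below uses hypothetical, pre-aligned tag sequences and is not the combination mechanism developed in this work:

```python
from collections import Counter

def combine_annotations(*tag_sequences):
    """Majority vote over the tags that several taggers assign to each token
    (the sequences are assumed to be aligned token by token)."""
    combined = []
    for tags_for_token in zip(*tag_sequences):
        tag, _count = Counter(tags_for_token).most_common(1)[0]
        combined.append(tag)
    return combined

# Hypothetical outputs of three taggers for the same four tokens:
tagger_a = ["DET", "NOUN", "VERB", "NOUN"]
tagger_b = ["DET", "NOUN", "NOUN", "NOUN"]  # contains one error
tagger_c = ["DET", "NOUN", "VERB", "NOUN"]
print(combine_annotations(tagger_a, tagger_b, tagger_c))
# -> ['DET', 'NOUN', 'VERB', 'NOUN']
```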
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e. the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). As stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have so far been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
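As a toy illustration of (ii), tool-specific tagsets can be mapped onto a shared vocabulary; the (partial) mappings below are invented for illustration and do not correspond to the actual schemas handled in this work:

```python
# Partial, illustrative mappings from two tool-specific POS tagsets
# to a common set of category labels.
PENN_TO_COMMON = {"NN": "NOUN", "NNS": "NOUN", "VB": "VERB", "VBZ": "VERB", "JJ": "ADJ"}
TREETAGGER_TO_COMMON = {"NN": "NOUN", "NNS": "NOUN", "VV": "VERB", "VVZ": "VERB", "JJ": "ADJ"}

def unify(tag: str, mapping: dict) -> str:
    """Map a tool-specific tag onto the common vocabulary ('X' if unknown)."""
    return mapping.get(tag, "X")

print(unify("VBZ", PENN_TO_COMMON), unify("VVZ", TREETAGGER_TO_COMMON))  # VERB VERB
```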
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and, when possible, solve) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
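To give a flavour of what such a hybrid annotation might look like in practice (this is a generic sketch, not the OntoTag model itself), the Python snippet below uses the rdflib library to link a text span to linguistic categories expressed as ontological terms; the namespace and property names are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/annotation#")  # invented vocabulary

g = Graph()
ann = URIRef("http://example.org/doc1#token3")    # an annotation of one token
g.add((ann, RDF.type, EX.Annotation))
g.add((ann, EX.coversText, Literal("recognizers")))
g.add((ann, EX.hasPartOfSpeech, EX.Noun))         # linguistic category as an ontological term
g.add((ann, EX.hasNumber, EX.Plural))

print(g.serialize(format="turtle"))
```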
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based