965 results for Text-base vocabulary
Abstract:
Current image database metadata schemas require users to adopt a specific text-based vocabulary. Text-based metadata is good for searching but not for browsing. Existing image-based search facilities, on the other hand, are highly specialised and so suffer similar problems. Wexelblat's semantic dimensional spatial visualisation schemas go some way towards addressing this problem by making both searching and browsing more accessible to the user in a single interface. But the question of how and what initial metadata to enter a database remains. Different people see different things in an image and will organise a collection in equally diverse ways. However, we can find some similarity across groups of users regardless of their reasoning. For example, a search on Amazon.com returns other products also, based on an averaging of how users navigate the database. In this paper, we report on applying this concept to a set of images for which we have visualised them using traditional methods and the Amazon.com method. We report on the findings of this comparative investigation in a case study setting involving a group of randomly selected participants. We conclude with the recommendation that in combination, the traditional and averaging methods would provide an enhancement to current database visualisation, searching, and browsing facilities.
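The Amazon.com-style averaging described above can be approximated by simple co-occurrence counting over users' browsing sessions: items browsed together often are suggested together. The sessions, image names, and function names in this sketch are hypothetical illustrations, not taken from the study:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical data: each session is the set of images one user browsed together.
sessions = [
    {"img_a", "img_b", "img_c"},
    {"img_a", "img_b"},
    {"img_b", "img_c"},
    {"img_a", "img_d"},
]

def co_navigation_counts(sessions):
    """Count how often each pair of images appears in the same session."""
    counts = defaultdict(Counter)
    for session in sessions:
        for x, y in combinations(sorted(session), 2):
            counts[x][y] += 1
            counts[y][x] += 1
    return counts

def also_viewed(item, counts, n=2):
    """Return the n images most often co-browsed with `item`."""
    return [other for other, _ in counts[item].most_common(n)]

counts = co_navigation_counts(sessions)
print(also_viewed("img_a", counts))  # img_b co-occurs most with img_a
```

Averaged over many users, these counts give the "other products also" behaviour without any manually entered metadata, which is why the paper proposes it as a complement to, rather than a replacement for, traditional text-based schemas.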
Abstract:
Our work aims to present the organisation of a Base Vocabulary for Primary Education in Angola, from the perspective of teaching and learning Portuguese in the Angolan context, with a view to contributing to and improving strategies for teaching and learning vocabulary, not only in the classroom but also in its application to teaching materials such as learner's dictionaries. We briefly present the organisation and development context of the Angolan Education System, in order to situate the study plans of Primary Education. Approaches to the basic concepts of Lexicology, Lexicography, lexicon, culture, vocabulary, and Lexiculture were important to this work. We used the Hyperbase software as the methodology for collecting, selecting, and organising the words in the school textbooks. The software was used to analyse the textual data in order to propose a learner's dictionary, following the model presented at the end of the work.
Abstract:
Using free text and controlled vocabulary in Medline and CINAHL
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
A pre-test, post-test, quasi-experimental design was used to examine the effects of student-centered and traditional models of reading instruction on outcomes of literal comprehension and critical thinking skills. The sample for this study consisted of 101 adult students enrolled in a high-level developmental reading course at a large, urban community college in the Southeastern United States. The experimental group consisted of 48 students, and the control group consisted of 53 students. Students in the experimental group were limited in the time spent reading a course text of basic skills, with instructors using supplemental materials such as poems, news articles, and novels. Discussions, the reading-writing connection, and student choice in material selection were also part of the student-centered curriculum. Students in the control group relied heavily on a course text and vocabulary text for reading material, with great focus placed on basic skills. Activities consisted primarily of multiple-choice questioning and quizzes. The instrument used to collect pre-test data was Descriptive Tests of Language Skills in Reading Comprehension; post-test data were taken from the Florida College Basic Skills Exit Test. A MANCOVA was used as the statistical method to determine if either model of instruction led to significantly higher gains in literal comprehension skills or critical thinking skills. A paired samples t-test was also used to compare pre-test and post-test means. The results of the MANCOVA indicated no significant difference between instructional models on scores of literal comprehension and critical thinking. Neither was there any significant difference in scores between subgroups of age (under 25 and 25 and older) and language background (native English speaker and second-language learner). 
The results of the t-test indicated, however, that students taught under both instructional models made significant gains in both literal comprehension and critical thinking skills from pre-test to post-test.
Abstract:
The aim of this chapter is to take stock of what is known about the relations between reading-comprehension and writing-composition. Two approaches are considered. The first relies on global assessments of the so-called low-level dimensions (the code: transcription and decoding) and the high-level dimensions (comprehension and text production). It reveals strong relations between the former, but rather weak ones between the latter. The second approach relates the components of the two activities: lexicon (in reading and in production), syntax, text structure, etc. It brings out a greater complexity in the relations and a relative independence of the components. It leads to reopening the question of the respective and reciprocal impacts of reading-comprehension on writing-composition. It prompts us to ask what the effects would be of a practice that gives priority to production rather than to reading-comprehension, and to seek intervention methods aimed at improving one on the basis of the other, and vice versa.
Abstract:
Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users prefer personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. The related techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles according to discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag of concepts" rather than a "bag of words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles.
The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Alongside the global and local analyses, a further concept-matching approach is carried out to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are taken as representatives of local information. These features have been proven to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models. The two proposed approaches are both evaluated in a scientific evaluation on the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deploying Pattern Taxonomy Model, and an ontology-based model. The gathered results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements on most information filtering measurements. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models that impact people's daily lives.
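Among the baselines named above, TF-IDF is the simplest: a term is weighted by how often it occurs in a document, discounted by how many documents contain it. The toy corpus and function below are a minimal illustrative sketch, not the thesis's implementation (real systems add stemming, stop-word removal, and smoothing):

```python
import math
from collections import Counter

# Toy corpus standing in for a collection like RCV1 (assumption: plain
# whitespace tokenisation, no stemming or stop-word removal).
docs = [
    "ontology mining for user profiles",
    "user profiles from search logs",
    "ontology concepts from subject headings",
]

def tf_idf(docs):
    """Score each term in each document by term frequency * inverse document frequency."""
    tokenised = [doc.split() for doc in docs]
    n = len(tokenised)
    # Document frequency: in how many documents does each term occur?
    df = Counter(term for doc in tokenised for term in set(doc))
    scores = []
    for doc in tokenised:
        tf = Counter(doc)
        scores.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return scores

scores = tf_idf(docs)
# "mining" occurs in only one document, so it outweighs the shared term "user" there.
assert scores[0]["mining"] > scores[0]["user"]
```

Concept-based profiles of the kind the thesis proposes aim to beat exactly this kind of surface-level term weighting by matching documents at the level of subject-heading concepts rather than raw words.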
Abstract:
This paper seeks to discover in what sense we can classify vocabulary items as technical terms in the later medieval period. In order to arrive at a principled categorization of technicality, distribution is taken as a diagnostic factor: vocabulary shared across the widest range of text types may be assumed to be both prototypical for the semantic field and the most general, and therefore the least technical, since lexical items derive at least part of their meaning from context, a wider range of contexts implying a wider range of senses. A further way of addressing the question of technicality is tested through the classification of the lexis into semantic hierarchies: in the terms of componential analysis, having more components of meaning puts a term lower in the semantic hierarchy and flags it as having a greater specificity of sense, and thus as more technical. The various text types are interrogated through comparison of the number of levels in their hierarchies and the number of lexical items at each level within the hierarchies. Focusing on the vocabulary of a single semantic field, DRESS AND TEXTILES, this paper investigates how four medieval text types (wills, sumptuary laws, petitions, and romances) employ technical terminology in the establishment of the conventions of their genres.
Abstract:
A class of twenty-two grade one children was tested to determine their reading levels using the Stanford Diagnostic Reading Achievement Test. Based on these results and teacher input, the students were paired according to reading ability. The students' ages ranged from six years four months to seven years four months at the commencement of the study. Eleven children were assigned to the language experience group and their partners became the text group. Each member of the language experience group generated a list of eight to-be-learned words. The treatment consisted of exposing the student to a given word three times per session for ten sessions, over a period of five days. The dependent variables were word identification speed, word identification accuracy, and word recognition accuracy. Each member of the text group followed the same procedure using his or her partner's list of words. Upon completion of this training, the entire process was repeated, with members of the text group from the first part becoming members of the language experience group and vice versa. The results suggest that, generally speaking, language experience words are identified faster than text words but that there is no difference in the rate at which these words are learned. Language experience words may be identified faster because their auditory-semantic information is more readily available than in text words. The rate of learning for both types of words, however, may be dictated by the orthography of the to-be-learned word.
Abstract:
This paper carries out a critical discourse analysis (CDA) of the article "¿Chinofobia?" published by the Spanish newspaper El País. It applies the theory of racist discourse in the media as formulated by Teun A. van Dijk. Its hypothesis is that the article, which supposedly analyses discrimination against the Chinese minority in Spain, covertly blames the Chinese minority itself for the discrimination. Applying CDA methodology as exemplified in previous studies by van Dijk, the paper analyses the article on global and local levels, and delineates its mental model. The global level analysis describes the article in terms of macropropositions and proposes a macrostructure: the self-alienation of the Chinese minority. The paper then analyses how the macrostructure is reinforced on the local level through micropropositions by examining 1) how vocabulary serves the negative presentation and othering of the Chinese minority, 2) how strategies of mitigation minimise discrimination by employing imprecise and vague language, 3) how quotes are used to give coherence and force to the macrostructure, 4) how implications associate the Chinese minority with criminality, and 5) how stereotyped beliefs about the Chinese minority are presented as common sense (presuppositions) and fallaciously argued. Finally, the paper delineates the mental model: the presuppositions about integration, and the implicit warning that minorities should integrate or they will be discriminated against.
Abstract:
By the late J. T. Marshall. Edited from the author's manuscript by J. Barton Turner, with an introduction by A. Mingana.
Abstract:
National Highway Traffic Safety Administration, Office of Research and Development, Washington, D.C.