996 results for Controlled vocabulary
Abstract:
Using free text and controlled vocabulary in Medline and CINAHL
Abstract:
Introduction: the statistical record used in the Field Academic Programs (PAC, for its initials in Spanish) of Rehabilitation is marked by generalities in data conceptualization, which hampers reliable guidance for decision making and provides little support for research in rehabilitation and disability. In response, the Research Group in Rehabilitation and Social Integration of Persons with Disabilities has worked on the creation of a registry to characterize the population seen by the Rehabilitation PAC. This registry incorporates the International Classification of Functioning, Disability and Health (ICF) of the WHO. Methodology: the proposed methodology comprises two phases: the first is a descriptive study and the second applies the Methontology methodology, which integrates the identification and development of ontology knowledge. This article describes the progress made in the second phase. Results: the development of the registry in 2008, as an information system, included a documentary review and the analysis of possible use scenarios to help guide the design and development of the SIDUR system. The system uses the ICF because it is a terminology standard that reduces ambiguity and eases the transformation of health facts into data translatable to information systems. The record comprises three categories and a total of 129 variables. Conclusions: SIDUR facilitates access to accurate and up-to-date information, useful for decision making and research.
Abstract:
There are three key components for developing a metadata system: a container structure laying out the key semantic issues of interest and their relationships; an extensible controlled vocabulary providing possible content; and tools to create and manipulate that content. While metadata systems must allow users to enter their own information, the use of a controlled vocabulary both imposes consistency of definition and ensures comparability of the objects described. Here we describe the controlled vocabulary (CV) and metadata creation tool built by the METAFOR project for use in describing the climate models, simulations and experiments of the fifth Coupled Model Intercomparison Project (CMIP5). The CV and the resulting tool chain introduced here are designed for extensibility and reuse and should find applicability in many more projects.
Abstract:
Proceedings paper published by Society of American Archivists. Presented at conference in 2015 in Cleveland, OH (http://www2.archivists.org/proceedings/research-forum/2015/agenda#papers). Published by SAA in 2016.
Abstract:
This paper describes the process of creating a controlled vocabulary which can be used to systematically analyse the copyright transfer agreements (CTAs) of journal publishers with regard to self-archiving. The analysis formed the basis of the newly created Copyright Knowledge Bank of publishers’ self-archiving policies. Self-archiving terms appearing in publishers’ CTAs were identified and classified, and these were then simplified, merged, and discarded to form a definitive list. The controlled vocabulary consists of three categories that describe ‘what’ can be self-archived, the ‘conditions’ of self-archiving and the ‘restrictions’ of self-archiving. Condition terms include specifications such as ‘where’ an article can be self-archived; restriction terms include specifications such as ‘when’ the article can be self-archived. Additional information on any of these terms appears in ‘free-text’ fields. Although this controlled vocabulary provides an effective way of analysing CTAs, it will need to be continually reviewed and updated in light of any major new additions to the terms used in publishers’ copyright and self-archiving policies.
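The three-category structure the abstract lays out (what / conditions / restrictions, plus free-text fields) maps naturally onto a small record type. The sketch below is an illustrative assumption about how such a policy record might look; the example terms are generic, not the actual Copyright Knowledge Bank vocabulary.

```python
# Hedged sketch of the three-category CTA vocabulary described above.
# Field names and example values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class SelfArchivingPolicy:
    what: list[str]                                # e.g. preprint, postprint
    conditions: dict[str, str] = field(default_factory=dict)   # e.g. 'where'
    restrictions: dict[str, str] = field(default_factory=dict) # e.g. 'when'
    free_text: str = ""                            # unstructured notes


policy = SelfArchivingPolicy(
    what=["preprint", "postprint"],
    conditions={"where": "institutional repository"},
    restrictions={"when": "12-month embargo after publication"},
    free_text="Publisher PDF may not be used.",
)
print(policy.what)  # ['preprint', 'postprint']
```

Keeping conditions and restrictions as keyed terms rather than prose is what makes publisher policies systematically comparable, which is the point of the controlled vocabulary.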
Abstract:
Democratic governments raise taxes and charges and spend the revenue on delivering peace, order and good government. The delivery process begins with a legislature, which provides a framework of legally enforceable rules enacted according to the government’s constitution. These rules confer rights and obligations that allow particular people to carry on particular functions at particular places and times. Metadata standards as applied to public records contain information about the functioning of government as distinct from the non-government sector of society. Metadata standards apply to database construction: data entry, storage, maintenance, interrogation and retrieval depend on a controlled vocabulary to enable accurate retrieval of suitably catalogued records in a global information environment. Queensland’s socioeconomic progress now depends in part on technical efficiency in database construction to address queries about who does what, where and when; under what legally enforceable authority; and how the evidence of those facts is recorded. The Survey and Mapping Infrastructure Act 2003 (Qld) addresses technical aspects of ‘where’ questions – typically the officially recognised name of a place and a description of its boundaries. The current 10-year review of the Survey and Mapping Regulation 2004 provides a valuable opportunity to consider whether the Regulation makes sense in the context of a number of later laws concerned with the management of Public Sector Information (PSI), as well as policies for ICT hardware and software procurement. Removing ambiguities about how official place names are to be treated on a whole-of-government basis can achieve some short-term goals. Longer-term goals depend on a more holistic approach to information management – and current aspirations for more open government and community engagement are unlikely to be realised without such a longer-term vision.
Abstract:
Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills and they have an agile ability to apply this knowledge to interacting with and having an impact on the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed using NVivo9 and manually. Grounded theory allowed categories and themes to emerge from the data. 
Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. These were: information environment: the total information environment is perceived and understood; information structures: content, index structures, and retrieval algorithms are understood; information vocabularies: fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools. The fourth threshold concept was concept fusion, the integration of the other three threshold concepts and further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (dancing property), and profound ontological shift (identity as searcher). In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes which are likewise long recognized in LIS research as present in professional searchers. 
The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.
Abstract:
In recent years, the Internet has become an essential medium for the dissemination of multilingual resources. However, linguistic differences often constitute a major obstacle to the exchange of scientific, cultural, educational and commercial documents. In addition to this linguistic diversity, there is the growing development of databases and collections composed of different types of textual or multimedia documents, which further complicates the retrieval process. In general, the image is considered linguistically “free”. However, indexing with controlled or free (uncontrolled) vocabulary confers on the image a linguistic status on a par with any textual document, which can affect retrieval. The goal of our research is to verify whether differences exist between the characteristics of two indexing approaches for ordinary images depicting everyday objects, one using controlled vocabulary and the other free vocabulary, and between the results obtained at retrieval time. This study assumes that the two indexing approaches share common characteristics but also exhibit differences that can influence image retrieval. This research makes it possible to verify whether either of these indexing approaches outperforms the other in terms of effectiveness, efficiency and image-searcher satisfaction in a multilingual retrieval context.
To reach the goal set by this research, two specific objectives are defined: to identify the characteristics of each of the two approaches to indexing ordinary images depicting everyday objects that can influence retrieval in a multilingual context, and to expose the differences in effectiveness, efficiency and image-searcher satisfaction when retrieving ordinary images depicting everyday objects indexed using approaches with varied characteristics, in a multilingual context. Three data-collection methods are used: analysis of the terms used to index the images; a simulation of the retrieval of a set of images indexed according to each of the indexing forms under study, carried out with sixty respondents; and a questionnaire administered to participants during and after the retrieval simulation. Four measures are defined for this research: image-retrieval effectiveness, measured by the retrieval success rate calculated from the number of images retrieved; temporal efficiency, measured by the time, in seconds, spent per image retrieved; human efficiency, measured by human effort, as the number of queries formulated per image retrieved; and image-searcher satisfaction, measured by self-assessment after each retrieval task. This research shows that, for the indexing of ordinary images depicting everyday objects, the indexing approaches studied differ fundamentally from one another terminologically, perceptually and structurally. Furthermore, the analysis of the characteristics of the two indexing approaches reveals that if the indexing language is changed, the characteristics vary little within a given indexing approach.
Finally, this research highlights that the two indexing approaches under study offer different retrieval performance for ordinary images depicting everyday objects in terms of effectiveness, efficiency and image-searcher satisfaction, depending on the approach and the language used for indexing.
Abstract:
This thesis deals with subject analysis in the academic setting. Two general approaches are first examined: the document-centred approach (first chapter), which predominates in the library science tradition, and the user-centred approach (second chapter), influenced by the development of tools most often associated with Web 2.0. The opposition between these two approaches reflects a dichotomy at the heart of the notion of subject, namely its objective and subjective dimensions. This thesis therefore takes the form of a dissertation whose main advantage is to consider both the important achievements belonging to the library science tradition and the more recent developments having a significant impact on the evolution of subject analysis in academic settings. Our hypothesis is that these two general tendencies must be brought into relief in order to deepen the matching problem, which refers to the difficulty of reconciling the vocabulary users employ in their document searches with that produced by subject analysis (subject metadata). In the third chapter, we examine certain particularities of the use of documentation in academic settings in order to identify the possibilities and requirements of subject analysis in such an environment. Drawing on elements of domain analysis and the analytico-synthetic approach, the aim is to strengthen the potential interaction between users and indexers with respect to the vocabulary used on each side.
Abstract:
The central concern of this dissertation is the challenge of facilitating access to the information contained in the bibliographic database of the João Paulo II University Library (BUJPII) of the Universidade Católica Portuguesa (UCP), whose subject content has until now been represented by the Universal Decimal Classification (UDC), an indexing language not very accessible to most of our users. These are mostly university students who consider it an unfriendly search tool because they are barely, if at all, familiar with this type of numerical classification, preferring keywords for accessing the subject content of works. With this objective in mind, we undertook this research by harmonizing (mapping) the UDC notations used to classify the BUJPII collection with a simplified list of Library of Congress Subject Headings, in order to begin a process of assigning subject headings, mapped from the UDC notations, to part of that collection, whose content retrieval has so far relied on the Universal Decimal Classification. The study was conducted experimentally on a sample of monographs from areas that were already classified but not yet indexed, whose bibliographic records are in the database of the João Paulo II University Library. The project consisted of assigning subject headings translated manually into Portuguese from the English list of Library of Congress Subject Headings (LCSH), aiming for headings semantically as close as possible to the subjects corresponding to the Universal Decimal Classification (UDC) notations with which the monographs had previously been classified.
The work was first carried out manually and then loaded into the Horizon software, this being the integrated library management system in use at the João Paulo II University Library; the future goal is to index all areas of its bibliographic collection, as a privileged complementary means of access to information.
Abstract:
This dissertation discusses indexing in personal archives, taking as its field of analysis the personal archive of Ubaldino do Amaral Fontoura, along with the archival-theoretical aspects that must be considered during normalization, standardization and the construction of a controlled vocabulary, in order to better serve the user. The research also examines personal archives as archives of memory, and how the definition of access points affects the framing and erasure of memory.
Abstract:
Addresses the organization and retrieval of information in the specific case of the collection of the Centro de Pesquisa e História Contemporânea do Brasil (CPDOC). Bases this analysis on a case study of the use of the institution’s reference service, provided by the Reading Room, and of the use of the Accessus database. Draws a profile of the users of the institution’s collection, as well as a research profile of these individuals, by mapping user behaviour with the Accessus tool. Discusses the context in which the database was developed and investigates the creation of the controlled language for history and related sciences that underpinned Accessus. Problematizes the accessibility of this language to an audience outside the field, pairing this discussion with an analysis of the different user profiles. Discusses how the CPDOC collection is indexed and prompts reflection on making this process directly responsive to user profiles.
Abstract:
The terminological performance of the descriptors representing the Information Science domain in the SIBi/USP Controlled Vocabulary was evaluated in manual, automatic and semi-automatic indexing processes. It can be concluded that, in order to perform better (i.e., to adequately represent the content of the corpus), the current Information Science descriptors of the SIBi/USP Controlled Vocabulary must be extended and contextualized by means of terminological definitions so that users’ information needs are fulfilled.
Abstract:
Graduate Program in Information Science - FFC
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)