61 results for thesauri


Relevance:

10.00%

Publisher:

Abstract:

Transportation Department, Bureau of Transportation Statistics, Washington, D.C.

Relevance:

10.00%

Publisher:

Abstract:

Transportation Department, Bureau of Transportation Statistics, Washington, D.C.

Relevance:

10.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance:

10.00%

Publisher:

Abstract:

"Conspectus librorum": p. 13-17.

Relevance:

10.00%

Publisher:

Abstract:

Linguarum Vett. septentrionalium thesaurus grammatico-criticus et archaeologicus. 1705 : Linguarum Vett. septentrionalium thesauri ... pars prima[-tertia]. 1703 ; De antiquae litteraturae septentrionalis utilitate usu ... Dissertatio epistolaris ad Bartholomaeum Showere. 1703 ; Numismata Anglo-Saxonica & Anglo-Danica breviter illustrata ab Andrea Fountaine. 1705 ; Antiquae literaturae septentrionalis liber alter seu Humfredi Wanleii ... 1705 ; Indices totius operis. 1703.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present an innovative topic segmentation system based on a new informative similarity measure that takes word co-occurrence into account in order to avoid depending on existing linguistic resources such as electronic dictionaries or lexico-semantic databases such as thesauri or ontologies. Topic segmentation is the task of breaking documents into topically coherent multi-paragraph subparts, and it has been used extensively in information retrieval and text summarization. In particular, our architecture proposes a language-independent topic segmentation system that addresses three main problems evidenced by previous research: systems based solely on lexical repetition, which show reliability problems; systems based on lexical cohesion that rely on existing linguistic resources, which are usually available only for dominant languages and consequently do not apply to less favored languages; and systems that require previously harvested training data. For that purpose, we use only statistics on words and word sequences computed from a set of texts. This provides a flexible solution that may narrow the gap between dominant languages and less favored languages, thus allowing equivalent access to information.
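
A minimal sketch, in Python, of what a resource-free, co-occurrence-based segmenter of this kind might look like, assuming a TextTiling-style comparison of adjacent sentence blocks; the paper's actual similarity measure is not reproduced, and all function names, toy sentences, and the threshold are invented for illustration:

```python
# Sketch of topic segmentation using only word statistics from the input texts.
# Not the paper's exact measure: block comparison and weights are illustrative.
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sentences, window=1):
    """Count how often two word types appear within the same small sentence window."""
    counts = Counter()
    for i in range(len(sentences)):
        window_words = set()
        for sent in sentences[i:i + window + 1]:
            window_words.update(sent)
        for a, b in combinations(sorted(window_words), 2):
            counts[(a, b)] += 1
    return counts

def block_similarity(block_a, block_b, cooc):
    """Similarity of two sentence blocks: shared words plus co-occurrence links."""
    words_a = set().union(*block_a)
    words_b = set().union(*block_b)
    shared = len(words_a & words_b)
    linked = sum(1 for a in words_a for b in words_b
                 if a != b and cooc.get((min(a, b), max(a, b)), 0) > 0)
    denom = (len(words_a) * len(words_b)) ** 0.5 or 1.0
    return (shared + 0.5 * linked) / denom

def segment(sentences, block=3, threshold=0.15):
    """Return indices of sentence gaps whose low similarity suggests a topic boundary."""
    cooc = cooccurrence_counts(sentences)
    boundaries = []
    for gap in range(block, len(sentences) - block + 1):
        left = sentences[gap - block:gap]
        right = sentences[gap:gap + block]
        if block_similarity(left, right, cooc) < threshold:
            boundaries.append(gap)
    return boundaries

# Toy usage: each "sentence" is a token list; no dictionary or thesaurus is consulted.
docs = [["car", "engine", "fuel"], ["engine", "repair"], ["fuel", "car"],
        ["fuel", "tank", "car"], ["engine", "oil"],
        ["recipe", "flour", "sugar"], ["sugar", "cake"], ["cake", "flour", "oven"],
        ["oven", "recipe"], ["flour", "butter"]]
print(segment(docs))
```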

Relevance:

10.00%

Publisher:

Abstract:

We study the codification and organization of the field of Communication Sciences, focusing on the Unesco codes. The aim is to propose a change in these codes, since the way a scientific domain is classified and organized has operational and epistemological consequences for scientific work itself. This paper examines the current classification, which shows a scarce and scattered presence of terms linked to Communication. It describes the practical and theoretical difficulties involved in its reorganization, the possible sources (curricula, conferences, scientific journals, documentary proposals and keywords) and the working methods that can be employed, taking the domains of Knowledge Organization and Communication as theoretical bases. Finally, two different disciplinary areas (History of Communication and Communication Technologies) are analysed using information on undergraduate and official master's course subjects from 12 Spanish universities, collected in a database. It is also observed that this kind of proposal requires the knowledge derived from documentary instruments such as classifications and thesauri.

Relevance:

10.00%

Publisher:

Abstract:

This paper analyses how the topic of the silent statue is dealt with in Neo-Latin literature. The subject matter comes from the epigrams about Pythagoras in the Palatine Anthology. There are numerous Neo-Latin imitations of this topic, which are complex because various sources are used at the same time. The authors focus on an active reading of the epigrams of their predecessors, applying the traditional motif to new subjects and adapting it to the religious theme.

Relevance:

10.00%

Publisher:

Abstract:

Peter of Ravenna's Artificiosa memoria siue Phoenix circulated widely in sixteenth-century Europe. Two keys explain its success: the reputation as a prodigious memoriser that he managed to build through his public displays of memory, and the use of the emotions in formulating mnemonic rules based on humour and eroticism. However, shortly before his death in 1508 he published some brief Additiones quaedam ad artificiosam memoriam in which he adds a few new rules and, above all, renounces the rule that advises using the image of beautiful young women to construct mnemonic scenes. This sort of retractatio is explained in the context of the controversy he maintained with some theologians of Cologne.

Relevance:

10.00%

Publisher:

Abstract:

This work proposes a model for representing the theoretical and practical notions of terminology, and the relations between them, in the form of a thesaurus. According to the ISO 25964-1:2011 standard, "a thesaurus is a controlled and structured vocabulary in which concepts are represented by terms, organized so that the relationships between concepts are made explicit (…)". Our objective is to create a pedagogical tool following a theoretical reflection encompassing the different notional perspectives within this discipline. The issues raised by classifying the concepts of certain fields of knowledge (in particular those giving rise to differing perspectives) have not been examined in sufficient depth either in the terminology literature or in the literature on thesauri. How can we describe concepts that are subject to theoretical disagreements between different schools of thought? How can we classify the various relations that hold between the theoretical concepts and the practical applications of a discipline? Added to these questions is that of how to take such difficulties into account in a thesaurus. We begin by delimiting and organizing the salient concepts of the domain. Then, using a corpus of publications associated with different approaches to terminology, we study the linguistic realizations of these concepts and their relations in context, with the aim of describing, classifying and defining them. We then encode these data with thesaurus management software, in compliance with the applicable ISO standards. The final step is to define how these data are visualized so as to make them user-friendly and understandable. Finally, we present the fundamental characteristics of the Thésaurus de la terminologie. We analysed and represented a sample of 45 concepts and their related terms. The various phenomena associated with these descriptors, such as multidimensionality, conceptual variation and denominative variation, are also represented in our thesaurus.
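
A minimal sketch, assuming rdflib and the SKOS vocabulary (which closely mirrors the ISO 25964 thesaurus model), of how a few such descriptors and their broader/narrower/related links might be encoded; the concepts, URIs and notes below are invented stand-ins, not the thesis's actual 45 descriptors:

```python
# Illustrative only: a handful of invented descriptors encoded with SKOS,
# covering the BT/NT/RT structure and free-text notes that ISO 25964 prescribes.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

TT = Namespace("http://example.org/thesaurus-terminologie/")  # hypothetical base URI

g = Graph()
g.bind("skos", SKOS)
g.bind("tt", TT)

scheme = TT["scheme"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))
g.add((scheme, SKOS.prefLabel, Literal("Thésaurus de la terminologie", lang="fr")))

def add_concept(slug, label_fr, broader=None):
    """Register a concept with a French preferred label and an optional broader term."""
    c = TT[slug]
    g.add((c, RDF.type, SKOS.Concept))
    g.add((c, SKOS.prefLabel, Literal(label_fr, lang="fr")))
    g.add((c, SKOS.inScheme, scheme))
    if broader is not None:
        g.add((c, SKOS.broader, broader))
        g.add((broader, SKOS.narrower, c))
    return c

# Invented descriptors standing in for the analysed concepts.
terme = add_concept("terme", "terme")
concept = add_concept("concept", "concept")
variation = add_concept("variation-denominative", "variation dénominative", broader=terme)
g.add((terme, SKOS.related, concept))
# Free-text note recording that schools of thought treat the notion differently.
g.add((variation, SKOS.scopeNote,
       Literal("Le traitement de cette notion varie selon l'école de pensée.", lang="fr")))

print(g.serialize(format="turtle"))
```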

Relevance:

10.00%

Publisher:

Abstract:

Things change. Words change, meaning changes, and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a "model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies" in RDF. How SKOS handles versioning in concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose is to create a conceptual mechanism that will track these changes and, in so doing, enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
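
A minimal sketch of the baseline the draft guide is said to offer, built with rdflib; the choice of owl:versionInfo, dcterms:replaces and skos:changeNote as annotations is my assumption about reasonable hooks, not the paper's ontogeny mechanism and not a prescribed SKOS pattern:

```python
# Recording two editions of a concept scheme with OWL/DCTERMS/SKOS annotations.
# Property choices are illustrative assumptions; URIs and labels are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, OWL, RDF, SKOS

EX = Namespace("http://example.org/scheme/")  # hypothetical scheme URIs

g = Graph()
for prefix, ns in (("skos", SKOS), ("owl", OWL), ("dcterms", DCTERMS), ("ex", EX)):
    g.bind(prefix, ns)

edition2 = EX["edition-2"]
edition3 = EX["edition-3"]
for edition, version in ((edition2, "2"), (edition3, "3")):
    g.add((edition, RDF.type, SKOS.ConceptScheme))
    g.add((edition, OWL.versionInfo, Literal(version)))

# Link the new edition back to the one it supersedes.
g.add((edition3, DCTERMS.replaces, edition2))

# A concept whose meaning shifted between editions, noted only in free text.
term = EX["edition-3/automobiles"]
g.add((term, RDF.type, SKOS.Concept))
g.add((term, SKOS.inScheme, edition3))
g.add((term, SKOS.changeNote,
       Literal("Scope narrowed in edition 3; see edition 2 for the earlier use.", lang="en")))

print(g.serialize(format="turtle"))
```

The sketch shows why the paper treats versioning as an open issue: the change is only recorded as an editor's note, so nothing here lets a system search across editions or trace how a concept evolved.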

Relevance:

10.00%

Publisher:

Abstract:

Every indexing language is made up of terms. Those terms have morphological characteristics: some terms consist of a single word, some of two words, some of more. We can also take into account the total number of terms. We can assemble these measures, normalize them, and then cluster indexing languages based on this common set of measures [1]. Cluster analysis reveals discrete groups based on term morphology that comport with the traditional design assumptions that separate ontologies from thesauri and folksonomies.
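
A hedged illustration of that clustering step, assuming scikit-learn; the feature set (shares of one-word, two-word and longer terms plus a size measure) follows the abstract, but the vocabularies and figures are invented:

```python
# Cluster indexing languages on normalized term-morphology measures.
# Data below are made up for illustration, not the study's measurements.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

languages = ["ontology-A", "thesaurus-B", "thesaurus-C", "folksonomy-D", "folksonomy-E"]
# Columns: share of single-word terms, two-word terms, longer terms, total terms.
features = np.array([
    [0.30, 0.45, 0.25, 12000],
    [0.55, 0.35, 0.10, 8000],
    [0.50, 0.38, 0.12, 9500],
    [0.85, 0.12, 0.03, 40000],
    [0.80, 0.15, 0.05, 35000],
], dtype=float)
features[:, 3] = np.log10(features[:, 3])  # damp the raw size measure

X = StandardScaler().fit_transform(features)               # normalize each measure
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for name, label in zip(languages, labels):
    print(f"{name}: cluster {label}")
```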

Relevance:

10.00%

Publisher:

Abstract:

In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based, but human understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the "tyranny" of classes. They attribute a number of problems in information modeling to inherent classification - or the disregard for the fact that instances can be conceptualized independent of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres in information modeling have their own connotation, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling through the layered information model can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema and the horizons of conventional classification and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence like the one Parsons and Wand propose for information modeling.
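
A toy sketch, in Python, of the layered idea attributed here to Parsons and Wand: instances are recorded as bare property carriers, and classes are applied afterwards as predicates (views) over those instances rather than being fixed at creation time; all names and records are invented for illustration:

```python
# Layered modeling sketch: an instance layer of property bearers, with classes
# defined later as membership predicates over those instances. Illustrative only.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Instance:
    """An instance-layer record: properties only, no inherent class."""
    properties: Dict[str, Any] = field(default_factory=dict)

# Class layer: each "class" is simply a predicate over instances.
ClassDef = Callable[[Instance], bool]

def members(cls: ClassDef, instances: List[Instance]) -> List[Instance]:
    """Classify instances on demand instead of at creation time."""
    return [i for i in instances if cls(i)]

items = [
    Instance({"title": "Monolingual thesauri", "subject": "knowledge organization"}),
    Instance({"title": "Topic segmentation", "subject": "information retrieval"}),
    Instance({"name": "Dublin Core", "kind": "metadata standard"}),
]

# Two overlapping classifications layered over the same instances.
is_document = lambda i: "title" in i.properties
is_ko_resource = lambda i: "knowledge organization" in str(i.properties.get("subject", ""))

print(len(members(is_document, items)), len(members(is_ko_resource, items)))
```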

Relevance:

10.00%

Publisher:

Abstract:

Knowledge organization in the networked environment is guided by standards. Standards in knowledge organization are built on principles. For example, NISO Z39.19-1993 Guide to the Construction of Monolingual Thesauri (now undergoing revision) and NISO Z39.85-2001 Dublin Core Metadata Element Set are two standards used in many implementations. Both of these standards were crafted with knowledge organization principles in mind. Therefore it is standards work guided by knowledge organization principles that can affect the design of information services and technologies. This poster outlines five threads of thought that inform knowledge organization principles in the networked environment. An understanding of each of these five threads informs system evaluation. The evaluation of knowledge organization systems should be tightly linked to a rigorous understanding of the principles of construction. Thus some foundational evaluation questions grow from an understanding of standards and principles: on what principles is this knowledge organization system built? How well does this implementation meet the ideal conceptualization of those principles? How does this tool compare to others built on the same principles?