986 results for Metadata repository
Abstract:
Today the higher education system and R&D in science and technology have undergone tremendous changes from the traditional classroom learning system and scholarly communication. A huge volume of academic output and scientific communication is now produced in electronic format, and knowledge management is a key challenge of the current century. Driven by advances in ICT, the open access movement, scholarly communication, institutional repositories, ontologies, the semantic web, Web 2.0, etc. have revolutionized knowledge transactions and knowledge management in the field of science and technology. Higher education has moved into a stage where competitive advantage is gained not just through access to information but, more importantly, from the creation of new knowledge. This paper examines the role of the institutional repository in knowledge transactions in the current higher education scenario.
Abstract:
Anticipating the future growth of video information, archiving of news is an important activity in the visual media industry. As the volume of archives increases, it becomes difficult for journalists to find the appropriate content using current search tools. This paper presents a study of the news extraction systems used in different news channels in Kerala. Semantic web technologies can be applied effectively here, since news archiving shares many of the characteristics and problems of the WWW. Since visual news archives of different media resources follow different metadata standards, interoperability between the resources is also an issue. The World Wide Web Consortium (W3C) has proposed a draft ontology framework for media resources which addresses these interoperability issues. The proposed W3C framework and its drawbacks are also discussed.
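As an illustration of the interoperability idea above, here is a minimal sketch, assuming the rdflib Python library, of mapping records from two differently structured archives onto the W3C Media Ontology (ma-ont) namespace so that one query spans both; the sample records and field names are hypothetical.

```python
# A minimal sketch (not from the paper) of mapping two archives' in-house
# metadata onto the W3C Media Ontology so a single graph query spans both.
# The sample records and field names are assumptions.
from rdflib import Graph, Namespace, Literal, URIRef

MA = Namespace("http://www.w3.org/ns/ma-ont#")

# Hypothetical records exported from two channels' archive systems.
archive_a = [{"headline": "Flood alert in Kuttanad", "clip": "a/1042.mp4"}]
archive_b = [{"titulo": "Harbour strike ends", "file": "b/77.mov"}]

g = Graph()
g.bind("ma", MA)

def add_clip(uri, title, locator):
    res = URIRef(uri)
    g.add((res, MA.title, Literal(title)))      # common title property
    g.add((res, MA.locator, Literal(locator)))  # where the media lives

# Each archive keeps its own field names; only the mapping differs.
for rec in archive_a:
    add_clip(f"urn:archive-a:{rec['clip']}", rec["headline"], rec["clip"])
for rec in archive_b:
    add_clip(f"urn:archive-b:{rec['file']}", rec["titulo"], rec["file"])

# One query now works across both archives.
for row in g.query(
    "SELECT ?clip ?t WHERE { ?clip ma:title ?t }", initNs={"ma": MA}
):
    print(row.clip, row.t)
```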
Abstract:
The South West (S.W.) coast of India is blessed with a series of wetland systems, popularly referred to as backwaters, covering a total area of 46,128.94 ha. These backwaters are internationally renowned for their aesthetic and scientific values, including being a repository for several species of fish and shellfish. This is all the more significant in that three wetlands (Vembanad, Sasthamcotta and Ashtamudi) have recently been designated as Ramsar sites of international importance. Thirty major backwaters forming the crux of the coastal wetlands are an abode for over 200 resident or migratory fish and shellfish species. Fishing in these water bodies provides a livelihood to about 200,000 fishers and full-time employment to over 50,000 fishermen. This paper describes changes in the environmental and biodiversity status of selected wetlands during the 1994-2005 period. The pH was generally near neutral to alkaline. The salinity values indicated mixohaline conditions, ranging from 5.20 to 32.38 ppt in the 12 wetlands. Productivity was generally low in most of the wetlands during the study, with gross production varying from 0.22 gC/m3/day in Kadinamkulam to 1.10 gC/m3/day in Kayamkulam. The diversity of plankton and benthos was higher during the pre-monsoon than during the monsoon and post-monsoon periods in most of the wetlands. The average fish yield per ha varied from 246 kg in Valapattanam to 2,747.3 kg in Azhikode wetland. Retting of coconut husk in most of the wetlands led to acidic pH conditions with anoxia, resulting in the production of high amounts of sulphide coupled with high carbon dioxide values, and leading to a drastic reduction in the incidence and abundance of plankton, benthic fauna and fishery resources. The major fish species recorded were Etroplus suratensis, E. maculatus, Channa marulius, Labeo dussumieri, Puntius sp., Lutianus argentimaculatus, Mystus sp., Tachysurus sp. and Hemiramphus sp. The majority of these backwaters are highly stressed, especially during the pre-monsoon period when retting activity is at its peak. The study clearly shows that a more restrained and cautious approach is needed to manage and preserve the unique backwater ecosystems of South-west India.
Abstract:
Electronic Theses and Dissertations (ETDs) have become an important component of library service in all countries. Many Indian higher education institutions are actively engaged in introducing ETDs. This study describes the development of ETD projects in Kerala, examining the ETD projects of Cochin University of Science and Technology (CUSAT) and Mahatma Gandhi University (MGU).
Abstract:
This paper aims to describe recent developments in the services provided by Indian electronic thesis and dissertation (ETD) repositories. It seeks to explore the prospects of knowledge formation and diffusion in India and to discuss the potential of open access e-theses repositories for knowledge management. The study is based on a literature review and content analysis of Indian ETD repository websites. Institutional repositories and electronic thesis and dissertation projects in India were identified through a literature survey as well as internet searching and browsing. The study examines the tools, types of content, coverage and aims of Indian ETD repositories. The paper acknowledges the need for knowledge management for national development, highlights the significance of an integrated platform for preserving, searching and retrieving Indian theses, and describes the features and functions of Indian ETD repositories. It provides insights into the characteristics of the national repository of ETDs of India, which encourages and supports open access to publicly funded research.
Abstract:
The Cochin estuary (CE), one of the largest wetland ecosystems, extends from Thanneermukkam bund in the south to Azhikode in the north. It functions as an effluent repository for more than 240 industries, including fertilizer, pesticide, radioactive mineral processing, chemical and allied, petroleum refining and heavy metal processing industries (Thyagarajan, 2004). Studies in the CE have mostly addressed the spatial and temporal variations in the physical, chemical and biological characteristics of the estuary (Balachandran et al., 2006; Madhu et al., 2007; Menon et al., 2000; Qasim, 2003; Qasim and Gopinathan, 1969). Although several monitoring programs have been initiated in the CE to understand the level of heavy metal pollution, these were restricted to trace metal distribution (Balachandran et al., 2005) or the influence of anthropogenic inputs on the benthos and phytoplankton (Madhu et al., 2007; Jayaraj, 2006). Recently, a few studies were carried out on the microbial ecology of the CE (Thottathil et al., 2008a and b; Parvathi et al., 2009 and 2011; Thomas et al., 2006; Chandran and Hatha, 2003). However, studies on metal-microbe interaction had hitherto not been undertaken in this estuary. Hence, a study was undertaken at three sites with different levels of heavy metal concentration to understand the abundance, diversity and mechanisms of resistance of metal-resistant bacteria and their impact on nutrient regeneration. The present work also focused on the response of heavy-metal-resistant bacteria to antibacterial agents: antibiotics and silver nanoparticles.
Abstract:
This report gives a detailed discussion of the system, algorithms, and techniques that we applied in order to solve the Web Service Challenges (WSC) of the years 2006 and 2007. These international contests focus on semantic web service composition. In each challenge of the contests, a repository of web services is given. The input and output parameters of the services in the repository are annotated with semantic concepts. A query to a semantic composition engine contains a set of available input concepts and a set of wanted output concepts. In order to employ an offered service for a requested role, the concepts of the input parameters of the offered operations must be more general than requested (contravariance). In contrast, the concepts of the output parameters of the offered service must be more specific than requested (covariance). The engine should respond to a query by providing a valid composition as fast as possible. We discuss three different methods for web service composition: an uninformed search in the form of an IDDFS algorithm, a greedy informed search based on heuristic functions, and a multi-objective genetic algorithm.
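The matching rule described above can be made concrete. Below is a minimal sketch, not the authors' implementation, of the contravariance/covariance check on a toy concept hierarchy; all concept names and service signatures are assumptions.

```python
# A minimal sketch of the matching rule: an offered service fits a requested
# role if its inputs are more general (contravariant) and its outputs more
# specific (covariant) than requested. Taxonomy and services are toy data.

# Hypothetical concept taxonomy: child -> parent (None marks a root).
PARENT = {"Vehicle": None, "Car": "Vehicle", "Price": None, "EuroPrice": "Price"}

def subsumes(general, specific):
    """True if `general` is `specific` or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT[specific]
    return False

def matches(offered_in, offered_out, requested_in, requested_out):
    # Contravariance: every offered input must be at least as general as
    # some available input concept.
    ok_in = all(any(subsumes(o, r) for r in requested_in) for o in offered_in)
    # Covariance: every wanted output must be covered by an offered output
    # at least as specific as it.
    ok_out = all(any(subsumes(r, o) for o in offered_out) for r in requested_out)
    return ok_in and ok_out

# A service taking any Vehicle and returning a EuroPrice can serve a query
# that offers a Car and wants a Price.
print(matches({"Vehicle"}, {"EuroPrice"}, {"Car"}, {"Price"}))  # True
```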
Abstract:
Formal Concept Analysis makes it possible to derive conceptual hierarchies from data tables. It is applied in various domains, e.g., data analysis, information retrieval, and knowledge discovery in databases. In order to deal with increasing sizes of the data tables (and to allow more complex data structures than just binary attributes), conceptual scales have been developed. They can be considered metadata which structure the data conceptually. In large applications, however, the number of conceptual scales increases as well, so techniques are needed which also support the user's navigation on this meta-level of conceptual scales. In this paper, we attack this problem by extending the set of scales with hierarchically ordered higher-level scales and by introducing a visualization technique called nested scaling. We extend the two-level architecture of Formal Concept Analysis (the data table plus one level of conceptual scales) to a many-level architecture with a cascading system of conceptual scales. The approach also allows the representation techniques of Formal Concept Analysis to be used for the visualization of thesauri and ontologies.
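To make the base layer of this architecture concrete, here is a minimal sketch of deriving formal concepts (extent/intent pairs) from a small binary data table; the toy context is an assumption, and real conceptual scales would structure such tables at the meta-level.

```python
# A minimal sketch (not from the paper) of the basic FCA step: deriving
# formal concepts from a binary object/attribute table. The toy context
# is an assumption.
from itertools import chain, combinations

# Objects x attributes: which object has which attribute.
CONTEXT = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "trout": {"swims"},
}

def common_attrs(objs):
    """Intent: attributes shared by all given objects."""
    if not objs:
        return set().union(*CONTEXT.values())  # all attributes
    return set.intersection(*(CONTEXT[o] for o in objs))

def objects_with(attrs):
    """Extent: objects having all given attributes."""
    return {o for o, a in CONTEXT.items() if attrs <= a}

# Enumerate concepts as closed (extent, intent) pairs; fine for toy sizes.
concepts = set()
for objs in chain.from_iterable(
    combinations(CONTEXT, r) for r in range(len(CONTEXT) + 1)
):
    intent = common_attrs(set(objs))
    extent = objects_with(intent)
    concepts.add((frozenset(extent), frozenset(intent)))

# Print the concept lattice elements, smallest extent first.
for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```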
Abstract:
The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained considerable attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses seem to be inverse: while Social Annotation suffers from problems like, e.g., ambiguity or lack of precision, ontologies were especially designed to eliminate those; conversely, ontologies suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of regarding them as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus hereby on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Here, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights inform the final task, namely the creation of concept hierarchies, for which generality-based algorithms exhibit advantages over clustering approaches.
To complement the identification of suitable methods to capture semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then have a look at system abuse and spam. While observing a mixed picture, we suggest that spammers be judged individually instead of being disregarded as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services for a Social Semantic Web.
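As one concrete example of the relatedness measures studied above, the following minimal sketch computes keyword relatedness as cosine similarity of tag-resource co-occurrence vectors; the tagging data is hypothetical, and this is only one of several possible measures.

```python
# A minimal sketch (not the thesis' exact method) of a common tag
# relatedness measure: cosine similarity of tag-resource co-occurrence
# vectors. The (user, tag, resource) assignments are toy data.
from collections import defaultdict
from math import sqrt

ASSIGNMENTS = [
    ("u1", "web", "r1"), ("u1", "semantic", "r1"),
    ("u2", "web", "r2"), ("u2", "internet", "r2"),
    ("u3", "internet", "r1"), ("u3", "fish", "r3"),
]

# Build tag -> {resource: count} vectors.
vectors = defaultdict(lambda: defaultdict(int))
for _user, tag, resource in ASSIGNMENTS:
    vectors[tag][resource] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Tags used on the same resources come out as related.
print(cosine(vectors["web"], vectors["internet"]))  # high
print(cosine(vectors["web"], vectors["fish"]))      # 0.0
```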
Abstract:
The next generations of both biological engineering and computer engineering demand that control be exerted at the molecular level. Creating, characterizing and controlling synthetic biological systems may provide us with the ability to build cells capable of a plethora of activities, from computation to synthesizing nanostructures. To develop these systems, we must have a set of tools not only for synthesizing systems, but also for designing and simulating them. The BioJADE project provides a comprehensive, extensible design and simulation platform for synthetic biology. BioJADE is a graphical design tool built in Java, utilizing a database back end, and supports a range of simulations using an XML communication protocol. BioJADE currently supports a library of over 100 parts with which it can compile designs into actual DNA, and then generate synthesis instructions to build the physical parts. The BioJADE project contributes several tools to synthetic biology. BioJADE is itself a powerful tool for synthetic biology designers. Additionally, we developed and now make use of a centralized BioBricks repository, which enables the sharing of BioBrick components between researchers and vastly reduces the barriers to entry for aspiring synthetic biologists.
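The "compile designs into actual DNA" step can be illustrated schematically. The sketch below is not BioJADE's code: it concatenates part sequences looked up in a registry in design order, and all part IDs, sequences, and the junction scar are placeholders.

```python
# A minimal sketch of compiling a parts-based design into a DNA string by
# registry lookup and concatenation. All part IDs, sequences, and the scar
# are illustrative placeholders, not real BioBrick data.
REGISTRY = {
    "P_promoter": "TTGACATATAAT",     # hypothetical promoter sequence
    "RBS_strong": "AGGAGG",           # hypothetical ribosome binding site
    "GFP_cds":    "ATGAGTAAAGGAGAA",  # truncated placeholder coding sequence
}

SCAR = "TACTAG"  # placeholder for the junction left by standard assembly

def compile_design(part_ids):
    """Return the DNA sequence for an ordered list of part IDs."""
    try:
        return SCAR.join(REGISTRY[p] for p in part_ids)
    except KeyError as missing:
        raise ValueError(f"part not in registry: {missing}") from None

print(compile_design(["P_promoter", "RBS_strong", "GFP_cds"]))
```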
Abstract:
In this document we describe the use of the GeoNetwork metadata catalogue software for creating thematic nodes in a spatial data infrastructure.
Abstract:
This article reflects the interoperability problems that exist between the different implementations of metadata catalogues that follow the CSW [1] (Catalogue Service for the Web) specification of the OGC (Open Geospatial Consortium). This situation led to the development of a client application able to launch simultaneous requests to different metadata catalogues, with the aim of displaying the results in a unified way. The article details both the architecture and the whole development process of the application.
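The multi-catalogue client idea translates directly into code. Below is a minimal sketch assuming the OWSLib Python library rather than the article's own client; the endpoint URLs are placeholders, and requests are issued sequentially here for brevity where the article's client launches them simultaneously.

```python
# A minimal sketch of querying several CSW endpoints with the same request
# and merging the results into one list. Endpoint URLs are placeholders.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

ENDPOINTS = [
    "https://catalog-a.example.org/csw",  # hypothetical GeoNetwork node
    "https://catalog-b.example.org/csw",  # hypothetical second catalogue
]

query = PropertyIsLike("csw:AnyText", "%hydrography%")

merged = []
for url in ENDPOINTS:
    try:
        csw = CatalogueServiceWeb(url, timeout=30)
        csw.getrecords2(constraints=[query], maxrecords=10)
        # Keep the source URL so differing server behaviours stay visible.
        merged.extend((url, rec.title) for rec in csw.records.values())
    except Exception as err:  # servers differ; one failure shouldn't stop all
        print(f"{url}: {err}")

for source, title in merged:
    print(source, "->", title)
```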
Abstract:
The European Commission's ALFA programme (América Latina Formación Académica) fosters and supports cooperation activities between universities of both continents. The member universities of the Red ALFA Biblioteca de Babel take on, as part of their mission, the pursuit of excellence and educational quality. The initial work proposal established, as one of the expected outcomes, the drafting of a document, in the form of guidelines, on the development of services based on the use of the new information and communication technologies. The Institutional Repository (IR) is understood as an information system that gathers, preserves, disseminates and gives access to the intellectual and academic output of university communities. Today the IR constitutes a key instrument of a university's scientific and academic policy. Moreover, full-text access to digital learning objects makes the repository a fundamental supporting piece for teaching and research, while multiplying the institution's visibility in the international community. Within this scenario, university libraries are the body which, given their experience in managing information in all its forms and their contact with knowledge, should lead the implementation of IRs in order to achieve educational competitiveness.