876 results for Metadata standards
Abstract:
The Metadata Provenance Task Group aims to define a data model that allows assertions to be made about description sets. Creating a shared model of the data elements required to describe an aggregation of metadata statements makes it possible to collectively import, access, use and publish facts about the quality, rights, timeliness, data source type, trust situation, etc. of the described statements. In this paper we outline the preliminary model created by the task group, together with first examples that demonstrate how the model is to be used.
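The kind of assertion the model targets can be illustrated with a minimal sketch. The field names below are hypothetical illustrations, not the task group's actual vocabulary: the point is that provenance facts are attached to a set of statements as a whole, so a consumer can filter entire sets by trust.

```python
# A description set groups metadata statements; provenance facts are
# asserted about the set as a whole, not about each single statement.
description_set = {
    "statements": [
        ("book:123", "dc:title", "Moby Dick"),
        ("book:123", "dc:creator", "Herman Melville"),
    ],
    # Hypothetical provenance fields illustrating the kinds of facts the
    # model is meant to capture (quality, rights, source, timeliness).
    "provenance": {
        "creator": "Library of Congress",
        "created": "2011-03-01",
        "rights": "CC0",
        "source_type": "authority record",
    },
}

def trusted_statements(dset, trusted_agents):
    """Accept all statements in a set whose asserting agent is trusted."""
    if dset["provenance"]["creator"] in trusted_agents:
        return dset["statements"]
    return []

print(len(trusted_statements(description_set, {"Library of Congress"})))  # 2
```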
Abstract:
Aeronautical charts underlie the representation of the aeronautical geographic information that supports pilots in flight. Nevertheless, the charts become complex due to the high density of data and the different kinds of charts that support each phase of flight. These features make them difficult to use on board. After conducting a study with Spanish civil pilots, aimed at understanding and evaluating their needs related to geographic information, we propose a solution: a platform based on geographic information standards (OGC, ISO) and supported by a distributed Web architecture. This platform facilitates the use, retrieval and updating of information, and its exchange among different institutions and between private and public users. As a first element to ensure the interoperability of information, we suggest an aeronautical metadata profile that sets guidelines and elements for its description. The metadata profile meets the standards set by ICAO, Eurocontrol and ISO. The platform offers three levels of access to data through different types of devices and user profiles. Thus, aeronautical institutions could edit data while the pilot is on board accessing digital aeronautical charts through a laptop or tablet PC. This paper suggests an alternative and reliable way of distributing aeronautical geoinformation, focusing on specific functions such as displaying and querying.
Abstract:
The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. Audiovisual data is omnipresent on today's Web; however, different interaction interfaces and especially diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable, contextualized and semantic annotation and search, independent of the source metadata format, and connect multimedia data to the Linked Data cloud. Some demonstrators of such applications are also presented in this article.
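The interlingua idea can be sketched as follows. The mapping tables and lookup function below are illustrative stand-ins, not the actual MAWG API or its normative format mappings: format-specific field names are mapped onto shared core properties, so a query over a property such as "title" works regardless of the source metadata format.

```python
# Hypothetical mapping tables in the spirit of the MAWG interlingua:
# each metadata format maps its own fields to shared core properties.
MAPPINGS = {
    "dublin_core": {"dc:title": "title", "dc:creator": "creator"},
    "id3": {"TIT2": "title", "TPE1": "creator"},
}

def get_media_property(record, fmt, core_property):
    """Return the value of a core property from a format-specific record."""
    for field, core in MAPPINGS[fmt].items():
        if core == core_property and field in record:
            return record[field]
    return None

song = {"TIT2": "So What", "TPE1": "Miles Davis"}  # an ID3-tagged resource
print(get_media_property(song, "id3", "title"))  # So What
```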
Abstract:
The spread of new systems for broadcasting and distributing multimedia content has increased the need to aggregate data and metadata with traditional video and audio content. Broadcasting chains for this type of channel have become overwhelmed by the quantity of resources, infrastructure and development needed for these channels to provide information. To avoid these shortcomings, several recommendations and standards have been created for exchanging metadata between the production and distribution of taped programs. The problem lies in live programs: producers sometimes offer data to channels, but most often channels are not able to afford the required developments. The key to this problem is cost reduction. In this work, a study is conducted on the added services that producers may provide to the media about content; a system is found that avoids additional communication expenses, and a model of information transfer is offered that allows low-cost developments to supply new media platforms.
Abstract:
Sensor network deployments have become a primary source of big data about the real world that surrounds us, measuring a wide range of physical properties in real time. With such large amounts of heterogeneous data, a key challenge is to describe and annotate sensor data with high-level metadata, using and extending models such as ontologies. However, to automate this task there is a need to enrich the sensor metadata using the actual observed measurements and to extract useful meta-information from them. This paper proposes a novel approach to the characterization and extraction of semantic metadata through the analysis of raw sensor observations. The approach consists in using approximations to represent the raw sensor measurements, based on distributions of the observation slopes; building a classification scheme to automatically infer sensor metadata such as the type of observed property; and integrating the semantic analysis results with existing sensor network metadata.
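A minimal sketch of the slope-based idea, assuming hypothetical reference slope profiles per property type (the paper's actual approximation and classification scheme may differ): a raw series is summarized by the distribution of its first-order slopes, and the observed property is inferred by nearest reference profile.

```python
import math

def slope_histogram(values, bins=5, lo=-1.0, hi=1.0):
    """Approximate a raw series by the distribution of its first-order slopes."""
    slopes = [b - a for a, b in zip(values, values[1:])]
    hist = [0] * bins
    for s in slopes:
        idx = min(bins - 1, max(0, int((s - lo) / (hi - lo) * bins)))
        hist[idx] += 1
    return [h / len(slopes) for h in hist]

def classify(values, references):
    """Infer the property whose reference slope profile is nearest."""
    hist = slope_histogram(values)
    return min(references, key=lambda k: math.dist(hist, references[k]))

# Hypothetical reference profiles: temperature drifts slowly (slopes
# concentrated near zero), wind speed fluctuates across all slope bins.
references = {
    "temperature": [0.0, 0.1, 0.8, 0.1, 0.0],
    "wind_speed":  [0.2, 0.2, 0.2, 0.2, 0.2],
}
print(classify([20.0, 20.1, 20.1, 20.2, 20.1, 20.2], references))  # temperature
```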
Abstract:
The Web contains an immense collection of Web services (WS-*, RESTful, OGC WFS), normally exposed through standards that tell us how to locate and invoke them. These services are usually described using mostly textual information, without proper formal descriptions; that is, existing service descriptions mostly stay at a syntactic level. If we want to make such services easier to understand and use, we may want to annotate them formally, by means of descriptive metadata. The objective of this thesis is to propose an approach for the semantic annotation of Web services in the geospatial domain. Our approach automates some stages of the annotation process through the combined use of ontological resources and third-party services, and it has been successfully evaluated with a set of services in the geospatial domain. The main contribution of this work is the partial automation of the semantic annotation process for RESTful and WFS services, which improves on the current state of the art in this area. The detailed list of contributions is: • A model for representing Web services from both a syntactic and a semantic point of view, taking into account both the schema and the instances. • A method for annotating Web services using ontologies and external resources. • A system that implements the proposed annotation process. • A gold standard for the semantic annotation of RESTful and OGC WFS services, and algorithms for evaluating the annotations.
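One stage of such an annotation pipeline, suggesting ontology concepts for service element names by label similarity, might be sketched as follows. The ontology labels, concept identifiers and threshold below are illustrative assumptions, not the thesis's actual resources or method; a real pipeline would query ontological resources and external services rather than a hard-coded table.

```python
from difflib import SequenceMatcher

# Hypothetical ontology: concept identifiers with their lexical labels.
ONTOLOGY = {"geo:River": ["river", "stream"], "geo:Road": ["road", "highway"]}

def annotate(element_name, ontology, threshold=0.8):
    """Suggest the concept whose label best matches a service element name."""
    best, score = None, 0.0
    norm = element_name.lower().replace("_", " ")
    for concept, labels in ontology.items():
        for label in labels:
            s = SequenceMatcher(None, norm, label).ratio()
            if s > score:
                best, score = concept, s
    return best if score >= threshold else None

print(annotate("Rivers", ONTOLOGY))  # geo:River
```

A string-similarity matcher like this only covers the syntactic side; the semantic side would come from the ontological context of the matched concept.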
Abstract:
Brick facades are a construction type strongly linked to local construction characteristics and methods. In Spain, particularly in Castilla, facades have been built since the '80s with a Castilian half foot (11.5 cm), resting on the edge of the slabs. The design of these facades for horizontal wind loads can lead to completely different valid solutions depending on the codes used. Applying the same loads, the facades studied with the current European standard (Eurocode 6) have a maximum length of 7.1 m between supports, while with the Spanish code, Technical Building Code - Structural Safety Masonry (CTE SE-F), 8.4 m can be achieved. This represents an increase in flexural strength, depending on the calculation model used, that can reach up to 8 times. This is due to the difference in the calculation method and the structural model between the two standards, depending on whether the facade is analyzed as a vertical or horizontal beam or through the formation of a vertical or horizontal arch. This paper analyzes the constructive solution of brick facades that results from applying the Spanish or the European standard, and how the model applied affects the safety of the resulting facade.
Abstract:
Monte Carlo calculations were carried out to characterize the neutron field produced by the calibration neutron sources of the Neutron Standards Laboratory at the Research Center for Energy, Environment and Technology (CIEMAT) in Spain. For the 241AmBe and 252Cf neutron sources, the neutron spectra, the ambient dose equivalent rates and the total neutron fluence rates were estimated. In the calibration hall there are several items that modify the neutron field. To evaluate their effects, different cases were used, from a point-like source in vacuum up to the full model. Additionally, using the full model, the neutron spectra were estimated at different distances along the bench; with these spectra, the total neutron fluence and the ambient dose equivalent rates were calculated. The hall walls induce the largest changes in the neutron spectra and the respective integral quantities. The free-field neutron spectrum is modified due to the room-return effect.
Abstract:
Background: This project's idea arose from the need of the professors of the department of Computer Languages and Systems and Software Engineering (DLSIIS) to develop exams with multiple-choice questions in a more productive and comfortable way than the one they had been using. The goal of this project is to develop an application that can be easily used by the professors of the DLSIIS when they need to create a new exam. The main problems of the previous creation process were the difficulty of searching for a question that meets some specific conditions in the previous exam files, and the difficulty of editing exams because of the format of the text files employed. Result: The results shown in this document allow the reader to understand how the final application works and how it successfully addresses every customer need. The elements that will help the reader to understand the application are the structure of the application, the design of the different components, diagrams that show the workflow of the application, and some selected fragments of code. Conclusions: The goals stated in the application requirements are finally met. In addition, there are some thoughts about the work performed during the development of the application and how it improved the author's skills in web development.
Abstract:
The Connecticut State Medical Society (CSMS) reviews and accredits the continuing medical education (CME) programs offered by Connecticut's hospitals. As part of the survey process, the CSMS assesses the quality of the hospitals' libraries. In 1987, the CSMS adopted the Medical Library Association's (MLA's) “Minimum Standards for Health Sciences Libraries in Hospitals.” In 1990, professional librarians were added to the survey team and, later, to the CSMS CME Committee. Librarians participating in this effort are recruited from the membership of the Connecticut Association of Health Sciences Librarians (CAHSL). The positive results of having a qualified librarian on the survey team and the invaluable impact of adherence to the MLA standards are outlined. As a direct result of this process, hospitals throughout the state have added staffing, increased space, and added funding for resources during an era of cutbacks. Some hospital libraries have been able to maintain a healthy status quo, while others have had proposed cuts reconsidered by administrators for fear of losing valuable CME accreditation status. Creating a relationship with an accrediting agency is one method by which hospital librarians elsewhere may strengthen their efforts to ensure adequate library resources in an era of downsizing. In addition, this collaboration has provided a new and important role for librarians to play on an accreditation team.
Abstract:
The minimum levels of staffing, services, budget, and technology that should be provided by a library specializing in vision science are presented. The scope and coverage of the collection is described as well. These standards may be used by institutions establishing libraries or by accrediting bodies reviewing existing libraries.