992 results for extensible hypertext markup language


Relevance:

100.00%

Publisher:

Abstract:

For various reasons, it is important, if not essential, to integrate the computations and code used in data analyses, methodological descriptions, simulations, etc. with the documents that describe and rely on them. This integration allows readers to both verify and adapt the statements in the documents. Authors can easily reproduce them in the future, and they can present the document's contents in a different medium, e.g. with interactive controls. This paper describes a software framework for authoring and distributing these integrated, dynamic documents that contain text, code, data, and any auxiliary content needed to recreate the computations. The documents are dynamic in that the contents, including figures, tables, etc., can be recalculated each time a view of the document is generated. Our model treats a dynamic document as a master or "source" document from which one can generate different views in the form of traditional, derived documents for different audiences. We introduce the concept of a compendium as both a container for the different elements that make up the document and its computations (i.e. text, code, data, ...), and as a means for distributing, managing and updating the collection. The step from disseminating analyses via a compendium to reproducible research is a small one. By reproducible research, we mean research papers with accompanying software tools that allow the reader to directly reproduce the results and employ the methods that are presented in the research paper. Some of the issues involved in paradigms for the production, distribution and use of such reproducible research are discussed.
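
The core mechanism can be sketched in a few lines: a "source" document interleaves prose with code chunks, and generating a view executes the chunks and splices their output back in. The chunk delimiters and renderer below are invented for illustration and are not the authors' framework.

```python
# A minimal sketch of a dynamic document: code chunks (delimited here by
# an invented <<< >>> syntax) are executed each time a view is rendered.
import contextlib
import io
import re

source = """The mean of our sample is computed live:
<<<
data = [2, 4, 6, 8]
print(sum(data) / len(data))
>>>
so the value is recalculated for every view."""

def render(doc: str) -> str:
    """Replace each code chunk with the output of running it."""
    def run(match):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(match.group(1), {})
        return buf.getvalue().strip()
    return re.sub(r"<<<\n(.*?)\n>>>", run, doc, flags=re.S)

print(render(source))
```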

Relevance:

100.00%

Publisher:

Abstract:

One of the main roles of the Neural Open Markup Language, NeuroML, is to facilitate cooperation in building, simulating, testing and publishing models of channels, neurons and networks of neurons. MorphML, which was developed as a common format for the exchange of neural morphology data, is distributed as part of NeuroML but can be used as a stand-alone application. In this collection of tutorials and workshop summary, we provide an overview of these XML schemas and provide examples of their use in downstream applications. We also summarize plans for the further development of XML specifications for modeling channels, channel distributions, and network connectivity.
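
As a rough illustration of the kind of morphology data MorphML standardises, the sketch below parses a simplified, hypothetical fragment; the real schema is richer, and the tag names here are assumptions.

```python
# Reading a simplified MorphML-style morphology description: a cell as
# a tree of segments, each with proximal/distal 3D points and diameters.
import xml.etree.ElementTree as ET

morphml = """
<cell name="pyramidal">
  <segments>
    <segment id="0" name="soma">
      <proximal x="0" y="0" z="0" diameter="20"/>
      <distal x="0" y="20" z="0" diameter="20"/>
    </segment>
    <segment id="1" name="dendrite" parent="0">
      <proximal x="0" y="20" z="0" diameter="3"/>
      <distal x="0" y="120" z="0" diameter="2"/>
    </segment>
  </segments>
</cell>
"""

cell = ET.fromstring(morphml)
for seg in cell.iter("segment"):
    distal = seg.find("distal")
    print(seg.get("name"), "ends at y =", distal.get("y"))
```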

Relevance:

100.00%

Publisher:

Abstract:

Development of a federated service landscape in the Ruhr region on the basis of SAML, with the aim of enabling cross-organisational use of web-based IT services.

Relevance:

100.00%

Publisher:

Abstract:

This article describes the design and implementation of RVDynDB (Rail Vehicle Dynamic parameters DataBase), which aims to be an extensive repository of the public-domain models used in rail vehicle dynamics simulation worldwide. Owing to its flexibility, extensibility and platform independence, an XML data model was chosen, which facilitates the storage of data from highly heterogeneous sources while allowing the contents of the database to be shared with other users over the internet. The RVDynML (Rail Vehicle Dynamic parameters Markup Language) language, which defines the structure of the information stored in the database, is also presented. As an XML-based language, it could in time become a standard for the exchange of data on the main construction parameters that define the dynamic behaviour of rail vehicles. A total of 173 bibliographic references were selected, and their data were used to build the database, which comprises 957 records. Finally, a dedicated MATLAB application was developed to manage searches of the database. It employs a Java API that provides an interface to the DOM, which allows the elements and attributes that make up an XML document to be accessed, modified, inserted or deleted.
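
The DOM-based access the abstract describes can be sketched in Python (the project itself pairs MATLAB with a Java DOM API); the element and attribute names below are illustrative assumptions, not the published RVDynML schema.

```python
# DOM-style access to a hypothetical RVDynML record: query an attribute,
# modify the tree, and re-serialise, as the MATLAB front-end might.
from xml.dom.minidom import parseString

doc = parseString("""
<rvdyndb>
  <vehicle id="V001" reference="Smith2004">
    <parameter name="carBodyMass" value="32000" unit="kg"/>
    <parameter name="wheelbase" value="19.0" unit="m"/>
  </vehicle>
</rvdyndb>
""")

# Query: find a parameter by name.
for p in doc.getElementsByTagName("parameter"):
    if p.getAttribute("name") == "carBodyMass":
        print(p.getAttribute("value"), p.getAttribute("unit"))

# Modify an attribute and re-serialise the document.
doc.getElementsByTagName("vehicle")[0].setAttribute("id", "V002")
print(doc.toxml())
```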

Relevance:

100.00%

Publisher:

Abstract:

The Extensible Business Reporting Language (XBRL) is a grammar based on XML that is defined and described in the XBRL 2.1 specification. Instance documents are created by combining XBRL taxonomies and linkbases with data (facts) for a particular context. An alternative view is that XBRL is a mechanism for communicating information for decision-making between interested parties, based on a generally accepted way of representing and digitally transmitting symbols of actions and events. XBRL may be both of these and many other things, depending on how we frame our methodological understanding for the purposes of research. In this section we present an approach that conceives of XBRL as a socio-technical object in the tradition of post-social perspectives (Knorr Cetina 1997; Latour 1996, 1999). © Deutscher Universitäts-Verlag GWV Fachverlage GmbH, Wiesbaden 2007.
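
To make the grammar-centred view concrete, the sketch below assembles a schematic XBRL-style instance fragment: a context plus a fact that references it. The entity identifier and concept namespace are hypothetical, and the unit reference and schema links that a valid instance requires are omitted for brevity.

```python
# A schematic XBRL-style instance: a context (who, when) and a fact
# bound to it. Real instances also reference a published taxonomy.
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"
ET.register_namespace("xbrli", XBRLI)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# The context: the entity the fact is about, and the reporting instant.
context = ET.SubElement(root, f"{{{XBRLI}}}context", id="FY2006")
entity = ET.SubElement(context, f"{{{XBRLI}}}entity")
ET.SubElement(entity, f"{{{XBRLI}}}identifier",
              scheme="http://example.org/companies").text = "EXAMPLE-CO"
period = ET.SubElement(context, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2006-12-31"

# The fact: a bare number that gains meaning only through its context
# (and, in a real instance, through the taxonomy defining the concept).
fact = ET.SubElement(root, "{http://example.org/taxonomy}Revenue",
                     contextRef="FY2006", decimals="0")
fact.text = "1500000"

print(ET.tostring(root, encoding="unicode"))
```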

Relevance:

100.00%

Publisher:

Abstract:

Extensible Business Reporting Language (XBRL) is being adopted by European regulators as a data standard for the exchange of business information. This paper examines the approach of XBRL International (XII) to the meta-data standard's development and diffusion. We theorise the development of XBRL using concepts drawn from a model of successful open source projects. Comparison of the open source model to XBRL enables us to identify a number of interesting similarities and differences. In common with open source projects, the benefits and progress of XBRL have been overstated and 'hyped' by enthusiastic participants. While XBRL is an open data standard in terms of access to the equivalent of its 'source code', we find that the governance structure of the XBRL consortium is significantly different to a model open source approach. The barrier to participation that is created by requiring paid membership, and a focus on transacting business at physical conferences and meetings, is identified as particularly critical. Decisions about the technical structure of XBRL, the regulator-led pattern of adoption and the organisation of XII are discussed. Finally, areas for future research are identified.

Relevance:

100.00%

Publisher:

Abstract:

This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
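
The three encodings named in the abstract can be sketched as XML fragments built in Python; the element names and definition URIs are placeholders, not the published UncertML schema.

```python
# The three UncertML encodings the abstract names: a series of
# realisations, a summary statistic, and a parametric distribution.
import xml.etree.ElementTree as ET

def element(tag, text=None, **attrs):
    e = ET.Element(tag, attrs)
    if text is not None:
        e.text = text
    return e

# (1) a series of realisations, e.g. Monte Carlo draws
realisations = element("Realisations", "12.1 11.8 12.6 12.0")

# (2) a single summary statistic
mean = element("Statistic", "12.1",
               definition="http://example.org/statistics/mean")

# (3) a full probability distribution
gaussian = element("Distribution",
                   definition="http://example.org/distributions/gaussian")
gaussian.append(element("mean", "12.1"))
gaussian.append(element("variance", "0.4"))

for fragment in (realisations, mean, gaussian):
    print(ET.tostring(fragment, encoding="unicode"))
```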

Relevance:

100.00%

Publisher:

Abstract:

Most object-based approaches to Geographical Information Systems (GIS) have concentrated on the representation of geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give-way line of a pedestrian crossing in the UK must be within 1100-3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called 'business logic') in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain, and extends these using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules are often changed and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses the feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub-features. We show the parametric rule model we have developed and discuss the advantage of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1. © 2006 Springer-Verlag Berlin Heidelberg.
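
The flavour of such a parameterized rule can be sketched as a small data structure with a check, assuming invented class and field names; the paper itself encodes these rules in GML 3.1 rather than code.

```python
# A parameterized geometric rule of the kind described above: a feature
# must be offset from a reference feature by a distance inside a band.
from dataclasses import dataclass

@dataclass
class OffsetRule:
    """Require an offset from a reference feature within [min_mm, max_mm]."""
    min_mm: float
    max_mm: float

    def check(self, offset_mm: float) -> bool:
        return self.min_mm <= offset_mm <= self.max_mm

# The UK pedestrian-crossing rule from the abstract: 1100-3000 mm.
uk_give_way = OffsetRule(min_mm=1100, max_mm=3000)
print(uk_give_way.check(1500))  # True: compliant placement
print(uk_give_way.check(900))   # False: too close to the crossing
```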

Relevance:

100.00%

Publisher:

Abstract:

INTAMAP is a web processing service for the automatic interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an open source solution. The system couples the 52°North web processing service, accepting data in the form of an Observations and Measurements (O&M) document, with a computing back-end realized in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language for encoding uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
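
The coupling the abstract describes can be sketched from the client side: a WPS Execute request carrying an O&M document is posted to the interpolation service. The endpoint URL and process identifier below are hypothetical; only the WPS 1.0.0 request structure is standard.

```python
# Posting a WPS Execute request whose input is an O&M document; the
# response would encode interpolation errors in UncertML.
import urllib.request

wps_execute = """<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute service="WPS" version="1.0.0"
    xmlns:wps="http://www.opengis.net/wps/1.0.0"
    xmlns:ows="http://www.opengis.net/ows/1.1">
  <ows:Identifier>interpolate</ows:Identifier>
  <wps:DataInputs>
    <wps:Input>
      <ows:Identifier>observations</ows:Identifier>
      <wps:Data>
        <wps:ComplexData><!-- O&M observations document goes here --></wps:ComplexData>
      </wps:Data>
    </wps:Input>
  </wps:DataInputs>
</wps:Execute>"""

request = urllib.request.Request(
    "http://example.org/intamap/wps",  # hypothetical endpoint
    data=wps_execute.encode("utf-8"),
    headers={"Content-Type": "text/xml"})
with urllib.request.urlopen(request) as response:
    print(response.read()[:200])  # response carries UncertML-encoded errors
```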

Relevance:

100.00%

Publisher:

Abstract:

INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.

Relevance:

100.00%

Publisher:

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
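
A minimal sketch of the soft-typed pattern described here: one generic element whose semantics come from a dictionary URI, so a new statistic needs a new dictionary entry rather than a schema change. The URIs and tag names are illustrative assumptions.

```python
# Soft typing: a single generic Statistic element; its meaning is given
# by the dictionary definition it links to, not by a dedicated XML type.
import xml.etree.ElementTree as ET

def statistic(definition_uri: str, value: float) -> ET.Element:
    """Build a soft-typed statistic; its semantics live in the URI."""
    e = ET.Element("Statistic", definition=definition_uri)
    ET.SubElement(e, "value").text = str(value)
    return e

# The same generic element encodes whatever the dictionary defines.
mean = statistic("http://example.org/dictionary/statistics/mean", 21.4)
stdev = statistic(
    "http://example.org/dictionary/statistics/standard-deviation", 2.3)

for s in (mean, stdev):
    print(ET.tostring(s, encoding="unicode"))
```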

Relevance:

100.00%

Publisher:

Abstract:

The results of an experimental study of retail investors' use of eXtensible Business Reporting Language (XBRL) tagged (interactive) data and PDF format for making investment decisions are reported. The main finding is that data format made no difference to participants' ability to locate and integrate information from statement footnotes to improve investment decisions. Interactive data were perceived by participants as quick and 'accurate', but they failed to facilitate the identification of the adjustment needed to make the ratios accurate for comparison. An important implication is that regulators and software designers should work to reduce user reliance on the comparability of ratios generated automatically using interactive data.

Relevance:

100.00%

Publisher:

Abstract:

The Arnamagnæan Institute, principally in the form of the present writer, has been involved in a number of projects to do with the digitisation, electronic description and text-encoding of medieval manuscripts. Several of these projects were dealt with in a previous article, 'The view from the North: Some Scandinavian digitisation projects', NCD review, 4 (2004), pp. 22-30. This paper looks in some depth at two others, MASTER and CHLT. The Arnamagnæan Institute is a teaching and research institute within the Faculty of Humanities at the University of Copenhagen. It is named after the Icelandic scholar and antiquarian Árni Magnússon (1663-1730), secretary of the Royal Danish Archives and Professor of Danish Antiquities at the University of Copenhagen, who in the course of his lifetime built up what is arguably the single most important collection of early Scandinavian manuscripts in the world, some 2,500 manuscript items, the earliest dating from the 12th century. The majority of these are from Iceland, but the collection also contains important Norwegian, Danish and Swedish manuscripts, along with approximately 100 manuscripts of continental provenance. In addition to the manuscripts proper, there are collections of original charters and apographa: 776 Norwegian (including Faroese, Shetlandic and Orcadian) charters and 2,895 copies, 1,571 Danish charters and 1,372 copies, and 1,345 Icelandic charters and 5,942 copies. When he died in 1730, Árni Magnússon bequeathed his collection to the University of Copenhagen. The original collection has subsequently been augmented through individual purchases and gifts and the acquisition of a number of smaller collections, bringing the total to nearly 3,000 manuscript items, which, with the charters and apographa, comprise over half a million pages.

Relevance:

100.00%

Publisher:

Abstract:

The advantages of a COG (Component Object Graphic) approach to the composition of PDF pages have been set out in a previous paper [1]. However, if pages are to be composed in this way then the individual graphic objects must have known bounding boxes and must be correctly placed on the page, in a process that resembles the link editing of a multi-module computer program. Ideally the linker should be able to utilize all declared resource information attached to each COG. We have investigated the use of an XML application called Personalized Print Markup Language (PPML) to control the link-editing process for PDF COGs. Our experiments, though successful, have shown up the shortcomings of PPML's resource-handling capabilities, which are currently active at the document and page levels but which cannot be elegantly applied to individual graphic objects at a sub-page level. Proposals are put forward for modifications to PPML that would make any COG-based approach to page composition easier.
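
The link-editing analogy can be sketched with a toy placement pass: each COG exposes its bounding box, and the linker resolves positions while checking declared constraints. The classes and placement policy below are invented for illustration; PPML and PDF handle this differently in detail.

```python
# A toy "link editor" for COGs: each graphic object carries a bounding
# box, and the linker assigns page positions much as a link editor lays
# out the modules of a program.
from dataclasses import dataclass

@dataclass
class COG:
    name: str
    width: float   # bounding-box width in points
    height: float  # bounding-box height in points

def link_edit(cogs: list[COG], page_width: float, margin: float = 36.0):
    """Place COGs top-to-bottom, returning (cog, x, y) placements."""
    placements, y = [], margin
    for cog in cogs:
        if cog.width > page_width - 2 * margin:
            raise ValueError(f"{cog.name} exceeds the printable width")
        placements.append((cog, margin, y))
        y += cog.height
    return placements

page = link_edit([COG("masthead", 500, 80), COG("article", 500, 600)], 595)
for cog, x, y in page:
    print(f"{cog.name}: placed at ({x}, {y})")
```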

Relevance:

100.00%

Publisher:

Abstract:

Repeat photography is an efficient, effective and useful method for identifying trends of change in the landscape, and it has been used to illustrate long-term changes occurring in landscapes. In the Northeast of Portugal, landscape change is currently driven mostly by agricultural abandonment and by agricultural and energy policy. However, there is a need to monitor changes in the region using a multitemporal and multiscale approach. This project aimed to establish an online repository of oblique digital photography of the region, to be used to register the condition of the landscape as recorded in historical and contemporary photography, and to support qualitative and quantitative assessment of landscape change using repeat photography techniques and methods. It involved the development of a relational database and a series of web-based services using the PHP: Hypertext Preprocessor language, and the development of an interface, built with Joomla, for the uploading and downloading of pictures by users. The repository makes it possible to upload, store, display, and download pictures of Northeastern Portugal, and to search them by location, theme, or date. The website service is intended to help researchers quickly obtain the photographs needed to apply repeat photography, through a dedicated search engine. It can be accessed at: http://esa.ipb.pt/digitalandscape/.
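
The search service can be sketched against a local stand-in for the project's relational database; the table layout and column names are assumptions, and the real implementation is in PHP rather than Python.

```python
# Searching pictures by location, theme, or date, against an in-memory
# SQLite stand-in for the project's relational database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE photos (
    id INTEGER PRIMARY KEY, location TEXT, theme TEXT,
    taken DATE, url TEXT)""")
conn.execute("INSERT INTO photos VALUES (1, 'Bragança', 'agriculture', "
             "'1958-07-01', 'http://esa.ipb.pt/digitalandscape/p/1')")

def search(location=None, theme=None, since=None):
    """Search by any combination of location, theme, and earliest date."""
    clauses, args = [], []
    if location is not None:
        clauses.append("location = ?")
        args.append(location)
    if theme is not None:
        clauses.append("theme = ?")
        args.append(theme)
    if since is not None:
        clauses.append("taken >= ?")
        args.append(since)
    where = " AND ".join(clauses) or "1=1"
    return conn.execute(f"SELECT url FROM photos WHERE {where}", args).fetchall()

print(search(location="Bragança", theme="agriculture"))
```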