879 results for: Web services. Service orchestration languages. PEWS. Graph-reduction machines
Abstract:
In many Environmental Information Systems the actual observations arise from a discrete monitoring network which might be rather heterogeneous in both location and types of measurements made. In this paper we describe the architecture and infrastructure for a system, developed as part of the EU FP6 funded INTAMAP project, to provide a service-oriented solution that allows the construction of an interoperable, automatic, interpolation system. This system will be based on the Open Geospatial Consortium’s Web Feature Service (WFS) standard. The essence of our approach is to extend the GML3.1 observation feature to include information about the sensor using SensorML, and to further extend this to incorporate observation error characteristics. Our extended WFS will accept observations, and will store them in a database. The observations will be passed to our R-based interpolation server, which will use a range of methods, including a novel sparse, sequential kriging method (only briefly described here) to produce an internal representation of the interpolated field resulting from the observations currently uploaded to the system. The extended WFS will then accept queries, such as ‘What is the probability distribution of the desired variable at a given point?’, ‘What is the mean value over a given region?’, or ‘What is the probability of exceeding a certain threshold at a given location?’. To support information-rich transfer of complex and uncertain predictions we are developing schemas to represent probabilistic results in a GML3.1 (object-property) style. The system will also offer more easily accessible Web Map Service and Web Coverage Service interfaces to allow users to access the system at the level of complexity they require for their specific application. Such a system will offer a very valuable contribution to the next generation of Environmental Information Systems in the context of real-time mapping for monitoring and security, particularly for systems that employ a service-oriented architecture.
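As a minimal illustration of the last kind of query above, the sketch below turns a Gaussian predictive distribution (a kriging mean and variance) into an exceedance probability; the numeric values and function names are placeholders, not part of the INTAMAP interfaces.

# Sketch: turning a kriged Gaussian prediction into the query answers described
# above (probability of exceeding a threshold at a location).
# The mean/variance values below are placeholders, not INTAMAP output.
from scipy.stats import norm

def exceedance_probability(mean: float, variance: float, threshold: float) -> float:
    """P(Z > threshold) for a Gaussian predictive distribution N(mean, variance)."""
    return float(norm.sf(threshold, loc=mean, scale=variance ** 0.5))

# Hypothetical kriging output at a query point: mean 4.2, variance 0.9
print(exceedance_probability(mean=4.2, variance=0.9, threshold=5.0))  # ~0.20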
Abstract:
This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
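As a small companion to the XML examples mentioned above, the sketch below serialises a Gaussian marginal in an UncertML-like vocabulary; the element and namespace names are assumptions for illustration and should be checked against the actual UncertML schemas.

# Sketch: serialising a Gaussian marginal in an UncertML-like XML vocabulary.
# Element and namespace names are illustrative assumptions, not the normative schema.
import xml.etree.ElementTree as ET

NS = "http://www.uncertml.org/2.0"  # assumed namespace URI
ET.register_namespace("un", NS)

dist = ET.Element(f"{{{NS}}}NormalDistribution")
ET.SubElement(dist, f"{{{NS}}}mean").text = "4.2"
ET.SubElement(dist, f"{{{NS}}}variance").text = "0.9"

print(ET.tostring(dist, encoding="unicode"))
# <un:NormalDistribution xmlns:un="..."><un:mean>4.2</un:mean><un:variance>0.9</un:variance>...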
Abstract:
Overlaying maps using a desktop GIS is often the first step of a multivariate spatial analysis. The potential of this operation has increased considerably as data sources and Web services to manipulate them are becoming widely available via the Internet. Standards from the OGC enable such geospatial mashups to be seamless and user driven, involving discovery of thematic data. The user is naturally inclined to look for spatial clusters and correlation of outcomes. Using classical cluster detection scan methods to identify multivariate associations can be problematic in this context, because of a lack of control on or knowledge about background populations. For public health and epidemiological mapping, this limiting factor can be critical but often the focus is on spatial identification of risk factors associated with health or clinical status. In this article we point out that this association itself can ensure some control on underlying populations, and develop an exploratory scan statistic framework for multivariate associations. Inference using statistical map methodologies can be used to test the clustered associations. The approach is illustrated with a hypothetical data example and an epidemiological study on community MRSA. Scenarios of potential use for online mashups are introduced but full implementation is left for further research. [Figure: Spatial entropy index HSu for the ScankOO analysis of the hypothetical dataset, using a vicinity fixed by the number of points without distinction between their labels; label size proportional to the inverse of the index.]
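The sketch below illustrates the general idea of a label-entropy measure computed over a vicinity fixed by the number of points, as in the figure caption above; it uses a plain Shannon entropy over the k nearest neighbours and is not necessarily the paper's exact HSu definition.

# Sketch of a label-entropy measure over a fixed-size vicinity (k nearest points,
# regardless of label). A generic Shannon entropy for illustration only.
import numpy as np
from collections import Counter

def vicinity_entropy(points: np.ndarray, labels: list, k: int) -> np.ndarray:
    """For each point, entropy of the label mix among its k nearest neighbours."""
    out = np.empty(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nearest = np.argsort(d)[:k]          # vicinity fixed by number of points
        counts = np.array(list(Counter(labels[j] for j in nearest).values()))
        freqs = counts / counts.sum()
        out[i] = -(freqs * np.log(freqs)).sum()
    return out

pts = np.random.rand(50, 2)                  # hypothetical point locations
labs = np.random.choice(["case", "control"], size=50).tolist()
print(vicinity_entropy(pts, labs, k=10)[:5])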
Abstract:
OBJECTIVES: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extendable and easy-to-maintain clinical CDSS for point-of-care support. The proposed solution was evaluated in a proof-of-concept implementation. METHODS: Based on our earlier research with the design of a mobile CDSS for emergency triage, we used ontology-driven design to represent essential components of a CDSS. Models of clinical decision problems were derived from the ontology and processed into executable applications at runtime. This allowed applications' functionality to be scaled to the capabilities of computing platforms. A prototype of the system was implemented using the extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited connectivity conditions. RESULTS: The proposed design provided a common framework that facilitated development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms: desktop and handheld computers. CONCLUSIONS: The requirement of CDSS heterogeneity was satisfied with ontology-driven design. Processing of application models described with the help of ontological models made it possible to run a complex system on multiple computing platforms with different capabilities. Finally, separation of models and runtime components contributed to improved extensibility and maintainability of the system.
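A schematic sketch of the runtime idea described above: a declarative model of a decision problem is filtered against the capabilities of the target platform before being assembled into an application. The model structure and capability names are invented for illustration and are not taken from the system described.

# Schematic sketch: a declarative decision-model description is filtered against
# a platform's capabilities before being turned into a runnable application.
# Model components and capability names are invented for illustration only.
decision_model = {
    "name": "acute_pain_triage",  # hypothetical model identifier
    "components": [
        {"id": "data_entry_form", "requires": {"display"}},
        {"id": "rule_engine", "requires": {"cpu"}},
        {"id": "imaging_viewer", "requires": {"display", "high_resolution"}},
    ],
}

def build_application(model: dict, platform_capabilities: set) -> list:
    """Keep only the components the target platform can actually support."""
    return [c["id"] for c in model["components"]
            if c["requires"] <= platform_capabilities]

print(build_application(decision_model, {"display", "cpu"}))                     # handheld
print(build_application(decision_model, {"display", "cpu", "high_resolution"}))  # desktop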
Abstract:
The American Academy of Optometry (AAO) had their annual meeting in San Diego in December 2005 and the BCLA and CLAE were well represented there. The BCLA does have a reasonable number of non-UK based members and hopefully in the future will attract more. This will certainly be beneficial to the society as a whole and may draw more delegates to the BCLA annual conference. To increase awareness of the BCLA at the AAO a special evening seminar was arranged where BCLA president Dr. James Wolffsohn gave his presidential address. Dr. Wolffsohn has given the presidential address in the UK, Ireland, Hong Kong and Japan – making it the most travelled presidential address for the BCLA to date. Aside from the BCLA activity at the AAO there were numerous lectures of interest to all, truly a “something for everyone” meeting. All the sessions were multi-track (often up to 10 things occurring at the same time) and the biggest dilemma was often deciding what to attend and, more importantly, what you would miss! Nearly 200 new AAO Fellows were inducted at the Gala Dinner from many countries, including 3 new fellows from the UK (this year they all just happened to be from Aston University!). It is certainly one of the highlights of the AAO to see fellows from different schools of training from around the world fulfilling the same criteria and being duly rewarded for their commitment to the profession. BCLA members will be aware that 2006 sees the introduction of the new fellowship scheme of the BCLA and by the time you read this the first set of fellowship examinations will have taken place. For more details of the FBCLA scheme see the BCLA web site http://www.bcla.org.uk. Since many of CLAE's editorial panel were at the AAO, an informal meeting and dinner was arranged for them where ideas were exchanged about the future of the journal. It is envisaged that the panel will meet twice a year – the next meeting will be at the BCLA conference. The biggest excitement by far was the fact that CLAE is now Medline/PubMed indexed. You may ask why this is significant to CLAE. PubMed is the free web-based service from the US National Library of Medicine. It holds over 15 million biomedical citations and abstracts from the Medline database. Medline is the largest component of PubMed and covers over 4800 journals published in more than 70 countries. The impact of this is that CLAE is starting to attract more submissions, as researchers and authors no longer need to worry that their work will be hidden from colleagues in the field; rather, the work is available to view on the World Wide Web. CLAE is one of a very small number of contact lens journals that is indexed this way. Amongst the other CL journals listed you will note that the International Contact Lens Clinic has now merged with CLAE and the journal CLAO has been renamed Eye and Contact Lenses – making the list of indexed CL journals even smaller than it appears. The on-line submission and reviewing system introduced in 2005 has also made it easier for authors to submit their work and easier for reviewers to check the content. This ease of use has led to quicker times from submission to publication. Looking back at the articles published in CLAE in 2005 reveals some interesting facts. The majority of the material still tends to be from UK groups related to the field of Optometry, although we hope that in the future we will attract more work from non-UK groups and also from non-Optometric areas such as refractive surgery or anterior eye pathology.
Interestingly, in 2005 the most downloaded article from CLAE was “Wavefront technology: Past, present and future” by Professor W. Neil Charman, who was also the recipient of the Charles F. Prentice award at the AAO – one of the highest honours that the AAO can bestow. Professor Charman was also the keynote speaker at the BCLA's first Pioneer's Day meeting in 2004. In 2006, readers of CLAE will notice more changes: firstly, we are moving to 5 issues per year. It is hoped that in the future, depending on increased submissions, a move to 6 issues may be feasible. Secondly, CLAE will aim to have one article per issue that carries CL CET points. You will see in this issue there is an article from Professor Mark Wilcox (who was a keynote speaker at the BCLA conference in 2005). In future, articles that carry CET points will be either reviews from BCLA conference keynote speakers or members of the editorial panel, or material from other invited persons that will be of interest to the readership of CLAE. Finally, in 2006, you will notice a change to the Editorial Panel: some of the distinguished panel felt that it was a good time to step down and new members have been invited to join the remaining panel. The panel represent some of the most eminent names in the fields of contact lenses and/or anterior eye and have varying backgrounds and interests from many of the prominent institutions around the world. One of the tasks that the Editorial Panel undertake is to seek out possible submissions to the journal, either from conferences they attend (posters and papers that they will see and hear) or from their own research teams. However, on behalf of CLAE I would like to extend that invitation to seek original articles to all readers – if you hear a talk and think it could make a suitable publication for CLAE, please ask the presenters to submit the work via the on-line submission system. If you found the work interesting then the chances are so will others. CLAE invites submissions that are original research, full length articles, short case reports, full review articles, technical reports and letters to the editor. The on-line submission web page is http://www.ees.elsevier.com/clae/.
Abstract:
OpenMI is a widely used standard allowing exchange of data between integrated models, which has mostly been applied to dynamic, deterministic models. Within the FP7 UncertWeb project we are developing mechanisms and tools to support the management of uncertainty in environmental models. In this paper we explore the integration of the UncertWeb framework with OpenMI, to assess the issues that arise when propagating uncertainty in OpenMI model compositions, and the degree of integration possible with UncertWeb tools. In particular we develop an uncertainty-enabled model for a simple Lotka-Volterra system with an interface conforming to the OpenMI standard, exploring uncertainty in the initial predator and prey levels, and the parameters of the model equations. We use the Elicitator tool developed within UncertWeb to identify the initial condition uncertainties, and show how these can be integrated, using UncertML, with simple Monte Carlo propagation mechanisms. The mediators we develop for OpenMI models are generic and produce standard Web services that expose the OpenMI models to a Web based framework. We discuss what further work is needed to allow a more complete system to be developed and show how this might be used practically.
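A minimal sketch of the Monte Carlo propagation step for such a Lotka-Volterra composition: uncertain initial predator and prey levels and one rate parameter are sampled and pushed through a simple integration of the model equations. The distributions, parameter values and solver are illustrative placeholders, not the elicited UncertWeb inputs or the OpenMI component itself.

# Sketch: Monte Carlo propagation of uncertainty in initial conditions and a rate
# parameter through a Lotka-Volterra model. All distributions and values below are
# illustrative placeholders.
import numpy as np

def lotka_volterra(prey0, pred0, a, b, c, d, dt=0.01, steps=2000):
    """Simple forward-Euler integration of the predator-prey equations."""
    x, y = prey0, pred0
    for _ in range(steps):
        dx = a * x - b * x * y
        dy = d * x * y - c * y
        x, y = x + dt * dx, y + dt * dy
    return x, y

rng = np.random.default_rng(42)
n = 1000
prey0 = rng.normal(10.0, 1.0, n)   # uncertain initial prey level
pred0 = rng.normal(5.0, 0.5, n)    # uncertain initial predator level
a = rng.normal(1.1, 0.05, n)       # uncertain prey growth rate
results = np.array([lotka_volterra(p, q, ai, 0.4, 0.4, 0.1)
                    for p, q, ai in zip(prey0, pred0, a)])
print("prey after 20 time units: mean %.2f, sd %.2f" % (results[:, 0].mean(),
                                                        results[:, 0].std()))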
Abstract:
In recent years the Web has become a mainstream medium for communication and information dissemination. This paper presents approaches and methods for implementing adaptive learning, which are used in some contemporary web-interfaced Learning Management Systems (LMSs). The problem is not how to create electronic learning materials, but how to locate and utilize the available information in a personalized way. Different approaches to personalization are briefly described in section 1. Real personalization requires a user profile containing information about preferences, aims, and educational history to be stored and used by the system. These issues are considered in section 2. A method for the development and design of adaptive learning content in terms of learning-strategy system support is presented in section 3. Section 4 includes a set of innovative personalization services suggested by several important research projects (the SeLeNe project, the ELENA project, etc.) from the last few years. This section also describes a model for role- and competency-based learning customization that uses the Web Services approach. The last part presents how personalization techniques are implemented in Learning Grid-driven applications.
Abstract:
The paper has been presented at the International Conference Pioneers of Bulgarian Mathematics, Dedicated to Nikola Obreshkoff and Lubomir Tschakaloff, Sofia, July 2006.
Abstract:
The current state of Russian databases on the properties of substances and materials was reviewed. A brief survey of integration methods for such information systems was prepared, and a distributed-database integration approach based on a metabase was proposed. Implementation details of the proposed integration approach were given for a database on electronics materials. An operating pilot version of the integrated information system, implemented at IMET RAS, was described.
Abstract:
(Florida State University and University of Helsinki) Information technology has the potential to deliver education to everybody through high-quality online courses and associated services, and to enhance traditional face-to-face instruction by, e.g., web services offering virtually unlimited practice and step-by-step solutions to practice problems. Despite this, tools of information technology have not yet penetrated mathematics education in any meaningful way. This is mostly due to the inertia of academia: instructors are slow to change their working habits. This paper reports on an experiment where all the instructors (seven instructors and six teaching assistants) of a large calculus course were required to base their instruction on online content. The paper analyzes the effectiveness of the various solutions used and finishes with recommendations regarding best practices.
Abstract:
In recent years East-Christian iconographical artworks have been digitized, producing a large volume of data. The need for effective classification, indexing and retrieval of iconography repositories motivated the design and development of a systemized ontological structure for the description of iconographical art objects. This paper presents the ontology of East-Christian iconographical art, developed to provide content annotation in the Virtual encyclopedia of Bulgarian iconography multimedia digital library. The ontology's main classes, relations, facts and rules are described, together with problems that appeared during the design and development. The paper also presents an application of the ontology for learning analysis in the iconography domain, implemented during the SINUS project “Semantic Technologies for Web Services and Technology Enhanced Learning”.
Abstract:
ACM Computing Classification System (1998): D.0, D.2.11.
Abstract:
The current research activities of the Institute of Mathematics and Informatics at the Bulgarian Academy of Sciences (IMI-BAS) include the study and application of knowledge-based methods for the creation, integration and development of multimedia digital libraries with applications in cultural heritage. This report presents IMI-BAS's developments in digital library management systems and portals, i.e. the Bulgarian Iconographical Digital Library, the Bulgarian Folklore Digital Library and the Bulgarian Folklore Artery, developed during several national and international projects:
- "Digital Libraries with Multimedia Content and its Application in Bulgarian Cultural Heritage" (contract 8/21.07.2005 between IMI-BAS and the State Agency for Information Technologies and Communications);
- FP6/IST/P-027451 project LOGOS "Knowledge-on-Demand for Ubiquitous Learning", EU FP6, IST, Priority 2.4.13 "Strengthening the Integration of the ICT research effort in an Enlarged Europe";
- NSF project D-002-189 SINUS "Semantic Technologies for Web Services and Technology Enhanced Learning";
- NSF project IO-03-03/2006 "Development of Digital Libraries and Information Portal with Virtual Exposition 'Bulgarian Folklore Heritage'".
The presented prototypes aim to provide flexible and effective access to multimedia presentations of cultural heritage artefacts and collections, maintaining different forms and formats of the digitized information content and rich functionality for interaction. The developments are a result of long-standing interests and work on technological developments in information systems, knowledge processing and content management systems. The current research activities aim at creating innovative solutions for assembling multimedia digital libraries for collaborative use in a specific cultural heritage context, maintaining their semantic interoperability and creating new services for dynamic aggregation of their resources, access improvement, personalisation, intelligent curation of content, and content protection. The investigations are directed towards the development of distributed tools for aggregating heterogeneous content and ensuring semantic compatibility with the European digital library EUROPEANA, thus providing possibilities for pan-European access to rich digitised collections of Bulgarian cultural heritage.
Abstract:
Purpose – The paper challenges the focal firm perspective of much resource/capability research, identifying how a dyadic perspective facilitates identification of capabilities required for servitization. Design/methodology/approach – Exploratory study consisting of seven dyadic relationships in five sectors. Findings – An additional dimension of capabilities should be recognised; whether they are developed independently or interactively (with another actor). The following examples of interactively developed capabilities are identified: knowledge development, where partners interactively communicate to understand capabilities; service enablement, manufacturers work with suppliers and customers to support delivery of new services; service development, partners interact to optimise performance of existing services; risk management, customers work with manufacturers to manage risks of product acquisition/operation. Six propositions were developed to articulate these findings. Research implications/limitations – Interactively developed capabilities are created when two or more actors interact to create value. Interactively developed capabilities do not just reside within one firm and, therefore, cannot be a source of competitive advantage for one firm alone. Many of the capabilities required for servitization are interactive, yet have received little research attention. The study does not provide an exhaustive list of interactively developed capabilities, but demonstrates their existence in manufacturer/supplier and manufacturer/customer dyads. Practical implications – Manufacturers need to understand how to develop capabilities interactively to create competitive advantage and value and identify other actors with whom these capabilities can be developed. Originality/value – Previous research has focused on relational capabilities within a focal firm. This study extends existing theories to include interactively developed capabilities. The paper proposes that interactivity is a key dimension of actors’ complementary capabilities.
Abstract:
In this research we analyse the capabilities of the cloud, focusing on computing (understood as processing capacity) and the storage of geolocated data (content delivery networks). These capabilities, together with a business model that implies zero provisioning cost and pay-per-use, plus cost reductions through synergies and the optimisation of data centres, make for an attractive scenario for SMEs devoted to audiovisual production. Using cloud services, a small company can, in a matter of minutes, reach the processing and distribution power of the giants of audiovisual production, without needing to occupy its own premises or pay in advance to rent the equipment. We describe the AWS (Amazon Web Services) services that would be useful to such a company, the use they would make of those services and their cost, and we compare this with the budget for building the same installation physically at the company, which would require purchasing and installing the equipment.
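A back-of-the-envelope sketch of the comparison described above: pay-per-use cloud cost against an amortised upfront purchase. All prices and quantities are hypothetical placeholders; real AWS pricing varies by service, region and usage pattern and should be taken from the current price lists.

# Sketch of pay-per-use cloud cost vs. an upfront on-premises purchase.
# All prices below are hypothetical placeholders, not real AWS rates.
def cloud_monthly_cost(compute_hours, hourly_rate, gb_delivered, cdn_rate_per_gb):
    """Pay-per-use: no upfront cost, billed only for what is consumed."""
    return compute_hours * hourly_rate + gb_delivered * cdn_rate_per_gb

def on_prem_monthly_cost(hardware_price, amortisation_months, monthly_running):
    """Upfront purchase amortised over its useful life, plus running costs."""
    return hardware_price / amortisation_months + monthly_running

cloud = cloud_monthly_cost(compute_hours=300, hourly_rate=0.50,
                           gb_delivered=2000, cdn_rate_per_gb=0.08)
on_prem = on_prem_monthly_cost(hardware_price=25000, amortisation_months=36,
                               monthly_running=400)
print(f"cloud: {cloud:.2f} EUR/month, on-premises: {on_prem:.2f} EUR/month")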