41 results for Web services. Service orchestration languages. PEWS. Graph-reduction machines


Relevance:

100.00%

Publisher:

Abstract:

In view of the increasing complexity of service logic and functional requirements, a new SOA-based system architecture was proposed for an equipment remote monitoring and diagnosis system. Following the design principles of SOA, the service logic and functional requirements of the remote monitoring and diagnosis system were divided into different levels and granularities, and a loosely coupled web services system was built. The design and implementation of the core function modules of the proposed architecture are presented, and a demo system was used to validate the feasibility of the proposed architecture.
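
The abstract does not show any interfaces, but the loose coupling and differing service granularities it describes can be illustrated with a minimal sketch; all endpoint names, fields and the alarm threshold below are hypothetical.

```python
# Minimal sketch of two loosely coupled services of different granularity
# (hypothetical endpoints; the paper does not specify its interfaces).
from flask import Flask, jsonify

app = Flask(__name__)

# Stub store standing in for a data-acquisition service.
EQUIPMENT_STATE = {"pump-01": {"vibration_mm_s": 2.4, "temperature_c": 61.0}}

@app.route("/monitoring/<equipment_id>/status")
def status(equipment_id):
    """Fine-grained service: latest readings for one device."""
    return jsonify(EQUIPMENT_STATE.get(equipment_id, {}))

@app.route("/diagnosis/<equipment_id>")
def diagnose(equipment_id):
    """Coarse-grained service composed from the monitoring data."""
    state = EQUIPMENT_STATE.get(equipment_id, {})
    alarm = state.get("vibration_mm_s", 0.0) > 4.5  # illustrative threshold
    return jsonify({"equipment": equipment_id, "alarm": alarm})

if __name__ == "__main__":
    app.run(port=8080)
```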

Relevance:

100.00%

Publisher:

Abstract:

eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with equal properties. Inputs to the WPS, typically thematic geospatial "layers", can be discovered using standardised catalogues, and the outputs tailored to specific end user needs. Because these layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of different types and levels of uncertainties. Potentially chained to other services to perform ecological forecasting, for example, eHabitat would be an additional component further propagating uncertainties from a potentially long chain of model services. This integration of complex resources increases the challenges in dealing with uncertainty. For such a system, as envisaged by initiatives such as the "Model Web" from the Group on Earth Observations, to be used for policy or decision making, users must be provided with information on the quality of the outputs since all system components will be subject to uncertainty. UncertWeb will create the Uncertainty-Enabled Model Web by promoting interoperability between data and models with quantified uncertainty, building on existing open, international standards. The objective of this paper is to illustrate a few key ideas behind UncertWeb using eHabitat, discussing the main types of uncertainty the WPS has to deal with and presenting the benefits of using the UncertWeb framework.
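
The chaining of uncertain model services that the paper discusses can be approximated with a simple Monte Carlo sketch; the two model functions and the input distributions below are toy stand-ins, not eHabitat's actual models.

```python
# Illustrative Monte Carlo propagation through a chain of two "model
# services". Input uncertainty is represented by samples (realisations),
# as in the UncertWeb approach.
import numpy as np

rng = np.random.default_rng(42)

def habitat_similarity(ndvi, rainfall):
    # Toy stand-in for an eHabitat-style similarity score.
    return np.exp(-((ndvi - 0.6) ** 2 + (rainfall - 800.0) ** 2 / 1e5))

def forecast(similarity):
    # Toy stand-in for a downstream ecological forecasting service.
    return 0.9 * similarity

# Uncertain inputs: a remotely sensed layer and a modelled climate layer.
ndvi = rng.normal(0.6, 0.05, size=10_000)
rain = rng.normal(800.0, 60.0, size=10_000)

out = forecast(habitat_similarity(ndvi, rain))
print(f"forecast mean={out.mean():.3f}, 95% interval="
      f"({np.quantile(out, 0.025):.3f}, {np.quantile(out, 0.975):.3f})")
```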

Relevance:

100.00%

Publisher:

Abstract:

Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. Thus the interpolation service is extremely flexible, being able to support a range of observation types, and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities which allow automatic discovery and consumption by ‘naive’ users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. We show a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types. These types will range from simple statistics, such as the kriging mean and variances, through to a range of probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we show a prototype moving towards a truly interoperable geostatistical software architecture.
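
As a rough illustration of the kind of result the interpolation service would wrap in these schemata, the sketch below performs simple kriging on toy data and returns the kriging mean and variance; it is not INTAMAP code, and the covariance parameters are invented.

```python
# Simple kriging: the prediction mean and variance that would be encoded
# in the uncertainty schemata (toy data and covariance model).
import numpy as np

def cov(h, sill=1.0, range_len=2.0):
    """Exponential covariance as a function of separation distance."""
    return sill * np.exp(-h / range_len)

X = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 2.0]])  # observation sites
z = np.array([1.2, 0.7, 0.3])                        # observed values
mean0 = z.mean()                                     # assumed known mean

x0 = np.array([1.0, 1.0])                            # prediction site
K = cov(np.linalg.norm(X[:, None] - X[None, :], axis=2))  # obs-obs covariance
k = cov(np.linalg.norm(X - x0, axis=1))                   # obs-target covariance

w = np.linalg.solve(K, k)                 # simple-kriging weights
krig_mean = mean0 + w @ (z - mean0)
krig_var = cov(0.0) - w @ k               # kriging variance
print(f"mean={krig_mean:.3f}, variance={krig_var:.3f}")
```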

Relevance:

100.00%

Publisher:

Abstract:

When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either unavailable or impossible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
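
One common step in such an elicitation workflow (not necessarily the SHELF variant the tool implements) is fitting a parametric distribution to an expert's elicited quantiles, as in this sketch; the quantile values are illustrative.

```python
# Fit a Normal distribution to an expert's elicited quantiles by least
# squares (illustrative judgements; a real exercise would also compare
# candidate distribution families and give feedback to the expert).
import numpy as np
from scipy import stats, optimize

# Expert's judgements: 5th, 50th and 95th percentiles of the quantity.
probs = np.array([0.05, 0.50, 0.95])
elicited = np.array([10.0, 14.0, 22.0])

def loss(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return np.sum((stats.norm.ppf(probs, mu, sigma) - elicited) ** 2)

res = optimize.minimize(loss, x0=[14.0, 4.0], method="Nelder-Mead")
mu, sigma = res.x
print(f"fitted Normal(mu={mu:.2f}, sigma={sigma:.2f})")
```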

Relevance:

100.00%

Publisher:

Abstract:

The CancerGrid consortium is developing open-standards cancer informatics to address the challenges posed by modern cancer clinical trials. This paper presents the service-oriented software paradigm implemented in CancerGrid to derive clinical trial information management systems for collaborative cancer research across multiple institutions. Our proposal is founded on a combination of a clinical trial (meta)model and WSRF (Web Services Resource Framework), and is currently being evaluated for use in early phase trials. Although primarily targeted at cancer research, our approach is readily applicable to other areas for which a similar information model is available.

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines the innovative performance of 206 U.S. business service firms. Undeniably, a need exists for better comprehension of the service sector of developed economies. This research takes a unique view by applying a synthesis approach to studying innovation, and attempts to build on a proposed strategic innovation paradigm. A quantitative questionnaire-based method is utilised in which all major types of innovation are examined, including product and service, organisational, and technology-driven innovations. Essential ideas for this conceptual framework encapsulate a new mode of understanding service innovation. The structure of this analysis encompasses the likelihood of innovation and the extent of innovation, while also attempting to shed light on the factors which determine the impact of innovation on performance among service firms. What differentiates this research is its focus on customer-driven service firms, in addition to other external linkages. A synopsis of the findings suggests that external linkages, particularly with customers, suppliers and strategic alliances or joint ventures, significantly affect innovation performance with regard to the introduction of new services. Service firms which incorporate formal and informal R&D experience significant increases in the extent of new-to-market and new-to-firm innovations. Additionally, the results show that customer-driven service firms experience greater productivity and growth. Furthermore, the findings suggest that external linkages assist service firm performance.

Relevance:

100.00%

Publisher:

Abstract:

Despite expectations being high, the industrial take-up of Semantic Web technologies in developing services and applications has been slower than expected. One of the main reasons is that many legacy systems have been developed without considering the potential of the Web in integrating services and sharing resources. Without a systematic methodology and proper tool support, the migration from legacy systems to Semantic Web Service-based systems can be a tedious and expensive process, which carries a significant risk of failure. There is an urgent need to provide strategies allowing the migration of legacy systems to Semantic Web Services platforms, and also tools to support such strategies. In this paper we propose a methodology and its tool support for transitioning these applications to Semantic Web Services, allowing users to migrate their applications to Semantic Web Services platforms automatically or semi-automatically. The transition of the GATE system is used as a case study.
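
As a generic illustration of one early migration step, rather than the paper's actual tool, the sketch below attaches machine-readable semantic annotations to a legacy routine so that it could later be exposed as a Semantic Web Service; the ontology URIs and function names are hypothetical.

```python
# Record ontology concepts for a wrapped legacy call so that a later
# tooling step can generate a Semantic Web Service description from it.
import functools

def semantic_service(input_concept, output_concept):
    """Decorator attaching semantic metadata to a legacy function."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            return fn(*args, **kwargs)
        inner.semantics = {"input": input_concept, "output": output_concept}
        return inner
    return wrap

@semantic_service(
    input_concept="http://example.org/onto#Document",
    output_concept="http://example.org/onto#AnnotatedDocument",
)
def annotate(text):          # stand-in for a legacy GATE-style routine
    return {"text": text, "entities": []}

print(annotate.semantics)
```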

Relevance:

100.00%

Publisher:

Abstract:

In many Environmental Information Systems the actual observations arise from a discrete monitoring network which might be rather heterogeneous in both location and types of measurements made. In this paper we describe the architecture and infrastructure for a system, developed as part of the EU FP6 funded INTAMAP project, to provide a service-oriented solution that allows the construction of an interoperable, automatic, interpolation system. This system will be based on the Open Geospatial Consortium’s Web Feature Service (WFS) standard. The essence of our approach is to extend the GML3.1 observation feature to include information about the sensor using SensorML, and to further extend this to incorporate observation error characteristics. Our extended WFS will accept observations and store them in a database. The observations will be passed to our R-based interpolation server, which will use a range of methods, including a novel sparse, sequential kriging method (only briefly described here), to produce an internal representation of the interpolated field resulting from the observations currently uploaded to the system. The extended WFS will then accept queries, such as ‘What is the probability distribution of the desired variable at a given point?’, ‘What is the mean value over a given region?’, or ‘What is the probability of exceeding a certain threshold at a given location?’. To support information-rich transfer of complex and uncertain predictions we are developing schemata to represent probabilistic results in a GML3.1 (object-property) style. The system will also offer more easily accessible Web Map Service and Web Coverage Service interfaces to allow users to access the system at the level of complexity they require for their specific application. Such a system will offer a very valuable contribution to the next generation of Environmental Information Systems in the context of real-time mapping for monitoring and security, particularly for systems that employ a service-oriented architecture.
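
For a flavour of the third query type: if the interpolated field at a location is summarised by a kriging mean and variance under a Gaussian assumption, the exceedance probability is a one-line calculation; the numbers below are invented.

```python
# Exceedance probability from a Gaussian interpolation summary.
from scipy.stats import norm

krig_mean, krig_sd = 42.0, 6.5      # values a service like this would return
threshold = 50.0                    # e.g. an alert level for the variable

p_exceed = norm.sf(threshold, loc=krig_mean, scale=krig_sd)
print(f"P(value > {threshold}) = {p_exceed:.3f}")   # approx. 0.109
```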

Relevance:

100.00%

Publisher:

Abstract:

This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
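
A minimal sketch of the kind of XML example the thesis describes, built here with Python's standard library; the element names follow the general UncertML pattern but are not checked against a specific schema version.

```python
# Construct an UncertML-style Gaussian encoding (namespace and element
# names are assumptions based on the general UncertML pattern).
import xml.etree.ElementTree as ET

NS = "http://www.uncertml.org/2.0"
ET.register_namespace("un", NS)

dist = ET.Element(f"{{{NS}}}NormalDistribution")
ET.SubElement(dist, f"{{{NS}}}mean").text = "42.0"
ET.SubElement(dist, f"{{{NS}}}variance").text = "42.25"

# Prints a single-line element of the form:
# <un:NormalDistribution xmlns:un="...">
#   <un:mean>42.0</un:mean><un:variance>42.25</un:variance>
# </un:NormalDistribution>
print(ET.tostring(dist, encoding="unicode"))
```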

Relevance:

100.00%

Publisher:

Abstract:

Overlaying maps using a desktop GIS is often the first step of a multivariate spatial analysis. The potential of this operation has increased considerably as data sources and Web services to manipulate them are becoming widely available via the Internet. Standards from the OGC enable such geospatial mashups to be seamless and user driven, involving discovery of thematic data. The user is naturally inclined to look for spatial clusters and correlation of outcomes. Using classical cluster detection scan methods to identify multivariate associations can be problematic in this context, because of a lack of control on or knowledge about background populations. For public health and epidemiological mapping, this limiting factor can be critical, but often the focus is on spatial identification of risk factors associated with health or clinical status. In this article we point out that this association itself can ensure some control on underlying populations, and develop an exploratory scan statistic framework for multivariate associations. Inference using statistical map methodologies can be used to test the clustered associations. The approach is illustrated with a hypothetical data example and an epidemiological study on community MRSA. Scenarios of potential use for online mashups are introduced, but full implementation is left for further research.

[Figure caption: Spatial entropy index HSu for the ScankOO analysis of the hypothetical dataset, using a vicinity fixed by the number of points without distinction between their labels; the size of the labels is proportional to the inverse of the index.]
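
The abstract does not define the HSu index precisely; the sketch below shows only the usual Shannon-entropy ingredient of such indices, computed over label proportions in a vicinity of the k nearest points, on synthetic data.

```python
# Shannon entropy of label proportions within a k-nearest-point vicinity
# (generic sketch; the paper's HSu index may be defined differently).
import numpy as np

def vicinity_entropy(points, labels, centre, k=10):
    d = np.linalg.norm(points - centre, axis=1)
    nearest = labels[np.argsort(d)[:k]]          # k nearest, any label
    _, counts = np.unique(nearest, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))                # 0 = pure, high = mixed

rng = np.random.default_rng(1)
pts = rng.uniform(size=(200, 2))
lab = rng.integers(0, 3, size=200)               # three outcome types
print(vicinity_entropy(pts, lab, centre=np.array([0.5, 0.5])))
```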

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extendable and easy-to-maintain CDSS for point-of-care support. The proposed solution was evaluated in a proof-of-concept implementation. METHODS: Based on our earlier research with the design of a mobile CDSS for emergency triage, we used ontology-driven design to represent essential components of a CDSS. Models of clinical decision problems were derived from the ontology and processed into executable applications at runtime. This allowed scaling the applications' functionality to the capabilities of the computing platforms. A prototype of the system was implemented using an extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited-connectivity conditions. RESULTS: The proposed design provided a common framework that facilitated the development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms: desktop and handheld computers. CONCLUSIONS: The requirement of CDSS heterogeneity was satisfied with ontology-driven design. Processing of application models described with the help of ontological models allowed a complex system to run on multiple computing platforms with different capabilities. Finally, the separation of models and runtime components contributed to improved extensibility and maintainability of the system.
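
A much-simplified sketch of the separation between declarative decision models and a generic runtime engine described here; the rule format and the triage thresholds are hypothetical, not the paper's ontology-derived models.

```python
# Declarative model: data that could be derived from an ontology and
# shipped to clients of differing capability (illustrative format).
TRIAGE_MODEL = {
    "name": "acute-pain-triage",
    "rules": [
        {"if": {"pain_score": (8, 10)}, "then": "urgent"},
        {"if": {"pain_score": (4, 7)},  "then": "semi-urgent"},
        {"if": {"pain_score": (0, 3)},  "then": "non-urgent"},
    ],
}

def run_model(model, patient):
    """Generic engine: interprets any model of this form at runtime."""
    for rule in model["rules"]:
        (attr, (lo, hi)), = rule["if"].items()
        if lo <= patient.get(attr, -1) <= hi:
            return rule["then"]
    return "no-recommendation"

print(run_model(TRIAGE_MODEL, {"pain_score": 9}))   # -> urgent
```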

Relevance:

100.00%

Publisher:

Abstract:

The American Academy of Optometry (AAO) held its annual meeting in San Diego in December 2005, and the BCLA and CLAE were well represented there. The BCLA has a reasonable number of non-UK-based members and will hopefully attract more in the future. This will certainly be beneficial to the society as a whole and may draw more delegates to the BCLA annual conference. To increase awareness of the BCLA at the AAO, a special evening seminar was arranged where BCLA president Dr. James Wolffsohn gave his presidential address. Dr. Wolffsohn has given the presidential address in the UK, Ireland, Hong Kong and Japan – making it the most travelled presidential address for the BCLA to date. Aside from the BCLA activity at the AAO there were numerous lectures of interest to all, truly a “something for everyone” meeting. All the sessions were multi-track (often up to 10 things occurring at the same time) and the biggest dilemma was often deciding what to attend and, more importantly, what you would miss! Nearly 200 new AAO Fellows from many countries were inducted at the Gala Dinner, including three new fellows from the UK (this year they all just happened to be from Aston University!). It is certainly one of the highlights of the AAO to see fellows from different schools of training from around the world fulfilling the same criteria and being duly rewarded for their commitment to the profession. BCLA members will be aware that 2006 sees the introduction of the new fellowship scheme of the BCLA, and by the time you read this the first set of fellowship examinations will have taken place. For more details of the FBCLA scheme see the BCLA web site http://www.bcla.org.uk. Since many of CLAE's editorial panel were at the AAO, an informal meeting and dinner was arranged for them where ideas were exchanged about the future of the journal. It is envisaged that the panel will meet twice a year – the next meeting will be at the BCLA conference. The biggest excitement by far was the fact that CLAE is now Medline/PubMed indexed. You may ask why this is significant to CLAE. PubMed is the free web-based service from the US National Library of Medicine. It holds over 15 million biomedical citations and abstracts from the Medline database. Medline is the largest component of PubMed and covers over 4800 journals published in more than 70 countries. The impact of this is that CLAE is starting to attract more submissions, as researchers and authors need no longer worry that their work will be hidden from colleagues in the field; rather, it is available to view on the World Wide Web. CLAE is one of a very small number of contact lens journals that is indexed this way. Amongst the other CL journals listed you will note that the International Contact Lens Clinic has now merged with CLAE and the journal CLAO has been renamed Eye and Contact Lenses – making the list of indexed CL journals even smaller than it appears. The on-line submission and reviewing system introduced in 2005 has also made it easier for authors to submit their work and easier for reviewers to check the content. This ease of use has led to quicker times from submission to publication. Looking back at the articles published in CLAE in 2005 reveals some interesting facts. The majority of the material still tends to be from UK groups related to the field of Optometry, although we hope that in the future we will attract more work from non-UK groups and also from non-Optometric areas such as refractive surgery or anterior eye pathology.
Interestingly, in 2005 the most downloaded article from CLAE was “Wavefront technology: Past, present and future” by Professor W. Neil Charman, who was also the recipient of the Charles F. Prentice award at the AAO – one of the highest honours that the AAO can bestow. Professor Charman was also the keynote speaker at the BCLA's first Pioneer's Day meeting in 2004. In 2006, readers of CLAE will notice more changes. Firstly, we are moving to five issues per year. It is hoped that in the future, depending on increased submissions, a move to six issues may be feasible. Secondly, CLAE will aim to have one article per issue that carries CL CET points. You will see in this issue there is an article from Professor Mark Wilcox (who was a keynote speaker at the BCLA conference in 2005). In future, articles that carry CET points will be either reviews from BCLA conference keynote speakers, contributions from members of the editorial panel, or material from other invited persons that will be of interest to the readership of CLAE. Finally, in 2006, you will notice a change to the Editorial Panel: some of the distinguished panel felt that it was a good time to step down, and new members have been invited to join the remaining panel. The panel represents some of the most eminent names in the fields of contact lenses and anterior eye, and its members have varying backgrounds and interests from many of the prominent institutions around the world. One of the tasks that the Editorial Panel undertakes is to seek out possible submissions to the journal, either from conferences they attend (posters and papers that they see and hear) or from their own research teams. However, on behalf of CLAE I would like to extend that invitation to seek original articles to all readers – if you hear a talk and think it could make a suitable publication for CLAE, please ask the presenters to submit the work via the on-line submission system. If you found the work interesting, then the chances are others will too. CLAE invites submissions that are original research, full-length articles, short case reports, full review articles, technical reports and letters to the editor. The on-line submission web page is http://www.ees.elsevier.com/clae/.

Relevance:

100.00%

Publisher:

Abstract:

OpenMI is a widely used standard allowing exchange of data between integrated models, which has mostly been applied to dynamic, deterministic models. Within the FP7 UncertWeb project we are developing mechanisms and tools to support the management of uncertainty in environmental models. In this paper we explore the integration of the UncertWeb framework with OpenMI, to assess the issues that arise when propagating uncertainty in OpenMI model compositions, and the degree of integration possible with UncertWeb tools. In particular we develop an uncertainty-enabled model for a simple Lotka-Volterra system with an interface conforming to the OpenMI standard, exploring uncertainty in the initial predator and prey levels, and the parameters of the model equations. We use the Elicitator tool developed within UncertWeb to identify the initial condition uncertainties, and show how these can be integrated, using UncertML, with simple Monte Carlo propagation mechanisms. The mediators we develop for OpenMI models are generic and produce standard Web services that expose the OpenMI models to a Web-based framework. We discuss what further work is needed to allow a more complete system to be developed and show how this might be used practically.
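
The propagation mechanism described can be sketched as follows: sample the uncertain initial levels and parameters, integrate the Lotka-Volterra equations for each sample, and summarise the output ensemble. The distributions used below are illustrative, not the elicited ones, and the sketch ignores the OpenMI interface layer.

```python
# Monte Carlo propagation through a Lotka-Volterra system with uncertain
# initial conditions and parameters (toy distributions).
import numpy as np
from scipy.integrate import odeint

def lotka_volterra(y, t, a, b, c, d):
    prey, pred = y
    return [a * prey - b * prey * pred, c * prey * pred - d * pred]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 201)
finals = []
for _ in range(500):
    y0 = rng.normal([10.0, 5.0], [1.0, 0.5])       # uncertain initial levels
    a, b, c, d = rng.normal([1.0, 0.1, 0.075, 1.5], 0.02, size=4)
    traj = odeint(lotka_volterra, y0, t, args=(a, b, c, d))
    finals.append(traj[-1])                         # state at t = 20

finals = np.array(finals)
print("prey at t=20: mean %.2f, sd %.2f"
      % (finals[:, 0].mean(), finals[:, 0].std()))
```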

Relevance:

100.00%

Publisher:

Abstract:

Purpose – The paper challenges the focal-firm perspective of much resource/capability research, identifying how a dyadic perspective facilitates identification of the capabilities required for servitization. Design/methodology/approach – An exploratory study consisting of seven dyadic relationships in five sectors. Findings – An additional dimension of capabilities should be recognised: whether they are developed independently or interactively (with another actor). The following examples of interactively developed capabilities are identified: knowledge development, where partners interactively communicate to understand capabilities; service enablement, where manufacturers work with suppliers and customers to support the delivery of new services; service development, where partners interact to optimise the performance of existing services; and risk management, where customers work with manufacturers to manage the risks of product acquisition/operation. Six propositions were developed to articulate these findings. Research implications/limitations – Interactively developed capabilities are created when two or more actors interact to create value. Interactively developed capabilities do not reside within one firm alone and, therefore, cannot be a source of competitive advantage for a single firm. Many of the capabilities required for servitization are interactive, yet have received little research attention. The study does not provide an exhaustive list of interactively developed capabilities, but demonstrates their existence in manufacturer/supplier and manufacturer/customer dyads. Practical implications – Manufacturers need to understand how to develop capabilities interactively to create competitive advantage and value, and to identify other actors with whom these capabilities can be developed. Originality/value – Previous research has focused on relational capabilities within a focal firm. This study extends existing theories to include interactively developed capabilities. The paper proposes that interactivity is a key dimension of actors’ complementary capabilities.

Relevance:

50.00%

Publisher:

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information impractical, particularly where error must be considered and captured as it propagates through a processing sequence. In particular we adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
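
To make the soft-typing concrete, here is a minimal sketch of a generic Statistic element whose meaning comes from a dictionary URI rather than a dedicated tag; the namespace and dictionary URL follow the pattern described but are assumptions, not verified against the published schema.

```python
# Soft-typed UncertML-style element: one generic <Statistic> whose
# semantics are supplied by a dictionary-definition URI.
import xml.etree.ElementTree as ET

NS = "http://www.uncertml.org/1.0"   # assumed namespace
ET.register_namespace("un", NS)

stat = ET.Element(
    f"{{{NS}}}Statistic",
    {"definition": "http://dictionary.uncertml.org/statistics/mean"},
)
ET.SubElement(stat, f"{{{NS}}}value").text = "42.0"
print(ET.tostring(stat, encoding="unicode"))
```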