897 results for Service Oriented Computing
Abstract:
Adaptability in distributed object-oriented enterprise frameworks is critical for system evolution. Today, building adaptive services is a complex task due to a lack of adequate framework support in the distributed computing environment. In this thesis, we propose a Meta Level Component-Based Framework (MELC) which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our novel approach of combining a meta architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. The critical nature of distributed technologies requires frameworks to be adaptable. Our framework therefore employs a meta architecture that supports dynamic adaptation of feasible design decisions in the framework design space by specifying and coordinating meta-objects representing various aspects of the distributed environment. The meta architecture in the MELC framework provides the adaptability required for system evolution. This approach resolves the problem of dynamic adaptation in the framework, a problem encountered in most distributed applications. The concept of using a meta architecture to produce an adaptable pattern-oriented framework for distributed computing applications is new and has not previously been explored in research. Because the framework is adaptable, the proposed architecture can dynamically adopt new design patterns to address technical system issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in future. We show how MELC can be used effectively to enable dynamic component integration and to separate system functionality from business functionality. We demonstrate how MELC provides an adaptable and dynamic run-time environment using our system configuration and management utility.
We also show how MELC provides significant adaptability for system evolution through a prototype E-Bookshop application that assembles its business functions with distributed computing components at the meta level of the MELC architecture. Our performance tests show that MELC does not entail prohibitive performance trade-offs. The work to develop the MELC framework for distributed computing applications has emerged as a promising way to meet current and future challenges in the distributed environment.
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open-source solution. The system couples an open-source Web Processing Service (developed by 52°North), which accepts data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
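As a purely illustrative sketch of the kind of point interpolation such a service automates, here is a minimal inverse-distance-weighting interpolator; note that INTAMAP itself relies on statistical models in R, not this method, and all names and data are hypothetical.

```python
def idw(points, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (px, py, value) samples."""
    num = den = 0.0
    for px, py, v in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return v  # exact hit on a measurement point
        w = d2 ** (-power / 2)  # weight falls off with distance
        num += w * v
        den += w
    return num / den

obs = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
print(round(idw(obs, 0.5, 0.5), 2))  # 20.0 (equidistant observations, simple average)
```

A production service would wrap such a routine behind a standardised WPS Execute request and attach an error distribution to each prediction.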
Abstract:
Purpose – The main purpose of this paper is to analyze knowledge management in service networks. It analyzes the knowledge management process and identifies related challenges. The authors take a strategic management approach instead of a more technology-oriented approach, since it is believed that managerial problems still remain after technological problems are solved. Design/methodology/approach – The paper explores the literature on the topic of knowledge management as well as the resource (or knowledge) based view of the firm. It offers conceptual insights and provides possible solutions for knowledge management problems. Findings – The paper discusses several possible solutions for managing knowledge processes in knowledge-intensive service networks. Solutions for knowledge identification/generation, knowledge application, knowledge combination/transfer and supporting the evolution of tacit network knowledge include personal and technological aspects, as well as organizational and cultural elements. Practical implications – In a complex environment, knowledge management and network management become crucial for business success. It is the task of network management to establish routines, and to build and regularly refresh meta-knowledge about the competencies and abilities that exist within the network. It is suggested that each network partner should be rated according to its contribution to the network knowledge base. Based on this rating, a particular network partner is a member of a certain knowledge club, meaning that the partner has access to a particular level of network knowledge. Such an established routine provides strong incentives to add knowledge to the network's knowledge base. Originality/value – This paper is a first attempt to outline the problems of knowledge management in knowledge-intensive service networks and, by so doing, to introduce strategic management reasoning to the discussion.
Abstract:
The number of interoperable research infrastructures has increased significantly with the growing awareness of the efforts made by the Global Earth Observation System of Systems (GEOSS). One of the Societal Benefit Areas (SBA) that is benefiting most from GEOSS is biodiversity, given the costs of monitoring the environment and managing complex information, from space observations to species records including their genetic characteristics. But GEOSS goes beyond simple data sharing to encourage the publishing and combination of models, an approach which can ease the handling of complex multi-disciplinary questions. It is the purpose of this paper to illustrate these concepts by presenting eHabitat, a basic Web Processing Service (WPS) for computing the likelihood of finding ecosystems with properties equal to those specified by a user. When chained with other services providing data on climate change, eHabitat can be used for ecological forecasting and becomes a useful tool for decision-makers assessing different strategies when selecting new areas to protect. eHabitat can use virtually any kind of thematic data that can be considered useful when defining ecosystems and their future persistence under different climatic or development scenarios. The paper presents the architecture and illustrates the concepts through case studies that forecast the impact of climate change on protected areas or on the ecological niche of an African bird.
Abstract:
Constructing and executing distributed systems that can adapt to their operating context in order to sustain provided services and service qualities are complex tasks. Managing the adaptation of multiple, interacting services is particularly difficult since these services tend to be distributed across the system, interdependent and sometimes tangled with other services. Furthermore, the exponential growth of the number of potential system configurations derived from the variabilities of each service needs to be handled. Current practices of writing low-level reconfiguration scripts as part of the system code to handle run-time adaptation are both error-prone and time-consuming, and make adaptive systems difficult to validate and evolve. In this paper, we propose to combine model-driven and aspect-oriented techniques to better cope with the complexities of adaptive system construction and execution, and to handle the problem of the exponential growth of the number of possible configurations. Combining these techniques allows us to use high-level domain abstractions, simplify the representation of variants and limit the problem pertaining to the combinatorial explosion of possible configurations. In our approach we also use models at runtime to generate the adaptation logic by comparing the current configuration of the system to a composed model representing the configuration we want to reach. © 2008 Springer-Verlag Berlin Heidelberg.
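The idea of deriving adaptation logic by comparing the current configuration model with a target composed model can be sketched, under illustrative assumptions (this is not the authors' actual tooling, and the component names are hypothetical), as a simple set diff that yields reconfiguration actions:

```python
def derive_adaptation(current, target):
    """Compare two configuration models and emit reconfiguration actions."""
    return {
        "remove": current - target,  # components no longer needed
        "add": target - current,     # components to instantiate and bind
    }

actions = derive_adaptation(
    current={"LoggerV1", "CacheLocal", "AuthBasic"},
    target={"LoggerV1", "CacheDistributed", "AuthBasic"},
)
print(actions["remove"])  # {'CacheLocal'}
print(actions["add"])     # {'CacheDistributed'}
```

Working at the level of models rather than hand-written reconfiguration scripts is what lets the approach sidestep enumerating every possible configuration in advance.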
Abstract:
Technological advancements enable new sourcing models in software development such as cloud computing, software-as-a-service, and crowdsourcing. While the first two are perceived as a re-emergence of older models (e.g., ASP), crowdsourcing is a new model that creates an opportunity for a global workforce to compete with established service providers. Organizations engaging in crowdsourcing need to develop the capabilities to successfully utilize this sourcing model in delivering services to their clients. To explore these capabilities we collected qualitative data from focus groups with crowdsourcing leaders at a large technology organization. New capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IS outsourcing as well as offers important insights to organizations that are experimenting with, or considering, crowdsourcing.
Abstract:
Background - Problems of quality and safety persist in health systems worldwide. We conducted a large research programme to examine culture and behaviour in the English National Health Service (NHS). Methods - Mixed-methods study involving collection and triangulation of data from multiple sources, including interviews, surveys, ethnographic case studies, board minutes and publicly available datasets. We narratively synthesised data across the studies to produce a holistic picture and in this paper present a high-level summary. Results - We found an almost universal desire to provide the best quality of care. We identified many 'bright spots' of excellent caring and practice and high-quality innovation across the NHS, but also considerable inconsistency. Consistent achievement of high-quality care was challenged by unclear goals, overlapping priorities that distracted attention, and compliance-oriented bureaucratised management. The institutional and regulatory environment was populated by multiple external bodies serving different but overlapping functions. Some organisations found it difficult to obtain valid insights into the quality of the care they provided. Poor organisational and information systems sometimes left staff struggling to deliver care effectively and disempowered them from initiating improvement. Good staff support and management were also highly variable, though they were fundamental to culture and were directly related to patient experience, safety and quality of care. Conclusions - Our results highlight the importance of clear, challenging goals for high-quality care. Organisations need to put the patient at the centre of all they do, get smart intelligence, focus on improving organisational systems, and nurture caring cultures by ensuring that staff feel valued, respected, engaged and supported.
Abstract:
This paper explores the components of Service Quality in HE from the Business School Postgraduate student perspective. A six-dimensional scale measuring Service Quality is developed based on focus group and survey data. Our findings highlight that postgraduate students are highly outcome-oriented; the award of a reputable degree to gain employment is more important than learning for life. Whilst developing employable graduates, Business Schools must not neglect the core service: teaching and learning. In the long term, this contributes to employability rates and the reputation of institutions. However, as student satisfaction is an increasingly paramount objective, balancing the core service and factors perceived as important by postgraduate students is key.
Abstract:
Purpose - The purpose of this paper is to construct a new e-commerce innovation and adoption model that takes into account various stages of e-commerce adoption (interactive, non-interactive and stabilised) and covers technological, organisational and environmental factors. This was tested using data collected from manufacturing and service companies in Saudi Arabia (SA) to reveal inhibitors and catalysts for e-commerce adoption. Design/methodology/approach - This study uses new data from surveys of 202 companies and then uses exploratory factor analysis and structural equation modelling for analyses. Findings - This study shows that the new stage-oriented model (SOM) is valid and can reveal specific detailed nuances of e-commerce adoption within a particular setting. Surprising results show that SA is not so very different from developed western countries in respect of e-commerce adoption. However, there are some important differences, which are discussed in detail. Research limitations/implications - A new SOM for e-commerce adoption is provided which may be used by other IS adoption researchers. Practical implications - Managers responsible for the adoption of e-commerce in SA, the Middle East and beyond can learn from these findings to speed up adoption rates and make e-commerce more effective. Social implications - This work may help spread e-commerce use throughout SA, the Middle East and other developing nations. Originality/value - The results add to the extremely limited number of empirical studies that have been conducted to investigate e-commerce adoption in the context of Arabic countries.
Abstract:
This paper is sponsored by the Ministry of Education and Research of the Republic of Bulgaria in the framework of project No 105 “Multimedia Packet Switching Networks Planning with Quality of Service and Traffic Management”.
Abstract:
The paper gives an overview of the ongoing FP6-IST INFRAWEBS project and describes the main layers and software components embedded in an application-oriented realisation framework. An important part of INFRAWEBS is the Semantic Web Unit (SWU) – a collaboration platform and interoperable middleware for ontology-based handling and maintenance of SWS. The framework provides knowledge about a specific domain and relies on ontologies to structure and exchange this knowledge with semantic service development modules. INFRAWEBS Designer and Composer are sub-modules of SWU responsible for creating Semantic Web Services using a Case-Based Reasoning approach. The Service Access Middleware (SAM) is responsible for building the communication channels between users and various other modules. It serves as a generic middleware for the deployment of Semantic Web Services. This software toolset provides a development framework for creating and maintaining the full life cycle of Semantic Web Services with specific application support.
Abstract:
Risk management in healthcare comprises a group of complex actions implemented to improve the quality of healthcare services and guarantee patient safety. Risks cannot be eliminated, but they can be controlled with various risk assessment methods derived from industrial applications, among which Failure Mode, Effects and Criticality Analysis (FMECA) is a widely used methodology. The main purpose of this work is the analysis of failure modes of the Home Care (HC) service provided by the local healthcare unit of Naples (ASL NA1), focusing attention on human and non-human factors according to the organisational framework selected by the WHO. © Springer International Publishing Switzerland 2014.
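FMECA conventionally ranks failure modes by a Risk Priority Number, RPN = Severity x Occurrence x Detection, each rated on a 1-10 scale. A minimal sketch follows; the Home Care failure modes and ratings are hypothetical illustrations, not the study's data.

```python
def risk_priority_number(severity, occurrence, detection):
    """RPN = S x O x D, each rating on the conventional 1-10 scale."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical HC failure modes: (description, severity, occurrence, detection)
failure_modes = [
    ("wrong drug dose administered at home", 9, 3, 4),
    ("missed scheduled home visit", 5, 4, 3),
    ("incomplete patient record handover", 6, 5, 5),
]
ranked = sorted(failure_modes, key=lambda m: risk_priority_number(*m[1:]), reverse=True)
print(ranked[0][0])  # 'incomplete patient record handover' (RPN 150)
```

Ranking by RPN directs scarce improvement effort toward the failure modes that are simultaneously severe, frequent and hard to detect.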
Abstract:
This is an extended version of an article presented at the Second International Conference on Software, Services and Semantic Technologies, Sofia, Bulgaria, 11–12 September 2010.
Abstract:
Volunteered Service Composition (VSC) refers to the process of composing volunteered services and resources. These services are typically published to a pool of voluntary resources. The composition aims at satisfying some objectives (e.g. utilising storage and eliminating waste, sharing space and optimising for energy, reducing computational cost, etc.). In cases where a single volunteered service does not satisfy a request, VSC is required. In this paper, we contribute three approaches for composing volunteered services: exhaustive, naïve and utility-based search. The proposed new utility-based approach, for instance, measures the utility that each volunteered service can provide to each request and systematically selects the one with the highest utility. We found that the utility-based approach tends to be more effective and efficient when selecting services, while minimising resource waste, when compared to the other two approaches.
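The utility-based selection described above can be sketched as follows; the storage-oriented utility function and the service records are illustrative assumptions, not the paper's actual model.

```python
def select_service(request, services, utility):
    """Pick the volunteered service with the highest utility for a request."""
    return max(services, key=lambda s: utility(request, s))

def storage_utility(request, service):
    """Hypothetical utility: feasible services score higher when they waste less space."""
    free = service["capacity"] - service["used"]
    if free < request["size"]:
        return float("-inf")  # this service alone cannot satisfy the request
    return -(free - request["size"])  # smallest leftover space = least waste

services = [
    {"name": "A", "capacity": 100, "used": 20},  # 80 units free
    {"name": "B", "capacity": 50, "used": 10},   # 40 units free
]
best = select_service({"size": 30}, services, storage_utility)
print(best["name"])  # B: wastes 10 units versus A's 50
```

When no single service scores above negative infinity, a composition step would combine several volunteered services to cover the request, which is where the exhaustive and naïve searches come in.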
Abstract:
The enormous potential of cloud computing for improved and cost-effective service has generated unprecedented interest in its adoption. However, a potential cloud user faces numerous risks regarding service requirements, the cost implications of failure and uncertainty about cloud providers' ability to meet service level agreements. These risks hinder the adoption of the cloud. We extend the work on goal-oriented requirements engineering (GORE) and obstacles to inform the adoption process. We argue that obstacle prioritisation and resolution are core to mitigating risks in the adoption process. We propose a novel systematic method for prioritising obstacles and their resolution tactics using the Analytic Hierarchy Process (AHP). We provide an example to demonstrate the applicability and effectiveness of the approach. To assess the AHP choice of resolution tactics, we support the method with stability and sensitivity analysis. Copyright 2014 ACM.
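AHP derives priority weights from pairwise comparison matrices. A minimal sketch using the row geometric-mean approximation of the principal eigenvector, with a hypothetical 3x3 comparison of obstacle-resolution tactics on Saaty's 1-9 scale (the matrix values are illustrative, not from the paper):

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority weights via the row geometric-mean method."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# matrix[i][j] = how strongly tactic i is preferred over tactic j (1-9 scale);
# reciprocal entries encode the reverse comparison.
M = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_priorities(M)
print([round(w, 2) for w in weights])  # [0.65, 0.23, 0.12]
```

The resulting weights order the tactics; a consistency-ratio check and the stability/sensitivity analysis mentioned above would then probe how robust that ordering is to judgment changes.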