857 results for Service-oriented grid computing
Abstract:
Arguably, the world has become one large pervasive computing environment. Our planet is growing a digital skin of a wide array of sensors, hand-held computers, mobile phones, laptops, web services and publicly accessible web-cams. Often, these devices and services are deployed in groups, forming small communities of interacting devices. Service discovery protocols allow processes executing on each device to discover services offered by other devices within the community. These communities can be linked together to form a wide-area pervasive environment, allowing processes in one group to interact with services in another. However, the costs of communication and the protocols by which this communication is mediated in the wide-area differ from those of intra-group, or local-area, communication. Communication is an expensive operation for small, battery-powered devices, but it is less expensive for servers and workstations, which have a constant power supply and are connected to high-bandwidth networks. This paper introduces Superstring, a peer-to-peer service discovery protocol optimised for use in the wide-area. Its goals are to minimise computation and memory overhead in the face of large numbers of resources. It achieves this memory and computation scalability by distributing the storage cost of service descriptions and the computation cost of queries over multiple resolvers.
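The scalability claim above (spreading the storage cost of descriptions and the computation cost of queries over multiple resolvers) can be illustrated with a toy hash-partitioning sketch. This is an illustrative assumption, not Superstring's actual protocol; all names below are hypothetical:

```python
import hashlib

def resolver_for(key: str, resolvers: list) -> str:
    """Map a service-description key to one resolver by hashing it
    (hypothetical sketch of cost distribution, not the real protocol)."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return resolvers[h % len(resolvers)]

resolvers = ["r0", "r1", "r2"]
# Both storing a description and answering a query for it land on the
# resolver responsible for that key, so no single node bears all cost.
placement = {svc: resolver_for(svc, resolvers)
             for svc in ["printer/floor1", "webcam/lab", "gps/van3"]}
```

Because the mapping is deterministic, a querying process can locate the responsible resolver without any central directory.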
Abstract:
A major requirement for pervasive systems is to integrate context-awareness to support heterogeneous networks and device technologies and at the same time support application adaptations to suit user activities. However, current infrastructures for pervasive systems are based on centralized architectures which are focused on context support for service adaptations in response to changes in the computing environment or user mobility. In this paper, we propose a hierarchical architecture based on active nodes, which maximizes the computational capabilities of various nodes within the pervasive computing environment, while efficiently gathering and evaluating context information from the user's working environment. The migratable active node architecture employs various decision-making processes for evaluating a rich set of context information in order to dynamically allocate active nodes in the working environment, perform application adaptations and predict user mobility. The active node also utilizes the Redundant Positioning System to accurately manage the user's mobility. This paper demonstrates the active node capabilities through context-aware vertical handover applications.
Abstract:
This paper asks two questions. First, what types of linkages make firms in the service sector innovate? And second, what is the link between innovation and the firms’ productivity and export performance? Using survey data from Northern Ireland we find that intra-regional links (i.e. within Northern Ireland) to customers, suppliers and universities have little effect on innovation, but external links (i.e. outside Northern Ireland) help to boost innovation. Relationships between innovation, exporting and productivity prove complex but suggest that innovation itself is not sufficient to generate productivity improvements. Only when innovation is combined with increased export activity are productivity gains produced. This suggests that regional innovation policy should be oriented towards helping firms to innovate only where it helps firms to enter export markets or to expand their existing export market presence.
Abstract:
INTAMAP is a web processing service for the automatic interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an open source solution. The system couples the 52-North web processing service, accepting data in the form of an observations and measurements (O&M) document, with a computing back-end realized in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
Abstract:
This thesis describes research on End-User Computing (EUC) in small business in an environment where no Information System (IS) support and expertise are available. The research aims to identify the factors that contribute to EUC Sophistication and understand the extent to which small firms are capable of developing their own applications. The intention is to assist small firms to adopt EUC, encourage better utilisation of their IT resources and gain the benefits associated with computerisation. The factors examined are derived inductively from previous studies, and a model is developed to map these factors to the degree of sophistication associated with IT and EUC. This study attempts to combine the predictive power of quantitative research through surveys with the explanatory power of qualitative research through an action-oriented case study. Following critical examination of the literature, a survey of IT Adoption and EUC was conducted. Instruments were then developed to measure EUC and IT Sophistication indexes based on sophistication constructs adapted from previous studies using data from the survey. This is followed by an in-depth action case study involving two small firms to investigate the EUC phenomenon in its real-life context. The accumulated findings from these mixed research strategies are used to form the final model of EUC Sophistication in small business. Results of the study suggest both EUC Sophistication and the Presence of EUC in small business are affected by Management Support and Behaviour towards EUC. Additionally, EUC Sophistication is also affected by the presence of an EUC Champion. Results are also consistent with respect to the independence between IT Sophistication and EUC Sophistication. The main research contributions include an accumulated knowledge of EUC in small business, the Model of EUC Sophistication, an instrument to measure the EUC Sophistication Index for small firms, and a contribution to research methods in IS.
Abstract:
This thesis has been concerned with obtaining evidence to explore the proposition that the provision of occupational health services as arranged at the present time represents a misallocation of resources. The research has been undertaken within the occupational health service of a large Midlands food factory. As the research progressed it became evident that questions were being raised about the nature and scope of occupational health as well as the contribution, in combating danger at work, that occupational health services can make to the health and safety team. These questions have been scrutinized in depth, as they are clearly important, and a resolution of the problem of the definition of occupational health has been proposed. I have taken the approach of attempting to identify specific objectives or benefits of occupational health activities so that it is possible to assess how far these objectives are being achieved. I have looked at three aspects of occupational health: audiometry, physiotherapy and pre-employment medical examinations, as these activities embody crucial concepts which are common to all activities in an occupational health programme. A three-category classification of occupational health activities is proposed such that the three activities provide examples within each category. These are called personnel therapy, personnel input screening and personnel throughput screening. I conclude that I have not shown audiometry to be cost-effective. My observations of the physiotherapy service lead me to support the suggestion that there is a decline in sickness absence rates due to physiotherapy in industry. With pre-employment medical examinations I have shown that the service is product-safety oriented and that benefits are extremely difficult to identify.
In regard to the three services studied, in the one factory investigated, and because of the immeasurability of certain activities, I find support for the proposition that the mix of occupational health services as provided at the present time represents a misallocation of resources.
Abstract:
Adaptability for distributed object-oriented enterprise frameworks is a critical mission for system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in the distributed computing environment. In this thesis, we propose a Meta Level Component-Based Framework (MELC) which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our novel approach of combining a meta architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. The critical nature of distributed technologies requires frameworks to be adaptable. Our framework employs a meta architecture. It supports dynamic adaptation of feasible design decisions in the framework design space by specifying and coordinating meta-objects that represent various aspects of the distributed environment. The meta architecture in the MELC framework can provide the adaptability for system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed applications. The concept of using a meta architecture to produce an adaptable pattern-oriented framework for distributed computing applications is new and has not previously been explored in research. As the framework is adaptable, the proposed architecture of the pattern-oriented framework has the ability to dynamically adopt new design patterns to address technical system issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future. We show how MELC can be used effectively to enable dynamic component integration and to separate system functionality from business functionality. We demonstrate how MELC provides an adaptable and dynamic run-time environment using our system configuration and management utility.
We also highlight how MELC provides significant adaptability in system evolution through a prototype E-Bookshop application that assembles its business functions with distributed computing components at the meta level in the MELC architecture. Our performance tests show that MELC does not entail prohibitive performance tradeoffs. The work to develop the MELC framework for distributed computing applications has emerged as a promising way to meet current and future challenges in the distributed environment.
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
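As a rough illustration of what an interpolation back-end computes from measured point data, here is a deliberately simple inverse-distance-weighting stand-in in Python. This is not INTAMAP's method: its R back-end uses far more sophisticated geostatistical techniques that handle anisotropy, extreme values and known error distributions.

```python
def idw(points, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from (x, y, value)
    observations -- a toy stand-in for automatic spatial interpolation."""
    num = den = 0.0
    for (x, y, v) in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return v  # query coincides with an observation
        w = d2 ** (-power / 2)  # closer points get larger weights
        num += w * v
        den += w
    return num / den

# Midway between two equally distant observations, the estimate is their mean.
estimate = idw([(0, 0, 1.0), (2, 0, 3.0)], (1, 0))  # -> 2.0
```

A real automatic service must additionally choose a model and quantify the uncertainty of each estimate, which is what INTAMAP encodes with UncertML.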
Abstract:
Purpose – The main purpose of this paper is to analyze knowledge management in service networks. It analyzes the knowledge management process and identifies related challenges. The authors take a strategic management approach instead of a more technology-oriented approach, since it is believed that managerial problems still remain after technological problems are solved. Design/methodology/approach – The paper explores the literature on the topic of knowledge management as well as the resource (or knowledge) based view of the firm. It offers conceptual insights and provides possible solutions for knowledge management problems. Findings – The paper discusses several possible solutions for managing knowledge processes in knowledge-intensive service networks. Solutions for knowledge identification/generation, knowledge application, knowledge combination/transfer and supporting the evolution of tacit network knowledge include personal and technological aspects, as well as organizational and cultural elements. Practical implications – In a complex environment, knowledge management and network management become crucial for business success. It is the task of network management to establish routines, and to build and regularly refresh meta-knowledge about the competencies and abilities that exist within the network. It is suggested that each network partner should be rated according to its contribution to the network knowledge base. Based on this rating, a particular network partner is a member of a certain knowledge club, meaning that the partner has access to a particular level of network knowledge. Such an established routine provides strong incentives to add knowledge to the network's knowledge base. Originality/value – This paper is a first attempt to outline the problems of knowledge management in knowledge-intensive service networks and, by so doing, to introduce strategic management reasoning to the discussion.
Abstract:
Constructing and executing distributed systems that can adapt to their operating context in order to sustain provided services and the service qualities are complex tasks. Managing adaptation of multiple, interacting services is particularly difficult since these services tend to be distributed across the system, interdependent and sometimes tangled with other services. Furthermore, the exponential growth of the number of potential system configurations, derived from the variabilities of each service, needs to be handled. Current practices of writing low-level reconfiguration scripts as part of the system code to handle run-time adaptation are both error-prone and time-consuming and make adaptive systems difficult to validate and evolve. In this paper, we propose to combine model-driven and aspect-oriented techniques to better cope with the complexities of adaptive system construction and execution, and to handle the problem of exponential growth of the number of possible configurations. Combining these techniques allows us to use high-level domain abstractions, simplify the representation of variants and limit the problem pertaining to the combinatorial explosion of possible configurations. In our approach we also use models at runtime to generate the adaptation logic by comparing the current configuration of the system to a composed model representing the configuration we want to reach. © 2008 Springer-Verlag Berlin Heidelberg.
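The "models at runtime" idea of deriving adaptation logic by comparing the current configuration to a target model can be sketched as a simple set difference. The representation below (configurations as sets of component names) is a hypothetical simplification of the model comparison the paper describes:

```python
def plan_adaptation(current: set, target: set) -> list:
    """Derive reconfiguration actions by diffing two configuration models:
    stop components absent from the target, start components absent from
    the current configuration (toy stand-in for model comparison)."""
    stop = sorted(current - target)
    start = sorted(target - current)
    return [f"stop {c}" for c in stop] + [f"start {c}" for c in start]

# Moving from {cache, logger} to {logger, tracer} stops one component
# and starts another; shared components are left untouched.
actions = plan_adaptation({"cache", "logger"}, {"logger", "tracer"})
```

Generating actions from a model diff, rather than hand-writing reconfiguration scripts, is precisely what keeps the combinatorial explosion of configurations out of the system code.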
Abstract:
The IEEE 802.15.4 standard has been recently developed for low-power wireless personal area networks. It can find many applications in the smart grid, such as data collection, monitoring and control functions. The performance of 802.15.4 networks has been widely studied in the literature; however, the main focus has been on modeling throughput performance under frame collisions. In this paper we propose an analytic model which can capture the impact of frame collisions as well as frame corruptions due to channel bit errors. With this model the frame length can be carefully selected to improve system performance. The analytic model can also be used to study 802.15.4 networks with interference from other co-located networks, such as IEEE 802.11 and Bluetooth networks. © 2011 Springer-Verlag.
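The frame-length trade-off this kind of model captures can be sketched as follows, assuming independent bit errors and a fixed collision probability. This is a simplification of the paper's analytic model, and all parameter names and numbers are illustrative:

```python
def frame_success_prob(payload_bytes: int, overhead_bytes: int,
                       ber: float, p_collision: float) -> float:
    """Probability a frame survives both a collision and channel bit
    errors, assuming bit errors are independent (simplified model)."""
    bits = 8 * (payload_bytes + overhead_bytes)
    return (1 - p_collision) * (1 - ber) ** bits

def best_payload(overhead_bytes: int, ber: float, p_collision: float,
                 candidates: list) -> int:
    """Pick the payload size maximising expected goodput per byte sent:
    longer frames amortise overhead but are more likely to be corrupted."""
    def efficiency(n: int) -> float:
        return n * frame_success_prob(n, overhead_bytes, ber,
                                      p_collision) / (n + overhead_bytes)
    return max(candidates, key=efficiency)

# On a clean channel long frames win; on a noisy channel short frames win.
clean = best_payload(10, 0.0, 0.0, [20, 50, 100])    # -> 100
noisy = best_payload(10, 1e-3, 0.0, [20, 50, 100])   # -> 20
```

The same structure lets one plug in a higher effective bit-error rate to approximate interference from co-located 802.11 or Bluetooth networks.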
Abstract:
IEEE 802.15.4 networks (also known as ZigBee networks) have the features of low data rate and low power consumption. In this paper we propose an adaptive data transmission scheme, based on the CSMA/CA access control scheme, for applications which may have heavy traffic loads, such as smart grids. In the proposed scheme, the personal area network (PAN) coordinator adaptively broadcasts a frame length threshold, which the sensors use to decide whether a data frame should be transmitted directly to the target destination or preceded by a short data request frame. If the data frame is long and prone to collision, use of a short data request frame can efficiently reduce the energy and bandwidth costs of a potential collision. Simulation results demonstrate the effectiveness of the proposed scheme, with largely improved bandwidth and power efficiency. © 2011 Springer-Verlag.
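A minimal sketch of the sensor-side decision rule, with a hypothetical coordinator policy for adapting the broadcast threshold. The concrete numbers and the collision-rate rule are assumptions for illustration, not taken from the paper:

```python
def should_send_request_first(frame_len: int, threshold: int) -> bool:
    """Sensor-side rule: long frames are collision-prone, so probe the
    channel with a short data request frame before sending them."""
    return frame_len > threshold

def adapt_threshold(current: int, collision_rate: float,
                    lo: float = 0.05, hi: float = 0.15,
                    step: int = 8, min_t: int = 16, max_t: int = 112) -> int:
    """Hypothetical coordinator rule: lower the threshold when collisions
    rise (routing more frames through the cheap request handshake) and
    raise it again when the channel is quiet."""
    if collision_rate > hi:
        return max(min_t, current - step)
    if collision_rate < lo:
        return min(max_t, current + step)
    return current
```

Under this sketch a 100-byte frame against a threshold of 64 would be preceded by a request, while a 40-byte frame would be sent directly; the coordinator's broadcast keeps all sensors using the same, load-appropriate threshold.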
Abstract:
Technological advancements enable new sourcing models in software development such as cloud computing, software-as-a-service, and crowdsourcing. While the first two are perceived as a re-emergence of older models (e.g., ASP), crowdsourcing is a new model that creates an opportunity for a global workforce to compete with established service providers. Organizations engaging in crowdsourcing need to develop the capabilities to successfully utilize this sourcing model in delivering services to their clients. To explore these capabilities we collected qualitative data from focus groups with crowdsourcing leaders at a large technology organization. New capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IS outsourcing as well as offers important insights to organizations that are experimenting with, or considering, crowdsourcing.
Abstract:
Background - Problems of quality and safety persist in health systems worldwide. We conducted a large research programme to examine culture and behaviour in the English National Health Service (NHS). Methods - Mixed-methods study involving collection and triangulation of data from multiple sources, including interviews, surveys, ethnographic case studies, board minutes and publicly available datasets. We narratively synthesised data across the studies to produce a holistic picture and in this paper present a high-level summary. Results - We found an almost universal desire to provide the best quality of care. We identified many 'bright spots' of excellent caring and practice and high-quality innovation across the NHS, but also considerable inconsistency. Consistent achievement of high-quality care was challenged by unclear goals, overlapping priorities that distracted attention, and compliance-oriented bureaucratised management. The institutional and regulatory environment was populated by multiple external bodies serving different but overlapping functions. Some organisations found it difficult to obtain valid insights into the quality of the care they provided. Poor organisational and information systems sometimes left staff struggling to deliver care effectively and disempowered them from initiating improvement. Good staff support and management were also highly variable, though they were fundamental to culture and were directly related to patient experience, safety and quality of care. Conclusions - Our results highlight the importance of clear, challenging goals for high-quality care. Organisations need to put the patient at the centre of all they do, get smart intelligence, focus on improving organisational systems, and nurture caring cultures by ensuring that staff feel valued, respected, engaged and supported.