780 results for Service-based
Abstract:
This paper addresses the novel notion of offering a radio access network as a service. Its components may be instantiated on general-purpose platforms with pooled resources (both radio and hardware), dimensioned on demand, elastically, and following the pay-per-use principle. A novel architecture is proposed to support this concept. The architecture's strength lies in its modularity, well-defined functional elements, and clean separation between operational and control functions. By moving much of the processing traditionally performed in dedicated hardware to computation in the cloud, it allows the optimization of hardware utilization and the reduction of deployment and operation costs. It enables operators to upgrade their network as well as to quickly deploy and adapt resources to demand. New players may also enter the market easily, for example a virtual network operator providing connectivity to its users.
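A minimal sketch can make the pay-per-use pooling concrete; the `ResourcePool` class, its API, and its pricing constant are hypothetical illustrations, not elements of the proposed architecture.

```python
# Hypothetical sketch of elastic, pay-per-use resource pooling; class,
# method names, and pricing are illustrative, not from the paper.
import time

class ResourcePool:
    """Pool of general-purpose processing units shared by RAN components."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.allocations = {}  # tenant -> (units, allocation start time)

    def allocate(self, tenant, units):
        """Dimension resources on demand, failing when the pool is exhausted."""
        used = sum(u for u, _ in self.allocations.values())
        if used + units > self.capacity:
            raise RuntimeError("pool exhausted: scale out or queue the request")
        self.allocations[tenant] = (units, time.time())

    def release(self, tenant, price_per_unit_second=0.01):
        """Pay-per-use: bill only for the time the units were actually held."""
        units, start = self.allocations.pop(tenant)
        return units * (time.time() - start) * price_per_unit_second
```

Under this model, a virtual network operator would hold baseband processing units only while serving traffic, paying for the time actually used.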
Abstract:
by Abraham Zevi Idelsohn
Abstract:
Models are an effective tool for systems and software design, allowing software architects to abstract away irrelevant details. The same qualities are useful for the technical management of networks, systems, and software, such as those that make up service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed, heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to low-level details. An instrumentation layer automatically builds these models and translates the planned management actions back into operations on the system. We illustrate these concepts with a distributed service update operation.
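The pipeline of obtaining, transforming, and re-applying models might look roughly like the following sketch, assuming a dict-based runtime model; all names here are illustrative rather than the paper's actual metamodel.

```python
# A minimal sketch of management as a composition of model transformations.

class Service:
    """Stand-in for a managed service exposed by the instrumentation layer."""
    def __init__(self, name, version):
        self.name, self.version = name, version
    def upgrade(self, version):
        self.version = version

def read_model(system):
    """Instrumentation layer: build a runtime model from the live system."""
    return {"services": {s.name: s.version for s in system}}

def plan_update(model, service, new_version):
    """Pure model-to-model transformation: compute the desired target model."""
    target = {"services": dict(model["services"])}
    target["services"][service] = new_version
    return target

def apply_model(system, model):
    """Instrumentation layer: push the planned actions back to the system."""
    for s in system:
        if s.version != model["services"][s.name]:
            s.upgrade(model["services"][s.name])  # low-level detail hidden

system = [Service("catalog", "1.0"), Service("orders", "1.0")]
apply_model(system, plan_update(read_model(system), "orders", "2.0"))
print([(s.name, s.version) for s in system])
```

The manager only ever manipulates models; the instrumentation layer alone touches the running system, which is what keeps the management logic oblivious to low-level details.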
Abstract:
Open source is a software development paradigm that has seen a huge rise in recent years. It reduces IT costs and time to market while increasing security and reliability. However, the difficulty of integrating developments from different communities and stakeholders prevents this model from reaching its full potential, mainly because of the challenge of determining and locating the correct dependencies for a given software artifact. To solve this problem, we propose an extensible, model-based software component repository. The repository resolves the dependencies between components and interoperates with existing repositories to access the needed artifacts transparently. It is also easily extensible, enabling the creation of modules that support new kinds of dependencies or other existing repository technologies. The proposed solution works with OSGi components and uses OSGi itself.
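The core of such a repository is transitive dependency resolution over component metadata. A toy sketch follows, using a hard-coded metadata map in plain Python; the real system targets OSGi bundles, version ranges, and federated repositories.

```python
# Illustrative transitive dependency resolution over component metadata.
REPO = {  # component -> list of direct dependencies (assumed metadata)
    "app": ["http-client", "logging"],
    "http-client": ["logging"],
    "logging": [],
}

def resolve(component, seen=None):
    """Return an install order in which every dependency precedes its user."""
    seen = seen if seen is not None else []
    for dep in REPO[component]:
        if dep not in seen:
            resolve(dep, seen)
    if component not in seen:
        seen.append(component)
    return seen

print(resolve("app"))  # ['logging', 'http-client', 'app']
```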
Abstract:
The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design- and run-time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as starting point known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs, such as loops and branches, and embedded component services.
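As a rough intuition for the inference, attributes known for the workflow inputs can be propagated to every component reachable through data flow until a fixpoint is reached. The sketch below only approximates this idea over explicit data-flow edges; the paper's actual technique is sharing analysis over a Horn-clause encoding.

```python
# Simplified fixpoint propagation of domain attributes along data-flow edges.
EDGES = [  # (producer, consumer) edges of a hypothetical workflow
    ("patient_record", "billing"),
    ("patient_record", "diagnosis"),
    ("diagnosis", "report"),
]

def infer(initial):
    """Propagate the attribute sets of inputs to every reachable component."""
    attrs = {k: set(v) for k, v in initial.items()}
    changed = True
    while changed:  # iterate until no component gains a new attribute
        changed = False
        for src, dst in EDGES:
            new = attrs.get(src, set()) - attrs.setdefault(dst, set())
            if new:
                attrs[dst] |= new
                changed = True
    return attrs

print(infer({"patient_record": {"confidential"}}))
```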
Abstract:
Service compositions put together loosely coupled component services to perform more complex, higher-level, or cross-organizational tasks in a platform-independent manner. Quality-of-Service (QoS) properties, such as execution time, availability, or cost, are critical for their usability, and permissible boundaries for their values are defined in Service Level Agreements (SLAs). We propose a method whereby constraints that model SLA conformance and violation are derived at any given point of the execution of a service composition. These constraints are generated using the structure of the composition and properties of the component services, which can be either known or empirically measured. Violation of these constraints means that the corresponding scenario is infeasible, while satisfaction gives values for the constrained variables (start/end times for activities, or number of loop iterations) which make the scenario possible. These results can be used to perform optimized service matching or to trigger preventive adaptation or healing.
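For intuition, consider the simplest case of a purely sequential composition: at any point of execution, the elapsed time plus the (known or measured) times of the remaining activities must fit within the SLA budget. A hedged sketch with invented figures:

```python
# Hedged example of an SLA-conformance constraint for a sequential
# composition; the times and activity names are invented for illustration.
SLA_LIMIT = 2.0  # seconds allowed end-to-end by the SLA
remaining = {"pay": 0.6, "ship": 0.9}  # measured mean times of pending steps

def still_feasible(elapsed):
    """Conformance constraint at the current point of execution."""
    return elapsed + sum(remaining.values()) <= SLA_LIMIT

print(still_feasible(0.4))  # True: 0.4 + 1.5 <= 2.0
print(still_feasible(0.7))  # False: predicted violation, trigger adaptation
```

Violation of the constraint before the composition finishes is precisely what allows preventive adaptation or healing to be triggered early.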
Abstract:
Knowledge about the quality characteristics (QoS) of service compositions is crucial for determining their usability and economic value. Service quality is usually regulated using Service Level Agreements (SLA). While end-to-end SLAs are well suited for request-reply interactions, more complex, decentralized, multiparticipant compositions (service choreographies) typically involve multiple message exchanges between stateful parties, and the corresponding SLAs thus encompass several cooperating parties with interdependent QoS. The usual approaches to determining QoS ranges structurally (which are by construction easily composable) are not applicable in this scenario. Additionally, the intervening SLAs may depend on the exchanged data. We present an approach to data-aware QoS assurance in choreographies through the automatic derivation of composable QoS models from participant descriptions. Such models are based on a message typing system with size constraints and are derived using abstract interpretation. The models obtained have multiple uses, including run-time prediction, adaptive participant selection, or design-time compliance checking. We also present an experimental evaluation and discuss the benefits of the proposed approach.
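As a simplified illustration of how size-constrained message types yield composable QoS bounds, suppose a participant's processing cost grows linearly with the number of items in a received message; a type such as "order list with at most 50 items" then gives a concrete worst-case estimate. The constants below are assumptions, not values from the paper.

```python
# Size-aware QoS bound derived from a size-constrained message type.
COST_PER_ITEM = 0.02  # seconds of processing per list element (assumed)
FIXED_COST = 0.10     # per-message overhead (assumed)

def qos_bound(max_items):
    """Worst-case handling time implied by the message-size constraint."""
    return FIXED_COST + COST_PER_ITEM * max_items

print(qos_bound(50))  # 1.1 seconds worst case for this participant
```

Because such bounds are functions of message sizes, they compose across the parties of a choreography in a way that purely structural end-to-end ranges do not.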
Abstract:
Data-related properties of the activities involved in a service composition can be used to facilitate several design-time and run-time adaptation tasks, such as service evolution, distributed enactment, and instance-level adaptation. A number of these properties can be expressed using a notion of sharing. We present an approach for automated inference of data properties based on sharing analysis, which is able to handle service compositions with complex control structures, involving loops and sub-workflows. The properties inferred can include data dependencies, information content, domain-defined attributes, privacy or confidentiality levels, among others. The analysis produces characterizations of the data and the activities in the composition in terms of minimal and maximal sharing, which can then be used to verify compliance of potential adaptation actions, or as supporting information in their generation. This sharing analysis approach can be used both at design time and at run time. In the latter case, the results of analysis can be refined using the composition traces (execution logs) at the point of execution, in order to support run-time adaptation.
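One way such results support adaptation is illustrated below: the maximal (may-share) and minimal (must-share) characterizations let a planner decide, for instance, whether two activities can safely be enacted in parallel. The sets here are stand-ins for what the analysis would actually infer.

```python
# Using minimal/maximal sharing results to check an adaptation action.
may_share = {("a1", "a2"), ("a2", "a3")}  # maximal sharing (over-approx.)
must_share = {("a2", "a3")}               # minimal sharing (under-approx.)

def can_run_in_parallel(x, y):
    """Safe to distribute x and y only if they cannot possibly share data."""
    return (x, y) not in may_share and (y, x) not in may_share

print(can_run_in_parallel("a1", "a3"))  # True: no possible dependency
print(can_run_in_parallel("a1", "a2"))  # False: a shared datum may exist
```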
Abstract:
The availability of electronic health data favors scientific advance through the creation of repositories for secondary use, and data anonymization is a mandatory step to comply with current legislation. We present a service for the pseudonymization of electronic healthcare record (EHR) extracts aimed at facilitating the exchange of clinical information for secondary use in compliance with data protection legislation. According to ISO/TS 25237, pseudonymization is a particular type of anonymization. The tool anonymizes extracts while retaining three quasi-identifiers (gender, date of birth, and place of residence) at a degree of specificity selected by the user. The system is based on the ISO/EN 13606 standard, whose characteristics are specifically favorable for anonymization. The service is made up of two independent modules: the demographic server and the pseudonymizing module. The demographic server supports the permanent storage of demographic entities and the management of identifiers. The pseudonymizing module anonymizes the ISO/EN 13606 extracts in four phases: storage of the demographic information included in the extract, substitution of the identifiers, elimination of the demographic information from the extract, and elimination of key data in free-text fields. The described pseudonymization system was used in three telemedicine research projects with satisfactory results. A problem was detected with the data type of one demographic field, and a proposal for modification was prepared for the group in charge of drawing up and revising the ISO/EN 13606 standard.
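A toy sketch of the pipeline's central steps (identifier substitution and quasi-identifier generalization) is shown below; the field names and the keyed-hash pseudonym scheme are illustrative assumptions, not the ISO/EN 13606 mapping used by the service.

```python
# Toy sketch of identifier substitution and quasi-identifier generalization.
import hashlib

def pseudonymize(record, secret="project-key"):
    out = dict(record)
    # Replace the direct identifier with a stable, keyed pseudonym.
    out["patient_id"] = hashlib.sha256(
        (secret + record["patient_id"]).encode()).hexdigest()[:16]
    # Keep quasi-identifiers only at the user-selected specificity.
    out["date_of_birth"] = record["date_of_birth"][:4]            # year only
    out["residence"] = record["residence"].split(",")[-1].strip()  # region only
    out.pop("name", None)  # drop remaining demographic data
    return out

print(pseudonymize({"patient_id": "12345", "name": "Ana",
                    "date_of_birth": "1980-05-17",
                    "residence": "Getafe, Madrid"}))
```

A keyed (rather than plain) hash matters here: the same patient always maps to the same pseudonym, enabling record linkage, while re-identification requires the secret held by the demographic server.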
Abstract:
The concept of service-oriented architecture has been extensively explored in software engineering because it produces architectures made up of several interconnected modules that are easy to reuse when building new systems. This approach to design would be impossible without interconnection mechanisms such as REST (Representational State Transfer) services, which allow module communication while minimizing coupling. However, this low coupling brings disadvantages, such as a lack of transparency, which makes it difficult to systematically create tests without knowledge of the inner workings of a system. In this article, we present an automatic error detection system for REST services, based on statistical analysis of the responses produced over multiple service invocations. A service can thus be systematically tested without knowing its full specification. The method can find errors in REST services that could not be identified by traditional testing methods, and it provides limited testing coverage for services whose response format is unknown. It can also be useful as a complement to other testing mechanisms.
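The statistical idea can be sketched as follows: invoke the service repeatedly, fingerprint each response, and flag responses that deviate from the dominant pattern as candidate errors. The endpoint handling and the JSON-key fingerprint are assumptions for illustration, not the article's exact analysis.

```python
# Sketch of statistical error detection over repeated REST invocations.
from collections import Counter
import json
import urllib.request

def response_shape(body):
    """Summarize a response by its JSON top-level keys (a crude fingerprint)."""
    try:
        data = json.loads(body)
    except ValueError:
        return "non-json"
    return tuple(sorted(data)) if isinstance(data, dict) else type(data).__name__

def probe(url, n=50):
    """Invoke the service n times and report shapes deviating from the
    statistically dominant response pattern."""
    shapes = Counter()
    for _ in range(n):
        with urllib.request.urlopen(url) as r:
            shapes[response_shape(r.read())] += 1
    dominant, _ = shapes.most_common(1)[0]
    return [s for s in shapes if s != dominant]  # candidate error responses
```

Note that this requires no specification of the service at all, which is what makes the approach applicable when the response format is unknown.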
Abstract:
The products and services designed for Smart Cities provide the tools needed to manage modern cities more efficiently. These tools must gather information about citizens' activity, preferences, habits, etc., opening up the possibility of tracking them. Privacy and security policies must therefore be developed in order to manage the legislative heterogeneity surrounding the services provided and to comply with the laws of the country where they are offered. This paper presents a possible solution for managing this heterogeneity, bearing in mind that the underlying networks, such as Wireless Sensor Networks, have significant resource limitations. A knowledge and ontology management system is proposed to facilitate collaboration between the business, legal, and technological areas, easing the implementation of adequate security and privacy policies for a given service. These security and privacy policies are based on the information provided by the deployed platforms and on expert-system processing.
Abstract:
At head of title: U.S. Dept. of Commerce, Jesse H. Jones, Secretary. Bureau of the Census, J.C. Capt, Director.
Abstract:
"March 2002."