992 results for service description
Abstract:
Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations may arise, such as services becoming unavailable or not responding within the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, proper solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered for controlling the service composition and for improving its quality behavior. Regarding the quality properties, BPRules makes it possible to distinguish between the QoS values as promised by the service providers, the QoE values assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of certain user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected to realize the service composition. We developed new and efficient heuristic algorithms that are applied to choose high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property prediction. We consider the location information of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate how our selection algorithms outperform a genetic algorithm from related research. The evaluation also reveals how context data can be used for a personalized prediction of response time and throughput.
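The service selection problem sketched in this abstract, choosing one concrete service per task so that aggregate QoS stays within constraints while quality is maximized, can be illustrated with a small greedy heuristic. This is a minimal Python sketch, not the heuristics developed in the thesis; the candidate data, the utility weights and the response-time budget are hypothetical.

```python
# Minimal sketch of QoS-aware service selection for a composition.
# One candidate service is picked per task so that the summed response
# time stays under a budget while a weighted quality score is maximized.
# Candidate data, weights and the budget are illustrative only.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    response_time: float  # seconds (lower is better)
    throughput: float     # requests/second (higher is better)
    rating: float         # user QoE rating in [0, 1] (higher is better)

def utility(c: Candidate) -> float:
    """Weighted quality score; the weights are assumptions, not from the thesis."""
    return 0.5 * c.rating + 0.3 * (c.throughput / 100.0) - 0.2 * c.response_time

def select_services(tasks: dict, rt_budget: float) -> dict:
    """Greedy selection: for each task pick the highest-utility candidate
    that still keeps the accumulated response time within the budget."""
    selection, used_rt = {}, 0.0
    for task, candidates in tasks.items():
        feasible = [c for c in candidates if used_rt + c.response_time <= rt_budget]
        if not feasible:  # fall back to the fastest candidate if the budget is exceeded
            feasible = [min(candidates, key=lambda c: c.response_time)]
        best = max(feasible, key=utility)
        selection[task] = best
        used_rt += best.response_time
    return selection

if __name__ == "__main__":
    tasks = {
        "payment": [Candidate("payA", 0.4, 80, 0.9), Candidate("payB", 0.1, 40, 0.6)],
        "shipping": [Candidate("shipA", 0.8, 120, 0.8), Candidate("shipB", 0.3, 60, 0.7)],
    }
    print(select_services(tasks, rt_budget=1.0))
```

A global optimizer, or the genetic algorithm used as a baseline in the evaluation, would explore combinations of candidates instead of committing greedily; the sketch is only meant to show the shape of the selection problem.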
Abstract:
Purpose: there are many studies reporting the benefits of pulmonary rehabilitation, but few of them describe the behavior and activities of these services. This article presents the characteristics of the services, aspects of their management and the training level of team members, as well as the variables and instruments used to measure the effectiveness and impact of these programs. Method: a cross-sectional study was carried out with a convenience sample of seven pulmonary rehabilitation services in four Colombian cities (Bogotá, Medellín, Manizales and Cali), selected for their coverage, for having at least one year of experience, and for being formally established and recognized nationwide. The interdisciplinary team of each service answered a survey that was validated through a pilot test and expert consensus. Participation was voluntary. Results: the pulmonary rehabilitation services have been operating for an average of a decade, with COPD and asthma as the main pathologies treated. The programs are characterized by outpatient treatment lasting eight to twelve weeks on average, with a frequency of one hour three times a week. The director of the service is usually a pulmonologist and the coordinator a physiotherapist (57.14%). The postgraduate training of these professionals is notable, and they report having procedural, administrative and communicative skills, but rate their research skills as average. The physical and technological resources are well rated. 71.42% have conducted impact studies, but only 28.57% have published them. All programs have in common training of the upper limbs, lower limbs and respiratory muscles, counseling, functional assessment, and quality-of-life evaluation. The effectiveness and impact of the programs are measured by the walking test, quality-of-life questionnaires and activities of daily living.
Abstract:
Posttraumatic stress disorder (PTSD) is reported to be caused by exposure to traumatic events including (but not limited to) military combat, violent personal assault, being kidnapped or taken hostage, and terrorist attacks. Initial data suggest that at least 1 in 6 Iraq War veterans exhibit symptoms of depression, anxiety and PTSD. Virtual reality (VR)-delivered exposure therapy for PTSD has been used with reports of positive outcomes. The aim of the current paper is to present the rationale and a brief description of a Virtual Iraq/Afghanistan PTSD VR therapy application and to present initial findings from its use with PTSD patients. Thus far, Virtual Iraq/Afghanistan consists of a series of customizable virtual scenarios designed to represent relevant Middle Eastern VR contexts for exposure therapy, including a city and a desert road convoy environment. User-centered design feedback, needed to iteratively evolve the system, was gathered from returning Iraq War veterans in the USA and from a system deployed in Iraq and tested by an Army Combat Stress Control Team. Results from an open clinical trial at San Diego Naval Medical Center of the first 20 treatment completers indicate that 16 no longer met PTSD screening criteria at post-treatment, with only one not maintaining treatment gains at 3-month follow-up.
Abstract:
This paper proposes a limitation to epistemological claims to theory building prevalent in critical realist research. While accepting the basic ontological and epistemological positions of the perspective as developed by Roy Bhaskar, it is argued that its application in social science has relied on sociological concepts to explain the underlying generative mechanisms, and that in many cases this has been subject to the effects of an anthropocentric constraint. A novel contribution to critical realist research comes from the work and ideas of Gregory Bateson. This is in service of two central goals of critical realism, namely an abductive route to theory building and a commitment to interdisciplinarity. Five aspects of Bateson's epistemology are introduced: (1) difference, (2) logical levels of abstraction, (3) recursive causal loops, (4) the logic of metaphor, and (5) Bateson's theory of mind. The comparison between Bateson's and Bhaskar's ideas is seen as a form of double description, illustrative of the point being raised. The paper concludes with an appeal to critical realists to start exploring the writing and outlook of Bateson himself.
Abstract:
A Gram-negative, rod-shaped, non-spore-forming and nitrogen-fixing bacterium, designated ICB 89(T), was isolated from stems of a Brazilian sugar cane variety widely used in organic farming. 16S rRNA gene sequence analysis revealed that strain ICB 89(T) belonged to the genus Stenotrophomonas and was most closely related to Stenotrophomonas maltophilia LMG 958(T), Stenotrophomonas rhizophila LMG 22075(T), Stenotrophomonas nitritireducens L2(T), [Pseudomonas] geniculata ATCC 19374(T), [Pseudomonas] hibiscicola ATCC 19867(T) and [Pseudomonas] beteli ATCC 19861(T). DNA-DNA hybridization together with chemotaxonomic data and biochemical characteristics allowed the differentiation of strain ICB 89(T) from its nearest phylogenetic neighbours. Therefore, strain ICB 89(T) represents a novel species, for which the name Stenotrophomonas pavanii sp. nov. is proposed. The type strain is ICB 89(T) (=CBMAI 564(T) =LMG 25348(T)).
Abstract:
Architectural description languages (ADLs) are used to specify a high-level, compositional view of a software application, specifying how a system is to be composed from coarse-grain components. ADLs usually come equipped with a formal dynamic semantics, facilitating the specification and analysis of distributed and event-based systems. In this paper, we describe TrustME, an ADL framework that provides both a process view and a structural view of web service-based systems. We use Petri-net descriptions to give a dynamic view of the business workflow for web service collaboration. We adapt the approach of Schmidt to define a form of Meyer's design-by-contract for configuring workflow architectures. This serves as a configuration-level means of constructing safer, more robust systems.
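Meyer-style design-by-contract, as adapted here for configuring workflow architectures, can be illustrated with a small sketch in which preconditions and postconditions are checked around a configuration step. This is a generic Python illustration, not the TrustME framework or Schmidt's adaptation; the contract predicates and the workflow/service structures are hypothetical.

```python
# Minimal sketch of design-by-contract checks around a workflow
# configuration step. The predicates and data structures are illustrative only.

def contract(pre=None, post=None):
    """Decorator enforcing a precondition on the arguments and a
    postcondition on the result of the wrapped function."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition violated for {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition violated for {fn.__name__}"
            return result
        return inner
    return wrap

@contract(
    pre=lambda workflow, service: service.get("operations"),          # the service must expose operations
    post=lambda wf: all(step in wf["bound"] for step in wf["steps"]),  # every step must end up bound
)
def bind_service(workflow, service):
    """Bind a (hypothetical) service to every step of a workflow."""
    workflow["bound"] = {step: service["name"] for step in workflow["steps"]}
    return workflow

if __name__ == "__main__":
    wf = {"steps": ["reserve", "pay"], "bound": {}}
    svc = {"name": "BookingService", "operations": ["reserve", "pay"]}
    print(bind_service(wf, svc))
```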
Abstract:
A Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the increasing popularity of Web service-based applications, and since the messages exchanged inside these applications can be complex, we need tools that simplify the understanding of the interrelationships among Web services. This work presents a description of a graphical representation of Web service-based applications and the mechanisms inserted between Web service requesters and providers to capture the information needed to represent an application. The major contribution of this paper is to discuss and use HTTP and SOAP information to produce a graphical representation, similar to a UML sequence diagram, of Web service-based applications.
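The idea of turning captured HTTP/SOAP traffic into a sequence-diagram-like view can be sketched as follows. This is a minimal Python illustration, not the paper's tool: the captured-message format and the PlantUML-style textual output are assumptions.

```python
# Minimal sketch: turn captured SOAP request/response pairs into
# PlantUML-style sequence-diagram lines. The message format is assumed.

import xml.etree.ElementTree as ET

def soap_operation(envelope: str) -> str:
    """Return the local name of the first element inside the SOAP Body."""
    root = ET.fromstring(envelope)
    body = next(el for el in root.iter() if el.tag.endswith("}Body"))
    first = list(body)[0]
    return first.tag.split("}")[-1]  # strip the XML namespace

def to_sequence_diagram(captured):
    """captured: list of (requester, provider, soap_envelope) tuples."""
    lines = ["@startuml"]
    for requester, provider, envelope in captured:
        lines.append(f"{requester} -> {provider}: {soap_operation(envelope)}")
        lines.append(f"{provider} --> {requester}: response")
    lines.append("@enduml")
    return "\n".join(lines)

if __name__ == "__main__":
    envelope = (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body><getQuote xmlns="urn:stock"><symbol>ACME</symbol></getQuote></soap:Body>'
        "</soap:Envelope>"
    )
    print(to_sequence_diagram([("Client", "StockService", envelope)]))
```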
Abstract:
A Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the increasing popularity of developing Web service-based applications, and since Web services may change in terms of functionality and of non-functional Quality of Service (QoS), we need mechanisms to monitor, diagnose, and repair Web services within a Web application. This work presents a description of a self-healing architecture that provides these mechanisms. Other contributions of this paper are the use of a proxy server to measure Web service QoS values and the employment of strategies to recover from the effects of misbehaving Web services. © 2008 IEEE.
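The proxy-based QoS measurement mentioned in this abstract can be illustrated with a small sketch: each call is timed, the observation is recorded, and a violation of a response-time threshold triggers a recovery strategy such as substituting an alternative service. This is a generic Python illustration under assumed names and thresholds, not the architecture from the paper.

```python
# Minimal sketch of a QoS-measuring proxy: time each Web service call,
# record the observed response time, and fall back to an alternative
# endpoint when the primary call fails or exceeds a threshold.
# Service callables and the threshold are illustrative only.

import time

class QoSProxy:
    def __init__(self, primary, fallback, max_response_time=1.0):
        self.primary = primary          # callable standing in for the primary service
        self.fallback = fallback        # callable standing in for a substitute service
        self.max_response_time = max_response_time
        self.measurements = []          # observed (service, seconds) pairs

    def invoke(self, payload):
        start = time.perf_counter()
        try:
            result = self.primary(payload)
            elapsed = time.perf_counter() - start
            self.measurements.append(("primary", elapsed))
            if elapsed <= self.max_response_time:
                return result
        except Exception:
            self.measurements.append(("primary", float("inf")))
        # Recovery strategy: substitute the misbehaving service.
        start = time.perf_counter()
        result = self.fallback(payload)
        self.measurements.append(("fallback", time.perf_counter() - start))
        return result

if __name__ == "__main__":
    slow = lambda p: (time.sleep(1.5), f"slow:{p}")[1]
    fast = lambda p: f"fast:{p}"
    proxy = QoSProxy(slow, fast, max_response_time=1.0)
    print(proxy.invoke("order-42"), proxy.measurements)
```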
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly to inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all her/his applications are met. She/he is also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that her/his operational costs are minimized while the performance of tenants' applications is maximized. Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on the dynamic sizing (scaling) of virtual infrastructures composed of application services bound to virtual machines (VMs). We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We demonstrate that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
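A threshold-based scaling rule of the kind described above (adapting the number of VMs to workload while respecting an SLA response-time guarantee) can be sketched as below. The metric names, thresholds and limits are hypothetical; this is not one of the algorithms or the genetic-algorithm allocator developed in the thesis.

```python
# Minimal sketch of an SLA-driven VM scaling rule: scale out when the
# observed 95th-percentile response time exceeds the SLA target, scale
# in when utilisation is low. Thresholds and limits are illustrative.

from dataclasses import dataclass

@dataclass
class ScalingRule:
    sla_response_ms: float = 200.0   # SLA-defined response-time guarantee
    low_cpu_util: float = 0.25       # scale in below this average CPU utilisation
    min_vms: int = 1
    max_vms: int = 10

    def decide(self, current_vms: int, p95_response_ms: float, avg_cpu_util: float) -> int:
        """Return the new number of VMs for the service tier."""
        if p95_response_ms > self.sla_response_ms and current_vms < self.max_vms:
            return current_vms + 1                      # scale out: SLA at risk
        if avg_cpu_util < self.low_cpu_util and current_vms > self.min_vms:
            return current_vms - 1                      # scale in: resources idle
        return current_vms                              # keep the current allocation

if __name__ == "__main__":
    rule = ScalingRule()
    print(rule.decide(current_vms=3, p95_response_ms=310.0, avg_cpu_util=0.8))  # -> 4
    print(rule.decide(current_vms=3, p95_response_ms=120.0, avg_cpu_util=0.1))  # -> 2
```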
Abstract:
This paper investigates the current relationship between information management and information mediation in the digital reference service through a case study that took place in an academic library. The concept of information mediation is analyzed here, since a conceptual examination provides elements that help to comprehend and evaluate the service concerned. The information professional plays a very important role in this mediation, which may be direct or indirect, conscious or unconscious, carried out alone or with others, individually or as part of a group; in all such forms the mediator facilitates the acquisition of information, fully or partially satisfying a user's need for all sorts of knowledge. Meanwhile, information management is approached from a scope that describes the activities performed, from the policies and procedures put into effect up to the evaluation of the service, for which a set of criteria is proposed. Finally, a few actions to be implemented over the long term are outlined, whose goal is to continually improve this assistance, taking the human factor into account.
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the data and services discovered from the web, as well as the knowledge present in the service and content discovery rules, to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used there for the discovery and extraction of news items from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks. Also, a scraping ontology has been defined for the construction of mappings for scraping web resources. A novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings out of the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and constructing a base of discovery rules.
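A content-level discovery rule of the kind described above (mapping pieces of an HTML representation onto semantic entities) can be sketched as below. The selectors, vocabulary URIs and the example page are assumptions, and the sketch uses BeautifulSoup rather than the scraping ontology and rule-induction algorithm contributed by the thesis.

```python
# Minimal sketch of content discovery: a rule maps CSS selectors in an
# HTML representation to RDF-like triples. Selectors and vocabulary
# URIs are illustrative; requires `pip install beautifulsoup4`.

from bs4 import BeautifulSoup

# A discovery rule: subject type plus selector-to-predicate mappings.
NEWS_RULE = {
    "type": "http://schema.org/NewsArticle",
    "mappings": {
        "h1.headline": "http://schema.org/headline",
        "span.author": "http://schema.org/author",
    },
}

def discover(html: str, rule: dict, subject: str):
    """Apply a discovery rule to a resource representation and yield triples."""
    soup = BeautifulSoup(html, "html.parser")
    yield (subject, "rdf:type", rule["type"])
    for selector, predicate in rule["mappings"].items():
        for element in soup.select(selector):
            yield (subject, predicate, element.get_text(strip=True))

if __name__ == "__main__":
    page = """
    <article>
      <h1 class="headline">Example headline</h1>
      <span class="author">A. Reporter</span>
    </article>
    """
    for triple in discover(page, NEWS_RULE, "http://example.org/news/1"):
        print(triple)
```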
Abstract:
In this introductory chapter we put in context and briefly outline the work that is presented thoroughly in the rest of the dissertation. We consider this work divided into two main parts. The first part is the Firenze Framework, a knowledge-level description framework rich enough to express the semantics required for describing both semantic Web services and semantic Grid services. We start by defining what the Semantic Grid is, its relation to the Semantic Web, and the possibility of their convergence, since both initiatives have become mainly service-oriented. We also introduce the main motivations for the creation of this framework: one is to provide a valid description framework that works at the knowledge level; the other is to provide a description framework that takes into account the characteristics of Grid services in order to describe them properly. The other part of the dissertation is devoted to Vega, an event-driven architecture that, by means of the proposed knowledge-level description framework, is able to achieve large-scale provisioning of knowledge-intensive services. We also portray the anatomy of a generic event-driven architecture and briefly enumerate the main characteristics that make this style our choice.
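The anatomy of a generic event-driven architecture referred to above can be illustrated with a minimal publish/subscribe sketch in Python; the event names and handlers are hypothetical and this is not the Vega architecture itself.

```python
# Minimal sketch of the core of an event-driven architecture: an event
# bus that decouples publishers from subscribers. Event names and
# handlers are illustrative only.

from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler):
        """Register a handler to be called for every event of this type."""
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload):
        """Deliver the event to all handlers subscribed to its type."""
        for handler in self._subscribers[event_type]:
            handler(payload)

if __name__ == "__main__":
    bus = EventBus()
    bus.subscribe("service.requested", lambda p: print("provisioning", p))
    bus.subscribe("service.requested", lambda p: print("logging request", p))
    bus.publish("service.requested", {"service": "knowledge-annotation"})
```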
Abstract:
The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de facto Web Service and XML-based integration approaches. The flexibility provided by representing the data in a more versatile RDF model using ontologies makes it possible to avoid complex schema transformations, makes data more accessible through Web standards, and prevents the formation of data silos. These three benefits give Linked Data-based EAI an edge. However, work still has to be performed so that these technologies can cope with the particularities of EAI scenarios in terms such as data control, ownership, consistency, and accuracy. The first part of the paper provides an introduction to Enterprise Application Integration using Linked Data and the requirements imposed by EAI on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities and mappings from the identities to different resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage contexts, identities, resources, and the applications they relate to. The paper shows how the proposed service can be utilized in an EAI scenario using an example involving a dashboard that integrates data from different systems, together with the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
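The coreference mechanism described above, a context that groups the identities an entity has across applications and maps them to application-local resources, can be sketched with a small in-memory registry. The application names, identifiers and API below are assumptions; this is not the architecture presented in the paper.

```python
# Minimal sketch of a coreference registry: a context groups the
# identifiers that different applications use for the same entity, so
# an integration layer can resolve one application's identifier to
# another's. Names and identifiers are illustrative only.

from collections import defaultdict

class CoreferenceRegistry:
    def __init__(self):
        # context id -> {application name -> resource identifier}
        self._contexts = defaultdict(dict)

    def register(self, context_id: str, application: str, resource_id: str):
        """Record that `resource_id` in `application` denotes the entity of `context_id`."""
        self._contexts[context_id][application] = resource_id

    def resolve(self, application: str, resource_id: str, target_application: str):
        """Find the target application's identifier for the same entity, if any."""
        for identities in self._contexts.values():
            if identities.get(application) == resource_id:
                return identities.get(target_application)
        return None

if __name__ == "__main__":
    registry = CoreferenceRegistry()
    registry.register("ctx-customer-42", "CRM", "crm:contact/42")
    registry.register("ctx-customer-42", "ERP", "erp:customer/9001")
    print(registry.resolve("CRM", "crm:contact/42", "ERP"))  # -> erp:customer/9001
```

In a dashboard scenario like the one mentioned in the abstract, such a lookup is what lets data about the same customer from different systems be merged into a single view.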