997 results for Unified Service
Abstract:
A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. a smartphone) and also collects all available context data (such as from sensors in the digital device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example a service that produces activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred because it is based on an individual user model. As BaranC supports continuous user monitoring, an application can adapt dynamically in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC continuously augments the user model with more monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework in which the collection of data for the UDI and all sharing of UDI data are kept strictly under the user's control. In addition, being service-oriented allows its monitoring and analysis services to be easily used (with the user's permission) by third parties in order to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model for monitoring and using personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, leading to a lack of clarity, a dispersal of data, and a diminution of user control.
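A minimal, hypothetical sketch of the kind of context-aware prediction a third-party assistant service could build on top of a monitored user model such as the UDI (class and method names below are invented for illustration and are not part of BaranC's published API):

```python
from collections import Counter, defaultdict

class UsagePredictor:
    """Toy context-aware predictor: ranks apps by how often they were
    used in similar contexts (hour of day + location label)."""

    def __init__(self):
        self._by_context = defaultdict(Counter)

    def record(self, hour, location, app):
        # Bucket the context coarsely so sparse data still matches.
        self._by_context[(hour, location)][app] += 1

    def predict(self, hour, location, top_n=3):
        counts = self._by_context.get((hour, location), Counter())
        return [app for app, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    p = UsagePredictor()
    p.record(8, "home", "news")
    p.record(8, "home", "news")
    p.record(8, "home", "mail")
    print(p.predict(8, "home"))   # ['news', 'mail']
```

A real assistant would of course draw on richer context (activity profiles, calendar, sensor streams) and a learned model rather than raw frequency counts, but the interface shape is the same: record monitored events, query with the current context.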
Abstract:
This article analyzed whether the practices of hearing health care were consistent with the principles of universality, comprehensiveness and equity from the standpoint of professionals. It involved qualitative research conducted at a Medium Complexity Hearing Health Care Center. A social worker, three speech therapists, a physician and a psychologist constituted the study subjects. Interviews were conducted, and observations were registered in a field diary. The thematic analysis technique was used in the analysis of the material. The analysis of the interviews resulted in the construction of the following themes: Universality and access to hearing health, Comprehensive Hearing Health Care, and Hearing Health and Equity. The study identified issues that interfere with the quality of service and run counter to the principles of the Brazilian Unified Health System. The conclusion reached was that a relatively simple investment in training and professional qualification can bring about significant changes in order to promote a more universal, comprehensive and equitable health service.
Abstract:
The purpose is to present the research that led to the modeling of an information system aimed at maintaining traceability data in the Brazilian wine industry, according to the principles of a service-oriented architecture (SOA). Since 2005, maintaining traceability data has been an obligation for all producers that intend to export to any European Union country. Final customers, including Brazilian ones, have also been asking for information about food products. A solution that served the industry collectively was sought, so that consortiums or associations of producers could share the costs and benefits of such a solution. Following an extensive bibliographic review, a series of interviews conducted with Brazilian researchers and wine producers in Bento Goncalves - RS, Brazil, elucidated many aspects of the wine production process. Information technology issues related to the theme were also researched. The software was modeled with the Unified Modeling Language (UML) and uses web services for data exchange. A model for the wine production process was also proposed. A functional prototype showed that the adopted model is able to fulfill the demands of wine producers. The good results obtained lead us to consider the use of this model in other domains.
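As an illustration of the kind of payload such a traceability web service could exchange between consortium members, here is a minimal Python sketch (the field names and lot structure are assumptions for illustration, not the system's actual UML model):

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class TraceEvent:
    step: str        # e.g. "harvest", "crush", "fermentation", "bottling"
    date: str        # ISO 8601 date
    location: str
    notes: str = ""

@dataclass
class WineLot:
    lot_id: str
    producer: str
    grape_variety: str
    events: List[TraceEvent]

    def to_json(self) -> str:
        # Serialized record a traceability web service could exchange.
        return json.dumps(asdict(self), ensure_ascii=False, indent=2)

if __name__ == "__main__":
    lot = WineLot("LOT-2023-001", "Vinicola Exemplo", "Merlot",
                  [TraceEvent("harvest", "2023-02-15", "Bento Goncalves")])
    print(lot.to_json())
```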
Abstract:
OBJECTIVE To examine whether the level of complexity of the services' structure and the sociodemographic and clinical characteristics of patients on hemodialysis are associated with the prevalence of poor health self-assessment. METHODS In this cross-sectional study, we evaluated 1,621 patients with end-stage chronic kidney disease on hemodialysis, followed in 81 dialysis services of the Brazilian Unified Health System in 2007. Two-stage cluster sampling was performed, and a structured questionnaire was applied to participants. Multilevel multiple logistic regression was used for data analysis. RESULTS The prevalence of poor health self-assessment was 54.5%, and in multivariable analysis it was associated with the following variables: increasing age (OR = 1.02; 95%CI 1.01–1.02), separated or divorced marital status (OR = 0.62; 95%CI 0.34–0.88), having 12 or more years of education (OR = 0.51; 95%CI 0.37–0.71), spending more than 60 minutes commuting between home and the dialysis service (OR = 1.80; 95%CI 1.29–2.51), having three or more self-reported diseases (OR = 2.20; 95%CI 1.33–3.62), and reporting some (OR = 2.17; 95%CI 1.66–2.84) or a lot of (OR = 2.74; 95%CI 2.04–3.68) trouble falling asleep. Individuals treated in dialysis services with the highest level of structural complexity had lower odds of rating their health as poor (OR = 0.59; 95%CI 0.42–0.84). CONCLUSIONS Poor health self-assessment is associated with age, years of formal education, marital status, commuting time from home to the dialysis service, number of self-reported diseases, reported trouble sleeping, and also with the level of complexity of the structure of health services. Acknowledging these factors can contribute to the development of strategies to improve the health of patients on hemodialysis in the Brazilian Unified Health System.
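For readers unfamiliar with the method, a generic two-level random-intercept logistic model of the kind named above (a sketch, not necessarily the paper's exact specification) has the form:

```latex
\operatorname{logit}\bigl(\Pr(Y_{ij}=1)\bigr)
  = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + u_j,
\qquad u_j \sim \mathcal{N}(0,\sigma_u^{2}),
\qquad \mathrm{OR}_k = e^{\beta_k}
```

where i indexes patients, j indexes dialysis services, and Y_ij = 1 denotes poor self-assessed health; the reported odds ratios are exponentiated coefficients, so OR = 1.02 for age means the odds of poor self-assessment rise by about 2% per additional year.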
Abstract:
Dissertation for obtaining the degree of Doctor in Electrical and Computer Engineering, specialty: Robotics and Integrated Manufacturing.
Abstract:
Dissertation for obtaining the degree of Master in Electrical and Computer Engineering.
Abstract:
Objective The objective of this study is to assess the performance of cytopathology laboratories providing services to the Brazilian Unified Health System (Sistema Único de Saúde - SUS) in the State of Minas Gerais, Brazil. Methods This descriptive study uses data obtained from the Cervical Cancer Information System from January to December 2012. Three quality indicators were analyzed to assess the quality of cervical cytopathology tests: positivity index, percentage of atypical squamous cells (ASCs) among abnormal tests, and percentage of tests compatible with high-grade squamous intraepithelial lesions (HSILs). Laboratories were classified according to their production scale in tests per year: ≤ 5,000; 5,001 to 10,000; 10,001 to 15,000; and ≥ 15,001. Based on the collected variables and the classification of laboratories according to production scale, we created and analyzed a database using Microsoft Office Excel 97-2003. Results In the Brazilian state of Minas Gerais, 146 laboratories provided services to the SUS in 2012, performing a total of 1,277,018 cervical cytopathology tests. Half of these laboratories had production scales ≤ 5,000 tests/year and accounted for 13.1% of all tests performed in the state; in turn, 13.7% of these laboratories had production scales ≥ 15,001 tests/year and accounted for 49.2% of all tests performed in the state. The positivity indexes of most laboratories providing services to the SUS in 2012, regardless of production scale, were below or well below the recommended limits. Of the 20 laboratories that performed more than 15,001 tests per year, only three presented percentages of tests compatible with HSILs above the lower limit recommended by the Brazilian Ministry of Health. Conclusion The majority of laboratories providing services to the SUS in Minas Gerais presented quality indicators outside the range recommended by the Brazilian Ministry of Health.
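A minimal sketch of how the three indicators named above can be computed from raw counts; the denominators follow the abstract's wording but are assumptions, and the Ministry of Health reference ranges are deliberately not reproduced here:

```python
def quality_indicators(total_satisfactory, abnormal, asc, hsil):
    """Compute the three cervical-cytology quality indicators from raw
    counts (the counts used below are hypothetical example inputs)."""
    positivity_index = 100.0 * abnormal / total_satisfactory
    asc_share_of_abnormal = 100.0 * asc / abnormal if abnormal else 0.0
    hsil_share = 100.0 * hsil / total_satisfactory
    return positivity_index, asc_share_of_abnormal, hsil_share

if __name__ == "__main__":
    # e.g. 10,000 satisfactory tests, 350 abnormal, 180 ASC, 45 HSIL
    print(quality_indicators(10_000, 350, 180, 45))
```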
Abstract:
The aim of this study is to analyze and understand a well-being service brand. Brands are most definitely a hot topic of today's business world. Companies have started to realize the importance of branding, especially in product-related industries. Branding of services is a relatively new concept, which has gained less attention within the academic literature. Though there is no legal distinction between product and service brands, each has its own characteristics and qualities. The focus of the study is especially on the current brand images held by both internal and external stakeholder groups. Understanding these brand perceptions will help in managing and developing the brand so that it becomes even stronger, more recognized, and unified. This study is a quantitative, semi hypothetico-deductive single case study. The data for this study were collected through an online survey. The respondents represent both internal stakeholders, in direct contact with the service, and external stakeholders, who have no previous history with the service brand. The respondents represent a wide age spread and are also geographically diverse. The study relies on a Finnish context. The study provides numerous managerial takeaways, especially because of its wide scope, on a topic that has not been studied before. All findings strongly reflected existing service brand theory, in addition to making the findings implementable for the case company.
Abstract:
This study discusses the evolution of an omni-channel model in managing customer experience. The purpose of this thesis is to expand the current academic literature available on omni-channel and offer suggestions for omni-channel creation. This is done by studying the features of an omni-channel approach to engaging with customers and through the sub-objectives of describing the process behind its initiation as well as the special features communication service providers need to take into consideration. Theories used as a background for this study relate to customer experience, channel management, omni-channel and, finally, change management. The empirical study of this thesis consists of seven expert interviews conducted in a case company. The interviews were held between March and November 2014. One of the interviewees is the manager of an omni-channel development team, whilst the rest were in charge of managing the various customer channels of the company. The interview data were organized and analyzed topically. Themes related to the major theories on the subject were used to create linkages between theory and practice. The responses were also organized in two groups based on viewpoint, to map responses related to the company perspective as well as the customers' perspective. The findings of this study are that omni-channel is among the best tools for companies to respond to the challenge posed by changing customer needs and preferences, as well as an intensifying competitive environment. The omni-channel model was found to promote excellent customer experience and thus to be a source of competitive advantage and increasing financial returns by creating an omni-experience for the customer. Through the omni-experience, customers see all of their transactions with a company as presenting one brand and providing ease and effortlessness in every encounter. The processes behind omni-channel formulation were identified as proclaiming customer experience as the most important strategic goal, mapping and establishing a unified brand experience in all (service) channels, and empowering front-line personnel as the gatekeepers of the omni-experience. Further, the tools, measurement and supporting strategies were to be in accordance with the omni-channel strategy, and the customer needs to become a partner in a two-way transaction with the firm. Based on these findings, a model for omni-channel creation is offered. Future research is needed, firstly, to further test these findings and expand the theoretical framework on omni-channel, as it is quite scarce to date, and secondly, to increase the generalizability of the suggested model.
Abstract:
During the last decade, Internet usage has been growing at an enormous rate, which has been accompanied by the development of network applications (e.g., video conferencing, audio/video streaming, e-learning, e-commerce and real-time applications) and allows several types of information, including data, voice, pictures and streaming media. While end-users are demanding very high quality of service (QoS) from their service providers, the network carries complex traffic that leads to transmission bottlenecks. Considerable effort has been made to study the characteristics and the behavior of the Internet. Simulation modeling of computer network congestion is a profitable and effective technique that fulfills the requirements for evaluating the performance and QoS of networks. To simulate a single congested link, the simulation is run with a single load generator, while for a larger simulation with complex traffic, where the nodes are spread across different geographical locations, generating distributed artificial loads is indispensable. One solution is to elaborate a load generation system based on a master/slave architecture.
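A minimal sketch of the master/slave idea, using local worker processes to stand in for remote load-generator nodes (the queue-based dispatch and job format here are invented simplifications; a real deployment would distribute jobs over the network and actually emit traffic):

```python
import multiprocessing as mp
import time

def slave(node_id, jobs, results):
    """Worker process standing in for a remote load-generator node."""
    while True:
        job = jobs.get()
        if job is None:          # poison pill: master asks the slave to stop
            break
        duration, rate = job
        time.sleep(duration)     # placeholder for actually generating traffic
        results.put((node_id, duration * rate))

if __name__ == "__main__":
    jobs, results = mp.Queue(), mp.Queue()
    slaves = [mp.Process(target=slave, args=(i, jobs, results)) for i in range(3)]
    for s in slaves:
        s.start()
    for _ in range(6):           # master distributes load-generation jobs
        jobs.put((0.1, 100))     # (seconds, packets per second)
    for _ in slaves:
        jobs.put(None)           # one stop signal per slave
    total = sum(results.get()[1] for _ in range(6))   # drain results first
    for s in slaves:
        s.join()
    print(f"packets generated across slaves: {total:.0f}")
```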
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, in the Resulta and VulneraNET projects the discovery of ideas and of security knowledge on the web is covered, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology, defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
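As a hedged illustration of the content-discovery idea described in this abstract (the rule format and selectors below are invented and do not reproduce the thesis's actual rule language or scraping ontology), a discovery rule can map CSS selectors found in an HTML representation onto properties of a semantic entity:

```python
from bs4 import BeautifulSoup   # assumes beautifulsoup4 is installed

# A hypothetical content-discovery rule: maps semantic properties of a
# "NewsArticle" entity onto CSS selectors in a page's representation.
NEWS_RULE = {
    "entity": "NewsArticle",
    "properties": {
        "headline": "h1.title",
        "author": "span.byline",
        "body": "div.article-text p",
    },
}

def apply_rule(html, rule):
    """Apply a discovery rule to an HTML representation and return the
    semantically labelled content it discovers."""
    soup = BeautifulSoup(html, "html.parser")
    entity = {"@type": rule["entity"]}
    for prop, selector in rule["properties"].items():
        nodes = soup.select(selector)
        entity[prop] = " ".join(n.get_text(strip=True) for n in nodes)
    return entity

if __name__ == "__main__":
    page = """<h1 class="title">Example</h1>
              <span class="byline">A. Writer</span>
              <div class="article-text"><p>First.</p><p>Second.</p></div>"""
    print(apply_rule(page, NEWS_RULE))
```

Service probing and agent plans then layer on top of this: the same rule-application step is driven by higher-level rules that decide which resources to fetch and which operations to invoke.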
Abstract:
Cover title.
Abstract:
The IEEE 802.11 standard has achieved huge success in the past decade and is still under development to provide higher physical data rates and better quality of service (QoS). An important problem for the development and optimization of IEEE 802.11 networks is the modeling of the MAC layer channel access protocol. Although there are already many theoretical analyses of the 802.11 MAC protocol in the literature, most of the models focus on saturated traffic and assume an infinite buffer at the MAC layer. In this paper we develop a unified analytical model for the IEEE 802.11 MAC protocol in ad hoc networks. The impacts of channel access parameters, traffic rate and buffer size at the MAC layer are modeled with the assistance of a generalized Markov chain and an M/G/1/K queue model. Throughput, packet delivery delay and dropping probability can be obtained. Extensive simulations show the analytical model is highly accurate. From the analytical model it is shown that for practical buffer configurations (e.g. buffer size larger than one), we can maximize the total throughput and reduce the packet blocking probability (due to limited buffer size) and the average queuing delay to zero by effectively controlling the offered load. The average MAC layer service delay, as well as its standard deviation, is also much lower than in saturated conditions and has an upper bound. It is also observed that the optimal load is very close to the maximum achievable throughput regardless of the number of stations or buffer size. Moreover, the model is scalable for performance analysis of 802.11e in unsaturated conditions and of 802.11 ad hoc networks with heterogeneous traffic flows.
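As a simplified illustration of how offered load and buffer size govern the dropping probability discussed above, consider an M/M/1/K queue (a stand-in special case with exponential service, not the paper's general M/G/1/K analysis):

```latex
\rho = \lambda\, E[S], \qquad
P_{\text{block}} = \frac{(1-\rho)\,\rho^{K}}{1-\rho^{K+1}} \quad (\rho \neq 1)
```

where \lambda is the packet arrival rate, E[S] the mean MAC service time, and K the total buffer capacity; for \rho < 1 the blocking probability falls rapidly with K, consistent with the claim that controlling the offered load keeps packet dropping negligible in unsaturated operation.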
Abstract:
Semantic Web Services, one of the most significant research areas within the Semantic Web vision, has attracted increasing attention from both the research community and industry. The Web Service Modelling Ontology (WSMO) has been proposed as an enabling framework for the total or partial automation of the tasks (e.g., discovery, selection, composition, mediation, execution and monitoring) involved in both intra- and inter-enterprise integration of Web services. To support the standardisation and tooling of WSMO, a formal model of the language is highly desirable. As several variants of WSMO have been proposed by the WSMO community and are still under development, the syntax and semantics of WSMO should be formally defined to facilitate easy reuse and future development. In this paper, we present an Object-Z formal model of WSMO, in which different aspects of the language are precisely defined within one unified framework. This model not only provides a formal, unambiguous specification that can be used to develop tools and facilitate future development, but, as demonstrated in this paper, it can also be used to identify and eliminate errors present in existing documentation.
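A rough, informal sketch of WSMO's four top-level elements (ontologies, web services, goals and mediators) as plain data types, with a naive capability check standing in for discovery; this is an illustration only, not the Object-Z model presented in the paper, and the attribute names are invented:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Ontology:
    iri: str
    concepts: List[str] = field(default_factory=list)

@dataclass
class Goal:
    iri: str
    requested_capability: str

@dataclass
class WebService:
    iri: str
    capability: str
    imports: List[Ontology] = field(default_factory=list)

@dataclass
class Mediator:
    iri: str
    source: str
    target: str

def matches(goal: Goal, service: WebService) -> bool:
    """Naive discovery check: exact match between requested and offered
    capability (a real WSMO matcher reasons over logical descriptions)."""
    return goal.requested_capability == service.capability

if __name__ == "__main__":
    g = Goal("urn:goal:book-flight", "BookFlight")
    s = WebService("urn:svc:airline", "BookFlight")
    print(matches(g, s))   # True
```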
Abstract:
Our aim was to approach an important and readily investigable phenomenon – connected to a relatively simple but real field situation – in such a way that the results of field observations could be directly compared with the predictions of a simulation model-system that uses a simple mathematical apparatus, and simultaneously to gain a hypothesis-system that creates the theoretical opportunity for a later experimental series of studies. As the phenomenon of study, we chose the seasonal coenological changes of an aquatic and semiaquatic Heteroptera community. Based on the observed data, we developed an ecological model-system that is suitable for generating realistic patterns closely resembling the observed temporal patterns, with the help of which predictions can be made for alternative climatic circumstances not experienced before (e.g. climate change), and which can furthermore simulate experimental conditions. The stable coenological state-plane, which was constructed based on the principle of indirect ordination, is suitable for the unified handling of monitoring and simulation data series, and is also suited to their comparison. On the state-plane, deviations between empirical and model-generated data can be observed and analysed that could otherwise remain hidden.