78 results for Service Oriented Computing
Abstract:
Media content personalisation is a major challenge involving viewers as well as media content producer and distributor businesses. The goal is to provide viewers with media items aligned with their interests. Producers and distributors engage in item negotiations to establish the corresponding service level agreements (SLA). In order to address automated partner lookup and item SLA negotiation, this paper proposes the MultiMedia Brokerage (MMB) platform, a multiagent system that negotiates SLA regarding media items on behalf of media content producer and distributor businesses. The MMB platform is structured in four service layers: interface, agreement management, business modelling and market. In this context there are: (i) brokerage SLA (bSLA), which are established between individual businesses and the platform regarding the provision of brokerage services; and (ii) item SLA (iSLA), which are established between producer and distributor businesses regarding the provision of media items. In particular, this paper describes the negotiation, establishment and enforcement of bSLA and iSLA, which occur at the agreement and negotiation layers, respectively. The platform adopts a pay-per-use business model where the bSLA define the general conditions that apply to the related iSLA. To illustrate this process, we present a case study describing the negotiation of a bSLA instance and several related iSLA instances. The latter correspond to the negotiation of the Electronic Program Guide (EPG) for a specific end viewer.
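The relationship between the two agreement types under the pay-per-use model can be pictured with a small data-model sketch; all class and field names below are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BrokerageSLA:
    """General conditions a business agrees with the platform (illustrative)."""
    business: str
    fee_per_item: float          # pay-per-use brokerage fee
    max_negotiation_rounds: int  # general condition inherited by every related iSLA

@dataclass
class ItemSLA:
    """Item-level agreement between a producer and a distributor (illustrative)."""
    producer: str
    distributor: str
    item_id: str
    price: float

def brokerage_cost(bsla: BrokerageSLA, islas: List[ItemSLA]) -> float:
    """Under pay-per-use, the platform charges the bSLA fee once per negotiated item."""
    return bsla.fee_per_item * len(islas)
```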
Abstract:
Harnessing the idle CPU cycles, storage space and other resources of networked computers for collaborative work is the main focus of all major grid computing research projects. Most university computer labs are nowadays equipped with powerful desktop PCs, and it is easy to observe that, most of the time, these machines lie idle, wasting computing power that is not put to good use. However, complex problems and the analysis of very large amounts of data require substantial computational resources. For such problems, one may run the analysis algorithms on very powerful and expensive computers, which reduces the number of users that can afford such data analysis tasks. Instead of relying on single expensive machines, distributed computing systems offer the possibility of using a set of much less expensive machines to do the same task. The BOINC and Condor projects have been successfully used to support real scientific research around the world at low cost. The main goal of this work is to deploy both distributed computing systems, Condor and BOINC, and to use them to harness the resources of idle PCs for academic researchers to use in their work. In this thesis, data mining tasks were performed by running several machine learning algorithms on the distributed computing environment.
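To make the workflow concrete, the sketch below submits a hypothetical batch of data mining jobs to Condor from Python; the file names, the ten-way job split and train_model.py are assumptions, while the submit-description keys are standard HTCondor syntax:

```python
import subprocess
import textwrap

# Hypothetical experiment: run ten folds of a learning algorithm on idle lab PCs.
submit_description = textwrap.dedent("""\
    universe   = vanilla
    executable = /usr/bin/python3
    arguments  = train_model.py --fold $(Process)
    should_transfer_files = YES
    transfer_input_files  = train_model.py, dataset.csv
    output     = fold_$(Process).out
    error      = fold_$(Process).err
    log        = experiment.log
    queue 10
""")

with open("experiment.sub", "w") as f:
    f.write(submit_description)

# Hands the ten jobs to the local Condor scheduler, which matches them to idle machines.
subprocess.run(["condor_submit", "experiment.sub"], check=True)
```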
Abstract:
This document describes a fault tolerance model for distributed real-time systems. The model is proposed as a reliable, flexible solution adaptable to the needs of distributed real-time systems. Fault tolerance is an extremely important aspect of building real-time systems, and applying it brings numerous benefits. A fault-tolerance-oriented design contributes to better system performance by improving key aspects such as safety, reliability and availability. The work focuses on the prevention, detection and tolerance of logical (software) and physical (hardware) faults, and rests on a mostly time-based architecture combined with redundancy techniques. The model is concerned with efficiency and execution costs. To this end, traditional fault tolerance techniques such as redundancy and migration are also used so as not to penalise the service execution time, that is, by reducing the recovery time of replicas when faults occur. This work proposes low-complexity runtime heuristics to determine where to replicate the components that make up the real-time software and to negotiate them through a bid-based coordination mechanism. It adapts and extends algorithms that provide solutions even when interrupted, reported in related research work, which are used to form coalitions among cooperating nodes. The proposed model masks faults through active replication techniques, both virtual and physical, with concurrent execution blocks, trying to improve or maintain the delivered quality while introducing practically no significant information overhead into the system. The model ensures that the machines chosen, to which the agents migrate, iteratively improve the quality-of-service levels provided to the components according to the availability of the respective machines. If a new quality configuration is profitable for the overall service quality, an effort is made to receive new components at the expense of the quality of those already hosted locally. The nodes cooperating in a coalition maximise the number of parallel executions among the parallel components that compose the service, in order to reduce execution delays. The development of this thesis led to the proposed model and the presented results, and was supported throughout by surveys of related research and development work, the literature and mathematical preliminaries.
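A minimal sketch of a bid-based placement round of the kind described above; the capacity figures, the QoS proxy and every name are illustrative assumptions, not the heuristics actually proposed in the thesis:

```python
def collect_bids(nodes, component):
    """Each node bids the QoS level it could offer the component,
    derived here (hypothetically) from its spare CPU capacity."""
    bids = {}
    for node in nodes:
        if node["spare_cpu"] >= component["cpu_need"]:
            # More spare capacity -> higher offered QoS, shorter recovery time.
            bids[node["id"]] = node["spare_cpu"] / component["cpu_need"]
    return bids

def place_replicas(nodes, component, n_replicas):
    """Greedy low-complexity heuristic: award the replicas to the best bidders."""
    bids = collect_bids(nodes, component)
    return sorted(bids, key=bids.get, reverse=True)[:n_replicas]

nodes = [{"id": "n1", "spare_cpu": 4.0}, {"id": "n2", "spare_cpu": 1.5},
         {"id": "n3", "spare_cpu": 3.0}]
print(place_replicas(nodes, {"cpu_need": 1.0}, n_replicas=2))  # -> ['n1', 'n3']
```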
Abstract:
In today’s highly competitive market, it is critical to provide customers with highly configurable services that answer their business needs. Knowing in advance the performance associated with a specific choreography of services (e.g., by taking into account the expected results of each component service) is an important asset that allows businesses to provide a global service tailored to customers’ specific requests. This research work aims at advancing the state of the art in this area by proposing an approach for service selection and ranking within service choreographies, predicting the behavior of the services by considering customers’ requirements and preferences, business process constraints and the characteristics of the execution environment.
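One plausible reading of such selection and ranking, sketched under assumptions (the attribute names, weights and linear scoring rule are all hypothetical): hard business-process constraints first filter the candidates, and a preference-weighted score over predicted behavior then ranks the survivors.

```python
def rank_candidates(candidates, constraints, weights):
    # Hard constraints from the business process filter out infeasible services.
    feasible = [c for c in candidates
                if c["predicted_latency_ms"] <= constraints["max_latency_ms"]
                and c["cost"] <= constraints["max_cost"]]
    # Customer preferences weight the predicted QoS attributes.
    def score(c):
        return (weights["availability"] * c["predicted_availability"]
                - weights["latency"] * c["predicted_latency_ms"]
                - weights["cost"] * c["cost"])
    return sorted(feasible, key=score, reverse=True)

candidates = [
    {"name": "svcA", "predicted_latency_ms": 120, "predicted_availability": 0.999, "cost": 3.0},
    {"name": "svcB", "predicted_latency_ms": 60,  "predicted_availability": 0.990, "cost": 5.0},
]
ranked = rank_candidates(candidates,
                         {"max_latency_ms": 200, "max_cost": 6.0},
                         {"availability": 1000, "latency": 1, "cost": 10})
print([c["name"] for c in ranked])  # svcB ranks first under these weights
```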
Abstract:
The development of applications for mobile devices is no longer a recent field, yet it continues to grow at a fast pace. The technological progress of recent years and the growing popularity of these devices are evident. This progress is due not only to the great evolution of the devices' characteristics, but also to the possibility of creating innovative, practical applications capable of solving the problems of users in general. Everyday needs thus demand solutions that satisfy users, and nowadays that satisfaction often involves mobile devices, which already play a fundamental role in people's lives. Given the increase in the number of child abductions and the insecurity observed today, which complicate the task of all parents and caregivers seeking to keep their children safe, it is relevant to create a new tool capable of assisting them in this arduous task. This master's dissertation arises from that reality and aims to fulfil the aspects mentioned above. It covers the study and implementation carried out to develop a child monitoring system. The objective of this project is to develop a native Android application and a back-end, using a NoSQL database server to store the information, applying the concepts studied and the existing technologies. The solution's main premises are: being as user-friendly as possible, optimisation, scalability to other situations (other types of monitoring) and the use of the most recent technologies. Accordingly, one of the most in-depth studies in this dissertation concerns NoSQL databases, given their importance in the project.
Abstract:
This dissertation describes the system to support the rational use of electrical energy developed within the Thesis/Dissertation course unit. The application domain falls within the context of European Union Directive 2006/32/EC, which states the need to provide consumers with the information and the means that promote the reduction of consumption and the increase of individual energy efficiency. The objective is to develop a solution that supports the graphical representation of consumption/production, the definition of consumption ceilings, the automatic generation of alerts and alarms, anonymous comparison with customers with an identical profile by region and, for industrial customers, consumption/production forecasting. It is a distributed system composed of a front-end and a back-end. The front-end comprises the user interface applications developed for Android mobile devices and Web browsers. The back-end stores and processes the information and is hosted on a cloud computing platform, the Google App Engine, which provides a standard Web service interface. This option ensures the system's interoperability, scalability and robustness. The design, development and testing of the prototype are described in detail, including: (i) the implemented features for managing and analysing energy consumption and production; (ii) the data structures; (iii) the database and the Web service; and (iv) the tests and debugging performed. Finally, the project is assessed and suggestions for improvement are made.
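The alert and alarm generation can be illustrated with a minimal sketch; the 90% warning threshold and all names are assumptions, not the dissertation's actual rules:

```python
from typing import Optional

def check_ceiling(consumed_kwh: float, ceiling_kwh: float) -> Optional[str]:
    """Compare accumulated consumption against a user-defined ceiling."""
    ratio = consumed_kwh / ceiling_kwh
    if ratio >= 1.0:
        return "ALARM: consumption ceiling exceeded"
    if ratio >= 0.9:
        return "ALERT: 90% of the consumption ceiling reached"
    return None  # within budget: no notification

print(check_ceiling(452.0, 500.0))  # -> ALERT: 90% of the consumption ceiling reached
```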
Abstract:
Near real time media content personalisation is nowadays a major challenge involving media content sources, distributors and viewers. This paper describes an approach to seamless recommendation, negotiation and transaction of personalised media content. It adopts an integrated view of the problem by proposing, on the business-to-business (B2B) side, a brokerage platform to negotiate the media items on behalf of the media content distributors and sources, providing viewers, on the business-to-consumer (B2C) side, with a personalised electronic programme guide (EPG) containing the set of recommended items after negotiation. In this setup, when a viewer connects, the distributor looks up and invites sources to negotiate the contents of the viewer personal EPG. The proposed multi-agent brokerage platform is structured in four layers, modelling the registration, service agreement, partner lookup, invitation as well as item recommendation, negotiation and transaction stages of the B2B processes. The recommendation service is a rule-based switch hybrid filter, including six collaborative and two content-based filters. The rule-based system selects, at runtime, the filter(s) to apply as well as the final set of recommendations to present. The filter selection is based on the data available, ranging from the history of items watched to the ratings and/or tags assigned to the items by the viewer. Additionally, this module implements (i) a novel item stereotype to represent newly arrived items, (ii) a standard user stereotype for new users, (iii) a novel passive user tag cloud stereotype for socially passive users, and (iv) a new content-based filter named the collinearity and proximity similarity (CPS). At the end of the paper, we present off-line results and a case study describing how the recommendation service works. The proposed system provides, to our knowledge, an excellent holistic solution to the problem of recommending multimedia contents.
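The runtime filter selection can be pictured with a small dispatch sketch; the rules and filter names below are illustrative assumptions standing in for the paper's actual rule base over six collaborative and two content-based filters:

```python
def select_filters(viewer):
    # Dispatch on the viewer data available, as the rule-based switch does.
    if not viewer["history"] and not viewer["ratings"] and not viewer["tags"]:
        return ["user-stereotype"]        # new user: fall back to a stereotype
    if viewer["ratings"]:
        return ["collaborative-ratings"]  # explicit feedback available
    if viewer["tags"]:
        return ["collaborative-tags"]     # social tagging available
    return ["content-based-CPS"]          # history only: content-based filter

print(select_filters({"history": ["item42"], "ratings": [], "tags": []}))
```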
Abstract:
In recent years, vehicular cloud computing (VCC) has emerged as a new technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation purposes, as vehicles have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. Such security breaches can have disastrous consequences such as loss of life or financial fraud. To address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of domains, a multimedia-based healthcare application is taken here to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one of the members of the group as cluster-head. The cluster-heads then assist in the efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, standard cryptographic techniques are used, and each automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. The stochastic environment in which an automaton performs its actions issues a reward or a penalty, and the automaton updates its action probability vector after receiving this reinforcement signal. The proposed scheme was evaluated using extensive simulations on ns-2 with SUMO. The results obtained indicate that the proposed scheme yields a 10% improvement in the detection rate of malicious nodes when compared with existing schemes.
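The action-probability update described above matches the classical linear reward-penalty scheme for learning automata; the sketch below is that textbook scheme (the learning rates a and b are arbitrary), not necessarily the paper's exact variant:

```python
def update_probabilities(p, chosen, rewarded, a=0.1, b=0.05):
    """Linear reward-penalty update of an action probability vector p."""
    r = len(p)
    q = p[:]
    if rewarded:   # reinforcement signal = reward
        for j in range(r):
            q[j] = p[j] + a * (1 - p[j]) if j == chosen else (1 - a) * p[j]
    else:          # reinforcement signal = penalty
        for j in range(r):
            q[j] = (1 - b) * p[j] if j == chosen else b / (r - 1) + (1 - b) * p[j]
    return q       # probabilities still sum to 1 in both branches

p = [0.25, 0.25, 0.25, 0.25]                    # e.g. four candidate cluster-heads
p = update_probabilities(p, chosen=2, rewarded=True)
print(p)  # action 2 became more likely, the others were scaled down
```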
Abstract:
This work proposes a novel approach for the suitable orientation of antibodies (Ab) on an immunosensing platform, applied here to the determination of 8-hydroxy-2′-deoxyguanosine (8OHdG), a biomarker of oxidative stress that has been associated with chronic diseases such as cancer. The anti-8OHdG was bound to an amine-modified gold support through its Fc region after activation of its carboxylic functions. Non-oriented approaches to Ab binding on the platform were tested in parallel, to show that the presented methodology favored Ab/Ag affinity and immunodetection of the antigen. The immunosensor design was evaluated by quartz-crystal microbalance with dissipation, atomic force microscopy, electrochemical impedance spectroscopy (EIS) and square-wave voltammetry. EIS was also a suitable technique for following the analytical behavior of the device towards 8OHdG. The affinity binding between 8OHdG and the antibody immobilized on the gold-modified platform increased the charge transfer resistance across the electrochemical set-up. The observed behavior was linear for 8OHdG concentrations from 0.02 to 7.0 ng/mL. The interference from glucose, urea and creatinine was found to be negligible. The approach was also successfully applied to synthetic samples. Overall, the presented approach enabled the production of suitably oriented Abs on a gold platform by means of a much simpler process than other oriented-Ab binding approaches described in the literature, as far as we know, and was successful in terms of analytical features and sample application.
Abstract:
Prostate Specific Antigen (PSA) is the biomarker of choice for screening prostate cancer throughout the population, with PSA values above 10 ng/mL pointing to a high probability of associated cancer [1]. According to the most recent World Health Organization (WHO) data, prostate cancer is the commonest form of cancer in men in Europe [2]. Early detection of prostate cancer is thus very important and is currently made by screening PSA in men over 45 years old, combined with other alterations in serum and urine parameters. PSA is a glycoprotein with a molecular mass of approximately 32 kDa consisting of one polypeptide chain, which is produced by the secretory epithelium of the human prostate. Currently, the standard methods available for PSA screening are immunoassays such as the Enzyme-Linked Immunosorbent Assay (ELISA). These methods are highly sensitive and specific for the detection of PSA, but they require expensive laboratory facilities and highly qualified personnel. Other highly sensitive and specific methods for the detection of PSA have also become available and are mostly immunobiosensors [1,3-5], relying on antibodies. Less expensive methods producing quicker responses are thus needed, which may be achieved by synthesizing artificial antibodies by means of molecular imprinting techniques. These should also be coupled to simple and low-cost devices, such as those of the potentiometric kind, an approach that has been proven successful [6]. Potentiometric sensors offer the advantages of selectivity and portability for use at the point of care and have been widely recognized as potential analytical tools in this field. The inherent method is simple, precise, accurate and inexpensive regarding reagent consumption and the equipment involved. Thus, this work proposes a new plastic antibody for PSA, designed over the surface of graphene layers extracted from graphite. Charged monomers were used to enable an oriented tailoring of the PSA rebinding sites; uncharged monomers were used as control. These materials were used as ionophores in conventional solid-contact graphite electrodes. The obtained results showed that the imprinted materials displayed a selective response to PSA. The electrodes with charged monomers showed a more stable and sensitive response, with an average slope of -44.2 mV/decade and a detection limit of 5.8×10⁻¹¹ mol/L (2 ng/mL). The corresponding non-imprinted sensors showed smaller sensitivity, with average slopes of -24.8 mV/decade. The best sensors were successfully applied to the analysis of serum samples, with recoveries of 106.5% and relative errors of 6.5%.
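As a quick arithmetic check, the molar detection limit agrees with the mass concentration quoted in parentheses when converted with PSA's stated molar mass of approximately 32 kDa:

```latex
\[
5.8\times10^{-11}\,\tfrac{\text{mol}}{\text{L}}
\times 3.2\times10^{4}\,\tfrac{\text{g}}{\text{mol}}
\approx 1.9\times10^{-6}\,\tfrac{\text{g}}{\text{L}}
= 1.9\,\tfrac{\text{ng}}{\text{mL}}
\approx 2\,\tfrac{\text{ng}}{\text{mL}}
\]
```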
Abstract:
Human chorionic gonadotropin (hCG) is a key diagnostic marker of pregnancy and an important biomarker for cancers of the prostate, ovaries and bladder, and is therefore of great importance in diagnosis. For this purpose, a new immunosensor based on screen-printed electrodes (SPEs) is presented here. The device was fabricated by introducing a polyaniline (PANI) conductive layer, via in situ electropolymerization of aniline, onto a screen-printed graphene support. The PANI-coated graphene acts as the working electrode of a three-terminal electrochemical sensor. The working electrode is functionalised with anti-hCG by means of a simple process that enabled oriented antibody binding to the PANI layer: the antibody was attached to PANI following activation of the –COOH group at its Fc terminal. Functionalisation of the electrode was analysed and optimized using Electrochemical Impedance Spectroscopy (EIS). Chemical modification of the surface was characterised using Fourier transform infrared spectroscopy and Raman spectroscopy with confocal microscopy. The graphene–SPE–PANI devices displayed linear responses to hCG in EIS assays from 0.001 to 50 ng mL−1 in real urine, with a detection limit of 0.286 pg mL−1. High selectivity was observed with respect to the constituent components of urine (urea, creatinine, magnesium chloride, calcium chloride, sodium dihydrogen phosphate, ammonium chloride, potassium sulphate and sodium chloride) at their normal levels, with a negligible sensor response to these chemicals. Successful detection of hCG was also achieved in spiked samples of real urine from a pregnant woman. The immunosensor developed is a promising tool for point-of-care detection of hCG, due to its excellent detection capability, simplicity of fabrication, low cost, high sensitivity and selectivity.
Abstract:
Existing gamification services have features that preclude their use by e-learning tools. Odin is a gamification service that mimics the API of state-of-the-art services without these limitations. This paper describes Odin, its role in an e-learning system architecture requiring gamification, and details its implementation. The validation of Odin involved the creation of a small e-learning game, integrated in a Learning Management System (LMS) using the Learning Tools Interoperability (LTI) specification.
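Since the paper's focus is a service whose API e-learning tools call over the network, a hypothetical request in that style may help; the endpoint, payload fields and token below are invented for illustration and are not Odin's documented API:

```python
import requests

# The LMS-integrated game reports a student action to the gamification
# service, which replies with the points and badges awarded (hypothetical).
resp = requests.post(
    "https://odin.example.org/api/actions",
    json={"player": "student42", "action": "exercise_solved", "points": 10},
    headers={"Authorization": "Bearer <api-token>"},
    timeout=5,
)
resp.raise_for_status()
print(resp.json())  # e.g. the updated score and any badges unlocked
```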
Abstract:
It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimises resource consumption by only activating redundant replicas in case of failure, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and thus lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with significantly lower overhead than a strict active redundancy-based approach, achieving high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the interactions needed among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.
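A minimal sketch of passive replication as described above, with all names and timeouts as illustrative assumptions: the primary ships cheap state updates to a dormant replica, which only starts executing once the primary is declared failed.

```python
import time

class PassiveReplica:
    def __init__(self):
        self.state = None
        self.active = False

    def apply_update(self, state):
        """Cheap path: store the primary's state update, do not execute."""
        self.state = state

    def activate(self):
        """Failover path: resume execution from the last received state."""
        self.active = True
        print(f"replica activated from state {self.state!r}")

replica = PassiveReplica()
replica.apply_update({"job": 7, "progress": 0.8})

last_heartbeat = time.monotonic() - 3.0       # pretend the primary went silent
if time.monotonic() - last_heartbeat > 2.0:   # failure-detection timeout
    replica.activate()
```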
Abstract:
In order to increase the efficiency of the use of energy resources, the electrical grid is slowly evolving into a smart(er) grid that allows users to produce and store energy, automatic and remote control of appliances, energy exchange between users and, in general, optimisation of how energy is managed and consumed. One of the main innovations of the smart grid is its organisation into an energy plane, which involves the actual exchange of energy, and a data plane, which concerns the Information and Communication Technology (ICT) infrastructure used to manage the grid's data. In the particular case of the data plane, the exchange of large quantities of data can be facilitated by a middleware based on a messaging bus. Existing messaging buses follow different data management paradigms (e.g. request/response, publish/subscribe, data-oriented messaging) and thus satisfy smart grids' communication requirements to different extents. This work contributes to the state of the art by identifying, in existing standards and architectures, common requirements that affect the messaging system of a data plane for the smart grid. The paper analyses existing messaging bus paradigms that can be used as a basis for the ICT infrastructure of a smart grid and discusses how these can satisfy smart grids' requirements.
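Of the paradigms named above, publish/subscribe is the one most easily shown in a few lines; the toy in-process bus below illustrates the decoupling pattern only (a real data plane would use a broker, and all names here are illustrative):

```python
from collections import defaultdict

class MessageBus:
    """Toy topic-based publish/subscribe bus."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Producers never address consumers directly; the topic decouples them.
        for handler in self.subscribers[topic]:
            handler(message)

bus = MessageBus()
bus.subscribe("meter/readings", lambda m: print("billing got", m))
bus.subscribe("meter/readings", lambda m: print("forecasting got", m))
bus.publish("meter/readings", {"meter": "m-17", "kwh": 0.42})
```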