893 results for centralized server
Abstract:
This dissertation describes the work carried out on the iCOPE project, a platform dedicated to supporting the psychotherapeutic process of people with psychotic disorders. Its conception was motivated by the need to provide a psychotherapeutic medium built on the portability of mobile devices. Development was achieved through a multidisciplinary collaboration guided by occupational therapy specialists and by software engineering. iCOPE is a centralized system in which a patient's progress is recorded and monitored, through a separate application, by an assigned therapist. This philosophy led to the creation of a REST-based API capable of communicating with a database. The API was built using the PHP language together with the Slim micro-framework. The goal of this API is not only to provide an accessible system, but also to conceive a platform that is scalable and extensible (future-proof), should new functionality need to be implemented later. The author of this dissertation was responsible for requirements gathering, the development of the mobile application, and the collaborative development of the data model, the database and the communication API interface. At the end of development, a functional assessment was carried out by the target users, who evaluated the use and integration of the application in their treatment. From the results obtained, conclusions were drawn about the future development of the application and about which other aspects could be integrated to effectively reach more patients.
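The abstract above describes a REST-based API, written in PHP with the Slim micro-framework, through which the mobile application records patient progress in the centralized database. As a rough illustration of that client/server split, here is a minimal sketch, in Java, of how a mobile client might post a progress record to such an API; the base URL, endpoint path and JSON payload are hypothetical and do not reflect the actual iCOPE interface.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/**
 * Minimal sketch of a mobile client reporting a patient's progress to a
 * centralized REST API. The URL, path and JSON field are illustrative
 * assumptions, not the actual iCOPE interface.
 */
public class ProgressClient {

    private static final String BASE_URL = "https://example.org/icope/api"; // hypothetical

    public static int postProgress(String patientId, String note) throws IOException {
        URL url = new URL(BASE_URL + "/patients/" + patientId + "/progress");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Hypothetical payload; the real data model was defined collaboratively.
        String body = "{\"note\":\"" + note + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // e.g. 201 Created on success
    }
}

On the server side, the Slim framework would map the corresponding route to a handler that writes the record to the database; the assigned therapist's application would read the same resource to monitor progress.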
Abstract:
The goal of this thesis is to study and design how a user tracking and guidance system can be implemented using location data, mobile devices and cellular networks. With the system, users can follow other users' locations in real time and guide other users to desired locations using a mobile device. The system is designed to be extensible: the starting point is to implement core components for tracking and guidance, on top of which different kinds of applications can be built. The topics studied include the data transfer capabilities of cellular networks, positioning techniques, the performance and resources of mobile devices, safeguarding user privacy and data transfer, and, more generally, the challenges arising from mobile devices and wireless communication. Based on this research and design, an example application of the system is implemented and subjected to practical testing. The tests measure the system's resource usage and performance and verify that the design decisions work in practice. Finally, the thesis analyses the functioning of the system on the basis of the testing and measurement results.
Abstract:
In the last several years, micro-blogging Online Social Networks (OSNs), such as Twitter, have taken the world by storm, now boasting over 100 million subscribers. As an unparalleled stage for an enormous audience, they offer fast and reliable centralized diffusion of pithy tweets to great multitudes of information-hungry and always-connected followers. At the same time, this information gathering and dissemination paradigm prompts some important privacy concerns about relationships between tweeters, followers and interests of the latter. In this paper, we assess privacy in today's Twitter-like OSNs and describe an architecture and a trial implementation of a privacy-preserving service called Hummingbird. It is essentially a variant of Twitter that protects tweet contents, hashtags and follower interests from the (potentially) prying eyes of the centralized server. We argue that, although inherently limited by Twitter's mission of scalable information-sharing, this degree of privacy is valuable. We demonstrate, via a working prototype, that Hummingbird's additional costs are tolerably low. We also sketch out some viable enhancements that might offer better privacy in the long term.
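The abstract does not spell out Hummingbird's cryptographic construction, but its central premise, that the centralized server should only ever handle data it cannot read, can be illustrated with a much simpler stand-in: client-side authenticated encryption of the tweet body under a key shared only with authorized followers. The Java sketch below uses the JDK's standard AES-GCM cipher; it illustrates that premise only and is not the Hummingbird protocol itself.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class TweetEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Key shared out of band between the tweeter and authorized followers;
        // the server never sees it.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // Fresh nonce per tweet, sent alongside the ciphertext.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("meet at noon".getBytes(StandardCharsets.UTF_8));

        // Only (iv, ciphertext) would be uploaded; the server stores opaque bytes.
        System.out.println("ciphertext length: " + ciphertext.length + " bytes");
    }
}

Protecting hashtags and follower interests, as Hummingbird does, requires more than encrypting the tweet body, since the server must still be able to route tweets to interested followers without learning those interests.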
Abstract:
One of the key factors for a given application to take advantage of cloud computing is the ability to scale in an efficient, fast and reliable way. In centralized multi-party video conferencing, dynamically scaling a running conversation is a complex problem. In this paper we propose a methodology to divide the Multipoint Control Unit (the video conferencing server) into more simple units, broadcasters. Each broadcaster receives the media from a participant, processes it and forwards it to the rest. These broadcasters can be distributed among a group of CPUs. By using this methodology, video conferencing systems can scale in a more granular way, improving the deployment.
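The division of the Multipoint Control Unit into per-participant broadcasters can be pictured as a simple fan-out component: each broadcaster takes one participant's media stream, optionally processes it, and forwards it to every other participant, and each such unit can be placed on a different CPU. Below is a minimal Java sketch of that idea; the interface and class names are illustrative assumptions, not the authors' implementation.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** One broadcaster per participant: receives that participant's media and fans it out. */
public class Broadcaster {

    /** Abstract sink for one receiving participant (e.g. a socket or RTP writer). */
    public interface MediaSink {
        void send(byte[] packet);
    }

    private final String sourceParticipant;
    private final List<MediaSink> receivers = new CopyOnWriteArrayList<>();

    public Broadcaster(String sourceParticipant) {
        this.sourceParticipant = sourceParticipant;
    }

    public void addReceiver(MediaSink sink) {
        receivers.add(sink);
    }

    /** Called for every media packet arriving from the source participant. */
    public void onPacket(byte[] packet) {
        // Any per-stream processing would happen here before forwarding.
        for (MediaSink sink : receivers) {
            sink.send(packet); // forward to the rest of the conference
        }
    }
}

A running conversation then scales by adding or moving individual broadcasters across CPUs, rather than by resizing one monolithic conferencing server.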
Abstract:
This dissertation studies the context-aware application and its proposed algorithms at the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and the mobile devices: context acquisition is centralized at the server to ensure the reusability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better solution overall. The context-aware search application is designed and implemented at the server side, and a new algorithm is proposed that takes the user context profiles into consideration. By feeding back the dynamics of the system, any prior user selection is saved for further analysis so that it may contribute to the results of a subsequent search. On the basis of these developments at the server side, various solutions are provided at the client side. A software-based proxy component is set up for data collection. This research endorses the view that the proxy at the client side should contain the context reasoning component, and the implementation of such a component supports this view in that the context applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from the user's daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a realistic way. The integration of the Yahoo search engine into the context-aware architecture shows how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
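One concrete piece of the client-side design described above is the context cache that keeps recently used context entries on the device, so that repeated lookups do not cost bandwidth, CPU cycles or power by going back to the central context server. A minimal sketch of such a cache in Java is shown below, using a least-recently-used eviction policy; the capacity, key and value types are illustrative assumptions rather than the dissertation's actual scheme.

import java.util.LinkedHashMap;
import java.util.Map;

/** Small LRU cache for context entries held on the mobile device. */
public class ContextCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public ContextCache(int capacity) {
        super(16, 0.75f, true); // access order, i.e. least-recently-used first
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when the device-side budget is exceeded
    }
}

The client-side proxy would consult this cache first and query the central context server only on a miss, which is what keeps context acquisition centralized while reasoning stays on the device.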
Abstract:
Mestrado em Engenharia Electrotécnica e de Computadores (Master's degree in Electrical and Computer Engineering)
Abstract:
The widespread use of information technologies and of the Internet for the most varied purposes, and in the most diverse areas, has created problems in managing IT infrastructures that are without precedent. Network management has become a vital factor for a network to operate in an efficient, productive and profitable way. However, most systems are based on the Simple Network Management Protocol (SNMP), which rests on the client-server model and on a centralized paradigm: there is always a central server that collects and analyses data coming from the different elements spread across the network, while the management data themselves are stored in Management Information Bases (MIBs) located in the various network elements. The current SNMP-based management model has not been able to provide the required response, so new paradigms need to be studied and adopted in order to find an approach capable of increasing the reliability and performance of network management. This work discusses the problems of the traditional network management approach and seeks to demonstrate the usefulness and advantages of an approach based on mobile agents. In parallel, a mobile-agent-based architecture is proposed for a management system to be used in a real case.
Abstract:
Nowadays, due to the incredible growth of the mobile device market, when we want to implement a client-server application we must take the limitations of mobile devices into account. In this paper we discuss which may be the most reliable and fastest way to exchange information between a server and an Android mobile application. This is an important issue because, with a responsive application, the user experience is more enjoyable. We present a study that tests and evaluates two data transfer protocols, socket and HTTP, and three data serialization formats (XML, JSON and Protocol Buffers), using different environments and mobile devices, in order to determine which combination is the most practical and fastest to use.
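The comparison described above boils down to repeatedly timing a request/response exchange of a serialized payload between a device and a server under each protocol and format. The Java sketch below shows the shape of such a measurement for the raw socket case, using an in-process echo server as a stand-in for the remote endpoint and an opaque byte array as a stand-in for an XML, JSON or Protocol Buffers message; the port, payload size and iteration count are illustrative assumptions, not the paper's test setup.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RoundTripBenchmark {

    private static final int ROUNDS = 100;

    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[4 * 1024]; // stand-in for one serialized message

        // Bind first so the client cannot connect before the server is ready.
        try (ServerSocket serverSocket = new ServerSocket(0)) {
            Thread echoServer = new Thread(() -> {
                try (Socket s = serverSocket.accept();
                     DataInputStream in = new DataInputStream(s.getInputStream());
                     DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                    byte[] buf = new byte[payload.length];
                    for (int i = 0; i < ROUNDS; i++) {
                        in.readFully(buf); // receive the "message"
                        out.write(buf);    // echo it back
                        out.flush();
                    }
                } catch (Exception ignored) {
                }
            });
            echoServer.start();

            try (Socket s = new Socket("localhost", serverSocket.getLocalPort());
                 DataOutputStream out = new DataOutputStream(s.getOutputStream());
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                byte[] buf = new byte[payload.length];
                long start = System.nanoTime();
                for (int i = 0; i < ROUNDS; i++) {
                    out.write(payload);
                    out.flush();
                    in.readFully(buf);
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(ROUNDS + " round trips of " + payload.length
                        + " bytes took " + elapsedMs + " ms");
            }
            echoServer.join();
        }
    }
}

The HTTP variant would replace the raw socket with an HttpURLConnection, and the payload would be produced by the serialization library under test.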
Abstract:
The goal of this paper is to show that the DGPS data Internet service we designed and developed provides campus-wide real-time access to Differential GPS (DGPS) data and thus supports precise outdoor navigation. First we describe the developed distributed system in terms of its architecture (a three-tier client/server application), the services provided (real-time DGPS data transportation from remote DGPS sources and campus-wide data dissemination) and the transmission modes implemented (raw and frame mode over TCP and UDP). Then we present and discuss the results obtained and, finally, we draw some conclusions.
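In the "raw mode over UDP" path mentioned above, bytes read from a remote DGPS source are simply forwarded, unchanged, to clients on the campus network. The Java sketch below illustrates that forwarding step only; the client address, port and payload are hypothetical, and the real service also offers a frame mode and TCP transport.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class DgpsUdpRelay {

    /** Forwards one block of correction bytes, exactly as received, to a client. */
    public static void forwardRaw(byte[] correctionBytes, String clientHost, int clientPort)
            throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress client = InetAddress.getByName(clientHost);
            DatagramPacket packet =
                    new DatagramPacket(correctionBytes, correctionBytes.length, client, clientPort);
            socket.send(packet); // raw mode: no framing, the data goes out as-is
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] sample = new byte[64]; // placeholder for data read from the DGPS source
        forwardRaw(sample, "192.0.2.10", 5050); // hypothetical client and port
    }
}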
Abstract:
Multicore platforms have transformed parallelism into a main concern. Parallel programming models are being put forward to provide a better approach for application programmers to expose the opportunities for parallelism by pointing out potentially parallel regions within tasks, leaving the actual and dynamic scheduling of these regions onto processors to be performed at runtime, exploiting the maximum amount of parallelism. It is in this context that this paper proposes a scheduling approach that combines the constant-bandwidth server abstraction with a priority-aware work-stealing load balancing scheme which, while ensuring isolation among tasks, enables parallel tasks to be executed on more than one processor at a given time instant.
Abstract:
Developing an efficient server-based real-time scheduling solution that supports dynamic task-level parallelism is now relevant even to the desktop and embedded domains, and no longer only to the high-performance computing market niche. This paper proposes a novel approach that combines the constant-bandwidth server abstraction with a work-stealing load balancing scheme which, while ensuring isolation among tasks, enables a task to be executed on more than one processor at a given time instant.
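The load-balancing half of the combination described in these last two abstracts rests on work stealing: each worker keeps its own double-ended queue of ready work and operates on one end of it, while idle workers steal from the opposite end of another worker's queue. The Java sketch below shows that core data structure in its simplest, lock-based form; in the schemes proposed above, stealing would additionally be constrained by each task's constant-bandwidth server reservation (and, in the first paper, by priority), which this sketch does not model.

import java.util.ArrayDeque;
import java.util.Deque;

/** Simplest possible work-stealing deque: the owner uses the LIFO end, thieves the FIFO end. */
public class WorkStealingDeque<T> {

    private final Deque<T> deque = new ArrayDeque<>();

    /** Owner pushes newly spawned parallel work. */
    public synchronized void push(T task) {
        deque.addLast(task);
    }

    /** Owner pops its own most recently spawned work (good cache locality). */
    public synchronized T pop() {
        return deque.pollLast();
    }

    /** An idle worker on another processor steals the oldest pending work. */
    public synchronized T steal() {
        return deque.pollFirst();
    }
}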