22 results for Trusted computing platform
at Instituto Politécnico do Porto, Portugal
Abstract:
Patient satisfaction with communication with health professionals is an indicator of the quality of services and institutions. We found no standardized, validated instruments in the literature that assess patient satisfaction with communication with health professionals. The present study aims to build and validate an instrument to assess patient satisfaction with communication with health professionals. We developed the study in three cycles. The first, a literature review, identified dimensions and items of interpersonal communication in healthcare. In the second cycle, we conducted a modified Delphi method in three rounds, using the Survey Monkey online questionnaire platform, with a panel of 25 experts; as the minimum criterion for retaining an item for the following round, we required 70% consensus from the panel. After the three rounds, we obtained an instrument with six communication dimensions (verbal communication, non-verbal communication, empathy, respect, problem solving and support material), twenty-five specific items, plus six generic dimensions that assess each of the dimensions. In the third cycle we evaluated the psychometric properties, in terms of sensitivity, construct validity and reliability, in a sample of 348 participants. The results show that all response categories were represented in all items. Construct validity: factor analysis identified a six-component solution explaining 71% of the total variance. Reliability: item-total correlations range from 0.387 to 0.722, indicating a moderate to strong positive correlation. Cronbach's alpha (α = 0.928) indicates excellent internal consistency. The instrument presents good psychometric properties, making a new tool available to support the management and planning processes needed to improve quality in health services and institutions.
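For reference, the internal-consistency measure reported above is the standard Cronbach's alpha; a minimal statement of the usual formula (general background, not taken from the study itself) is:

```latex
% Cronbach's alpha for k items, where sigma_i^2 is the variance of item i
% and sigma_X^2 is the variance of the total score X (the sum of all items).
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{X}^{2}}\right)
```

With the reported value of 0.928, the scale sits above the 0.9 level conventionally described as excellent internal consistency.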
Abstract:
This paper presents a system developed to promote the rational use of electric energy among consumers and thus increase energy efficiency. The goal is to provide energy consumers with an application that displays energy consumption/production profiles, sets consumption ceilings, defines automatic alerts and alarms, anonymously compares consumers with similar energy usage profiles by region and, in the case of non-residential installations, predicts expected consumption/production values. The resulting distributed system is organized in two main blocks: front-end and back-end. The front-end includes user interface applications for Android mobile devices and Web browsers. The back-end provides data storage and processing functionality and is installed on a cloud computing platform - the Google App Engine - which provides a standard Web service interface. This option ensures the interoperability, scalability and robustness of the system.
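As an illustration of the kind of Web-service interface such a back-end might expose, the sketch below shows a minimal consumption-profile endpoint. It is a hypothetical Flask-style example, not the actual App Engine service described above; the route, field names and data are assumptions.

```python
# Hypothetical sketch of a consumption-profile endpoint for an energy back-end.
# Not the actual system described in the abstract; names and fields are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

# In the real system this would come from cloud storage; here it is a stub.
FAKE_PROFILES = {
    "meter-001": [{"hour": h, "kwh": 0.4 + 0.05 * h} for h in range(24)],
}

@app.route("/api/consumption/<meter_id>")
def consumption_profile(meter_id):
    """Return the hourly consumption profile for one installation as JSON."""
    profile = FAKE_PROFILES.get(meter_id)
    if profile is None:
        return jsonify({"error": "unknown meter"}), 404
    return jsonify({"meter": meter_id, "profile": profile})

if __name__ == "__main__":
    app.run()
```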
Abstract:
The massive use of the Internet and of the services it offers by end users drives the evolution of those services, motivating companies to invest in developing this type of solution. Requirements such as computing power, flexibility and scalability are increasingly inseparable from application development, which has led to the emergence of paradigms such as Cloud Computing. This is the context of the present work. In order to study the Cloud Computing paradigm, a study of the subject is first carried out, detailing its concept and historical evolution and comparing the different deployment types it supports. The study then examines the Azure platform, analysing its topology and architecture and detailing its components and the way it mitigates some of the problems mentioned. Building on this theoretical background, a practical prototype is developed on the platform, exploring some of the particularities of its topology and interacting with the main social networks. The study concludes with an analysis of the benefits and drawbacks of Azure and, based on a survey of the company's needs, identifies the opportunities that using the platform may provide.
Abstract:
This paper presents the TEC4SEA research infrastructure created in Portugal to support research, development, and validation of marine technologies. It is a multidisciplinary open platform, capable of supporting research, development, and testing of marine robotics, telecommunications, and sensing technologies for monitoring and operating in the ocean environment. Due to its installed research facilities and privileged geographic location, it allows fast access to the deep sea and can support multidisciplinary research, enabling full validation and evaluation of technological solutions designed for the ocean environment. It is a vertically integrated infrastructure, in the sense that it possesses a set of skills and resources ranging from pure conceptual research to field deployment missions, with strong industrial and logistic capacities in the middle tier of prototype production. TEC4SEA is open to the entire scientific and enterprise community, with a free access policy for researchers affiliated with the research units that ensure its maintenance and sustainability. The paper describes the infrastructure in detail and discusses associated research programs, providing a strategic vision for deep sea research initiatives within the context of both the Portuguese National Ocean Strategy and European Strategy frameworks.
Abstract:
The increasing number of players operating in power systems leads to more complex management. In this paper a new multi-agent platform that simulates the real operation of power system players is proposed: MASGriP, a Multi-Agent Smart Grid Simulation Platform. Several consumer and producer agents are implemented and simulated, considering real characteristics and different goals and actuation strategies. Aggregator entities, such as Virtual Power Players and Curtailment Service Providers, are also included. The integration of MASGriP agents in the MASCEM (Multi-Agent System for Competitive Electricity Markets) simulator allows the simulation of the technical and economic activities of several players. An energy resources management architecture used in microgrids is also explained.
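A schematic of how such player agents and an aggregator could be organised is sketched below. This is a conceptual illustration only; the class names, profiles and aggregation logic are assumptions, not the MASGriP code.

```python
# Conceptual sketch of smart-grid player agents and an aggregating entity.
# Names and behaviours are illustrative assumptions, not the MASGriP implementation.
from dataclasses import dataclass

@dataclass
class PlayerAgent:
    name: str
    def power_kw(self, hour: int) -> float:
        raise NotImplementedError

@dataclass
class ConsumerAgent(PlayerAgent):
    base_load_kw: float
    def power_kw(self, hour: int) -> float:
        # Simple evening-peak consumption profile (negative = consumption).
        peak = 1.5 if 18 <= hour <= 22 else 1.0
        return -self.base_load_kw * peak

@dataclass
class ProducerAgent(PlayerAgent):
    capacity_kw: float
    def power_kw(self, hour: int) -> float:
        # Photovoltaic-like production, available only during daylight hours.
        return self.capacity_kw if 8 <= hour <= 18 else 0.0

class VirtualPowerPlayer:
    """Aggregates several players and reports their net power balance."""
    def __init__(self, players):
        self.players = players
    def net_balance_kw(self, hour: int) -> float:
        return sum(p.power_kw(hour) for p in self.players)

vpp = VirtualPowerPlayer([ConsumerAgent("house", 2.0), ProducerAgent("pv", 3.0)])
print(vpp.net_balance_kw(12))  # positive value -> surplus at noon
```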
Abstract:
Urban Computing (UrC) provides users with situation-appropriate information by considering the context of users, devices, and the social and physical environment in urban life. Combined with social network services, UrC makes it possible for people with common interests to organize a virtual society through the exchange of context information. In these settings, people and personal devices are vulnerable to fake and misleading context information transferred by attackers from unauthorized and unauthenticated servers. So-called smart devices, which act automatically on certain context events, are even more vulnerable if they are not prepared for such attacks. In this paper, we illustrate some UrC service scenarios and identify the important context information, possible threats, protection methods, and secure context management for people.
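As a minimal illustration of one common protection against forged context messages, the sketch below authenticates context updates with an HMAC over a pre-shared key. This is a generic example of message authentication, not the specific secure context-management scheme proposed in the paper; key handling and field names are assumptions.

```python
# Generic sketch: authenticating context messages with an HMAC and a pre-shared key.
# Illustrates one standard defence against forged context updates; not the paper's scheme.
import hashlib
import hmac
import json

SHARED_KEY = b"example-pre-shared-key"   # would be provisioned securely in practice

def sign_context(context: dict) -> dict:
    payload = json.dumps(context, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"context": context, "mac": tag}

def verify_context(message: dict) -> bool:
    payload = json.dumps(message["context"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign_context({"location": "bus-stop-12", "temperature_c": 21})
assert verify_context(msg)              # authentic message is accepted
msg["context"]["temperature_c"] = -40   # attacker tampers with the context
assert not verify_context(msg)          # tampered message is rejected
```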
Abstract:
One of the most difficult issues in e-Learning is student assessment. While this is already a demanding task for theoretical topics, it becomes even more challenging when the topics under evaluation are practical. ISCAP's Information Systems Department comprises about twenty teachers who have for several years used an e-learning environment (currently Moodle 2.3) combined with traditional assessment. They are now planning and implementing a new e-learning assessment strategy. This effort was undertaken in order to evaluate a practical topic (the use of spreadsheets to solve management problems) common to courses shared by several undergraduate degree programs. The same team is already experienced in assessing theoretical information systems topics using the b-learning platform; this project therefore works as an extension of previous experiences, with the team aware of the additional difficulties posed by the practical nature of the topics. This paper describes the project and presents the two cycles of the action research methodology used to conduct the research. The goal of the first cycle was to produce a database of questions. When it was used with a pilot group of students, several problems were identified. The second cycle therefore consisted in solving those problems and preparing the database and all the players for a broader implementation. For each cycle, all the phases, drawbacks and achievements are described. This paper suits all those who are, or plan to be, shifting their assessment strategy from a traditional one to one supported by an e-learning platform.
Abstract:
Field communication systems (fieldbuses) are widely used as the communication support for distributed computer-controlled systems (DCCS) in all sorts of process control and manufacturing applications. There are several advantages in using fieldbuses to replace the traditional point-to-point links between sensors/actuators and computer-based control systems, the most relevant being the decentralisation and distribution of processing power over the field. A widely used fieldbus is WorldFIP, which is standardised as European standard EN 50170. When using WorldFIP to support DCCS, an important issue is how to guarantee the timing requirements of the real-time traffic. WorldFIP has very interesting mechanisms to schedule data transfers, since it explicitly distinguishes periodic from aperiodic traffic. In this paper, we describe how WorldFIP handles these two types of traffic and, more importantly, provide a comprehensive analysis of how to guarantee the timing requirements of the real-time traffic.
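To make the idea of guaranteeing periodic traffic more concrete, the toy check below verifies that a set of periodic variables fits within every elementary cycle of the bus schedule. It is a deliberately simplified illustration with assumed numbers, not the WorldFIP timing analysis developed in the paper.

```python
# Toy schedulability check for periodic bus traffic (illustrative simplification only;
# periods, transfer times and the cycle length are assumed values, not WorldFIP figures).
# Each variable i has a production period T_i (in elementary cycles) and a transfer
# time C_i (in ms). All variables due in a cycle must fit within that cycle, with the
# remaining time left for aperiodic traffic.
import math

ELEMENTARY_CYCLE_MS = 5.0
variables = [(1, 0.6), (2, 0.8), (2, 0.8), (4, 1.0)]   # (period in cycles, C_i in ms)

def worst_cycle_load(vars_):
    """Return the busiest elementary-cycle load over one macrocycle."""
    macrocycle = math.lcm(*[t for t, _ in vars_])
    return max(sum(c for t, c in vars_ if cycle % t == 0)
               for cycle in range(macrocycle))

load = worst_cycle_load(variables)
print(f"worst-case periodic load: {load:.1f} ms of {ELEMENTARY_CYCLE_MS} ms")
assert load <= ELEMENTARY_CYCLE_MS   # periodic traffic fits; the slack serves aperiodic traffic
```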
Abstract:
We focus on large-scale, dense, deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems. For example, MIN can be computed efficiently, and an interpolation function that approximates sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to enable highly scalable aggregate computations in wireless systems, but no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with appropriate characteristics for making dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, interpolation) using the new implementation of the dominance-based MAC protocol, and (iv) performing experiments to prove that such highly scalable aggregate computations in wireless networks are possible.
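The core idea behind computing MIN with a dominance-based MAC can be sketched as follows: every node transmits its value bit by bit, most significant bit first; a dominant bit (0, as on a CAN-like channel) overwrites a recessive bit (1), and any node that hears a dominant bit while sending a recessive one withdraws. The surviving bit pattern is the network-wide minimum, obtained in one arbitration round. The simulation below illustrates this principle over an idealised channel; it is not the hardware platform or protocol implementation described in the paper.

```python
# Simulation of dominance-based arbitration for computing MIN over an idealised channel.
def dominance_min(values, bits=8):
    contenders = set(values)
    result = 0
    for bit in reversed(range(bits)):                        # MSB first
        channel = min((v >> bit) & 1 for v in contenders)    # 0 dominates 1
        result = (result << 1) | channel
        # Nodes that sent a recessive bit while the channel was dominant withdraw.
        contenders = {v for v in contenders if (v >> bit) & 1 == channel}
    return result

readings = [183, 42, 97, 42, 201]            # example sensor readings (assumed values)
assert dominance_min(readings) == min(readings)
print(dominance_min(readings))               # 42, learned by all nodes in one round
```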
Abstract:
More than ever, economic globalization is creating the need to increase business competitiveness. Lean manufacturing is a management philosophy oriented towards eliminating activities that do not create any type of value and are thus considered waste. One of its main differences from other management philosophies is the shop-floor focus and the operators' involvement; the training of all organization levels is therefore crucial for the success of lean manufacturing. Universities should also participate actively in this process by developing students' lean management skills and promoting a better and faster integration of students into their future organizations. This paper proposes a single realistic manufacturing platform, involving production and assembly operations, to learn by playing many of the lean tools, such as VSM, 5S, SMED, poka-yoke, line balancing, TPM, Mizusumashi, plant layout, and JIT/kanban. This simulation game was built in close cooperation with experienced lean companies under the international program "Lean Learning Academy" (http://www.leanlearningacademy.eu/), and its main aim is to make bachelor and master courses in applied sciences more attractive by integrating classic lectures with a simulated production environment, which can result in more motivated students and higher study yields. The simulation game results show that our approach is effective in providing a realistic platform for learning lean principles, tools, and mindset, and that it can easily be included in course classes of less than two hours.
Abstract:
Within the pedagogical community, Serious Games have arisen as a viable alternative to traditional course-based learning materials. Until now, they have been based strictly on software solutions. Meanwhile, research into Remote Laboratories has shown that they are a viable, low-cost solution for experimentation in an engineering context, providing uninterrupted access, low-maintenance requirements, and a heightened sense of reality when compared to simulations. This paper will propose a solution where both approaches are combined to deliver a Remote Laboratory-based Serious Game for use in engineering and school education. The platform for this system is the WebLab-Deusto Framework, already well-tested within the remote laboratory context, and based on open standards. The laboratory allows users to control a mobile robot in a labyrinth environment and take part in an interactive game where they must locate and correctly answer several questions, the subject of which can be adapted to educators' needs. It also integrates the Google Blockly graphical programming language, allowing students to learn basic programming and logic principles without needing to understand complex syntax.
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit, otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies that must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
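As a purely abstract illustration of the defragmentation idea, the sketch below compacts allocated blocks in a one-dimensional logic space, mimicking relocation by replication: a block is first copied to its new position and only then released from the old one, so it is never taken offline. This is a generic sketch of the concept, not the paper's non-intrusive active replication procedure, and it ignores FPGA routing and placement constraints.

```python
# Abstract illustration of defragmenting a 1-D logic space by relocating blocks.
# Generic sketch only; it does not model real FPGA placement or the paper's procedure.
def defragment(blocks, space_size):
    """blocks: list of (name, start, length). Returns a compacted placement and free space."""
    placed, cursor = [], 0
    for name, _start, length in sorted(blocks, key=lambda b: b[1]):
        placed.append((name, cursor, length))   # copy the block to its new location...
        cursor += length                        # ...then its old slot becomes free space
    return placed, space_size - cursor

blocks = [("A", 0, 3), ("B", 5, 2), ("C", 10, 4)]   # fragmented: holes at 3-4 and 7-9
placed, free = defragment(blocks, space_size=16)
print(placed)   # [('A', 0, 3), ('B', 3, 2), ('C', 5, 4)]
print(free)     # 7 contiguous free units remain at the end
```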
Abstract:
Composition is a practice of key importance in software engineering. When real-time applications are composed, their timing properties (such as meeting deadlines) must be guaranteed. Composition is performed by establishing an interface between the application and the physical platform. Such an interface typically contains information about the amount of computing capacity needed by the application. For multiprocessor platforms, the interface should also convey information about the degree of parallelism. Several interface proposals have recently been put forward in various research works; however, those interfaces are either too complex to handle or too pessimistic. In this paper we propose the generalized multiprocessor periodic resource model (GMPR), which is strictly superior to the MPR model without requiring an overly detailed description. We then derive a method to compute the interface from the application specification. This method has been implemented in Matlab routines that are publicly available.
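As a rough schematic of what a multiprocessor resource interface of this kind carries, the sketch below pairs a replenishment period with a capacity budget and a parallelism bound. The field names and the bandwidth calculation are illustrative assumptions; this is not the formal GMPR definition nor the paper's Matlab routines.

```python
# Rough schematic of a multiprocessor periodic resource interface.
# Fields and the bandwidth calculation are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class MultiprocessorInterface:
    period: float          # replenishment period of the resource supply
    budget: float          # total computing capacity supplied per period
    max_parallelism: int   # how many processors may supply capacity simultaneously

    def bandwidth(self) -> float:
        """Long-run supplied capacity as a multiple of one processor (may exceed 1)."""
        return self.budget / self.period

iface = MultiprocessorInterface(period=10.0, budget=25.0, max_parallelism=3)
print(iface.bandwidth())   # 2.5 processors' worth of capacity on average
```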
Abstract:
Media content personalisation is a major challenge involving viewers as well as media content producer and distributor businesses. The goal is to provide viewers with media items aligned with their interests. Producers and distributors engage in item negotiations to establish the corresponding service level agreements (SLA). In order to address automated partner lookup and item SLA negotiation, this paper proposes the MultiMedia Brokerage (MMB) platform, a multiagent system that negotiates SLA regarding media items on behalf of media content producer and distributor businesses. The MMB platform is structured in four service layers: interface, agreement management, business modelling and market. In this context, there are: (i) brokerage SLA (bSLA), which are established between individual businesses and the platform regarding the provision of brokerage services; and (ii) item SLA (iSLA), which are established between producer and distributor businesses regarding the provision of media items. In particular, this paper describes the negotiation, establishment and enforcement of bSLA and iSLA, which occur at the agreement and negotiation layers, respectively. The platform adopts a pay-per-use business model in which the bSLA define the general conditions that apply to the related iSLA. To illustrate this process, we present a case study describing the negotiation of a bSLA instance and several related iSLA instances, the latter corresponding to the negotiation of the Electronic Program Guide (EPG) for a specific end viewer.
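A compact way to picture the two agreement types is sketched below: a bSLA binding a business to the platform, and an iSLA binding a producer to a distributor for one media item under the governing bSLA. Field names are hypothetical; this is not the MMB platform's data model.

```python
# Hypothetical data structures for the two agreement types handled by such a brokerage
# platform. Field names and values are illustrative, not the MMB data model.
from dataclasses import dataclass

@dataclass
class BrokerageSLA:          # bSLA: business <-> platform, pay-per-use brokerage terms
    business: str
    fee_per_negotiation: float
    valid_days: int

@dataclass
class ItemSLA:               # iSLA: producer <-> distributor, terms for one media item
    producer: str
    distributor: str
    item: str
    price: float
    bsla: BrokerageSLA       # general conditions of the governing bSLA apply

bsla = BrokerageSLA(business="DistributorCo", fee_per_negotiation=0.05, valid_days=365)
isla = ItemSLA("ProducerCo", "DistributorCo", "EPG-slot-prime-time", 120.0, bsla)
print(isla)
```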
Abstract:
Environmental concerns and the shortage of fossil fuel reserves have been driving the growth and globalization of distributed generation. Another resource of increasing importance is demand response, which is used to change consumers' consumption profiles and help reduce peak demand. The Curtailment Service Provider emerged to support small players' participation in demand response events, acting as an aggregator for such events. The control of small and medium players acting in smart grid and microgrid environments is enhanced with a multi-agent system that employs artificial intelligence techniques – MASGriP (Multi-Agent Smart Grid Platform). Using strategic behaviours in each player, this system simulates the profile of real players by means of software agents. This paper shows the importance of modeling these behaviours when studying this type of scenario. A case study with three examples shows the differences between each player and the best behaviour to achieve the highest profit in each situation.
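To picture the aggregator role described above, the following sketch selects consumer curtailment offers, cheapest first, until a requested demand reduction is covered. Player names, figures and the selection rule are hypothetical; this is not the strategic behaviour modelling used in MASGriP.

```python
# Hypothetical sketch of a Curtailment Service Provider aggregating small players'
# curtailment offers for a demand response event. Names and figures are illustrative.
offers = [                          # (player, curtailable kW, asking price per kWh)
    ("house-A", 1.0, 0.20),
    ("shop-B",  3.0, 0.15),
    ("office-C", 5.0, 0.25),
]

def select_offers(offers, required_kw):
    """Pick the cheapest offers first until the requested reduction is met."""
    chosen, total = [], 0.0
    for player, kw, price in sorted(offers, key=lambda o: o[2]):
        if total >= required_kw:
            break
        chosen.append((player, kw, price))
        total += kw
    return chosen, total

chosen, total = select_offers(offers, required_kw=4.0)
print(chosen)   # shop-B then house-A cover the requested 4 kW reduction
print(total)    # 4.0
```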