762 results for Trusted computing platform
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit, otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies that must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
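A minimal sketch of the replicate-then-release step that underlies such online testing and defragmentation, modelling the logic space as numbered slots (Python; block names and slot layout are hypothetical illustration only, not the paper's implementation):

def replicate_and_release(allocation, block, free_slots):
    """Move `block` to a free slot so its original slot can be tested."""
    target = next(s for s in free_slots if s not in allocation.values())
    old_slot = allocation[block]
    allocation[block] = target   # the active replica takes over seamlessly
    return old_slot              # the original slot is now free for a structural test

allocation = {"app_A": 0, "app_B": 3}   # block -> slot index
freed = replicate_and_release(allocation, "app_A", free_slots=[1, 2])
print(f"slot {freed} released for structural test; app_A replica now at slot {allocation['app_A']}")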
Abstract:
Corporate organizational structures are increasingly required to guarantee high standards of service quality while also ensuring the sustainability of those structures and the alignment of investments with business strategies. This development forces the information and communication technologies area to rethink current strategies, seeking new models that are more agile and better able to fit these new requirements. In this context, digital identity platforms are expected to play a decisive role in the development of these new models, since they are a unique instrument for implementing heterogeneous, interoperable platforms with high levels of security and guaranteed control over access to information. The work presented here aims to investigate and develop a digital identity platform and a test platform that allow the Politécnico do Porto to acquire an Information and Communication Technologies infrastructure that becomes a fundamental instrument for the continuous development, quality assurance and sustainability of all services provided to its community.
Abstract:
This paper shows that a hierarchical architecture, distributing several control actions in growing levels of complexity and using reconfigurable computing resources, makes it possible to accommodate future modifications, updates and improvements in robotic applications. An experimental example of the control of a Stewart-Gough platform (a platform applied as the solution to countless practical problems) is presented using reconfigurable computing. The software and hardware developed are structured in independent blocks. This open architecture implementation allows easy expansion of the system and better adaptation of the platform to its related tasks.
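Purely as a software illustration of the layered idea (the paper's blocks are hardware/software co-designed; the class names and the placeholder kinematics below are invented), each control level can be replaced independently:

class ActuatorLevel:
    def command(self, leg_lengths):
        print("low level: drive legs to", leg_lengths)

class KinematicsLevel:
    def __init__(self, lower): self.lower = lower
    def move_to(self, pose):
        # placeholder inverse kinematics: derive one length per leg from the pose
        lengths = [round(sum(pose) / 6 + i * 0.01, 3) for i in range(6)]
        self.lower.command(lengths)

class TaskLevel:
    def __init__(self, lower): self.lower = lower
    def run(self, waypoints):
        for pose in waypoints: self.lower.move_to(pose)

# each level can be swapped out independently, mirroring the open architecture
TaskLevel(KinematicsLevel(ActuatorLevel())).run([(0, 0, 0.5, 0, 0, 0)])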
Abstract:
For many, the act of teaching was and remains an "art", in which the most effective teachers and great masters are those with the ability to convey their messages and knowledge in a simple and appealing way, regardless of the field of study. Class-related information is increasingly digital, so it is important for teachers to master technologies for creating, organizing and making content available. This sharing was initially made possible by Web pages and later by LMS (Learning Management System) platforms. Creating a website was a complicated task, both in terms of cost and of mastering Web technology, and it was sometimes necessary to hire professionals for the purpose. CMS (Content Management Systems) then emerged: Open Source technologies that enable content management. In this context, a study was carried out with the goal of assessing teachers' competences in the field of digital content management and sharing. The study made it possible to draw conclusions about the potential and applicability of CMS in education. Its main objective focused on the potential for distributing and sharing Digital Educational Resources, organized from a pedagogical point of view, with students. The role of Cloud Computing in the collaborative document-sharing process was also analyzed and studied. As support for this research, a model course was designed and implemented on the three main CMS available today, and the potential of each in this context was evaluated. Finally, the conclusions drawn from the study were presented.
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches to extract semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
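A minimal sketch of the path-based idea follows, using a toy graph in place of the ontological graph that Shakti extracts from DBpedia's SPARQL endpoint; the edge set and the weighting scheme (shorter connecting paths contribute more) are illustrative assumptions, not the paper's formula:

import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Punk_rock", "Rock_music"), ("Indie_rock", "Rock_music"),
    ("Punk_rock", "The_Clash"), ("Indie_rock", "Pixies"),
])

def relatedness(a, b, cutoff=4):
    # weight each connecting path by 1 / 2^(edges), then sum
    paths = nx.all_simple_paths(g, a, b, cutoff=cutoff)
    return sum(0.5 ** (len(p) - 1) for p in paths)

print(relatedness("The_Clash", "Pixies"))   # small but non-zero: one 4-edge path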
Abstract:
This paper reports on a first step towards the implementation of a framework for remote experimentation of electric machines: the RemoteLabs platform. This project was focused on the development of two main modules: the Web-based user interface and the electric machines interface. The Web application provides the user with a front-end and interacts with the back-end (the user and experiment persistent data). The electric machines interface is implemented as a distributed client-server application where the clients, launched by the Web application, interact with the server modules located in platforms physically connected to the electric machine drives. Users can register and authenticate, schedule, specify and run experiments, and obtain results in the form of CSV, XML and PDF files. These functionalities were successfully tested with real data, but still without including the electric machines. This inclusion is part of another project scheduled to start soon.
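The paper does not publish its API, so the endpoint paths, field names and base URL in the following client-side sketch of the register/schedule/fetch-results flow are hypothetical:

import requests

BASE = "http://remotelabs.example.edu/api"   # placeholder URL

session = requests.Session()
session.post(f"{BASE}/login", data={"user": "student", "password": "secret"})
session.post(f"{BASE}/experiments", json={"machine": "induction-1",
                                          "start": "2015-06-01T10:00"})
# results are also offered as XML and PDF; experiment id 42 is made up
csv_data = session.get(f"{BASE}/experiments/42/results.csv").text
print(csv_data[:200])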
Abstract:
This paper reports the development of a B2B platform for the personalization of the advertising transmitted during program intervals. The platform as a whole must ensure that the intervals are filled with ads compatible with the profile, context and expressed interests of the viewers. The platform acts as an electronic marketplace for advertising agencies (content producer companies) and multimedia content providers (content distribution companies). The companies, once registered at the platform, are represented by agents who automatically negotiate the price of the interval timeslots according to the specified price range and adaptation behaviour. The candidate ads for a given viewer interval are selected through a matching mechanism between the ad, viewer and current context (program being watched) profiles. The overall architecture of the platform consists of a multiagent system organized into three layers: (i) interface agents that interact with companies; (ii) enterprise agents that model the companies; and (iii) delegate agents that negotiate a specific ad or interval. The negotiation follows a variant of the Iterated Contract Net Interaction Protocol (ICNIP) and is based on the prices offered by the advertising agencies to occupy the viewer's interval.
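The paper does not give the matching formula; the sketch below shows one plausible mechanism, scoring an ad against the viewer and context profiles with weighted Jaccard overlaps (weights and profile fields are assumptions):

def overlap(a, b):
    # Jaccard similarity between two keyword sets
    return len(a & b) / len(a | b) if a | b else 0.0

def match_score(ad, viewer, context, w_viewer=0.6, w_context=0.4):
    return w_viewer * overlap(ad, viewer) + w_context * overlap(ad, context)

ad = {"sports", "shoes"}; viewer = {"sports", "music"}; ctx = {"sports"}
print(match_score(ad, viewer, ctx))   # candidate ads would be ranked by this score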
Abstract:
This paper proposes and reports the development of an open source solution for the integrated management of Infrastructure as a Service (IaaS) cloud computing resources, through the use of a common API taxonomy, to incorporate open source and proprietary platforms. This research included two surveys on open source IaaS platforms (OpenNebula, OpenStack and CloudStack) and a proprietary platform (Parallels Automation for Cloud Infrastructure - PACI) as well as on IaaS abstraction solutions (jClouds, Libcloud and Deltacloud), followed by a thorough comparison to determine the best approach. The adopted implementation reuses the Apache Deltacloud open source abstraction framework, which relies on the development of software driver modules to interface with different IaaS platforms, and involved the development of a new Deltacloud driver for PACI. The resulting interoperable solution successfully incorporates OpenNebula, OpenStack (reuses pre-existing drivers) and PACI (includes the developed Deltacloud PACI driver) nodes and provides a Web dashboard and a Representational State Transfer (REST) interface library. The results of the exchanged data payload and time response tests performed are presented and discussed. The conclusions show that open source abstraction tools like Deltacloud allow the modular and integrated management of IaaS platforms (open source and proprietary), introduce relevant time and negligible data overheads and, as a result, can be adopted by Small and Medium-sized Enterprise (SME) cloud providers to circumvent the vendor lock-in problem whenever service response time is not critical.
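As an illustration of the abstraction layer, a client sketch against a Deltacloud server's REST interface follows; it assumes a local server started with the mock driver, and the port, credentials and JSON field names may differ per setup:

import requests

API = "http://localhost:3001/api"   # assumed default for a local deltacloudd
resp = requests.get(f"{API}/instances",
                    headers={"Accept": "application/json"},
                    auth=("mockuser", "mockpassword"))   # mock-driver credentials
for inst in resp.json().get("instances", []):
    # the same call works unchanged against OpenNebula, OpenStack or PACI
    # nodes once the corresponding driver module is loaded
    print(inst.get("id"), inst.get("state"))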
Abstract:
Wireless Body Area Networks (WBANs) have emerged as a promising technology for medical and non-medical applications. WBANs consist of a number of miniaturized, portable, and autonomous sensor nodes that are used for long-term health monitoring of patients. These sensor nodes continuously collect patient information, which is used for ubiquitous health monitoring. In addition, WBANs may be used for managing catastrophic events and increasing the effectiveness and performance of rescue forces. The huge amount of data collected by WBAN nodes demands scalable, on-demand, powerful, and secure storage and processing infrastructure. Cloud computing is expected to play a significant role in achieving these objectives. The cloud computing environment links different devices, ranging from miniaturized sensor nodes to high-performance supercomputers, to deliver people-centric and context-centric services to individuals and industries. The possible integration of WBANs with cloud computing (WBAN-cloud) will result in a viable hybrid platform that must be able to process the huge amount of data collected from multiple WBANs. This WBAN-cloud will enable users (including physicians and nurses) to globally access the processing and storage infrastructure at competitive costs. Because WBANs forward useful and life-critical information to the cloud, which may operate in distributed and hostile environments, novel security mechanisms are required to prevent malicious interactions with the storage infrastructure. Both the cloud providers and the users must take strong security measures to protect the storage infrastructure.
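As one concrete example of the kind of security measure called for, the sketch below encrypts a sensor reading at the WBAN gateway before it leaves for the cloud (Python with the cryptography package; key distribution is out of scope and the payload is invented):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared only with authorized readers
aesgcm = AESGCM(key)
nonce = os.urandom(12)

reading = b'{"patient": "p-017", "hr_bpm": 72}'
ciphertext = aesgcm.encrypt(nonce, reading, b"wban-node-3")   # node id as AAD

# the cloud stores (nonce, ciphertext) and never sees the plaintext
print(aesgcm.decrypt(nonce, ciphertext, b"wban-node-3"))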
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors; such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, where the same restriction on task migration applies (intra-migrative), but given a platform in which only one processor of each type is 1 + α × (t-1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative), but given a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all the task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state of the art.
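A worked example of these speedup factors, with values assumed for illustration: take t = 2 processor types and α = 0.5 (the largest task utilization not exceeding one). LPGIM then needs one processor of each type to be 1.25× faster, while LPGNM needs every processor to be 1.5× faster:

t, alpha = 2, 0.5

intra_migrative_speedup = 1 + alpha * (t - 1) / t   # LPGIM: 1.25x, one processor per type
non_migrative_speedup = 1 + alpha                   # LPGNM: 1.5x, every processor

print(intra_migrative_speedup, non_migrative_speedup)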
Abstract:
Consider scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and a platform, if there exists a feasible task-to-processor assignment, then LPC succeeds in finding such a feasible task-to-processor assignment as well but on a platform in which each processor is 1.5 × faster and has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
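The abstract does not include the LP formulation; the sketch below shows a toy LP relaxation of task-to-processor-type assignment (two tasks, two processor types, utilizations invented), the kind of relaxation on which cutting planes are layered:

from scipy.optimize import linprog

# x = [x11, x12, x21, x22]: fraction of task i assigned to processor type j
util = [0.6, 0.9, 0.8, 0.5]               # u(task i, type j), flattened
res = linprog(
    c=util,                                # minimize assigned utilization
    A_eq=[[1, 1, 0, 0], [0, 0, 1, 1]],     # each task fully assigned
    b_eq=[1, 1],
    A_ub=[[0.6, 0, 0.8, 0],                # type-1 processor capacity
          [0, 0.9, 0, 0.5]],               # type-2 processor capacity
    b_ub=[1, 1],
    bounds=(0, 1),
)
print(res.x)   # integral here; in general, fractional values need cuts/rounding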
Abstract:
Composition is a practice of key importance in software engineering. When real-time applications are composed, their timing properties (such as meeting deadlines) must be guaranteed. The composition is performed by establishing an interface between the application and the physical platform. Such an interface typically contains information about the amount of computing capacity needed by the application. For multiprocessor platforms, the interface should also convey information about the degree of parallelism. Several interface proposals have recently been put forward in various research works. However, those interfaces are either too complex to handle or too pessimistic. In this paper we propose the generalized multiprocessor periodic resource model (GMPR), which is strictly superior to the MPR model without requiring an overly detailed description. We then derive a method to compute the interface from the application specification. This method has been implemented in Matlab routines that are publicly available.
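As an illustration only (the notation is assumed here and the actual interface computation is in the paper's published Matlab routines), a GMPR-style interface can be thought of as carrying a period, a budget and a parallelism bound:

from dataclasses import dataclass

@dataclass
class GMPRInterface:
    period: float        # replenishment period
    budget: float        # execution units supplied per period
    parallelism: int     # max processors used simultaneously

    def bandwidth(self) -> float:
        return self.budget / self.period

iface = GMPRInterface(period=10.0, budget=18.0, parallelism=3)
# necessary condition only: supplied bandwidth covers the task-set utilization
print(iface.bandwidth() >= 1.7, iface.parallelism)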
Abstract:
Single processor architectures are unable to provide the required performance for high-performance embedded systems. Parallel processing based on general-purpose processors can achieve these performances with a considerable increase in required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA in the execution of several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
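Illustration only: the configurable parameters named in the abstract, collected into a per-core descriptor (in the paper these are per-core hardware generics, not a software API; the field values are made up):

from dataclasses import dataclass

@dataclass
class CoreConfig:
    local_mem_kb: int    # size of the core's internal memory
    ops: tuple           # supported operations, e.g. floating-point add/mul
    ports: int           # number of interfacing ports

array = [CoreConfig(local_mem_kb=8, ops=("fadd", "fmul"), ports=2)
         for _ in range(32)]   # up to 32 FP cores fit a ZYNQ-7020
print(len(array), "cores configured")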
Abstract:
The rapidly increasing computing power, storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services on locally generated data. In this paper, we describe an early overview of a distributed mobile system that allows accessing and processing of data distributed across mobile devices without an external communication infrastructure.
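An illustrative sketch of the idea, with plain objects standing in for neighboring devices (a real system would discover and reach peers over, e.g., Wi-Fi Direct; names and data are invented):

class Peer:
    def __init__(self, name, data):
        self.name, self.data = name, data
    def query(self, key):
        return self.data.get(key)

neighborhood = [Peer("phone-A", {"photo_17": b"..."}),
                Peer("phone-B", {"track_03": b"..."})]

def lookup(key):
    # ask each reachable neighbor in turn instead of a remote server
    return next((p.name for p in neighborhood if p.query(key)), None)

print(lookup("track_03"))   # served by phone-B, locally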
Abstract:
With an example taken from a late-Hauterivian series of the Lusitanian Basin (Portugal), we demonstrate the sedimentary record of orbital pattern variations and, consequently, of climate variations in an inner platform environment. The bed-thickness patterns and insolation changes allow us to establish four major orders of periodicity related to orbital components:
- the large cycles of bed thickness variation, consisting of 31-32 beds, recording the 400 ky eccentricity component;
- the medium cycles, represented by bundles of 8-9 beds, related to the 100 ky eccentricity component;
- the small cycles, of 3-5 beds, recording the 41 ky obliquity component;
- the very small cycles, of 2 beds, related to the 22 ky and 26 ky precession components.
The mean duration of each bed is around 11.8 ky, a value very close to that of the precession hemi-cycle. Climatic control on qualitative production is confirmed by the close relation between the bed thickness variations, the insolation variability and the variation of micritized element concentrations.
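A quick arithmetic check of this cycle hierarchy against the stated mean bed duration of about 11.8 ky per bed (bed counts are the ranges given above):

bed_ky = 11.8
for beds, component in [((31, 32), "400 ky eccentricity"),
                        ((8, 9), "100 ky eccentricity"),
                        ((3, 5), "41 ky obliquity"),
                        ((2, 2), "22-26 ky precession")]:
    low, high = beds[0] * bed_ky, beds[1] * bed_ky
    print(f"{beds[0]}-{beds[1]} beds -> {low:.0f}-{high:.0f} ky (~{component})")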