54 results for cloud computing, hypervisor, virtualization, live migration, infrastructure as a service
Abstract:
Knowing exactly where a mobile entity is and monitoring its trajectory in real time has recently attracted considerable interest from both the academic and industrial communities, due to the large number of applications it enables. Nevertheless, it remains one of the most challenging problems from both scientific and technological standpoints. In this work we propose a tracking system based on the fusion of position estimates provided by different sources, which are combined to obtain a final estimate with improved accuracy with respect to those generated by each system individually. In particular, exploiting the availability of a Wireless Sensor Network as an infrastructure, a mobile entity equipped with an inertial system first obtains position estimates using both a Kalman Filter and a fully distributed positioning algorithm (the Enhanced Steepest Descent, which we recently proposed), and then combines the results using the Simple Convex Combination algorithm. Simulation results clearly show good performance in terms of the final accuracy achieved. Finally, the proposed technique is validated against real data taken from an inertial sensor provided by THALES ITALIA.
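To make the fusion step concrete, here is a minimal sketch of a convex combination of two position estimates, assuming the common inverse-variance weighting; the function names, the scalar-variance simplification, and the sample numbers are illustrative assumptions, not details taken from the abstract.

```python
# A minimal sketch of the fusion step, assuming the Simple Convex
# Combination weights each position estimate by the inverse of its
# error variance (scalar variances for simplicity). Names and sample
# numbers are illustrative, not taken from the paper.

def fuse(est_kf, var_kf, est_esd, var_esd):
    """Convex combination of a Kalman Filter estimate and an
    Enhanced Steepest Descent estimate."""
    w_kf = var_esd / (var_kf + var_esd)    # lower variance -> higher weight
    w_esd = var_kf / (var_kf + var_esd)    # the two weights sum to one
    return tuple(w_kf * a + w_esd * b for a, b in zip(est_kf, est_esd))

# Example: the KF estimate (variance 0.5) is trusted twice as much as
# the ESD estimate (variance 1.0), so the fused point lies closer to it.
print(fuse((1.0, 2.0), 0.5, (1.4, 2.2), 1.0))   # -> (1.133..., 2.066...)
```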
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. It provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to maintain as much flexibility as possible while meeting specific application requirements. EMMON has been validated through extensive analytical, simulation and experimental evaluations, including a 300+ node test-bed, the largest single-site WSN test-bed in Europe.
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to use standard commercially-available technologies, while maintaining as much flexibility as possible to meet specific application requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Several projects in the recent past have aimed at promoting Wireless Sensor Networks as an infrastructure technology, where several independent users can submit applications that execute concurrently across the network. Running multiple applications concurrently causes significant energy-usage overhead on sensor nodes, which cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
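A hedged sketch of the transmission-clustering idea: under Rate-Harmonized Scheduling, each application's transmission period is rounded down to a harmonic of a common base period, so releases coincide and packets from independent applications can be concatenated into a single radio burst. The harmonization rule and all names below are illustrative assumptions, not the paper's code.

```python
# Round each period down to a harmonic (base * 2**k) of a base period,
# then group packet releases that fall on the same instant: every
# multi-entry slot becomes one concatenated radio burst.
import math
from collections import defaultdict

def harmonize(period, base):
    """Largest base * 2**k that does not exceed the requested period
    (assumes period >= base)."""
    return base * 2 ** int(math.log2(period / base))

def cluster_bursts(app_periods, base, horizon):
    """Map each release instant to the applications transmitting then."""
    bursts = defaultdict(list)
    for app, period in app_periods.items():
        for t in range(0, horizon, harmonize(period, base)):
            bursts[t].append(app)
    return dict(sorted(bursts.items()))

# Periods of 25, 45 and 100 ms collapse onto 20/40/80 ms harmonics, so
# packets share bursts at t = 0, 40, 80, ... instead of waking the
# radio independently for each application.
print(cluster_bursts({"a": 25, "b": 45, "c": 100}, base=20, horizon=160))
```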
Abstract:
Mobile applications are becoming increasingly complex and making heavier demands on local system resources. Moreover, mobile systems are nowadays more open, allowing users to add more and more applications, including third-party developed ones. In this perspective, it is increasingly likely that users will want to execute on their devices applications whose demands exceed the currently available resources. It is therefore important to provide frameworks which allow applications to benefit from resources available on other nodes, and which are capable of migrating some or all of their services to other nodes, depending on user needs. These requirements are even more stringent when users want to execute Quality of Service (QoS) aware applications, such as voice or video. The resources required to guarantee the QoS levels demanded by an application can vary with time, and consequently applications should be able to reconfigure themselves. This paper proposes a QoS-aware service-based framework able to support distributed, migration-capable, QoS-enabled applications on top of the Android operating system.
Abstract:
Debugging electronic circuits is traditionally done with bench equipment directly connected to the circuit under debug. In the digital domain, the difficulties associated with direct physical access to circuit nodes led to the inclusion of resources supporting that activity, first at the printed circuit level and then at the integrated circuit level. The experience acquired with those solutions led to the emergence of dedicated infrastructures for debugging cores at the system-on-chip level. However, all these developments had little impact in the analog and mixed-signal domain, where debugging still depends, to a large extent, on direct physical access to circuit nodes. As a consequence, when analog and mixed-signal circuits are integrated as cores inside a system-on-chip, the difficulties associated with debugging increase, which causes time-to-market and prototype verification costs to increase as well. The present work considers the IEEE1149.4 infrastructure as a means to support the debugging of mixed-signal circuits, namely to access the circuit nodes, together with an embedded debug mechanism, named the mixed-signal condition detector, necessary for watchpoint/breakpoint and real-time analysis operations. One of the main advantages of the proposed solution is the seamless migration to the system-on-chip level, as access is done through electronic means, thus easing debugging operations at different hierarchical levels.
Abstract:
Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures in the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed real-time fault injection in memory elements.
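For intuition, here is a self-contained sketch of the campaign logic only: inject a single bit-flip into a simulated memory image, run a toy workload, and classify the outcome against the fault-free ("golden") result. Real OCD-FI performs the injection through the processor's debug port on the physical target; everything named below is a hypothetical illustration.

```python
# Simulated single-bit fault injection campaign; the memory model and
# workload are stand-ins, not the OCD-FI hardware mechanism.
import random

def workload(mem):
    return sum(mem[:32]) % 256        # reads only part of the memory

def campaign(n_faults=1000, mem_size=64, seed=1):
    random.seed(seed)
    golden = workload([i % 7 for i in range(mem_size)])
    outcomes = {"silent": 0, "wrong_result": 0}
    for _ in range(n_faults):
        mem = [i % 7 for i in range(mem_size)]
        addr, bit = random.randrange(mem_size), random.randrange(8)
        mem[addr] ^= 1 << bit         # the injected fault
        key = "silent" if workload(mem) == golden else "wrong_result"
        outcomes[key] += 1
    return outcomes

# Faults landing in the unread half of memory stay silent (dormant).
print(campaign())
```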
Abstract:
A good verification strategy should bring the simulation and real operating environments close together. In this paper we describe a system-level co-verification strategy that uses a common flow for functional simulation, timing simulation and functional debug. This last step requires a BST infrastructure, now widely available on commercial devices, especially on FPGAs with medium/large pin counts.
Abstract:
Weblabs are spreading their influence in Science and Engineering (S&E) courses, providing a way to remotely conduct real experiments. Typically, they are implemented through different architectures and infrastructures, supported by Instruments and Modules (I&Ms) that can be remotely controlled and observed. Besides the lack of a standard solution for implementing weblabs, their reconfiguration is limited to a setup procedure that interconnects a set of preselected I&Ms into an Experiment Under Test (EUT). Moreover, those I&Ms cannot be replicated or shared by different weblab infrastructures, since they are usually based on hardware platforms. To overcome these limitations, this paper proposes a standard solution that uses I&Ms embedded into Field-Programmable Gate Array (FPGA) devices. An architecture is presented, based on the IEEE1451.0 Std. and supported by an FPGA-based weblab infrastructure, able to be remotely reconfigured with I&Ms, described through standard Hardware Description Language (HDL) files, using a Reconfiguration Tool (RecTool).
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit; otherwise, availability would be compromised. System performance is also influenced by the efficiency of the management strategies, which must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created and left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
Abstract:
Adopting standards-based weblab infrastructures can be an added value for spreading their influence and acceptance in education. This paper suggests a solution based on the IEEE1451.0 Std. and FPGA technology for creating reconfigurable weblab infrastructures using Instruments and Modules (I&Ms) described through standard Hardware Description Language (HDL) files. It describes a methodology for creating and binding I&Ms into an IEEE1451-module embedded in an FPGA-based board that can be remotely controlled/accessed using IEEE1451-HTTP commands. At the end, an example of a step-motor controller module bound to that IEEE1451-module is described.
Abstract:
Fault injection is frequently used for the verification and validation of the fault-tolerant features of microprocessors. This paper proposes the modification of a common on-chip debugging (OCD) infrastructure to add fault injection capabilities and improve performance. The proposed solution imposes a very low logic overhead and provides a flexible and efficient mechanism for the execution of fault injection campaigns, and it is applicable to different target system architectures.
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches to extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti, which extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
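An illustrative sketch of path-based relatedness on a concept graph: enumerate simple paths up to a maximum length between two concepts and sum a weight that decays with path length, so short connections count more. The decay weighting, the toy graph and all names are assumptions chosen for illustration, not Shakti's actual formula.

```python
# Depth-first enumeration of simple paths, with a length-decaying
# contribution from each path found.

def paths_up_to(graph, src, dst, max_edges):
    """Yield all simple paths from src to dst with at most max_edges edges."""
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            yield path
            continue
        if len(path) > max_edges:      # a path of n nodes has n-1 edges
            continue
        for nxt in graph.get(node, ()):
            if nxt not in path:        # keep paths simple (no cycles)
                stack.append((nxt, path + [nxt]))

def relatedness(graph, a, b, max_edges=3, decay=0.5):
    """Each connecting path contributes decay ** (number of edges)."""
    return sum(decay ** (len(p) - 1)
               for p in paths_up_to(graph, a, b, max_edges))

g = {"Fado": ["Portugal", "Music"], "Portugal": ["Music"], "Music": []}
print(relatedness(g, "Fado", "Music"))  # direct path + path via Portugal = 0.75
```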
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors; such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, under the same restriction on task migration (intra-migrative), but on a platform in which only one processor of each type is 1 + α·(t−1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), but on a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound, and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state of the art.
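As a worked instance of the two speedup factors, take the assumed values t = 2 processor types and α = 0.8 (the largest task utilization not exceeding one); these numbers are chosen only for illustration:

```latex
% Worked instance of the two speedup bounds; t = 2 and \alpha = 0.8
% are assumed values, not figures from the paper.
\[
  \underbrace{1 + \alpha\cdot\frac{t-1}{t}}_{\text{LPGIM (intra-migrative)}}
     = 1 + 0.8\cdot\tfrac{1}{2} = 1.4,
  \qquad
  \underbrace{1 + \alpha}_{\text{LPGNM (non-migrative)}}
     = 1 + 0.8 = 1.8.
\]
```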
Abstract:
Consider the scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform with two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if there exists a feasible task-to-processor assignment, then LPC succeeds in finding such an assignment as well, but on a platform in which each processor is 1.5× faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
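For intuition, here is a generic sketch of the assignment LP relaxation that a cutting-plane method iteratively tightens; the notation (x_{ij} as the fraction of task i placed on processor j, u_{ij} as its utilization there) is an assumption for illustration, not LPC's exact formulation:

```latex
% Generic assignment LP relaxation; notation is assumed, not LPC's own.
% x_{ij}: fraction of task i placed on processor j,
% u_{ij}: utilization of task i if it runs on processor j.
\begin{align*}
  \sum_{j} x_{ij} &= 1 \quad \forall i   & \text{(each task fully assigned)}\\
  \sum_{i} u_{ij}\,x_{ij} &\le 1 \quad \forall j & \text{(processor capacity)}\\
  x_{ij} &\ge 0                          & \text{(relaxed from } x_{ij}\in\{0,1\})
\end{align*}
% A cutting plane removes a fractional optimum while keeping every
% integral assignment feasible, driving the LP solution toward
% integrality.
```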