974 results for Schermi, adattativi, pervasive, kinect, framework, ingegnerizzazione, OpenNI
Abstract:
In distributed soft real-time systems, maximizing the aggregate quality-of-service (QoS) is a typical system-wide goal, and addressing the problem through distributed optimization is challenging. In many practical environments, subtasks are subject to unpredictable failures, which makes the problem much harder. In this paper, we present a robust optimization framework for maximizing the aggregate QoS in the presence of random failures. We introduce the notion of K-failure to bound the effect of random failures on schedulability. Using this notion, we define the concept of K-robustness, which quantifies the degree of robustness of the QoS guarantee in a probabilistic sense. The parameter K allows achievable QoS to be traded off against robustness. The proposed framework produces optimal solutions through distributed computations based on Lagrangian duality, and we present some implementation techniques. Our simulation results show that the proposed framework can probabilistically guarantee a sub-optimal QoS level that remains feasible even in the presence of random failures.
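As a rough illustration of the kind of formulation this abstract suggests (the symbols u_i, d_i, C_K and λ below are assumptions made for the sketch, not the paper's own notation), the aggregate-QoS problem under a K-robust schedulability constraint can be written and dualized as

\[
\max_{q_1,\dots,q_n} \; \sum_{i=1}^{n} u_i(q_i)
\quad \text{s.t.} \quad \sum_{i=1}^{n} d_i(q_i) \le C_K ,
\qquad
L(q,\lambda) \;=\; \sum_{i=1}^{n} \big( u_i(q_i) - \lambda\, d_i(q_i) \big) + \lambda\, C_K ,
\]

where u_i(q_i) is the QoS utility of task i, d_i(q_i) its resource demand, and C_K the capacity that remains schedulable under at most K failures. Because the Lagrangian L separates per task, each node can maximize its own term locally while a coordinator updates the multiplier λ by subgradient ascent, which is the general pattern behind the distributed computations mentioned above.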
Abstract:
In this paper we propose a framework for supporting mobile applications with Quality of Service (QoS) requirements, such as voice or video, capable of running distributed, migration-capable, QoS-enabled applications on top of the Android operating system.
Abstract:
Link quality estimation is a fundamental building block for the design of several different mechanisms and protocols in wireless sensor networks (WSNs). A thorough experimental evaluation of link quality estimators (LQEs) is thus mandatory. Several WSN experimental testbeds have been designed ([1–4]), but only [3] and [2] targeted link quality measurements. However, these were exploited for analyzing low-power link characteristics rather than the performance of LQEs. Despite its importance, the experimental performance evaluation of LQEs remains an open problem, mainly due to the difficulty of providing a quantitative evaluation of their accuracy. This motivated us to build a benchmarking testbed for LQEs, RadiaLE, which we present here as a demo. It includes (i) hardware components that represent the WSN under test and (ii) a software tool for the setup and control of the experiments and for analyzing the collected data, allowing for LQE evaluation.
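As a purely illustrative example of the kind of software LQE such a testbed evaluates (the class and method names below are hypothetical and do not reflect RadiaLE's actual API), a window-based Packet Reception Ratio estimator can be sketched as:

    // Minimal sketch of a window-based link quality estimator (PRR).
    // Illustrative only; names and structure are not RadiaLE's API.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class PrrEstimator {
        private final int windowSize;
        private final Deque<Boolean> window = new ArrayDeque<>();

        public PrrEstimator(int windowSize) {
            this.windowSize = windowSize;
        }

        // Record whether the packet with the next expected sequence number arrived.
        public void record(boolean received) {
            if (window.size() == windowSize) {
                window.removeFirst();
            }
            window.addLast(received);
        }

        // Packet Reception Ratio over the current window, in [0, 1].
        public double prr() {
            if (window.isEmpty()) return 0.0;
            long delivered = window.stream().filter(b -> b).count();
            return (double) delivered / window.size();
        }
    }

Benchmarking an LQE then amounts to comparing such estimates, computed from the collected packet traces, against the link behaviour actually observed on the testbed.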
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives to develop scientific workflow tools. However, existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the specification of workflow tasks, decentralizing the control of workflow activities, and allowing their tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), with no concept of iteration, in which activities execute for millions of iterations over long periods of time, and no support for dynamic workflow reconfiguration after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, in which workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g., on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and to the Amazon Elastic Compute Cloud (EC2).
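The generic Java interface mentioned above can be pictured along the following lines (a hypothetical sketch; the interface and class names are illustrative, not AWARD's actual API):

    // Hypothetical sketch of an AWARD-style generic Task interface.
    public interface Task {
        // The enclosing AWA delivers input tokens taken from the shared tuple
        // space and publishes whatever the task returns as output tokens.
        Object[] execute(Object[] inputs) throws Exception;
    }

    // A trivial user task: only application logic; no tuple-space or
    // distribution details are visible to the end-user.
    class WordCountTask implements Task {
        @Override
        public Object[] execute(Object[] inputs) {
            String text = (String) inputs[0];
            int words = text.trim().isEmpty() ? 0 : text.trim().split("\\s+").length;
            return new Object[] { words };
        }
    }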
Abstract:
Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks whose inter-arrival times are described by discrete probability distribution functions, instead of minimum inter-arrival time (MIT) values.
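A minimal sketch of this setting (the notation is illustrative, not the paper's): each task τ_i has inter-arrival times drawn from a discrete distribution rather than a single MIT value, and the analysis yields a response-time distribution, so schedulability becomes a probabilistic statement such as

\[
P(R_i \le D_i) \;=\; \sum_{\omega} P(R_i \le D_i \mid \omega)\, P(\omega) \;\ge\; \theta_i ,
\]

where ω ranges over the arrival patterns induced by the discrete inter-arrival distributions, i.e., task τ_i meets its deadline D_i with probability at least θ_i, instead of in every worst-case scenario.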
Abstract:
This report describes the development of a test-bed application for the ART-WiSe framework, with the aim of providing a means to assess, validate, and demonstrate that architecture. The chosen application is a kind of pursuit-evasion game in which a remote-controlled robot, navigating through an area covered by a wireless sensor network (WSN), is detected and continuously tracked by the WSN. A centralized control station then takes the appropriate actions for a pursuit robot to chase and "capture" the intruder. This kind of application imposes stringent timing requirements on the underlying communication infrastructure. It also involves interesting research problems in WSNs such as tracking, localization, cooperation between nodes, energy concerns, and mobility. Additionally, it can easily be ported to a real-world application; surveillance and search-and-rescue operations are two examples where this kind of functionality can be applied. This is still a first approach to the test-bed application, and this development effort will be continuously pushed forward until all the objectives envisaged for the ART-WiSe architecture are accomplished.
Abstract:
Physical computing has sparked a true global revolution in the way the digital world interfaces with the physical world. From bicycle jackets with turn-signal lights to Twitter-controlled Christmas trees, the Do-it-Yourself (DiY) hardware movement has been driving endless innovations and stimulating an age of creative engineering. This ongoing (r)evolution has been led by popular electronics platforms such as the Arduino, the Lilypad, or the Raspberry Pi; however, these are not designed with the specific requirements of biosignal acquisition in mind. To date, the physiological computing community has lacked a parallel to the DiY electronics realm, especially with regard to suitable hardware frameworks. In this paper, we build on previous work developed within our group, focusing on an all-in-one, low-cost, and modular biosignal acquisition hardware platform that makes it quicker and easier to build biomedical devices. We describe the main design considerations, experimental evaluation and circuit characterization results, together with the results of a usability study performed with volunteers from multiple target user groups, namely health sciences and electrical, biomedical, and computer engineering.
Abstract:
Worldwide competitiveness poses enormous challenges to managers, demanding a continuous quest for a more rational use of resources. As a management philosophy, Lean Manufacturing focuses on eliminating activities that do not create any type of value and are therefore considered waste. For companies to successfully implement the Lean Manufacturing philosophy, it is crucial that the organization's human resources have the necessary training, for which proper tools are required. At the same time, higher education institutions need innovative tools to increase the attractiveness of engineering curricula and to develop a higher level of knowledge among students, improving their employability. This paper describes how the Lean Learning Academy, an international collaboration project between five EU universities and five companies, ranging from SMEs to multinational/global companies, developed and applied an innovative Lean Manufacturing training programme for engineers, a successful alternative to traditional teaching methods in engineering courses.
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
Dissertation presented to obtain the degree of Master in Informatics at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa
Abstract:
Presentation within the scope of the Master's dissertation. Supervisor: Dr. Alcina Dias
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Informatics Engineering.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Industrial Engineering and Management (MEGI)
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. However, existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from task specification, decentralizing the control of workflow activities so that their tasks can run on distributed infrastructures, and supporting dynamic workflow reconfiguration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on Process Networks, in which workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures. Each AWA executes a task developed as a Java class with a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also enables dynamic workflow reconfiguration. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to the Amazon Elastic Compute Cloud (EC2).
Abstract:
To increase the amount of logic available in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce prices. However, nanometric scales are highly vulnerable to radiation-induced faults that affect the values stored in memory cells. Since the functional definition of FPGAs relies on memory cells, they become highly prone to this type of fault. Fault-tolerant implementations, based on triple modular redundancy (TMR) infrastructures, help to maintain the correct operation of the circuit. However, TMR alone is not sufficient to guarantee safe operation. Other issues, such as the effects of multi-bit upsets (MBUs) or fault accumulation, must also be addressed. Furthermore, when a fault occurs, the correct operation of the affected module must be restored and the current state of the circuit coherently re-established. This paper presents a solution that autonomously restores the functional definition of the affected module, avoids fault accumulation, and re-establishes the correct circuit state in real time, while keeping the circuit in normal operation.
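For context, the step at the core of TMR is a bitwise majority vote over the three replica outputs; the sketch below shows that principle in plain Java (in an FPGA the voter is of course implemented in logic, and the paper's contribution concerns restoring the affected module and circuit state, not the voter itself):

    // Conceptual illustration of the bitwise majority vote used by TMR.
    public final class TmrVoter {
        // Majority of three replica outputs, computed bit by bit:
        // a bit is 1 in the result iff it is 1 in at least two replicas.
        public static int vote(int a, int b, int c) {
            return (a & b) | (b & c) | (a & c);
        }

        public static void main(String[] args) {
            int good  = 0b1010_0000;
            int upset = 0b1011_0000; // one replica corrupted by a single-bit upset
            // The corrupted bit is masked as long as only one replica is affected.
            System.out.println(Integer.toBinaryString(vote(good, upset, good)));
        }
    }

Multi-bit upsets or accumulated faults can defeat this masking when two replicas disagree with the correct value at the same bit position, which is why the restoration and fault-accumulation avoidance discussed above are needed.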