904 results for 291605 Processor Architectures
Abstract:
The Robuter is a robotic mobile platform located in the “Hands-On” Laboratory of the IPP-Hurray! Research Group, at the School of Engineering of the Polytechnic Institute of Porto. Recently, the Robuter was the subject of an upgrade addressing two essential areas: the Hardware Architecture and the Software Architecture. This upgrade was triggered by technical problems on board the robot and by the fact that the hardware/software architecture had become obsolete. This Technical Report overviews the most important aspects of the new Hardware and Software Architectures of the Robuter. The document also presents the first steps towards the use of the Robuter platform and provides some hints on future work that may be carried out using this mobile platform.
Abstract:
Dissertation submitted to obtain the Master's degree in Electrical Engineering, Automation and Industrial Electronics branch.
Abstract:
Due to the significant increase in vehicles and pedestrians in large cities, it became necessary to resort to the existing mechanisms for coordinating traffic. In this context, traffic lights were introduced with the goal of ordering traffic on the road network. Traffic management has seen innovation in equipment, in the software used, in centralised management, in road monitoring and in traffic-light synchronisation, making it possible to create signal programs adjusted to the different traffic demands observed over the twenty-four hours of the day at distinct points of the city. Conceptually, studies were carried out to identify the relationship between speed, flow and headway over a given time interval, as well as the relationship between speed and accident rates. Until 1995, Portugal was one of the countries with the highest number of road accidents. Following this evolution, speed-control radars were installed at the end of 2006 with the objective of enforcing the speed limits imposed by the road code and reducing road accidents in the city of Lisbon. Several years after that investment, we find that there is a need to implement new technologies for detecting infractions, whether speeding or red-light violations (VSV), to optimise the information made available to drivers and pedestrians, to coordinate the interaction between priority vehicles and the other vehicles on the road, to streamline the internal management of traffic offences, and to speed up procedures by computerising data collection so as to make the processes faster.
Abstract:
A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.
Abstract:
LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with a smaller laxity, has been known as an optimal preemptive scheduling algorithm on a single processor platform. However, little work has been done to illuminate its characteristics upon multiprocessor platforms. In this paper, we identify the dynamics of laxity from the system’s viewpoint and translate the dynamics into LLF multiprocessor schedulability analysis. More specifically, we first characterize laxity properties under LLF scheduling, focusing on laxity dynamics associated with a deadline miss. These laxity dynamics describe a lower bound, which leads to the deadline miss, on the number of tasks of certain laxity values at certain time instants. This lower bound is significant because it represents invariants for highly dynamic system parameters (laxity values). Since the laxity of a task depends on the amount of interference from higher-priority tasks, we can then derive a set of conditions to check whether a given task system can go into the laxity dynamics towards a deadline miss. In this way, to the best of our knowledge, we propose the first LLF multiprocessor schedulability test based on its own laxity properties. We also develop an improved schedulability test that exploits slack values. We mathematically prove that the proposed LLF tests dominate the state-of-the-art EDZL tests. We also present simulation results to evaluate the schedulability performance of both the original and improved LLF tests in a quantitative manner.
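For illustration only, a minimal C sketch of the laxity notion used above (the job record, the numbers and the selection code are assumptions of this sketch, not taken from the paper): at time t, a job's laxity is its deadline minus t minus its remaining execution time, and global LLF runs, on m processors, the m ready jobs with the smallest laxity; a zero or negative laxity means the job must execute immediately or it will miss its deadline.

#include <stdio.h>

/* Illustrative job record for a global LLF sketch (hypothetical fields). */
typedef struct {
    const char *name;
    double deadline;   /* absolute deadline        */
    double remaining;  /* remaining execution time */
} job_t;

/* laxity(t) = deadline - t - remaining execution time */
static double laxity(const job_t *j, double t)
{
    return j->deadline - t - j->remaining;
}

int main(void)
{
    const double t = 2.0;   /* current time instant */
    const int m = 2;        /* number of processors */
    job_t ready[] = {
        { "J1", 10.0, 3.0 },
        { "J2",  6.0, 3.5 },
        { "J3",  9.0, 6.0 },
    };
    const int n = (int)(sizeof ready / sizeof ready[0]);

    /* Global LLF: at time t, run the m ready jobs with the smallest laxity.
     * A tiny selection sort by laxity is enough for this example.           */
    for (int i = 0; i < n - 1; i++)
        for (int k = i + 1; k < n; k++)
            if (laxity(&ready[k], t) < laxity(&ready[i], t)) {
                job_t tmp = ready[i];
                ready[i] = ready[k];
                ready[k] = tmp;
            }

    for (int i = 0; i < n; i++)
        printf("%s: laxity %.1f -> %s\n", ready[i].name,
               laxity(&ready[i], t), i < m ? "run" : "wait");
    return 0;
}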
Abstract:
In Distributed Computer-Controlled Systems (DCCS), both real-time and reliability requirements are of major concern. Architectures for DCCS must be designed considering the integration of processing nodes and the underlying communication infrastructure. Such integration must be provided by appropriate software support services. In this paper, an architecture for DCCS is presented, its structure is outlined, and the services provided by the support software are described. These services are intended to guarantee the real-time and reliability requirements imposed by current and future systems.
Abstract:
Consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct kinds of processors. We propose a polynomial-time approximation scheme (PTAS) for this problem. It offers the following guarantee: for a given task set and a given platform, if there exists a feasible task-to-processor assignment, then, given an input parameter ϵ, our PTAS succeeds, in polynomial time, in finding such a feasible task-to-processor assignment on a platform in which each processor is 1+3ϵ times faster. In simulations, our PTAS outperforms the state-of-the-art PTAS [1] and, for the vast majority of task sets, requires a significantly smaller processor speedup than (its upper bound of) 1+3ϵ to successfully determine a feasible task-to-processor assignment.
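As a worked instance of the stated bound (illustrative numbers only, not from the paper): choosing ϵ = 0.1 means that, whenever a feasible assignment exists on the original platform, the PTAS is guaranteed to find one on a platform whose processors are 1 + 3 × 0.1 = 1.3 times faster; smaller values of ϵ tighten this speedup, typically at the cost of higher (though still polynomial) running time, as is usual for a PTAS.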
Abstract:
In the past years, Software Architecture has attracted increased attention from academia and industry as the unifying concept to structure the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach is based on the work of other researchers and has the aesthetic concern of changing the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to and in a way that can be efficiently executed. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
Abstract:
This paper proposes a global multiprocessor scheduling algorithm for the Linux kernel that combines the global EDF scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. We argue that some priority inversion may actually be acceptable, provided it helps reduce contention, communication, synchronisation and coordination between parallel threads, while still guaranteeing the expected predictability of the system. Experimental results demonstrate the low scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another, randomly selected, busy core as a queue, stealing threads only from the top whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may be enqueued after lower-priority tasks, possibly leading to deadline misses because, in that case, the lower-priority tasks are the candidates when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core, but, unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
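A minimal C sketch of the victim-selection rule described above (the structures and names are illustrative assumptions, not the authors' implementation): each core keeps a deque of ready threads ordered by priority, owners work at the bottom, and an idle core steals from the top of the deque whose head has the highest priority rather than from a random victim.

#include <stddef.h>

/* Illustrative per-core deque of ready threads, kept ordered by priority
 * (lower value = higher priority, e.g. an absolute deadline).  Hypothetical
 * layout, only meant to sketch the victim-selection rule described above.   */
typedef struct rt_thread { int priority; struct rt_thread *next; } rt_thread_t;
typedef struct rt_deque  { rt_thread_t *top; /* thieves take from here */ } rt_deque_t;

/* The owning core pushes/pops at the bottom (not shown); a thief removes
 * the current top-of-deque thread from its victim.                         */
static rt_thread_t *steal_top(rt_deque_t *d)
{
    rt_thread_t *t = d->top;
    if (t)
        d->top = t->next;
    return t;
}

/* Priority-aware stealing: instead of picking a random busy victim, an idle
 * core scans all deques and steals from the one whose top thread has the
 * highest priority, reducing the chance of priority inversion.             */
rt_thread_t *steal_highest_priority(rt_deque_t *deques, size_t ncores)
{
    rt_deque_t *victim = NULL;
    for (size_t i = 0; i < ncores; i++) {
        rt_thread_t *t = deques[i].top;
        if (t && (!victim || t->priority < victim->top->priority))
            victim = &deques[i];
    }
    return victim ? steal_top(victim) : NULL;
}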
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors — such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) using SA, it is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, but given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, but given a platform in which processors are 1+α/2 times faster, where 0<α≤1. The parameter α is a property of the task set — it is the maximum utilization of any task, which is less than or equal to 1.
Abstract:
Embedded real-time applications increasingly present high computation requirements, which need to be completed within specific deadlines but which exhibit highly variable patterns depending on the data set available at a given instant. The current trend of providing parallel processing in the embedded domain delivers higher processing power; however, it does not address the variability in the processing pattern. Dimensioning each device for its worst-case scenario implies lower average utilization and increased available, but unusable, processing capacity in the overall system. A solution to this problem is to extend the parallel execution of the applications, allowing networked nodes to distribute the workload, in peak situations, to neighbouring nodes. In this context, this report proposes a framework to develop parallel and distributed real-time embedded applications, transparently using OpenMP and the Message Passing Interface (MPI) within a programming model based on OpenMP. The technical report also devises an integrated timing model, which enables structured reasoning about the timing behaviour of these hybrid architectures.
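A minimal sketch of the kind of hybrid OpenMP/MPI offloading the report refers to (ranks, sizes and the process_chunk kernel are illustrative assumptions of this sketch, not the framework's API): OpenMP parallelises the chunk processing within a node, while MPI is used to ship part of the workload to a neighbouring node in a peak situation.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Hypothetical kernel: process one chunk of the input locally with OpenMP. */
static double process_chunk(const double *data, int n)
{
    double acc = 0.0;
    #pragma omp parallel for reduction(+:acc)
    for (int i = 0; i < n; i++)
        acc += data[i] * data[i];
    return acc;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { N = 1024 };
    double data[N];
    for (int i = 0; i < N; i++)
        data[i] = (double)i;

    double local = 0.0, remote = 0.0;
    if (rank == 0) {
        if (size > 1) {
            /* Peak situation: ship the upper half of the workload to a
             * neighbouring node (rank 1) and process the rest locally.     */
            MPI_Send(data + N / 2, N / 2, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            local = process_chunk(data, N / 2);
            MPI_Recv(&remote, 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            local = process_chunk(data, N);   /* no neighbour: do it all */
        }
        printf("total = %f\n", local + remote);
    } else if (rank == 1) {
        /* Neighbour node: receive the offloaded chunk, process it with
         * OpenMP, and return the partial result.                           */
        MPI_Recv(data, N / 2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        double part = process_chunk(data, N / 2);
        MPI_Send(&part, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}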
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of the tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
Abstract:
Despite the steady increase in experimental deployments, most research work on WSNs has focused only on communication protocols and algorithms, with a clear lack of effective, feasible and usable system architectures, integrated in a modular platform able to address both functional and non-functional requirements. In this paper, we outline EMMON [1], a full WSN-based system architecture for large-scale, dense and real-time embedded monitoring [3] applications. EMMON provides a hierarchical communication architecture together with integrated middleware and command-and-control software. Then, EM-Set, the EMMON engineering toolset, is presented. EM-Set includes network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing tools. This toolset was crucial for the development of EMMON, which was designed to use standard commercially available technologies while maintaining as much flexibility as possible to meet specific application requirements. Finally, the EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node testbed.
Abstract:
Multicore platforms have turned parallelism into a main concern. Parallel programming models are being put forward to provide a better way for application programmers to expose opportunities for parallelism, by pointing out potentially parallel regions within tasks and leaving the actual, dynamic scheduling of these regions onto processors to be performed at runtime, exploiting the maximum amount of parallelism. It is in this context that this paper proposes a scheduling approach that combines the constant-bandwidth server abstraction with a priority-aware work-stealing load balancing scheme which, while ensuring isolation among tasks, enables parallel tasks to be executed on more than one processor at a given time instant.
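A minimal C sketch of the constant-bandwidth server (CBS) rule behind the isolation property mentioned above (field names and the charging routine are illustrative assumptions of this sketch): each task is wrapped in a server with budget Q per period P; when the budget is exhausted, it is replenished and the server deadline is postponed by one period, so the task never consumes more than a Q/P share of the processors regardless of how many parallel threads it spawns or steals.

/* Illustrative constant-bandwidth server state for one task. */
typedef struct {
    double Q;         /* budget per period                  */
    double P;         /* server period                      */
    double budget;    /* remaining budget in current period */
    double deadline;  /* current server deadline            */
} cbs_t;

/* Charge 'delta' units of execution to the server.  When the budget is
 * exhausted, it is replenished and the deadline is postponed by one period,
 * which is the CBS rule that isolates tasks from one another.              */
static void cbs_charge(cbs_t *s, double delta)
{
    s->budget -= delta;
    while (s->budget <= 0.0) {
        s->budget   += s->Q;
        s->deadline += s->P;
    }
}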