983 results for Task-level parallelism
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) SA is guaranteed to find such an assignment, with the same restriction on task migration, on a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), on a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring how much faster the processors need to be (upper bounded by 1+α/2 for SA and 1+α for SA-P) for the algorithms to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require a significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case in which no task utilization in the given task set exceeds one, and for this case we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state of the art by requiring a much smaller processor speedup and by running orders of magnitude faster.
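As a quick illustration of how the parameter α and the quoted speedup factors are obtained, the sketch below computes α for a hypothetical task set (all utilization values are invented for this example) and evaluates the 1+α/2 and 1+α bounds of SA and SA-P.

```python
# Illustrative only: a hypothetical set of implicit-deadline sporadic
# task utilizations (values invented for this example).
utilizations = [0.9, 0.45, 1.3, 0.25, 0.6]

# alpha is the maximum over all task utilizations that do not exceed 1.
alpha = max(u for u in utilizations if u <= 1)

print(f"alpha              = {alpha:.2f}")
print(f"SA   speedup bound = 1 + alpha/2 = {1 + alpha / 2:.2f}")  # intra-migrative
print(f"SA-P speedup bound = 1 + alpha   = {1 + alpha:.2f}")      # non-migrative
```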
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors; such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, with the same restriction on task migration (intra-migrative), on a platform in which only one processor of each type is 1 + α × (t − 1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), on a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all the task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound, and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state of the art.
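For a concrete feel of how the per-type speedup factor scales with the number of processor types, the snippet below evaluates 1 + α × (t − 1)/t and 1 + α for a hypothetical task set and a few values of t (all numbers are invented for illustration).

```python
# Illustrative only: evaluate the speedup factors quoted above for a
# hypothetical task set on platforms with t processor types.
utilizations = [0.8, 0.5, 1.6, 0.3]          # invented values
alpha = max(u for u in utilizations if u <= 1)

for t in (2, 3, 4):
    lpgim_factor = 1 + alpha * (t - 1) / t   # one processor of each type
    lpgnm_factor = 1 + alpha                 # every processor
    print(f"t={t}: LPGIM needs x{lpgim_factor:.2f}, LPGNM needs x{lpgnm_factor:.2f}")
```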
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling for the purpose of achieving efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based “task-splitting” scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified schedulability theory existed for such algorithms; each one was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
Abstract:
The multiprocessor scheduling scheme NPS-F for sporadic tasks has a high utilisation bound and an overall number of preemptions bounded at design time. NPS-F bin-packs tasks offline to as many servers as needed. At runtime, the scheduler ensures that each server is mapped to at most one of the m processors at any instant. When scheduled, servers use EDF to select which of their tasks to run. Yet, unlike the overall number of preemptions, the migrations per se are not tightly bounded. Moreover, we cannot know a priori which task a server will be executing at the instant when it migrates. This uncertainty complicates the estimation of cache-related preemption and migration costs (CPMD), potentially resulting in their overestimation. Therefore, to simplify the CPMD estimation, we propose an amended bin-packing scheme for NPS-F that allows us (i) to identify, at design time, which task migrates at which instant and (ii) to bound a priori the number of migrating tasks, while preserving the utilisation bound of NPS-F.
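The offline step can be pictured as a bin-packing of task utilizations into servers. The sketch below is a plain first-fit packing on invented utilizations; it is only a simplified illustration of the idea, not the actual NPS-F packing and server-inflation rule.

```python
# Generic first-fit bin-packing of task utilizations into servers.
# Simplified illustration of the offline step described above, not the
# actual NPS-F rule.
def first_fit(task_utilizations, server_capacity=1.0):
    servers = []                      # each server is a list of utilizations
    for u in task_utilizations:
        for srv in servers:
            if sum(srv) + u <= server_capacity:
                srv.append(u)
                break
        else:
            servers.append([u])       # open a new server
    return servers

tasks = [0.6, 0.3, 0.5, 0.2, 0.7, 0.4]    # invented utilizations
for i, srv in enumerate(first_fit(tasks)):
    print(f"server {i}: tasks {srv}, load {sum(srv):.2f}")
```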
Abstract:
Consider the scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if there exists a feasible task-to-processor assignment, then LPC succeeds in finding such a feasible task-to-processor assignment as well, but on a platform in which each processor is 1.5 times faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
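To make the linear-programming side concrete, the sketch below sets up the natural fractional relaxation of the two-type assignment problem (one assignment variable per task/processor pair, each task fully assigned, a unit-capacity constraint per processor) and solves it with scipy. The cutting planes and rounding of LPC are not reproduced here, and all utilization values and platform sizes are invented.

```python
# Fractional LP relaxation of task-to-processor assignment on a two-type
# platform (a simplified sketch; LPC's cutting planes are not shown).
import numpy as np
from scipy.optimize import linprog

# Invented data: utilization of each task on each processor *type*.
u = np.array([[0.6, 0.9],     # task 0: utilization on type 0, on type 1
              [0.5, 0.3],
              [0.8, 0.4]])
proc_types = [0, 0, 1]        # processors 0,1 are type 0; processor 2 is type 1
n, m = u.shape[0], len(proc_types)

# Variable x[i, k] = fraction of task i assigned to processor k, flattened.
A_eq = np.zeros((n, n * m)); b_eq = np.ones(n)
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0               # each task fully assigned

A_ub = np.zeros((m, n * m)); b_ub = np.ones(m)
for k in range(m):
    for i in range(n):
        A_ub[k, i * m + k] = u[i, proc_types[k]]   # capacity of processor k

res = linprog(c=np.zeros(n * m), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print("feasible relaxation:", res.success)
print(res.x.reshape(n, m).round(2))
```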
Abstract:
Task scheduling is one of the key mechanisms for ensuring timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make only a low processor utilization available to the system. Moreover, this availability usually decreases with the number of considered tasks. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1). Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and against an exact but exponential running time test.
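For reference, the classical bounds mentioned above can be checked numerically: the Liu-and-Layland RM utilization bound for n tasks is n(2^(1/n) − 1), which for n = 2 equals 2(√2 − 1) ≈ 0.83, while EDF guarantees schedulability of implicit-deadline tasks up to total utilization 1. The snippet below (with invented utilizations) only evaluates these two well-known bounds; it is not one of the paper's seven tests.

```python
import math

def rm_bound(n: int) -> float:
    """Liu & Layland utilization bound for n tasks under Rate-Monotonic."""
    return n * (2 ** (1 / n) - 1)

print(f"RM bound, n=2 : {rm_bound(2):.4f}")          # 2*(sqrt(2)-1) ~= 0.8284
print(f"check         : {2 * (math.sqrt(2) - 1):.4f}")

# EDF schedules implicit-deadline tasks as long as total utilization <= 1.
utilizations = [0.35, 0.25, 0.3]                      # invented values
print("EDF-schedulable:", sum(utilizations) <= 1.0)
```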
Abstract:
Modular design is crucial to manage large-scale systems and to support the divide-and-conquer development approach. It allows hierarchical representations and, therefore, one can have a system overview as well as observe component details. Petri nets are suitable for modelling concurrent systems, but lack structuring mechanisms to support abstractions and the composition of sub-models, in particular when considering applications to embedded controller design. In this paper we present a module construct, and an underlying high-level Petri net type, to model embedded controllers. Multiple interfaces can be declared in a module; thus, different instances of the same module can be used in different situations. The interface is a subset of the module nodes through which communication with the environment is made. Module places can be annotated with a generic type, overridden with a concrete type at the instance level, and constants declared in a module may take a new value in each instance.
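One minimal way to picture the module construct described above is a container of places and transitions that exposes an interface subset and whose generic place types and constants are overridden per instance. The sketch below is a schematic data structure under those assumptions only; names and fields are illustrative and it is not the paper's Petri net type.

```python
# Schematic sketch of a Petri-net module with an interface, generic place
# types and overridable constants (names and fields are illustrative only).
from dataclasses import dataclass, field

@dataclass
class Module:
    places: dict                # place name -> generic type name
    transitions: list           # transition names
    interface: set              # subset of nodes visible to the environment
    constants: dict = field(default_factory=dict)

    def instantiate(self, type_bindings=None, constant_overrides=None):
        """Create an instance, overriding generic types and constants."""
        types = {p: (type_bindings or {}).get(t, t) for p, t in self.places.items()}
        consts = {**self.constants, **(constant_overrides or {})}
        return {"places": types, "transitions": list(self.transitions),
                "interface": set(self.interface), "constants": consts}

controller = Module(places={"in_buf": "T", "state": "int"},
                    transitions=["read", "update"],
                    interface={"in_buf"}, constants={"N": 4})
print(controller.instantiate(type_bindings={"T": "sensor_msg"},
                             constant_overrides={"N": 8}))
```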
Abstract:
Dissertation submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Biomedical Engineering
Abstract:
On the basis of Gollwitzer's (1993, 1999) implementation intentions concept and Kirsch and Lynn's (1997) response set theory, this dissertation tested the effectiveness of a combined intervention of implementation intentions and hypnosis with posthypnotic suggestion in enhancing adherence to a simple (mood report) and a difficult (physical activity) health-related task. Participants were enrolled in a university in New Jersey (N=124, Study 1, USA) and in two universities in Lisbon (N=323, Study 2, Portugal). In both studies, participants were selected from a broader sample based on their suggestibility scores on the Waterloo-Stanford Group C (WSGC) scale of hypnotic susceptibility and then randomly assigned to the experimental groups. Study 1 used a 2x2x3 factorial design (instruction x hypnosis x level of suggestibility) and Study 2 used a 2x2x2x4 factorial design (task x instruction x hypnosis x level of suggestibility). In Study 1, participants were asked to run in place for 5 minutes each day for a three-week period, to take their pulse rate before and after the activity, and to send a daily e-mail report to the experimenter, thus providing both a self-report and a behavioral measure of adherence. Participants in the goal intention condition were simply asked to run in place and send the e-mail once a day. Those in the implementation intention condition were further asked to specify the exact place and time at which they would perform the physical activity and send the e-mail. In addition, half of the participants were given a posthypnotic suggestion indicating that the thought of running in place would come to mind without effort at the appropriate moment; the other half did not receive a posthypnotic suggestion. Study 2 followed the same procedure, but additionally half of the participants were instructed to send a mood report by SMS (easy task) and half were assigned to the physical activity task described above (difficult task).
Study 1 results showed a significant interaction between participants' suggestibility level and posthypnotic suggestion (p<.01), indicating that the posthypnotic suggestion enhanced adherence among highly suggestible participants but lowered it among low-suggestible participants. No differences were found between the goal intention and the implementation intention groups. In Study 2, participants adhered significantly more (p<.001) to the easy task than to the difficult task. Results did not reveal significant differences between the implementation intentions, hypnosis, and combined conditions, indicating that implementation intentions were not enhanced by hypnosis with posthypnotic suggestion, nor were they effective as a single intervention in enhancing adherence to either task. Hypnosis with posthypnotic suggestion alone significantly reduced adherence to both tasks in comparison with participants who did not receive hypnosis. Since there were no instruments in Portuguese to assess hypnotic suggestibility, the Waterloo-Stanford Group C (WSGC) scale of hypnotic susceptibility was translated and adapted to Portuguese and used to screen a sample of college students from Lisbon (N=625). The Portuguese sample showed distribution shapes and difficulty patterns of hypnotic suggestibility scores similar to the reference samples, except that the proportion of Portuguese students scoring in the high range of hypnotic suggestibility was lower than in the reference samples. To shed some light on the reasons for this finding, participants' attitudes toward hypnosis were surveyed using a Portuguese translation and adaptation of the Escala de Valencia de Actitudes y Creencias Hacia la Hipnosis, Versión Cliente, and compared with those of participants with no prior hypnosis experience (N=444). Significant differences were found between the two groups, with participants without hypnosis experience scoring higher on factors indicating misconceptions and negative attitudes about hypnosis.
Abstract:
Journal of Hydraulic Engineering, Vol. 135, No. 11, November 1, 2009
Abstract:
Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing applications. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
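The linear mixing model underlying the unmixing step can be stated compactly: each pixel spectrum is modeled as a combination of endmember signatures plus noise. The numpy sketch below generates synthetic data under that model and inverts it with plain least squares given known endmembers; it is not the SISAL algorithm or its CUDA implementation, only the model it operates on.

```python
# Linear mixing model: Y = M @ A + noise, with Y the pixel spectra, M the
# endmember signatures and A the per-pixel abundances. This is only the
# model; the SISAL endmember-identification step itself is not shown.
import numpy as np

rng = np.random.default_rng(0)
bands, endmembers, pixels = 50, 3, 200

M = rng.random((bands, endmembers))                      # synthetic endmembers
A = rng.dirichlet(np.ones(endmembers), size=pixels).T    # abundances sum to 1
Y = M @ A + 0.001 * rng.standard_normal((bands, pixels))

# Least-squares abundance estimate given known endmembers (no constraints).
A_hat, *_ = np.linalg.lstsq(M, Y, rcond=None)
print("mean abundance error:", np.abs(A_hat - A).mean())
```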
Abstract:
This paper addresses a PV system linked to the electric grid through power converters under cloud scope. The PV system is modeled by the five-parameter equivalent circuit, and an MPPT procedure is integrated into the modeling. The converter modeling covers the association of a DC-DC boost converter with a three-level inverter. PI controllers are used with PWM by sliding mode control associated with space vector modulation, controlling the boost converter and the inverter. A case study addresses a simulation to assess the performance of a PV system linked to the electric grid. Conclusions regarding the integration of the PV system into the electric grid are presented. © IFIP International Federation for Information Processing 2015.
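To make the five-parameter model concrete, the sketch below solves the single-diode equation I = Iph − I0(exp((V + I·Rs)/(a·Vt)) − 1) − (V + I·Rs)/Rsh by bisection at each operating voltage and then scans the I-V curve for the maximum power point. All parameter values are placeholders, and the paper's converter, controller and MPPT schemes are not reproduced.

```python
# Five-parameter (single-diode) PV model and a brute-force MPP scan.
# Parameter values are placeholders, not taken from the paper.
import math

Iph, I0, Rs, Rsh, a_Vt = 8.0, 1e-9, 0.2, 300.0, 1.2   # [A, A, ohm, ohm, V]

def current(v, lo=-20.0, hi=20.0, iters=80):
    """Solve Iph - I0*(exp((v+i*Rs)/a_Vt) - 1) - (v+i*Rs)/Rsh - i = 0 by bisection."""
    f = lambda i: Iph - I0 * (math.exp((v + i * Rs) / a_Vt) - 1) - (v + i * Rs) / Rsh - i
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)   # f is decreasing in i
    return 0.5 * (lo + hi)

# Scan the I-V curve and report the maximum power point.
best = max(((v, current(v)) for v in (k * 0.1 for k in range(0, 320))),
           key=lambda p: p[0] * p[1])
print(f"MPP ~ V={best[0]:.1f} V, I={best[1]:.2f} A, P={best[0]*best[1]:.1f} W")
```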
Abstract:
This paper presents a simulation of an offshore wind system in deep water under cloud scope. The system is equipped with a permanent magnet synchronous generator and a full-power three-level converter, converting the electric energy from variable frequency into constant frequency. The control strategies for the three-level converter are based on proportional-integral controllers. The electric energy is injected into the grid through an HVDC submarine transmission cable. The drive train is modeled by a three-mass model taking into account the resistant stiffness torque, the structure and the tower in the deep water due to the moving surface elevation. Conclusions are drawn on the influence of the moving surface on the energy conversion. © IFIP International Federation for Information Processing 2015.
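A generic three-mass torsional drive-train model (turbine, hub/shaft and generator masses coupled by stiffness and damping) can be written as three coupled rotational equations of motion. The forward-Euler sketch below uses invented inertias, stiffnesses and torques and is only a schematic of such a model; it does not include the paper's tower and moving-surface effects or its control loops.

```python
# Generic three-mass torsional drive-train model integrated with forward Euler.
# All parameters and torques are invented for illustration.
import numpy as np

J = np.array([4.0e6, 1.0e5, 5.0e4])        # inertias: turbine, hub, generator [kg m^2]
K = np.array([6.0e7, 1.4e7])               # shaft stiffnesses [N m / rad]
D = np.array([1.0e5, 4.0e4])               # shaft damping [N m s / rad]
T_wind, T_gen = 2.0e6, 1.9e6               # aerodynamic and generator torque [N m]

theta = np.zeros(3)                        # angles [rad]
omega = np.zeros(3)                        # angular speeds [rad/s]
dt = 2e-4

for _ in range(int(5.0 / dt)):             # simulate 5 s
    # Torques transmitted by the two elastic couplings.
    T12 = K[0] * (theta[0] - theta[1]) + D[0] * (omega[0] - omega[1])
    T23 = K[1] * (theta[1] - theta[2]) + D[1] * (omega[1] - omega[2])
    acc = np.array([(T_wind - T12) / J[0],
                    (T12 - T23) / J[1],
                    (T23 - T_gen) / J[2]])
    omega += acc * dt
    theta += omega * dt

# With constant torques the whole shaft slowly accelerates; the shaft twists
# settle once the torsional transients are damped out.
print("shaft twists [rad]:", round(theta[0] - theta[1], 4), round(theta[1] - theta[2], 4))
print("speeds [rad/s]    :", omega.round(4))
```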
Abstract:
Master's in Civil Engineering – Construction branch