983 results for Task-level parallelism
Abstract:
Includes bibliographical references.
Abstract:
Advancements in technology have enabled increasingly sophisticated automation to be introduced into the flight decks of modern aircraft. Generally, this automation was added to accomplish worthy objectives such as reducing flight crew workload, adding capability, or increasing fuel economy. Automation is necessary because not all of the functions required for mission accomplishment in today's complex aircraft are within the capabilities of the unaided human operator, who lacks the sensory capacity to detect much of the information required for flight. To a large extent, these objectives have been achieved. Nevertheless, despite the benefits of increasing amounts of highly reliable automation, vulnerabilities exist in flight crew management of automation and Situation Awareness (SA). Issues associated with flight crew management of automation include:
• Pilot understanding of automation's capabilities, limitations, modes, and operating principles and techniques.
• Differing pilot decisions about the appropriate automation level to use, or whether to turn automation on or off, in unusual or emergency situations.
• Human-Machine Interfaces (HMIs) that are not always easy to use, which can be problematic when pilots experience high-workload situations.
• Complex automation interfaces, large differences in automation philosophy and implementation among aircraft types, and inadequate training, all of which contribute to deficiencies in flight crew understanding of automation.
Abstract:
The present study was designed to examine the main and interactive effects of task demands, work control, and task information on levels of adjustment. Task demands, work control, and task information were manipulated in an experimental setting where participants completed a letter-sorting activity (N = 128). Indicators of adjustment included measures of positive mood, participants' perceptions of task performance, and task satisfaction. Results of the present study provided some support for the main effects of objective task demands, work control, and task information on levels of adjustment. At the subjective level of analysis, there was some evidence to suggest that work control and task information interacted in their effects on levels of adjustment. There was minimal support for the proposal that work control and task information would buffer the negative effects of task demands on adjustment. There was, however, some evidence to suggest that the stress-buffering role of subjective work control was more marked at high, rather than low, levels of subjective task information.
Abstract:
Background and Purpose: Functional MRI is a powerful tool to investigate recovery of brain function in patients with stroke. An inherent assumption in functional MRI data analysis is that the blood oxygenation level-dependent (BOLD) signal is stable over the course of the examination. In this study, we evaluated the validity of this assumption in patients with chronic stroke. Methods: Fifteen patients performed a simple motor task with repeated epochs using the paretic and the unaffected hand in separate runs. The corresponding BOLD signal time courses were extracted from the primary and supplementary motor areas of both hemispheres. Statistical maps were obtained by the conventional General Linear Model and by a parametric General Linear Model. Results: Stable BOLD amplitude was observed when the task was executed with the unaffected hand. Conversely, the BOLD signal amplitude in both primary and supplementary motor areas was progressively attenuated in every patient when the task was executed with the paretic hand. The conventional General Linear Model analysis failed to detect brain activation during movement of the paretic hand. However, the proposed parametric General Linear Model corrected the misdetection problem and showed robust activation in both primary and supplementary motor areas. Conclusions: The use of data analysis tools that are built on the premise of a stable BOLD signal may lead to misdetection of functional regions and underestimation of brain activity in patients with stroke. The present data urge the use of caution when relying on the BOLD response as a marker of brain reorganization in patients with stroke. (Stroke. 2010;41:1921-1926.)
Abstract:
It has long been supposed that the interference observed in certain patterns of coordination is mediated, at least in part, by peripheral afference from the moving limbs. We manipulated the level of afferent input, arising from movement of the opposite limb, during the acquisition of a complex coordination task. Participants learned to generate flexion and extension movements of the right wrist, of 75° amplitude, that were a quarter cycle out of phase with a 1-Hz sinusoidal visual reference signal. On separate trials, the left wrist either was at rest, or was moved passively by a torque motor through 50°, 75° or 100°, in synchrony with the reference signal. Five acquisition sessions were conducted on successive days. A retention session was conducted 1 week later. Performance was initially superior when the opposite limb was moved passively than when it was static. The amplitude and frequency of active movement were lower in the static condition than in the driven conditions and the variation in the relative phase relation across trials was greater than in the driven conditions. In addition, the variability of amplitude, frequency and the relative phase relation during each trial was greater when the opposite limb was static than when driven. Similar effects were expressed in electromyograms. The most marked and consistent differences in the accuracy and consistency of performance (defined in terms of relative phase) were between the static condition and the condition in which the left wrist was moved through 50°. These outcomes were exhibited most prominently during initial exposure to the task. Increases in task performance during the acquisition period, as assessed by a number of kinematic variables, were generally well described by power functions. In addition, the recruitment of extensor carpi radialis (ECR), and the degree of co-contraction of flexor carpi radialis and ECR, decreased during acquisition.
Our results indicate that, in an appropriate task context, afferent feedback from the opposite limb, even when out of phase with the focal movement, may have a positive influence upon the stability of coordination.
Abstract:
In the picture-word interference task, naming responses are facilitated when a distractor word is orthographically and phonologically related to the depicted object as compared to an unrelated word. We used event-related functional magnetic resonance imaging (fMRI) to investigate the cerebral hemodynamic responses associated with this priming effect. Serial (or independent-stage) and interactive models of word production that explicitly account for picture-word interference effects assume that the locus of the effect is at the level of retrieving phonological codes, a role attributed recently to the left posterior superior temporal cortex (Wernicke's area). This assumption was tested by randomly presenting participants with trials from orthographically related and unrelated distractor conditions and acquiring image volumes coincident with the estimated peak hemodynamic response for each trial. Overt naming responses occurred in the absence of scanner noise, allowing reaction time data to be recorded. Analysis of this data confirmed the priming effect. Analysis of the fMRI data revealed blood oxygen level-dependent signal decreases in Wernicke's area and the right anterior temporal cortex, whereas signal increases were observed in the anterior cingulate, the right orbitomedial prefrontal, somatosensory, and inferior parietal cortices, and the occipital lobe. The results are interpreted as supporting the locus for the facilitation effect as assumed by both classes of theoretical model of word production. In addition, our results raise the possibilities that, counterintuitively, picture-word interference might be increased by the presentation of orthographically related distractors, due to competition introduced by activation of phonologically related word forms, and that this competition requires inhibitory processes to be resolved. The priming effect is therefore viewed as being sufficient to offset the increased interference. 
We conclude that information from functional imaging studies might be useful for constraining theoretical models of word production. (C) 2002 Elsevier Science (USA).
Abstract:
In this paper we survey the most relevant results for the priority-based schedulability analysis of real-time tasks, both for fixed and dynamic priority assignment schemes. We give emphasis to the worst-case response time analysis in non-preemptive contexts, which is fundamental for communication schedulability analysis. We define an architecture to support priority-based scheduling of messages at the application process level of a specific fieldbus communication network, the PROFIBUS. The proposed architecture improves the worst-case message response times, overcoming the limitation of first-come-first-served (FCFS) PROFIBUS queue implementations.
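As a concrete illustration of the worst-case response-time analysis surveyed above, the classic fixed-priority recurrence can be sketched as follows. This is a simplified model, not the paper's analysis: the blocking term B_i approximates non-preemptive interference by the longest lower-priority execution, and the task parameters are illustrative assumptions.

```python
from math import ceil

def response_time(tasks, i):
    """Fixed-point iteration of R_i = C_i + B_i + sum over hp(i) of ceil(R_i/T_j)*C_j.

    tasks: list of (C, T) pairs sorted by decreasing priority; i indexes the
    task under analysis. Returns None if R_i exceeds the implicit deadline T_i.
    """
    C_i, T_i = tasks[i]
    # Longest lower-priority execution that can block task i (non-preemptive case).
    B_i = max((C for C, _ in tasks[i + 1:]), default=0)
    R = C_i + B_i
    while True:
        # Interference from each higher-priority task j released within window R.
        R_next = C_i + B_i + sum(ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        if R_next > T_i:
            return None  # misses its (implicit) deadline
        if R_next == R:
            return R
        R = R_next

tasks = [(1, 5), (2, 10), (3, 20)]  # (C, T), highest priority first
print([response_time(tasks, i) for i in range(3)])  # → [4, 7, 7]
```

The iteration terminates because the window is capped at T_i; a full non-preemptive analysis would additionally examine the level-i busy period.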
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another randomly selected busy core as a queue, stealing threads only from the top, whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may be enqueued after lower-priority tasks, possibly leading to deadline misses because the lower-priority tasks are then the candidates when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core, but unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
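The priority-deque idea above can be illustrated with a minimal, single-threaded sketch. The class and function names are assumptions for illustration, not the paper's implementation; lower numbers denote higher priority.

```python
from collections import deque

class Core:
    """Owns one deque per priority level; the owner works at the bottom (LIFO)."""

    def __init__(self):
        self.deques = {}  # priority -> deque of ready threads

    def push(self, prio, thread):
        # Owner pushes at the bottom of the deque for this priority.
        self.deques.setdefault(prio, deque()).append(thread)

    def pop(self):
        # Owner pops from the bottom of its highest-priority non-empty deque.
        for prio in sorted(self.deques):
            if self.deques[prio]:
                return self.deques[prio].pop()
        return None  # idle: time to steal

def steal(cores):
    # Unlike classic work-stealing, the thief does not pick a random victim:
    # it scans for the highest-priority non-empty deque in the system and
    # steals from its top (FIFO end), avoiding priority inversion.
    best = None
    for core in cores:
        for prio, d in core.deques.items():
            if d and (best is None or prio < best[0]):
                best = (prio, d)
    return best[1].popleft() if best else None
```

A real scheduler would make these operations lock-free or finely locked; the sketch only shows the victim-selection policy.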
Abstract:
Consider a single processor and a software system. The software system comprises components and interfaces where each component has an associated interface and each component comprises a set of constrained-deadline sporadic tasks. A scheduling algorithm (called global scheduler) determines at each instant which component is active. The active component uses another scheduling algorithm (called local scheduler) to determine which task is selected for execution on the processor. The interface of a component makes certain information about a component visible to other components; the interfaces of all components are used for schedulability analysis. We address the problem of generating an interface for a component based on the tasks inside the component. We desire to (i) incur only a small loss in schedulability analysis due to the interface and (ii) ensure that the amount of space (counted in bits) of the interface is small; this is because such an interface hides as much detail of the component as possible. We present an algorithm for generating such an interface.
Abstract:
A QoS adaptation to dynamically changing system conditions that takes into consideration the user's constraints on the stability of service provisioning is presented. The goal is to allow the system to make QoS adaptation decisions in response to fluctuations in task traffic flow, under the control of the user. We pay special attention to the case where monitoring the stability period and resource load variation of Service Level Agreements for different types of services is used to dynamically adapt future stability periods, according to a feedback control scheme. The system's adaptation behaviour can be configured according to a desired confidence level on future resource usage. The viability of the proposed approach is validated by preliminary experiments.
Abstract:
Characterizing locomotor tasks plays an important role in efforts to improve the quality of life of a growing elderly population. This paper addresses this matter by characterizing the locomotion of two population groups with different functional fitness levels (high or low) while executing three different tasks: gait, stair ascent and stair descent. Features were extracted from gait data, and feature selection methods were used to obtain the set of features that allows differentiation between functional fitness levels. Unsupervised learning was used to validate the sets obtained and, ultimately, indicated that it is possible to distinguish the two population groups. The sets of most discriminative features for each task are identified and thoroughly analysed. Copyright © 2014 SCITEPRESS - Science and Technology Publications. All rights reserved.
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with limited power supply as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms consists of optimising the energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states by trading off the dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown, in previous works, to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
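Since the evaluation above uses first fit as its baseline, a minimal sketch of that baseline may clarify the comparison. The per-core capacities and task utilisations below are illustrative assumptions; a real version would also account for frequency set-points and sleep states.

```python
def first_fit(utilisations, capacities):
    """Assign each task to the first core whose remaining capacity can hold it.

    utilisations: per-task utilisation demands, in arrival order.
    capacities:   per-core utilisation bounds (heterogeneous cores may differ).
    Returns a task->core mapping, or None if some task does not fit anywhere.
    """
    load = [0.0] * len(capacities)
    mapping = []
    for u in utilisations:
        for core, cap in enumerate(capacities):
            if load[core] + u <= cap:  # first core with enough spare capacity
                load[core] += u
                mapping.append(core)
                break
        else:
            return None  # no core can accommodate this task
    return mapping

# Heterogeneous platform: two full-capacity cores and one half-capacity core.
print(first_fit([0.5, 0.4, 0.3, 0.2], [1.0, 1.0, 0.5]))  # → [0, 0, 1, 1]
```

First fit greedily fills earlier cores, which is exactly the behaviour the paper's two-phase approach improves upon by considering energy rather than only capacity.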
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
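One plausible form of the erf-based, penalty-type merit function described above can be sketched as follows. The exact functional form and the constants rho and delta are assumptions for illustration, not taken from the paper.

```python
import math

def merit(F, x, found_roots, rho=10.0, delta=0.5):
    """Squared residual norm of F at x, plus a repulsion bump near known roots."""
    residual = sum(f ** 2 for f in F(x))
    penalty = 0.0
    for r in found_roots:
        d = math.sqrt(sum((xi - ri) ** 2 for xi, ri in zip(x, r)))
        # erf(d/delta) tends to 1 away from a known root and to 0 at it, so
        # (1 - erf) adds a bump that repels the search from roots already found.
        penalty += rho * (1.0 - math.erf(d / delta))
    return residual + penalty

# One-dimensional system with roots at x = 1 and x = -1:
F = lambda x: [x[0] ** 2 - 1.0]
print(merit(F, [1.0], []))  # → 0.0 (at a root, with no repulsion terms)
```

Minimising this merit function with Nelder-Mead then steers the local search away from previously located roots, which is the essence of a repulsion strategy.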
Abstract:
Nowadays, real-time systems are growing in importance and complexity. In the transition from uniprocessor to multiprocessor environments, the work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly due to the existence of multiple processors in the system. It was realised early on that the complexity of the problem does not grow linearly with the addition of processors. In fact, this complexity stands as a barrier to scientific progress in the area, which for now remains largely uncharted, and this is witnessed most clearly in task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to perform work that would never be possible in the former, thus creating new performance guarantees, lower monetary costs and lower energy consumption. This last factor emerged early on as perhaps the greatest barrier to the development of new uniprocessor designs: as new processors reached the market offering higher performance, they also revealed a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to increase and, naturally, new techniques for exploiting their inherent advantages must be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor algorithms have been developed to address this problem, most notably: global, partitioned and semi-partitioned. The global approach assumes the existence of a global queue accessible by all available processors.
This makes task migration possible, i.e., the execution of a task can be stopped and resumed on a different processor. At any given instant, the m highest-priority tasks of the task set are selected for execution. This type promises high utilisation bounds, at the high cost of task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions, and these are assigned to the available processors, i.e., one partition per processor. For this reason, task migration is not possible, so the utilisation bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid response between the previous cases: some tasks are partitioned, to be executed exclusively by a group of processors, while others are assigned to a single processor. This results in a solution capable of distributing the work to be performed in a more efficient and balanced way. Unfortunately, in all these cases there is a discrepancy between theory and practice, since concepts are assumed that do not hold in real life. To address this problem, it is necessary to implement these scheduling algorithms in real operating systems and assess their applicability so that, where the assumptions fail, the necessary changes can be made, both at the theoretical and at the practical level.
Abstract:
Doctoral Program in Computer Science