899 results for Task Performance
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors—such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment where the same restriction on task migration applies (intra-migrative) but given a platform in which only one processor of each type is 1 + α·(t−1)/t times faster and (ii) LPGNM succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative) but given a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all the task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state of the art.
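To make the speedup factors concrete, here is a minimal sketch (not the authors' code) that computes α from a hypothetical set of task utilizations and evaluates the two bounds quoted above; the utilization values and the single-utilization-per-task simplification are assumptions for illustration only.

```python
# Minimal sketch: the task-set parameter alpha and the two speedup factors
# quoted in the abstract, evaluated on hypothetical data.

def alpha(utilizations):
    """alpha = maximum of all task utilizations that are no greater than one."""
    eligible = [u for u in utilizations if u <= 1.0]
    return max(eligible) if eligible else 0.0

def intra_migrative_speedup(a, t):
    """LPGIM bound: one processor of each type is 1 + a*(t-1)/t times faster."""
    return 1.0 + a * (t - 1) / t

def non_migrative_speedup(a):
    """LPGNM bound: every processor is 1 + a times faster."""
    return 1.0 + a

# Hypothetical task set on a platform with t = 3 processor types.
utils = [0.9, 0.4, 1.3, 0.75]           # the utilization 1.3 (> 1) is ignored
a = alpha(utils)                        # -> 0.9
print(intra_migrative_speedup(a, 3))    # -> 1.6
print(non_migrative_speedup(a))         # -> 1.9
```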
Abstract:
Consider scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if there exists a feasible task-to-processor assignment, then LPC also succeeds in finding a feasible task-to-processor assignment, but on a platform in which each processor is 1.5× faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
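Since LPC is built around a linear program, the following sketch shows only the plain LP relaxation of task-to-processor-type assignment on a two-type platform, solved with scipy.optimize.linprog; the utilization matrix and processor counts are hypothetical, and the cutting-plane strengthening and rounding that give LPC its guarantee are not reproduced here.

```python
# Minimal sketch (assumed setup, not the LPC algorithm itself): LP relaxation
# of task-to-processor-type assignment on a two-type platform.
import numpy as np
from scipy.optimize import linprog

# Hypothetical implicit-deadline sporadic tasks: u[i][k] is the utilization of
# task i if it runs on a processor of type k.
u = np.array([[0.6, 0.3],
              [0.8, 0.5],
              [0.2, 0.9],
              [0.7, 0.4]])
m = np.array([1, 2])          # number of processors of each type
n, t = u.shape                # t == 2 for a two-type platform

# Decision variables x[i, k] in [0, 1]: fraction of task i assigned to type k.
A_eq = np.zeros((n, n * t))
for i in range(n):
    A_eq[i, i * t:(i + 1) * t] = 1.0          # each task is fully assigned
b_eq = np.ones(n)

A_ub = np.zeros((t, n * t))
for k in range(t):
    A_ub[k, k::t] = u[:, k]                   # total utilization on type k
b_ub = m.astype(float)                        # must fit that type's capacity

res = linprog(c=np.zeros(n * t), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print("LP relaxation feasible:", res.success)
print(res.x.reshape(n, t))                    # fractional assignment
```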
Abstract:
Structure and Infrastructure Engineering, 1-17
Abstract:
The goal of this study was to propose a new functional magnetic resonance imaging (fMRI) paradigm using a language-free adaptation of a 2-back working memory task to avoid cultural and educational bias. We additionally provide an index of the validity of the proposed paradigm and test whether the experimental task discriminates the behavioural performances of healthy participants from those of individuals with working memory deficits. Ten healthy participants and nine patients presenting working memory (WM) deficits due to acquired brain injury (ABI) performed the developed task. To verify whether the paradigm activates brain areas typically involved in visual working memory (VWM), brain activation of the healthy participants was assessed with fMRI. To examine the task's capacity to discriminate behavioural data, the performances of the healthy participants in the task were compared with those of the ABI patients. Data were analysed with GLM-based random-effects procedures and t-tests. We found an increase of the BOLD signal in the specialized areas of VWM. Concerning behavioural performance, healthy participants showed the predicted pattern of more hits, fewer omissions and a tendency for fewer false alarms, more self-corrected responses, and faster reaction times, when compared with subjects presenting WM impairments. The results suggest that this task activates brain areas involved in VWM and discriminates the behavioural performances of clinical and non-clinical groups. It can thus be used as a research methodology for behavioural and neuroimaging studies of VWM in block-design paradigms.
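As an illustration of how responses in such a 2-back task might be scored, here is a minimal sketch assuming the usual hit/omission/false-alarm definitions; the stimulus symbols and responses are hypothetical and this is not the authors' analysis pipeline.

```python
# Minimal sketch: marking hits, omissions, false alarms and correct rejections
# in a 2-back working memory run (hypothetical scoring rules).

def score_2back(stimuli, responses):
    """stimuli: sequence of items; responses: True if the participant
    signalled 'same as two items back' at that position."""
    hits = omissions = false_alarms = correct_rejections = 0
    for i in range(2, len(stimuli)):
        is_target = stimuli[i] == stimuli[i - 2]
        responded = responses[i]
        if is_target and responded:
            hits += 1
        elif is_target and not responded:
            omissions += 1
        elif not is_target and responded:
            false_alarms += 1
        else:
            correct_rejections += 1
    return {"hits": hits, "omissions": omissions,
            "false_alarms": false_alarms,
            "correct_rejections": correct_rejections}

# Hypothetical run with abstract, language-free symbols.
stimuli = ["◆", "●", "◆", "▲", "◆", "▲", "▲"]
responses = [False, False, True, False, True, False, False]
print(score_2back(stimuli, responses))   # 2 hits, 1 omission, 0 false alarms
```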
Abstract:
Postural control deficits are the most disabling aspects of Parkinson's disease (PD), resulting in decreased mobility and functional independence. The aim of this study was to assess postural control stability, as revealed by variables based on the centre of pressure (CoP), in individuals with PD while performing a sit-to-stand-to-sit sequence under single- and dual-task conditions. An observational, analytical and cross-sectional study was performed. The sample consisted of 9 individuals with PD and 9 healthy controls. A force platform was used to measure the CoP displacement and velocity during the sit-to-stand-to-sit sequence. The results were statistically analysed. Individuals with PD required greater durations for the sit-to-stand-to-sit sequence than the controls (p < 0.05). The anteroposterior and mediolateral CoP displacements were greater in the individuals with PD (p < 0.05). However, only the anteroposterior CoP velocity in the stand-to-sit phase was lower in these individuals (p = 0.006). Comparing the single- and dual-task conditions in both groups, the duration and the anteroposterior CoP displacement and velocity were higher in the dual-task condition (p < 0.05). The individuals with PD presented reduced postural control stability during the sit-to-stand-to-sit sequence, especially under the dual-task condition. These individuals have deficits not only in motor performance but also in cognitive performance when performing the sit-to-stand-to-sit sequence in their daily-life tasks. Moreover, both deficits tend to be intensified when two tasks are performed simultaneously.
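For readers unfamiliar with CoP-based measures, the following sketch shows one common way to derive displacement range and mean velocity in the anteroposterior and mediolateral directions from force-platform samples; the definitions and the simulated trial are assumptions, not the study's processing code.

```python
# Minimal sketch (assumed definitions): anteroposterior/mediolateral CoP
# displacement range and mean velocity from force-platform samples recorded
# during a sit-to-stand-to-sit trial.
import numpy as np

def cop_measures(cop_ap, cop_ml, fs):
    """cop_ap, cop_ml: CoP coordinates in metres; fs: sampling rate in Hz."""
    cop_ap, cop_ml = np.asarray(cop_ap), np.asarray(cop_ml)
    duration = len(cop_ap) / fs
    disp_ap = cop_ap.max() - cop_ap.min()                 # AP range (m)
    disp_ml = cop_ml.max() - cop_ml.min()                 # ML range (m)
    vel_ap = np.sum(np.abs(np.diff(cop_ap))) / duration   # mean AP velocity (m/s)
    vel_ml = np.sum(np.abs(np.diff(cop_ml))) / duration   # mean ML velocity (m/s)
    return duration, disp_ap, disp_ml, vel_ap, vel_ml

# Hypothetical 2-second trial sampled at 100 Hz.
t = np.linspace(0, 2, 200)
ap = 0.02 * np.sin(2 * np.pi * 0.5 * t)   # simulated AP sway
ml = 0.01 * np.sin(2 * np.pi * 1.0 * t)   # simulated ML sway
print(cop_measures(ap, ml, fs=100))
```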
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Nowadays, real-time systems are growing in importance and complexity. With the transition from the uniprocessor to the multiprocessor environment, the work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly because of the presence of multiple processors in the system. It was soon realized that the complexity of the problem does not grow linearly as processors are added. In fact, this complexity stands as a barrier to scientific progress in an area that, for now, remains poorly understood, and this is witnessed essentially in the case of task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to carry out work that would never be possible in the uniprocessor case, thus creating new performance guarantees, lower monetary cost and lower energy consumption. The latter factor was, from early on, perhaps the greatest barrier to the development of new uniprocessor chips: as new processors reached the market offering higher performance, they exposed a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to increase and, obviously, new techniques to exploit their inherent advantages must be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor algorithms have been developed to address this problem, notably: global, partitioned and semi-partitioned. The global approach assumes the existence of a global queue that is accessible by all available processors. This makes task migration possible, i.e., the execution of a task can be stopped and resumed on a different processor. At any given instant, the m highest-priority tasks in the task set are selected for execution. This category promises high utilization bounds, at a high cost in task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions, each of which is assigned to one of the available processors, i.e., one partition per processor. For that reason, task migration is not possible, so the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the previous two cases: some tasks are split so that they are executed exclusively by a group of processors, while others are assigned to a single processor. The result is a solution that can distribute the work to be done in a more efficient and balanced way. Unfortunately, in all these cases there is a discrepancy between theory and practice, because assumptions end up being made that do not hold in real life. To address this problem, it is necessary to implement these scheduling algorithms in real operating systems and verify their applicability, so that, where it falls short, the necessary changes can be made, both at the theoretical and at the practical level.
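As a simple illustration of the partitioned (non-migrative) category described above, the sketch below assigns tasks to processors by first-fit under the uniprocessor utilization test u ≤ 1; the task set is hypothetical and the schedulability test is a deliberate simplification, not the thesis's implementation.

```python
# Minimal sketch (illustration only, under the simple uniprocessor test
# "total utilization <= 1"): partitioned first-fit task assignment, i.e. the
# non-migrative scheduling category described above.

def partition_first_fit(utilizations, m):
    """Assign each task (given by its utilization) to the first processor
    where it fits. Returns per-processor partitions, or None on failure."""
    load = [0.0] * m
    partitions = [[] for _ in range(m)]
    for task, u in enumerate(utilizations):
        for p in range(m):
            if load[p] + u <= 1.0:        # uniprocessor utilization bound
                load[p] += u
                partitions[p].append(task)
                break
        else:
            return None                   # assignment failed; global or
                                          # semi-partitioned schemes would
                                          # resort to migration here
    return partitions

print(partition_first_fit([0.6, 0.5, 0.4, 0.3, 0.2], m=2))
# -> [[0, 2], [1, 3, 4]] with both processors fully loaded (1.0 each)
```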
Abstract:
Modelling of ventilation is strongly dependent on the physical characteristics of the building, whose precise evaluation is a complex and time-consuming task. Within the framework of a research project, two children's day care centres (CDCC) were selected in order to measure the envelope air permeability, the flow rate of the mechanical ventilation systems and the indoor and outdoor temperature. The data obtained were used as input to the computer code CONTAM for ventilation simulations. The results obtained were compared with direct measurements of ventilation flow from short-term measurements with CO2 tracer gas and medium-term measurements with the perfluorocarbon tracer (PFT) gas decay method. After validation, in order to analyse the main parameters that affect ventilation, the model was used to predict the ventilation rates for a wide range of conditions. The purpose of this assessment was to find the best practices to improve natural ventilation. A simple analytical method to predict the ventilation flow rate of rooms is also presented. The method is based on the estimation of the wind effect on the room through the evaluation of an average factor and on the assessment of the relevant cross-sections of gaps and openings combined in series or in parallel. It is shown that it may be applied with acceptable accuracy for this type of building when ventilation is due essentially to wind action.
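The abstract does not spell out how the gaps and openings are combined; the sketch below assumes the standard orifice-flow model, under which parallel openings add their free areas and openings in series combine as a reciprocal root-sum-of-squares, with hypothetical areas and wind-induced pressure.

```python
# Minimal sketch (assumed orifice-flow model, not the paper's method):
# combining the free areas of gaps/openings in parallel and in series when
# estimating a room's wind-driven ventilation flow.
from math import sqrt

def parallel(areas):
    """Openings side by side see the same pressure drop: areas add."""
    return sum(areas)

def series(areas):
    """Openings in series carry the same flow: 1/A_eff^2 = sum(1/A_i^2)."""
    return 1.0 / sqrt(sum(1.0 / a**2 for a in areas))

def orifice_flow(area, dp, cd=0.6, rho=1.2):
    """Volume flow (m^3/s) through an opening of 'area' m^2 at 'dp' Pa."""
    return cd * area * sqrt(2.0 * dp / rho)

# Hypothetical room: two facade gaps in parallel, in series with a door gap.
a_facade = parallel([0.003, 0.002])        # m^2
a_eff = series([a_facade, 0.010])          # m^2
print(orifice_flow(a_eff, dp=4.0))         # flow at a 4 Pa wind pressure
```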
Abstract:
Allergy affects at least one-quarter of European schoolchildren; it reduces quality of life and may impair school performance, and there is a risk of severe reactions and, in rare cases, death. Allergy is a multi-system disorder, and children often have several co-existing diseases, i.e. allergic rhinitis, asthma, eczema and food allergy. Severe food allergy reactions may occur for the first time at school, and overall 20% of food allergy reactions occur in schools. Up to two-thirds of schools have at least one child at risk of anaphylaxis, but many are poorly prepared. A cooperative partnership between doctors, community and school nurses, school staff, parents and the child is necessary to ensure that allergic children are protected. Schools and doctors should adopt a comprehensive approach to allergy training, ensuring that all staff can prevent, recognize and initiate treatment of allergic reactions.
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization area: Medical Electronics)
Abstract:
The paper reflects the work of COST Action TU1403 Workgroup 3/Task group 1. The aim is to identify research needs from a review of the state of the art of three aspects related to adaptive façade systems: (1) dynamic performance requirements; (2) façade design under stochastic boundary conditions and (3) experiences with adaptive façade systems and market needs.
Abstract:
The relationship between estimated and real motor competence was analysed for several tasks. Participants were 303 children (160 boys and 143 girls), aged between 6 and 10 years (M=8.63, SD=1.16). None of the children presented developmental difficulties or learning disabilities, and all attended age-appropriate classes. Children were divided into three groups according to their age: group 1 (N=102; age range: 6.48-8.01 years); group 2 (N=101; age range: 8.02-9.22 years); and group 3 (N=100; age range: 9.24-10.93 years). Children were asked to predict their maximum distance for a locomotor, a manipulative, and a balance task prior to performing those tasks. Children's estimations were compared with their real performance to determine their accuracy. Children had, in general, a tendency to overestimate their performance (standing long jump: 56.11%, kicking: 63.37%, throwing: 73.60%, and Walking Backwards (WB) on a balance beam: 45.21%), and older children tended to be more accurate, except in the manipulative tasks. Furthermore, the relationship between estimation and real performance was analysed in children with different levels of motor coordination (Körperkoordinationstest für Kinder, KTK). The 75 children with the highest scores comprised the Highest Motor Coordination (HMC) group, and the 78 children with the lowest scores were placed in the Lowest Motor Coordination (LMC) group. There was a tendency for both LMC and HMC children to overestimate their skills in all tasks, except for the HMC group in the WB task. Children in the HMC group tended to be more accurate when predicting their motor performance; however, differences in absolute percent error were only significant for the throwing and WB tasks. In conclusion, children display a tendency to overestimate their performance independently of their motor coordination level and the task. This may be decisive for the development of their motor competence, since they are more likely to engage and persist in motor tasks, but it might also increase the occurrence of unintended injuries.
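Assuming the absolute percent error mentioned above is |estimated − real| / real × 100, here is a minimal sketch of the accuracy computation with hypothetical values; the metric definition and the trial data are assumptions, not the study's own analysis.

```python
# Minimal sketch (hypothetical data and assumed metric): absolute percent
# error between a child's estimated and real maximum distance, with a simple
# over/underestimation flag.

def estimation_accuracy(estimated, real):
    """Return (absolute percent error, 'over'/'under'/'exact')."""
    ape = abs(estimated - real) / real * 100.0
    if estimated > real:
        tendency = "over"
    elif estimated < real:
        tendency = "under"
    else:
        tendency = "exact"
    return ape, tendency

# Hypothetical standing-long-jump trial (centimetres).
print(estimation_accuracy(estimated=130.0, real=110.0))  # -> (18.18..., 'over')
```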
Abstract:
Rats were treated postnatally (PND 5-16) with BSO (L-buthionine-(S,R)-sulfoximine) in an animal model of schizophrenia based on a transient glutathione deficit. The BSO-treated rats were impaired in patrolling a maze or a homing table when adult, yet demonstrated preserved escape learning, place discrimination and reversal in a water maze task [37]. In the present work, the BSO rats' performance in the water maze was assessed in conditions controlling for the available visual cues. First, in a completely curtained environment with two salient controlled cues, BSO rats showed poor accuracy compared to control rats. Second, pre-trained BSO rats were impaired in reaching the familiar spatial position when curtains partially occluded different portions of the room environment in successive sessions. The apparently preserved place learning in a classical water maze task thus appears to require the stability and richness of the visual landmarks in the surrounding environment. In other words, the accuracy of BSO rats in place and reversal learning is impaired in a minimal-cue condition or when the visual panorama changes between trials. However, if the panorama remains rich and stable between trials, BSO rats are equally efficient in reaching a familiar position or in learning a new one. This suggests that the accurate performance of BSO rats in the water maze does not satisfy all the criteria for cognitive-map-based navigation relying on the integration of polymodal cues. It supports the general hypothesis of a binding deficit in BSO rats.
Abstract:
Performance analysis is the task of monitoring the behaviour of a program's execution. The main goal is to find out which adjustments might be made in order to improve performance. To achieve that improvement, it is necessary to identify the different causes of overhead. Nowadays we are already in the multicore era, but there is a gap between the levels of development of the two main branches of multicore technology (hardware and software). When we talk about multicore we are also talking about shared-memory systems; this master's thesis addresses the issues involved in the performance analysis and tuning of applications running specifically on a shared-memory system. We go one step further and take performance analysis to another level by analysing the applications' structure and patterns. We also present some tools specifically aimed at the performance analysis of OpenMP multithreaded applications. Finally, we present the results of some experiments performed with a set of OpenMP scientific applications.
Abstract:
Manual dexterity, a prerogative of primates, is under the control of the corticospinal (CS) tract. Because 90-95% of CS axons decussate, it is assumed that this control is exerted essentially on the contralateral hand. Consistently, a unilateral lesion of the hand representation in the motor cortex is followed by a complete loss of dexterity of the contralesional hand. During the months following the lesion, spontaneous recovery of manual dexterity takes place to a highly variable extent across subjects, although it remains largely incomplete. In the present study, we tested the hypothesis that, after a significant postlesion period, manual performance in the ipsilesional hand is correlated with the extent of functional recovery in the contralesional hand. To this end, ten adult macaque monkeys were subjected to a permanent unilateral motor cortex lesion. The monkeys' manual performance was assessed for each hand during several months postlesion, using our standard behavioral test (modified Brinkman board task), which provides a quantitative measure of reach-and-grasp ability. The ipsilesional hand's performance was found to be significantly enhanced over the long term (100-300 days postlesion) in six of the ten monkeys, these six also exhibiting the best, though incomplete, recovery of the contralesional hand. There was a statistically significant correlation (r = 0.932; P < 0.001) between performance in the ipsilesional hand after a significant postlesion period and the extent of recovery of the contralesional hand. This observation is interpreted in terms of different possible mechanisms of recovery, dependent on the recruitment of motor areas in the lesioned and/or intact hemispheres.