16 results for Dynamic task allocation
in Aston University Research Archive
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task allocation in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types. When an agent processes a mail batch of a different type from the previous one, it must undergo a change-over, with repeated change-overs rendering the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated both in static and dynamic environments and with respect to sudden changes. New rules for mail selection and specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation.
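As an illustration of the kind of decentralised selection rule described above, the sketch below implements a simple response-threshold choice with a change-over penalty. It is a minimal sketch under assumed parameters (thresholds, penalty, stimulus form), not the paper's actual rules.

```python
import random

# Illustrative threshold-based mail-selection rule (not the paper's rule):
# an agent prefers its current mail type and discounts types that would
# force a change-over.

def choose_batch(agent_type, thresholds, stimuli, changeover_penalty=0.5):
    """Pick a mail type to process, or None if the agent stays idle.

    thresholds: dict mail_type -> response threshold (low = specialised)
    stimuli:    dict mail_type -> amount of mail waiting of that type
    """
    probs = {}
    for mail_type, s in stimuli.items():
        theta = thresholds[mail_type]
        p = s**2 / (s**2 + theta**2)          # classic response-threshold form
        if mail_type != agent_type:
            p *= (1.0 - changeover_penalty)   # discourage switching mail types
        probs[mail_type] = p
    # Sample each type independently; take the strongest accepted stimulus
    accepted = [t for t, p in probs.items() if random.random() < p]
    return max(accepted, key=lambda t: stimuli[t]) if accepted else None

# Example: an agent specialised on type 'A' facing two queues
print(choose_batch('A', {'A': 1.0, 'B': 1.0}, {'A': 5.0, 'B': 8.0}))
```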
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite size effects, the influence of population size on the performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
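The market-based side of such a hybrid can be illustrated in the same toy setting: each agent bids for the task type with the best expected payoff, discounting types that would force a change-over. The prices, costs, and payoff form below are assumptions for illustration, not the algorithm evaluated in the paper.

```python
# Illustrative market-style task selection (not the paper's exact mechanism):
# each agent bids for the task type with the highest net value, where the
# value discounts tasks that would force a change-over.

def market_bid(agent_type, stimuli, prices, changeover_cost=0.3):
    """Return (mail_type, bid) with the highest net value for this agent.

    stimuli: dict mail_type -> amount of mail waiting
    prices:  dict mail_type -> reward per unit of mail processed
    """
    best_type, best_value = None, 0.0
    for mail_type, amount in stimuli.items():
        value = prices[mail_type] * amount
        if mail_type != agent_type:
            value -= changeover_cost * amount   # cost of re-tooling to a new type
        if value > best_value:
            best_type, best_value = mail_type, value
    return best_type, best_value

# Example: a higher price on 'B' can outweigh the change-over cost
print(market_bid('A', {'A': 4.0, 'B': 5.0}, {'A': 1.0, 'B': 1.2}))
```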
Abstract:
When designing a practical swarm robotics system, self-organized task allocation is key to making best use of resources. Current research in this area focuses on task allocation which is either distributed (tasks must be performed at different locations) or sequential (tasks are complex and must be split into simpler sub-tasks and processed in order). In practice, however, swarms will need to deal with tasks which are both distributed and sequential. In this paper, a classic foraging problem is extended to incorporate both distributed and sequential tasks. The problem is analysed theoretically, absolute limits on performance are derived, and a set of conditions for a successful algorithm are established. It is shown empirically that an algorithm which meets these conditions, by causing emergent cooperation between robots, can achieve consistently high performance under a wide range of settings without the need for communication. © 2013 IEEE.
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
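A rough sketch of a PSO loop with an added dispersion step is given below: when the swarm collapses onto a small region, part of it is re-scattered to counter premature convergence. The dispersion rule, bounds, and parameters are illustrative assumptions, not the dispersive PSO variant defined in the thesis.

```python
import random

# Minimal PSO with an illustrative dispersion step (not the thesis's
# dispersive PSO): when swarm diversity collapses, half of the particles
# are re-scattered over the search range to counter premature convergence.

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, div_min=1e-3):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
        # Dispersion: if the particles have collapsed onto the centroid,
        # re-scatter half of the swarm to restore diversity.
        centroid = [sum(p[d] for p in pos) / n for d in range(dim)]
        diversity = sum(sum((p[d] - centroid[d]) ** 2 for d in range(dim))
                        for p in pos) / n
        if diversity < div_min:
            for i in random.sample(range(n), n // 2):
                pos[i] = [random.uniform(-5, 5) for _ in range(dim)]
    return gbest

# Example: minimise the sphere function
print(pso(lambda x: sum(v * v for v in x)))
```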
Abstract:
Third Generation cellular communication systems are expected to support mixed cell architecture in which picocells, microcells and macrocells are used to achieve full coverage and increase the spectral capacity. Supporting higher numbers of mobile terminals and the use of smaller cells will result in an increase in the number of handovers, and consequently an increase in the time delays required to perform these handovers. Higher time delays will generate call interruptions and forced terminations, particularly for time sensitive applications like real-time multimedia and data services. Currently in the Global System for Mobile communications (GSM), the handover procedure is initiated and performed by the fixed part of the Public Land Mobile Network (PLMN). The mobile terminal is only capable of detecting candidate base stations suitable for the handover; it is the role of the network to interrogate a candidate base station for a free channel. Handover signalling is exchanged via the fixed network and the time delay required to perform the handover is greatly affected by the levels of teletraffic handled by the network. In this thesis, a new handover strategy is developed to reduce the total time delay for handovers in a microcellular system. The handover signalling is diverted from the fixed network to the air interface to prevent extra delays due to teletraffic congestion, and to allow the mobile terminal to exchange signalling directly with the candidate base station. The new strategy utilises Packet Reservation Multiple Access (PRMA) technique as a mechanism to transfer the control of the handover procedure from the fixed network to the mobile terminal. Simulation results are presented to show a dramatic reduction in the handover delay as compared to those obtained using fixed channel allocation and dynamic channel allocation schemes.
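To make the mechanism concrete, the toy simulation below runs the basic PRMA contention/reservation cycle over a sequence of uplink frames; the slot count, permission probability, and traffic are made-up values, and the handover signalling itself is not modelled.

```python
import random

# Toy simulation of the PRMA contention/reservation cycle that the proposed
# handover signalling rides on; slot count, permission probability and traffic
# are illustrative values, not the system parameters used in the thesis.

def prma_frame(terminals, reservations, n_slots=8, p_permission=0.3):
    """Simulate one uplink frame.

    terminals:    dict id -> packets still queued (mutated)
    reservations: dict slot -> terminal id holding that slot (mutated)
    Returns the number of packets delivered in this frame.
    """
    delivered = 0
    for slot in range(n_slots):
        holder = reservations.get(slot)
        if holder is not None:                       # reserved slot: no contention
            terminals[holder] -= 1
            delivered += 1
            if terminals[holder] == 0:               # talkspurt over, release slot
                del reservations[slot]
        else:                                        # available slot: contention
            contenders = [t for t, q in terminals.items()
                          if q > 0 and t not in reservations.values()
                          and random.random() < p_permission]
            if len(contenders) == 1:                 # success only without collision
                t = contenders[0]
                reservations[slot] = t
                terminals[t] -= 1
                delivered += 1
    return delivered

terminals = {f"MT{i}": random.randint(1, 5) for i in range(6)}
reservations = {}
for frame in range(4):
    print(frame, prma_frame(terminals, reservations), dict(reservations))
```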
Abstract:
The research was instigated by the Civil Aviation Authority (CAA) to examine the implications for air traffic controllers' (ATCO) job satisfaction of the possible introduction of systems incorporating computer-assisted decision making. Additional research objectives were to assess the possible costs of reductions in ATCO job satisfaction, and to recommend appropriate task allocation between ATCOs and computer for future systems design (Chapter 1). Following a review of the literature (Chapter 2) it is argued that existing approaches to systems and job design do not allow for a sufficiently early consideration of employee needs and satisfactions in the design of complex systems. The present research develops a methodology for assessing affective reactions to an existing system as a basis for making recommendations for future systems design (Chapter 3). The method required analysis of job content using two techniques: (a) task analysis (Chapter 4.1) and (b) the Job Diagnostic Survey (JDS). ATCOs' affective reactions to the several operational positions on which they work were investigated at three levels of detail: (a) reactions to positions, obtained by ranking techniques (Chapter 4.2); (b) reactions to job characteristics, obtained by use of the JDS (Chapter 4.3); and (c) reactions to tasks, obtained by use of the Repertory Grid technique (Chapter 4.4). The conclusion is drawn that ATCOs' motivation and satisfaction are greatly dependent on the presence of challenge, often through tasks requiring the use of decision making and other cognitive skills. Results suggest that the introduction of systems incorporating computer-assisted decision making might result in financial penalties for the CAA and significant reductions in job satisfaction for ATCOs. General recommendations are made for allocation of tasks in future systems design (Chapter 5).
Abstract:
Fibre overlay is a cost-effective technique to alleviate wavelength blocking in some links of a wavelength-routed optical network by increasing the number of wavelengths in those links. In this letter, we investigate the effects of overlaying fibre in an all-optical network (AON) based on GÉANT2 topology. The constraint-based routing and wavelength assignment (CB-RWA) algorithm locates where cost-efficient upgrades should be implemented. Through numerical examples, we demonstrate that the network capacity improves by 25 per cent by overlaying fibre on 10 per cent of the links, and by 12 per cent by providing hop reduction links comprising 2 per cent of the links. For the upgraded network, we also show the impact of dynamic traffic allocation on the blocking probability. Copyright © 2010 John Wiley & Sons, Ltd.
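The CB-RWA algorithm itself is not reproduced here, but the minimal first-fit wavelength assignment below shows how wavelength blocking arises when no common free wavelength exists along a route, which is the effect the fibre overlay is meant to relieve. The topology and demands are invented for illustration.

```python
# Minimal first-fit wavelength assignment on a made-up three-hop topology,
# showing how a lightpath is blocked once no wavelength is free on every hop.
# This is a generic RWA illustration, not the CB-RWA algorithm of the paper.

N_WAVELENGTHS = 4
links = {("A", "B"): set(), ("B", "C"): set(), ("C", "D"): set()}  # link -> used wavelengths

def assign_lightpath(path):
    """Assign the first wavelength free on every hop of the path, or None if blocked."""
    hops = list(zip(path, path[1:]))
    for wl in range(N_WAVELENGTHS):
        if all(wl not in links[hop] for hop in hops):
            for hop in hops:
                links[hop].add(wl)
            return wl
    return None  # blocked: no common free wavelength along the route

# Five identical requests over A-B-C-D: the fifth is blocked
for _ in range(5):
    print(assign_lightpath(("A", "B", "C", "D")))
```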
Abstract:
The problem of resource allocation in sparse graphs with real variables is studied using methods of statistical physics. An efficient distributed algorithm is devised on the basis of insight gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
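A toy version of the setting is sketched below: nodes on a sparse graph hold surpluses or deficits of a real-valued resource and repeatedly ship resource to neighbours along the local gradient. This diffusion-style iteration only illustrates the problem; it is not the message-passing algorithm derived from the statistical-physics analysis.

```python
import random

# Toy distributed balancing of a real-valued resource on a sparse graph; the
# diffusion rule below is an illustration of the setting, not the paper's
# distributed algorithm.

random.seed(0)
N = 20
neighbours = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}   # ring backbone
for _ in range(N):                        # a few random chords keep the graph sparse
    a, b = random.sample(range(N), 2)
    neighbours[a].add(b)
    neighbours[b].add(a)

capacity = {i: random.uniform(-1.0, 1.0) for i in range(N)}      # surplus or deficit

def balance(x, neighbours, steps=500, rate=0.05):
    """Iteratively move resource across each edge in proportion to the imbalance."""
    x = dict(x)
    edges = {tuple(sorted((i, j))) for i in neighbours for j in neighbours[i]}
    for _ in range(steps):
        delta = {i: 0.0 for i in x}
        for i, j in edges:
            shift = rate * (x[i] - x[j])  # resource flows down the local gradient
            delta[i] -= shift
            delta[j] += shift
        for i in x:
            x[i] += delta[i]
    return x

final = balance(capacity, neighbours)
print(max(final.values()) - min(final.values()))   # residual imbalance after balancing
```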
Abstract:
Recent functional magnetic resonance imaging (fMRI) investigations of the interaction between cognition and reward processing have found that the lateral prefrontal cortex (PFC) areas are preferentially activated to both increasing cognitive demand and reward level. Conversely, ventromedial PFC (VMPFC) areas show decreased activation to the same conditions, indicating a possible reciprocal relationship between cognitive and emotional processing regions. We report an fMRI study of a rewarded working memory task, in which we further explore how the relationship between reward and cognitive processing is mediated. We not only assess the integrity of reciprocal neural connections between the lateral PFC and VMPFC brain regions in different experimental contexts but also test whether additional cortical and subcortical regions influence this relationship. Psychophysiological interaction analyses were used as a measure of functional connectivity in order to characterize the influence of both cognitive and motivational variables on connectivity between the lateral PFC and the VMPFC. Psychophysiological interactions revealed negative functional connectivity between the lateral PFC and the VMPFC in the context of high memory load, and high memory load in tandem with a highly motivating context, but not in the context of reward alone. Physiophysiological interactions further indicated that the dorsal anterior cingulate and the caudate nucleus modulate this pathway. These findings provide evidence for a dynamic interplay between lateral PFC and VMPFC regions and are consistent with an emotional gating role for the VMPFC during cognitively demanding tasks. Our findings also support neuropsychological theories of mood disorders, which have long emphasized a dysfunctional relationship between emotion/motivational and cognitive processes in depression.
Abstract:
Adults show great variation in their auditory skills, such as being able to discriminate between foreign speech-sounds. Previous research has demonstrated that structural features of auditory cortex can predict auditory abilities; here we are interested in the maturation of 2-Hz frequency-modulation (FM) detection, a task thought to tap into mechanisms underlying language abilities. We hypothesized that an individual's FM threshold will correlate with gray-matter density in left Heschl's gyrus, and that this function-structure relationship will change through adolescence. To test this hypothesis, we collected anatomical magnetic resonance imaging data from participants who were tested and scanned at three time points: at 10, 11.5 and 13 years of age. Participants judged which of two tones contained FM; the modulation depth was adjusted using an adaptive staircase procedure and their threshold was calculated based on the geometric mean of the last eight reversals. Using voxel-based morphometry, we found that FM threshold was significantly correlated with gray-matter density in left Heschl's gyrus at the age of 10 years, but that this correlation weakened with age. While there were no differences between girls and boys at Times 1 and 2, at Time 3 there was a relationship between FM threshold and gray-matter density in left Heschl's gyrus in boys but not in girls. Taken together, our results confirm that the structure of the auditory cortex can predict temporal processing abilities, namely that gray-matter density in left Heschl's gyrus can predict 2-Hz FM detection threshold. This ability is dependent on the processing of sounds changing over time, a skill believed necessary for speech processing. We tested this assumption and found that FM threshold significantly correlated with spelling abilities at Time 1, but that this correlation was found only in boys. This correlation decreased at Time 2, and at Time 3 we found a significant correlation between reading and FM threshold, but again, only in boys. We examined the sex differences in both the imaging and behavioral data taking into account pubertal stages, and found that the correlation between FM threshold and spelling was strongest pre-pubertally, and the correlation between FM threshold and gray-matter density in left Heschl's gyrus was strongest mid-pubertally.
Abstract:
Neuroimaging studies have consistently shown that working memory (WM) tasks engage a distributed neural network that primarily includes the dorsolateral prefrontal cortex, the parietal cortex, and the anterior cingulate cortex. The current challenge is to provide a mechanistic account of the changes observed in regional activity. To achieve this, we characterized neuroplastic responses in effective connectivity between these regions at increasing WM loads using dynamic causal modeling of functional magnetic resonance imaging data obtained from healthy individuals during a verbal n-back task. Our data demonstrate that increasing memory load was associated with (a) right-hemisphere dominance, (b) increasing forward (i.e., posterior to anterior) effective connectivity within the WM network, and (c) reduction in individual variability in WM network architecture resulting in the right-hemisphere forward model reaching an exceedance probability of 99% in the most demanding condition. Our results provide direct empirical support that task difficulty, in our case WM load, is a significant moderator of short-term plasticity, complementing existing theories of task-related reduction in variability in neural networks. Hum Brain Mapp, 2013. © 2013 Wiley Periodicals, Inc.
Abstract:
Class-based service differentiation is provided in DiffServ networks. However, this differentiation will be disordered under dynamic traffic loads due to the fixed weighted scheduling. An adaptive weighted scheduling scheme is proposed in this paper to achieve fair bandwidth allocation among different service classes. In this scheme, the number of active flows and the subscribed bandwidth are estimated based on the measurement of local queue metrics, and the scheduling weights of each service class are then adjusted to achieve per-flow fairness in the allocation of excess bandwidth. This adaptive scheme can be combined with any weighted scheduling algorithm. Simulation results show that, compared with fixed weighted scheduling, it effectively improves the fairness of excess bandwidth allocation.
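The weight-adjustment idea can be sketched as follows: give each class its subscribed bandwidth, split the excess in proportion to the estimated number of active flows, and turn the resulting shares into scheduler weights. The estimation from queue metrics is abstracted away and all numbers are illustrative, not the scheme's actual formulas.

```python
# Hedged sketch of adaptive weight adjustment: subscribed bandwidth plus a
# per-flow fair share of the excess, normalised into scheduling weights.
# The estimation of active flows from queue metrics is abstracted away.

def adapt_weights(link_capacity, subscribed, active_flows):
    """Return per-class scheduling weights (fractions summing to 1).

    subscribed:   dict class -> subscribed (guaranteed) bandwidth
    active_flows: dict class -> estimated number of active flows
    """
    excess = link_capacity - sum(subscribed.values())
    total_flows = sum(active_flows.values()) or 1
    share = {c: subscribed[c] + excess * active_flows[c] / total_flows
             for c in subscribed}
    total = sum(share.values())
    return {c: share[c] / total for c in share}

# Example: class EF is lightly loaded, AF carries most of the active flows
print(adapt_weights(100.0,
                    subscribed={"EF": 20.0, "AF": 30.0, "BE": 10.0},
                    active_flows={"EF": 2, "AF": 10, "BE": 4}))
```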
Abstract:
Four patients who had received an anterior cingulotomy (ACING) and five patients who had received both an ACING and an anterior capsulotomy (ACAPS) as an intervention for chronic, treatment-refractory depression were presented with a series of dynamic emotional stimuli and invited to identify the emotion portrayed. Their performance was compared with that of a group of non-surgically treated patients with major depression (n = 17) and with a group of matched, never-depressed controls (n = 22). At the time of testing, four of the nine neurosurgery patients had recovered from their depressive episode, whereas five remained depressed. Analysis of emotion recognition accuracy revealed no significant differences between depressed and non-depressed neurosurgically treated patients. Similarly, no significant differences were observed between the patients treated with ACING alone and those treated with both ACING and ACAPS. Comparison of the emotion recognition accuracy of the neurosurgically treated patients and the depressed and healthy control groups revealed that the surgically treated patients exhibited a general impairment in their recognition accuracy compared to healthy controls. Regression analysis revealed that participants' emotion recognition accuracy was predicted by the number of errors they made on the Stroop colour-naming task. It is plausible that the observed deficit in emotion recognition accuracy was a consequence of impaired attentional control, which may have been a result of the surgical lesions to the anterior cingulate cortex. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
Computational performance increasingly depends on parallelism, and many systems rely on heterogeneous resources such as GPUs and FPGAs to accelerate computationally intensive applications. However, implementations for such heterogeneous systems are often hand-crafted and optimised for one computation scenario, and it can be challenging to maintain high performance when application parameters change. In this paper, we demonstrate that machine learning can help to dynamically choose parameters for task scheduling and load-balancing based on changing characteristics of the incoming workload. We use a financial option pricing application as a case study. We propose a simulation of processing financial tasks on a heterogeneous system with GPUs and FPGAs, and show how dynamic, on-line optimisations could improve such a system. We compare on-line and batch processing algorithms, and we also consider cases with no dynamic optimisations.
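A hedged sketch of the on-line idea: keep a running estimate of each device's observed throughput and route the next batch of tasks proportionally, so the split adapts when workload characteristics change. The moving-average learner and simulated device speeds below are assumptions, not the machine-learning model used in the paper.

```python
import random

# Illustrative on-line load balancer for a two-accelerator system: an
# exponential moving average of observed throughput decides how the next
# batch of tasks is split between devices.  Device speeds are simulated and
# all parameters are assumptions.

class OnlineBalancer:
    def __init__(self, devices, alpha=0.2):
        self.rate = {d: 1.0 for d in devices}    # estimated tasks per second
        self.alpha = alpha

    def split(self, n_tasks):
        total = sum(self.rate.values())
        return {d: round(n_tasks * r / total) for d, r in self.rate.items()}

    def observe(self, device, tasks_done, seconds):
        measured = tasks_done / seconds
        self.rate[device] += self.alpha * (measured - self.rate[device])

# Simulated devices whose relative speed changes with the workload mix
true_speed = {"gpu": 8.0, "fpga": 5.0}
bal = OnlineBalancer(true_speed)
for step in range(20):
    if step == 10:                               # workload characteristics change
        true_speed.update(gpu=3.0, fpga=6.0)
    for dev, n in bal.split(100).items():
        if n == 0:
            continue
        seconds = n / (true_speed[dev] * random.uniform(0.9, 1.1))
        bal.observe(dev, n, seconds)
print(bal.split(100))                            # split after adapting to the change
```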