886 results for Task partitioning
Abstract:
In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic that finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system, and makes use of the Optimal Priority Assignment (OPA) algorithm, known as Audsley’s algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases on average the number of schedulable tasks and messages in a distributed system when compared to the use of Deadline Monotonic (DM), usually favoured in other works. Afterwards, we extend these results to the assignment of Parallel/Distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
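For readers unfamiliar with Audsley's algorithm referenced in this abstract, the following is a minimal sketch of the Optimal Priority Assignment procedure in Python. The schedulability test `is_schedulable` is an assumed, caller-supplied routine (for example a response-time analysis for the chosen partition); its name and signature are illustrative and not part of DOPA itself.

```python
# Minimal sketch of Audsley's Optimal Priority Assignment (OPA).
# `is_schedulable(task, higher_priority_tasks)` is an assumed,
# caller-supplied schedulability test; its name and signature are
# illustrative, not taken from the paper.

def audsley_opa(tasks, is_schedulable):
    """Return the tasks ordered from lowest to highest priority,
    or None if no feasible priority assignment exists."""
    unassigned = list(tasks)
    assignment = []          # filled from the lowest priority level upwards
    while unassigned:
        for task in unassigned:
            others = [t for t in unassigned if t is not task]
            # Is `task` schedulable at the current (lowest remaining)
            # priority, with every other unassigned task above it?
            if is_schedulable(task, others):
                unassigned.remove(task)
                assignment.append(task)
                break
        else:
            return None      # no task fits this priority level: infeasible
    return assignment        # assignment[0] holds the lowest priority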
Abstract:
By comparing the behavior of three Acromyrmex (Hymenoptera, Formicidae) species during foraging on artificial trails of different lengths, we observed the occurrence of task partitioning and its relation to the distance of the food from the nest. Task partitioning was verified by leaf cache formation along the trail and by direct leaf transfer among workers. There was a significant difference between the number of leaf fragments carried directly to the fungus chamber and those transferred directly or indirectly, via cache, depending upon the trail length. Task partitioning could be a strategy used by leaf-cutting ants that allows the workers to use food sources far from their nests.
Abstract:
Social facilitation occurs when an animal is more likely to behave in a certain way in response to other animals engaged in the same behaviour. For example, an individual returning to the nest with food stimulates other ants to leave and to forage. In the present study we demonstrate the existence of new facets in the colony organization of Dinoponera quadriceps: a positive feedback between the incoming food and the activation of new foragers, and the occurrence of incipient task partitioning during the food sharing. Lower-ranked workers located inside the nest process protein resources and higher-ranked workers handle smaller pieces and distribute them to the larvae. In conclusion, D. quadriceps has a decentralized pattern of task allocation with a double regulatory mechanism, which can be considered a sophisticated aspect of division of labour in ponerine ants.
Abstract:
Stingless bees collect plant resins and make them into propolis, although they have a wider range of uses for this material than do honey bees (Apis spp.). Plebeia spp. workers employ propolis mixed with wax (cerumen) for constructing and sealing nest structures, while they use viscous (sticky) propolis for defense by applying it onto their enemies. Isolated viscous propolis deposits are permanently maintained in the interior of their colonies, as also seen in other Meliponini species. Newly-emerged Plebeia emerina (Friese) workers were observed stuck to and unable to escape these viscous propolis stores. We examined the division of labor involved in propolis manipulation by observing marked bees of known age in four colonies of P. emerina from southern Brazil. Activities on brood combs, the nest involucrum and food pots were observed from the first day of life of the marked bees. However, work on viscous propolis deposits did not begin until the 13th day of age and continued until the 56th day (the maximum lifespan in our sample). Although worker bees begin to manipulate cerumen early, they seem to be unable to handle viscous propolis until they become older.
Abstract:
The performance benefit when using grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is offset by synchronization overheads, mainly due to the high variability in the execution times of the different tasks, which, in turn, is accentuated by the large heterogeneity of grid nodes. In this paper we design hierarchical queuing network performance models able to accurately analyze grid architectures and applications. Thanks to the model results, we introduce a new allocation policy based on a combination of task partitioning and task replication. The models are used to study two real applications and to evaluate the performance benefits obtained with allocation policies based on task replication.
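As a rough illustration of how partitioning and replication can be combined, the sketch below partitions tasks across nodes by mean runtime and then replicates the most variable tasks on a second node so that the first finishing copy masks stragglers. The data layout and the replication budget are assumptions made for illustration, not the policy derived from the paper's queuing network models.

```python
# Rough illustration (not the paper's exact policy): least-loaded
# partitioning by mean runtime, followed by replication of the most
# variable tasks. Assumes at least two nodes.

def allocate(tasks, nodes, replication_budget):
    """tasks: list of (task_id, mean_runtime, runtime_variance);
    nodes: list of node ids. Returns {node_id: [task_id, ...]}."""
    allocation = {n: [] for n in nodes}
    load = {n: 0.0 for n in nodes}

    # Phase 1: partitioning -- largest tasks first, onto the least-loaded node.
    for task_id, mean, _var in sorted(tasks, key=lambda t: -t[1]):
        node = min(nodes, key=lambda n: load[n])
        allocation[node].append(task_id)
        load[node] += mean

    # Phase 2: replication -- duplicate the most variable tasks on one
    # extra node each; whichever copy finishes first is kept.
    most_variable = sorted(tasks, key=lambda t: -t[2])[:replication_budget]
    for task_id, mean, _var in most_variable:
        home = next(n for n in nodes if task_id in allocation[n])
        replica = min((n for n in nodes if n != home), key=lambda n: load[n])
        allocation[replica].append(task_id)
        load[replica] += mean
    return allocation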
Abstract:
In honeybees (Apis mellifera), the process of nectar collection is considered a straightforward example of task partitioning with two subtasks or two intersecting cycles of activity: (1) foraging and (2) storing of nectar, linked via its transfer between foragers and food processors. Many observations suggest, however, that nectar collection and processing in honeybees is a complex process, involving workers of other sub-castes and depending on variables such as resource profitability or the amount of stored honey. It has been observed that food-processor bees often distribute food to other hive bees after receiving it from incoming foragers, instead of storing it immediately in honey cells. While there is little information about the sub-caste affiliation and the behaviour of these second-order receivers, this stage may be important for the rapid distribution of nutrients and related information. To investigate the identity of these second-order receivers, we quantified behaviours following nectar transfer and compared them with the behaviour of average worker hive bees. Furthermore, we tested whether food quality (sugar concentration) affects the behaviour of the second-order receivers. Of all identified second-order receivers, 59.3% performed nurse duties, 18.5% performed food-processor duties and 22.2% performed forager duties. After food intake, these bees were more active and had more trophallaxes (especially offering contacts) compared to average workers, and they were found mainly in the brood area, independent of food quality. Our results show that liquid food can be distributed rapidly among many bees of the three main worker sub-castes without being stored in honey cells first. Furthermore, the results suggest that the rapid distribution of food partly depends on the high activity of second-order receivers.
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid 80s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations, and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
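A small sketch of the task granularity control idea mentioned in this abstract, assuming a hypothetical per-goal cost estimate supplied by the compiler's cost analysis: a goal is spawned as a parallel task only when its estimated cost is large enough to amortise the scheduling overhead, otherwise it runs inline.

```python
# Sketch of task granularity control: a candidate parallel goal is spawned
# as a separate task only when its estimated cost exceeds a threshold;
# otherwise it runs sequentially. `run_goal` and the cost estimate are
# assumed inputs, not APIs of any particular compiler.
from concurrent.futures import Future, ThreadPoolExecutor

def maybe_spawn(pool, goal, estimated_cost, run_goal, threshold=10_000):
    """Return a Future for `goal`, spawning it only if coarse enough."""
    if estimated_cost >= threshold:
        return pool.submit(run_goal, goal)   # coarse-grained: run in parallel
    done = Future()                          # fine-grained: run inline
    done.set_result(run_goal(goal))
    return done

# Usage sketch:
# with ThreadPoolExecutor() as pool:
#     f = maybe_spawn(pool, some_goal, cost_estimate(some_goal), run_goal)
#     result = f.result()
```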
Abstract:
In this paper, a system that allows applying precision agriculture techniques is described. The application is based on the deployment of a team of unmanned aerial vehicles that are able to take georeferenced pictures in order to create a full map by applying mosaicking procedures for postprocessing. The main contribution of this work is practical experimentation with an integrated tool. Contributions in different fields are also reported. Among them is a new one-phase automatic task partitioning manager, which is based on negotiation among the aerial vehicles, considering their state and capabilities. Once the individual tasks are assigned, an optimal path planning algorithm is in charge of determining the best path for each vehicle to follow. Also, a robust flight control based on the use of a control law that improves the maneuverability of the quadrotors has been designed. A set of field tests was performed in order to analyze all the capabilities of the system, from task negotiations to final performance. These experiments also allowed testing control robustness under different weather conditions.
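The sketch below illustrates one plausible form of single-round, cost-based negotiation for splitting survey areas among vehicles. The distance-only bids and the fixed per-vehicle capacity are assumptions for illustration and do not reproduce the negotiation protocol actually used in the paper.

```python
# Illustrative one-round, cost-based assignment of survey areas to UAVs.
# The bid function (straight-line distance) and `capacity` are assumed.
import math

def negotiate(areas, vehicles, capacity):
    """areas: {area_id: (x, y)} centroids; vehicles: {uav_id: (x, y)} positions.
    Each UAV takes at most `capacity` areas; the lowest-cost bidder wins."""
    assignment = {v: [] for v in vehicles}
    for area_id, (ax, ay) in areas.items():
        bidders = sorted(
            vehicles,
            key=lambda v: math.hypot(ax - vehicles[v][0], ay - vehicles[v][1]),
        )
        for uav in bidders:                 # skip vehicles that are already full
            if len(assignment[uav]) < capacity:
                assignment[uav].append(area_id)
                break
    return assignment
```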
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
Abstract:
Consumers are often less satisfied with a product chosen from a large assortment than a limited one. Experienced choice difficulty presumably causes this as consumers have to engage in a great number of individual comparisons. In two studies we tested whether partitioning the choice task so that consumers decided sequentially on each individual attribute may provide a solution. In a Starbucks coffee house, consumers who chose from the menu rated the coffee as less tasty when chosen from a large rather than a small assortment. However, when the consumers chose it by sequentially deciding about one attribute at a time, the effect reversed. In a tailored-suit customization, consumers who chose multiple attributes at a time were less satisfied with their suit, compared to those who chose one attribute at a time. Sequential attribute-based processing proves to be an effective strategy to reap the benefits of a large assortment.
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
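To make the filter-and-refine idea concrete, here is a hedged sketch of a grid-partitioned spatial join on bounding boxes. The cell size, the duplicate check and the `refine` placeholder (standing in for the exact geometric test) are illustrative assumptions and do not reproduce the paper's algorithms.

```python
# Hedged sketch of a grid-based spatial join with multi-assignment and the
# filter-and-refine strategy. Objects are represented by bounding boxes.
from collections import defaultdict

def cells_for(bbox, cell):
    """Yield every grid cell that the bounding box overlaps (multi-assignment)."""
    x1, y1, x2, y2 = bbox
    for cx in range(int(x1 // cell), int(x2 // cell) + 1):
        for cy in range(int(y1 // cell), int(y2 // cell) + 1):
            yield (cx, cy)

def grid_spatial_join(r_set, s_set, cell=100.0, refine=lambda a, b: True):
    """r_set, s_set: {obj_id: (x1, y1, x2, y2)}. Each grid cell is an
    independent join task; duplicates caused by multi-assignment are
    removed so that each result pair is reported only once."""
    grid_r, grid_s = defaultdict(list), defaultdict(list)
    for oid, bbox in r_set.items():
        for c in cells_for(bbox, cell):
            grid_r[c].append((oid, bbox))
    for oid, bbox in s_set.items():
        for c in cells_for(bbox, cell):
            grid_s[c].append((oid, bbox))

    results, seen = [], set()
    for c in grid_r:                          # each cell = one parallel task
        for rid, rb in grid_r[c]:
            for sid, sb in grid_s.get(c, []):
                if (rid, sid) in seen:
                    continue                  # duplicate from multi-assignment
                if (rb[0] <= sb[2] and sb[0] <= rb[2] and
                        rb[1] <= sb[3] and sb[1] <= rb[3]):   # filter step
                    if refine(rb, sb):        # refine step (exact geometry)
                        results.append((rid, sid))
                        seen.add((rid, sid))
    return results
```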
Abstract:
Modern multicore processors for the embedded market are often heterogeneous in nature. One feature that is often available is a set of multiple sleep states with varying transition costs for entering and leaving said sleep states. This research effort explores energy-efficient task mapping on such a heterogeneous multicore platform to reduce the overall energy consumption of the system. This is performed in the context of a partitioned scheduling approach and a very realistic power model, which improves over some of the simplifying assumptions often made in the state-of-the-art. The developed heuristic consists of two phases: in the first phase, tasks are allocated to minimise their active energy consumption, while the second phase trades off a higher active energy consumption for an increased ability to exploit savings through more efficient sleep states. Extensive simulations demonstrate the effectiveness of the approach.
Abstract:
Consider the problem of scheduling a set of sporadic tasks on a multiprocessor system to meet deadlines using a task-splitting scheduling algorithm. Task-splitting (also called semi-partitioning) scheduling algorithms assign most tasks to just one processor, but a few tasks are assigned to two or more processors, and they are dispatched in a way that ensures that a task never executes on two or more processors simultaneously. One type of task-splitting algorithm, called slot-based task-splitting dispatching, is of particular interest because of its ability to schedule tasks with high processor utilizations. Unfortunately, no slot-based task-splitting algorithm has been implemented in a real operating system so far. In this paper we discuss and propose some modifications to the slot-based task-splitting algorithm driven by implementation concerns, and we report the first implementation of this family of algorithms in a real operating system running Linux kernel version 2.6.34. We have also conducted an extensive range of experiments on a 4-core multicore desktop PC running task sets with utilizations of up to 88%. The results show that the behavior of our implementation is in line with the theoretical framework behind it.
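The core invariant of slot-based dispatching (that a split task never runs on two processors at once) can be illustrated with a toy computation of the two reserves of one split task within a synchronised time slot: the task runs in a reserve at the end of the slot on processor p and in a reserve at the start of the slot on processor p+1, and the two reserves never overlap in time. The parameter names below are illustrative and are not taken from the implementation described in the paper.

```python
# Toy illustration of slot-based dispatching for one split task shared by
# processors p and p+1; not the paper's dispatcher.

def split_reserves(slot_len, share_p, share_p1):
    """share_p / share_p1: fraction of each slot the split task receives on
    processors p and p+1 (their sum must stay below 1 so the reserves
    cannot overlap in time within the slot)."""
    assert share_p + share_p1 < 1.0, "reserves would overlap within the slot"
    end_reserve_p = (slot_len * (1.0 - share_p), slot_len)   # tail of slot on p
    begin_reserve_p1 = (0.0, slot_len * share_p1)            # head of slot on p+1
    return end_reserve_p, begin_reserve_p1

# Example: 1 ms slots; the split task gets 30% of each slot on p and 20% on p+1.
print(split_reserves(1.0, 0.30, 0.20))   # ((0.7, 1.0), (0.0, 0.2))
```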
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with limited power supply as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms consists of optimising the energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states by trading off the dynamic energy consumption with the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, tasks energy consumption and sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best-case, savings up to 18% of energy are reached over the first fit algorithm, which has shown, in previous works, to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
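A hedged sketch of the two-phase idea described above, with placeholder energy functions: phase 1 places each task where its dynamic energy is lowest, and phase 2 tries to vacate the least-loaded core so it can remain in a deep sleep state, committing the moves only if the estimated leakage saving outweighs the added dynamic energy. This is an illustration of the general approach, not the paper's algorithm; `dyn_energy` and `deep_sleep_saving` are assumed, user-supplied models.

```python
# Hedged sketch of a two-phase energy-aware allocation; the energy model
# hooks are placeholders, not the paper's power model.

def allocate(tasks, cores, dyn_energy, deep_sleep_saving):
    """tasks: {tid: utilisation}; cores: list of core ids;
    dyn_energy(tid, core) -> dynamic energy of running tid on core;
    deep_sleep_saving(core) -> leakage saved if core stays asleep.
    Returns {core: set of tids}. Assumes the task set is feasible."""
    alloc = {c: set() for c in cores}
    util = {c: 0.0 for c in cores}

    # Phase 1: minimise dynamic energy subject to utilisation <= 1 per core.
    for tid, u in sorted(tasks.items(), key=lambda kv: -kv[1]):
        dst = min((c for c in cores if util[c] + u <= 1.0),
                  key=lambda c: dyn_energy(tid, c))
        alloc[dst].add(tid)
        util[dst] += u

    # Phase 2: plan moving everything off the least-loaded core.
    donor = min(cores, key=lambda c: util[c])
    tentative, moves, extra_dyn = dict(util), [], 0.0
    for tid in alloc[donor]:
        u = tasks[tid]
        targets = [c for c in cores if c != donor and tentative[c] + u <= 1.0]
        if not targets:
            return alloc                      # cannot vacate: keep phase-1 result
        dst = min(targets, key=lambda c: dyn_energy(tid, c))
        extra_dyn += dyn_energy(tid, dst) - dyn_energy(tid, donor)
        tentative[dst] += u
        moves.append((tid, dst, u))

    if deep_sleep_saving(donor) > extra_dyn:  # worthwhile: commit the moves
        for tid, dst, u in moves:
            alloc[donor].discard(tid)
            alloc[dst].add(tid)
            util[dst] += u
        util[donor] = 0.0
    return alloc
```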
Abstract:
Presented at the Work-in-Progress Session of the IEEE Real-Time Systems Symposium (RTSS 2015), 1-4 December 2015, San Antonio, U.S.A.