816 results for Task allocation


Relevance:

60.00%

Publisher:

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer executing a global trust update scheme as a subroutine. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains.
To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is flexible and can be extended to incorporate metadata associated with questions. To demonstrate this, we further propose two extended models, one of which handles tasks with real-valued features and the other of which handles tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment adapted to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine-learning outputs, or a hybrid of these. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, the workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers. The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost.
The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completing a challenging task is more costly than answering a click-away question. Here, we address the problem of optimally assigning heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates the budget, the trustworthiness of the crowd, and the cost of obtaining labels: a higher budget, a more trustworthy crowd, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component; therefore, it can be combined with generic trust evaluation algorithms.
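The abstract does not spell out the allocation algorithm itself, but the trade-off it describes (budget vs. worker trust vs. task difficulty) can be conveyed with a toy greedy sketch. Everything below — the cost model, the residual-error update, and all names — is an illustrative assumption, not the thesis's actual algorithm:

```python
# Illustrative sketch (NOT the thesis's algorithm): greedily buy the label
# that gives the largest reduction in expected error per unit cost, until
# the budget is exhausted. A worker's trust = probability of a correct answer.

def allocate(tasks, workers, budget):
    """tasks: {task_id: difficulty in (0, 1]}
    workers: {worker_id: (trust in (0.5, 1], cost_per_label)}
    Returns (assignments, residual errors per task, total spent)."""
    err = {t: 0.5 for t in tasks}   # 50/50 prior on each binary question
    assignments = []
    spent = 0.0
    while True:
        best = None  # (error reduction per unit cost, task, worker, cost, new_err)
        for t, diff in tasks.items():
            for w, (trust, base_cost) in workers.items():
                cost = base_cost * diff      # harder tasks cost more
                if spent + cost > budget:
                    continue
                # Crude fusion: one more answer of accuracy `trust`
                # multiplies the residual error (independence assumption).
                new_err = err[t] * (1.0 - trust)
                gain = (err[t] - new_err) / cost
                if best is None or gain > best[0]:
                    best = (gain, t, w, cost, new_err)
        if best is None:
            break  # the budget cannot buy any further label
        _, t, w, cost, new_err = best
        assignments.append((t, w))
        spent += cost
        err[t] = new_err
    return assignments, err, spent
```

Under this toy model each purchased label multiplies a task's residual error by (1 - trust), so the greedy rule simply buys the cheapest large error reductions first and stops when the budget runs out.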

Relevance:

60.00%

Publisher:

Abstract:

Companies operating in the wood processing industry need to increase their productivity by implementing automation technologies in their production systems. Increasing global competition and rising raw material prices challenge their competitiveness. Yet too extensive automation brings risks such as deteriorated situation awareness and operator deskilling. The concept of Levels of Automation is generally seen as a means to achieve a balanced task allocation between the operators' skills and competences and the need for automation technology to relieve humans from repetitive or hazardous work activities. The aim of this thesis was to examine to what extent existing methods for assessing Levels of Automation in production processes are applicable in the wood processing industry when focusing on an improved competitiveness of production systems. This was done by answering the following research questions (RQ): RQ1: Which method is most appropriate for measuring Levels of Automation in the wood processing industry? RQ2: How can the measurement of Levels of Automation contribute to an improved competitiveness of the wood processing industry's production processes? Literature reviews were used to identify the main characteristics of the wood processing industry affecting its automation potential, as well as appropriate assessment methods for Levels of Automation, in order to answer RQ1. When selecting the most suitable method, factors such as relevance to the target industry, application complexity, and the operational level the method penetrates were important. The DYNAMO++ method, which covers both a rather quantitative technical-physical dimension and a more qualitative social-cognitive dimension, was seen as most appropriate when taking these factors into account.
To answer RQ2, a case study was undertaken at a major Swedish manufacturer of interior wood products to point out how the measurement of Levels of Automation can contribute to an improved competitiveness of the wood processing industry. The focus was on the task level on the shop floor, and concrete improvement suggestions were elaborated after applying the measurement method for Levels of Automation. The main aspects considered for generalization were enhancements regarding ergonomics in process design and cognitive support tools for shop-floor personnel through task standardization. Furthermore, difficulties in automating grading and sorting processes, due to the heterogeneous material properties of wood, argue for a suitable arrangement of human intervention options in terms of work task allocation. The application of a modified version of DYNAMO++ during the case study revealed its pros and cons: it fosters high operator involvement in the improvement process, but it is distinctly predisposed to application in assembly systems.

Relevance:

60.00%

Publisher:

Abstract:

Over one million people lost their lives in the last twenty years to natural disasters such as wildfires and earthquakes, and to man-made disasters. In such scenarios, using a fleet of robots aims at parallelizing the workload, thus increasing the speed and the capability to complete time-sensitive missions. This work focuses on the development of a dynamic fleet management system, which consists in the management of multiple agents cooperating to accomplish tasks. We present a Mixed Integer Programming problem for the management and planning of a mission's tasks. The problem is solved using both an exact and a heuristic approach; the latter is based on the idea of iteratively solving smaller instances of the complete problem. Alongside, a fast and efficient algorithm for estimating travel times between tasks is proposed. Experimental results demonstrate that, within specific time limits, the proposed heuristic approach generates solutions of good quality compared to the exact one.
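The exact and heuristic approaches are only summarized above; the contrast can be illustrated on a toy instance in which robots at fixed depots must visit task locations. The brute-force enumeration, the nearest-neighbour routing, and the chunked heuristic below are illustrative stand-ins for the paper's MIP formulation and its iterative heuristic, not reproductions of them:

```python
import itertools
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(start, pts):
    """Travel time of one robot visiting pts in nearest-neighbour order."""
    cost, here, todo = 0.0, start, list(pts)
    while todo:
        nxt = min(todo, key=lambda p: dist(here, p))
        cost += dist(here, nxt)
        here = nxt
        todo.remove(nxt)
    return cost

def total_cost(robots, assignment, tasks):
    """assignment[i] = index of the robot serving tasks[i]."""
    return sum(route_cost(robots[r],
                          [tasks[i] for i, a in enumerate(assignment) if a == r])
               for r in range(len(robots)))

def exact(robots, tasks):
    """Enumerate every task-to-robot assignment (tiny instances only)."""
    return min(itertools.product(range(len(robots)), repeat=len(tasks)),
               key=lambda a: total_cost(robots, a, tasks))

def heuristic(robots, tasks, chunk=3):
    """Solve the mission as a sequence of smaller exact subproblems:
    assign `chunk` tasks at a time, keeping earlier decisions fixed."""
    assignment = []
    for lo in range(0, len(tasks), chunk):
        sub = tasks[lo:lo + chunk]
        best = min(itertools.product(range(len(robots)), repeat=len(sub)),
                   key=lambda a: total_cost(robots, assignment + list(a),
                                            tasks[:lo + len(sub)]))
        assignment.extend(best)
    return tuple(assignment)
```

The heuristic trades optimality for an exponentially smaller search space per step, which mirrors the paper's reported behaviour: slightly worse objective values, but obtained within strict time limits.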

Relevance:

30.00%

Publisher:

Abstract:

A dissociation between two putative measures of resource allocation, skin conductance responding and secondary task reaction time (RT), has been observed during auditory discrimination tasks. Four experiments investigated the time course of this dissociation effect with a visual discrimination task. Participants were presented with circles and ellipses and instructed to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of the other shape (task-irrelevant). Concurrent with this task, participants made a speeded motor response to an auditory probe. Experiment 1 showed that skin conductance responses were larger during task-relevant stimuli than during task-irrelevant stimuli, whereas RT to probes presented 150 ms after shape onset was slower during task-irrelevant stimuli. Experiments 2 to 4 found slower RT during task-irrelevant stimuli for probes presented from 300 ms before shape onset until 150 ms after shape onset. For probes presented 3,000 and 4,000 ms after shape onset, RT was slower during task-relevant stimuli. The similarities between the observed time course and the so-called psychological refractory period (PRP) effect are discussed.

Relevance:

30.00%

Publisher:

Abstract:

The effect that the difficulty of the discrimination between task-relevant and task-irrelevant stimuli has on the relationship between skin conductance orienting and secondary task reaction time (RT) was examined. Participants (N = 72) counted the number of longer-than-usual presentations of one shape (task-relevant) and ignored presentations of another shape (task-irrelevant). The difficulty of discriminating between the two shapes varied across three groups (low, medium, and high difficulty). Simultaneously with the primary counting task, participants performed a secondary RT task to acoustic probes presented 50, 150, and 2000 ms following shape onset. Skin conductance orienting was larger, and secondary RT at the 2000 ms probe position slower, during task-relevant shapes than during task-irrelevant shapes in the low-difficulty group. This difference declined as the discrimination difficulty increased, such that there was no difference in the high-difficulty group. Secondary RT was slower during task-irrelevant shapes than during task-relevant shapes only in the medium-difficulty group, and only at the 150 ms probe position in the first half of the experiment. The close relationship between autonomic orienting and secondary RT at the 2000 ms probe position suggests that orienting reflects the resource allocation that results from the number of matching features between a stimulus input and a mental representation primed as significant.

Relevance:

30.00%

Publisher:

Abstract:

Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with a limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms is optimising energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states, trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption, and the sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are reached over the first-fit algorithm, which has been shown in previous works to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
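As a rough illustration of such a two-phase scheme (with invented task sizes, per-cycle energies, leakage powers, and capacities — the paper's actual power model is considerably more detailed):

```python
# Illustrative two-phase sketch (made-up power numbers, not the paper's model).
# Phase 1: put each task on the core where its dynamic energy is lowest.
# Phase 2: try emptying a core entirely when the dynamic-energy penalty is
# outweighed by the leakage saved by letting that core sleep.

def allocate(tasks, cores, period=100.0):
    """tasks: {name: cycles}. cores: {name: (dyn_energy_per_cycle,
    leakage_power, capacity_in_cycles)}. Returns (placement, total_energy)."""
    load = {c: 0.0 for c in cores}
    place = {}

    # Phase 1: greedy, largest task first, cheapest dynamic energy that fits.
    for t, cyc in sorted(tasks.items(), key=lambda kv: -kv[1]):
        c = min((c for c in cores if load[c] + cyc <= cores[c][2]),
                key=lambda c: cores[c][0] * cyc)
        place[t], load[c] = c, load[c] + cyc

    def energy():
        dyn = sum(cores[place[t]][0] * cyc for t, cyc in tasks.items())
        leak = sum(cores[c][1] * period for c in cores if load[c] > 0)
        return dyn + leak  # a fully idle core is assumed to sleep (no leakage)

    # Phase 2: migrate a whole core's load onto another core if that saves energy.
    improved = True
    while improved:
        improved = False
        for src in cores:
            if load[src] == 0:
                continue
            for dst in cores:
                if dst == src or load[dst] + load[src] > cores[dst][2]:
                    continue
                before = energy()
                undo_place, undo_load = dict(place), dict(load)
                for t in [t for t in place if place[t] == src]:
                    place[t] = dst
                load[dst], load[src] = load[dst] + load[src], 0.0
                if energy() < before:
                    improved = True
                else:
                    place, load = undo_place, undo_load
    return place, energy()
```

The design point this sketch captures is that minimising dynamic energy alone (phase 1) can leave many cores lightly loaded and leaking; consolidating work (phase 2) is only accepted when the measured total, dynamic plus leakage, actually decreases.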

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Increased brain plasticity is likely involved in the reallocation of cortical regions and in the many microstructural alterations observed in autism. Considering the numerous findings demonstrating enhanced perceptual functioning and atypical motor functioning in autism, increased brain plasticity suggests greater individual variability in functional allocation in this population, specifically in perceptual and motor regions. Method: To test this hypothesis, 23 high-functioning autistic participants and 22 non-autistic participants, matched for age, intelligence quotient, scores on Raven's Matrices, and handedness, performed a visuo-motor imitation task in a functional magnetic resonance imaging (fMRI) scanner. For each participant, the coordinates of the highest activation peak were extracted from the primary (Brodmann Area 4 (BA4)) and supplementary (BA6) motor areas, the superior parietal visuo-motor cortex (BA7), and the primary (BA17) and associative (BA18+19) visual areas of both hemispheres. The extent of the activations, measured as the number of activated voxels, and the difference in activation intensity, computed from the mean change in signal intensity, were also considered. For each region of interest and hemisphere, the distance between the location of each participant's maximal activation and that of the group mean served as the variable of interest. The means of these individual distances obtained for each group and each region of interest were then submitted to a repeated-measures ANOVA to determine whether there were between-group differences in the variability of activation locations.
Finally, overall functional activation within each group and between groups was also examined. Results: The results show that increased individual variability in the location of activations occurred within both groups in the associative motor and visual areas compared with the corresponding primary areas. Nevertheless, although this increased variability in the associative areas was shared, a direct between-group comparison showed that autistic participants exhibited greater variability in the location of functional activations in the superior parietal visuo-motor cortex (BA7) and the associative visual areas (BA18+19) of the left hemisphere. Conclusion: Different, possibly individually unique, strategies appear to be observed in autism. The increased individual variability in the location of functional activations found in autistic participants in the associative areas, where greater variability is also observed in non-autistics, suggests that increased and/or altered plasticity mechanisms are involved in autism.

Relevance:

30.00%

Publisher:

Abstract:

Attentional allocation to emotional stimuli is often proposed to be driven by valence, and in particular by negativity. However, many negative stimuli are also arousing, leaving open the question of whether valence or arousal accounts for this effect. The authors examined whether the valence or the arousal level of emotional stimuli influences the allocation of spatial attention using a modified spatial cueing task. Participants responded to targets that were preceded by cues consisting of emotional pictures varying in arousal and valence. Response latencies showed that disengagement of spatial attention was slower for stimuli high in arousal than for stimuli low in arousal. The effect was independent of the valence of the pictures and was not gender-specific. The findings support the idea that arousal affects the allocation of attention.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a novel mobile sink area allocation scheme for consumer-based mobile robotic devices, with a proven application to robotic vacuum cleaners. In the home or office environment, rooms are physically separated by walls, and an automated robotic cleaner cannot decide on its own which room to move to for the cleaning task; likewise, state-of-the-art cleaning robots do not move to other rooms without direct human interference. In a smart home monitoring system, sensor nodes may be deployed to monitor each separate room. In this work, a quad-tree-based data gathering scheme is proposed whereby the mobile sink physically moves through every room and logically links all the separated sub-networks together. The proposed scheme sequentially collects data from the monitored environment and transmits the information back to a base station. Based on the sensor nodes' information, the base station can command a cleaning robot to move to a specific location in the home environment. The quad-tree-based data gathering scheme minimizes the data gathering tour length and time through the efficient allocation of data gathering areas. A calculated shortest-path data gathering tour can be efficiently allocated to the robotic cleaner so that it completes the cleaning task within a minimum time period. Simulation results show that the proposed scheme can effectively allocate and control the cleaning area for the robot vacuum cleaner without any direct interference from the consumer. The performance of the proposed scheme is then validated with a set of practical sequential data gathering tours in a typical office/home environment.
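A minimal sketch of the underlying idea (hypothetical helper names; the paper's actual scheme and tour computation are not reproduced here): recursively split the monitored area into quadrants until each leaf holds only a few sensor nodes, then let the mobile sink visit the non-empty leaf centres on a nearest-neighbour tour:

```python
import math

def quad_areas(sensors, xmin, ymin, size, cap=2):
    """Recursively split a square region into quadrants until each leaf
    holds at most `cap` sensors; return the centres of non-empty leaves."""
    inside = [(x, y) for x, y in sensors
              if xmin <= x < xmin + size and ymin <= y < ymin + size]
    if not inside:
        return []
    if len(inside) <= cap:
        return [(xmin + size / 2, ymin + size / 2)]
    half = size / 2
    out = []
    for dx in (0, half):
        for dy in (0, half):
            out += quad_areas(inside, xmin + dx, ymin + dy, half, cap)
    return out

def gathering_tour(base, stops):
    """Nearest-neighbour tour from the base station through every leaf
    centre and back; returns (ordered tour, tour length)."""
    tour, here, todo = [base], base, list(stops)
    length = 0.0
    while todo:
        nxt = min(todo, key=lambda p: math.dist(here, p))
        length += math.dist(here, nxt)
        tour.append(nxt)
        here = nxt
        todo.remove(nxt)
    length += math.dist(here, base)  # return to the base station
    return tour + [base], length
```

Each leaf centre stands for one allocated data gathering area (roughly, one room's cluster of sensors), so shrinking the number and spread of leaves directly shrinks the sink's tour.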

Relevance:

30.00%

Publisher:

Abstract:

Arousing stimuli, either threat-related or pleasant, may be selected for priority at different stages within the processing stream. Here we examine the pattern of processing for non-task-relevant threatening stimuli (spiders: arousing to some) and pleasant stimuli (babies or chocolate: arousing to all) by recording the gaze of a spider-Fearful and a Non-fearful group while they performed a simple “follow the cross” task. There was no difference in first saccade latencies. Saccade trajectories showed a general hypervigilance for all stimuli in the Fearful group. Saccade landing positions corresponded to what each group would find arousing: the Fearful group deviated towards both types of images, whereas the Non-fearful group deviated towards pleasant images. Secondary corrective saccade latencies away from threat-related stimuli were longer for the Fearful group (difficulty in disengaging) compared with the Non-fearful group. These results suggest that attentional biases towards arousing stimuli may occur at different processing stages.

Relevance:

30.00%

Publisher:

Abstract:

This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance, and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation, and heuristic-guided search.
Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on problems of practical size, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
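The thesis's hybrid CP/OR methods are far richer than can be shown here, but the shape of the underlying problem — precedence-connected activities sharing a finite-capacity resource — can be illustrated with a simple greedy serial schedule-generation sketch. The single-resource model, integer time grid, and all names are illustrative assumptions:

```python
# Illustrative greedy scheduler (NOT the thesis's exact/hybrid methods):
# repeatedly pick a ready activity and start it at the earliest integer
# time where its predecessors are done and resource capacity is free.

def schedule(activities, capacity):
    """activities: {name: (duration, resource_demand, [predecessors])}.
    Returns {name: start_time} for one resource of the given capacity."""
    start, finish = {}, {}
    usage = {}   # usage[t] = resource units busy during [t, t+1)
    done = set()
    while len(done) < len(activities):
        # any activity whose predecessors have all finished is ready
        ready = [a for a in activities
                 if a not in done
                 and all(p in done for p in activities[a][2])]
        a = min(ready)  # deterministic tie-break by name
        dur, dem, preds = activities[a]
        t = max([finish[p] for p in preds], default=0)
        # push the start right until capacity is available for the whole run
        while any(usage.get(t + k, 0) + dem > capacity for k in range(dur)):
            t += 1
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
        start[a], finish[a] = t, t + dur
        done.add(a)
    return start
```

An exact method explores alternatives to such greedy choices (and, in the thesis, the allocation decisions coupled to them); this sketch only shows the feasibility constraints both must respect.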

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Arm hemiparesis secondary to stroke is common and disabling. We aimed to assess whether robotic training of an affected arm with ARMin (an exoskeleton robot that allows task-specific training in three dimensions) reduces motor impairment more effectively than conventional therapy does. METHODS: In a prospective, multicentre, parallel-group randomised trial, we enrolled patients from four centres in Switzerland who met our eligibility criteria, who had had motor impairment for more than 6 months, and who had moderate-to-severe arm paresis after a cerebrovascular accident. Eligible patients were randomly assigned (1:1) to receive robotic or conventional therapy using a centre-stratified randomisation procedure. For both groups, therapy was given for at least 45 min three times a week for 8 weeks (24 sessions in total). The primary outcome was the change in score on the arm (upper extremity) section of the Fugl-Meyer assessment (FMA-UE). Assessors tested patients immediately before therapy, after 4 weeks of therapy, at the end of therapy, and 16 weeks and 34 weeks after the start of therapy. Assessors were masked to treatment allocation, but patients, therapists, and data analysts were unmasked. Analyses were by modified intention to treat. This study is registered with ClinicalTrials.gov, number NCT00719433. FINDINGS: Between May 4, 2009, and Sept 3, 2012, 143 individuals were tested for eligibility, of whom 77 were eligible and agreed to participate. 38 patients assigned to robotic therapy and 35 assigned to conventional therapy were included in the analyses. Patients assigned to robotic therapy had significantly greater improvements in motor function in the affected arm over the course of the study, as measured by FMA-UE, than did those assigned to conventional therapy (F=4.1, p=0.041; mean difference in score 0.78 points, 95% CI 0.03-1.53). No serious adverse events related to the study occurred.
INTERPRETATION: Neurorehabilitation therapy including task-oriented training with an exoskeleton robot can enhance improvement of motor function in a chronically impaired paretic arm after stroke more effectively than conventional therapy. However, the absolute difference between effects of robotic and conventional therapy in our study was small and of weak significance, which leaves the clinical relevance in question.

Relevance:

30.00%

Publisher:

Abstract:

The authors evaluate a model suggesting that the performance of highly neurotic individuals, relative to their stable counterparts, is more strongly influenced by factors relating to the allocation of attentional resources. First, an air traffic control simulation was used to examine the interaction between effort intensity and scores on the Anxiety subscale of Eysenck Personality Profiler Neuroticism in the prediction of task performance. Overall effort intensity enhanced performance for highly anxious individuals more so than for individuals with low anxiety. Second, a longitudinal field study was used to examine the interaction between office busyness and Eysenck Personality Inventory Neuroticism in the prediction of telesales performance. Changes in office busyness were associated with greater performance improvements for highly neurotic individuals compared with less neurotic individuals. These studies suggest that highly neurotic individuals outperform their stable counterparts in a busy work environment or if they are expending a high level of effort.

Relevance:

30.00%

Publisher:

Abstract:

This research adopts a resource allocation theoretical framework to generate predictions regarding the relationship between self-efficacy and task performance at two levels of analysis and specificity. Participants were given multiple trials of practice on an air traffic control task, and measures of task-specific self-efficacy and performance were taken at repeated intervals. The authors used multilevel analysis to demonstrate dynamic main effects, dynamic mediation, and dynamic moderation. As predicted, the positive effects of overall task-specific self-efficacy and general self-efficacy on task performance strengthened throughout practice. In line with these dynamic main effects, the effect of general self-efficacy was mediated by overall task-specific self-efficacy; however, this pattern emerged over time. Finally, changes in task-specific self-efficacy were negatively associated with changes in performance at the within-person level; however, this effect only emerged towards the end of practice for individuals with high levels of overall task-specific self-efficacy. These novel findings emphasise the importance of conceptualising self-efficacy within a multi-level and multi-specificity framework and make a significant contribution to understanding the way this construct relates to task performance.

Relevance:

30.00%

Publisher:

Abstract:

The problem of resource allocation in sparse graphs with real variables is studied using methods of statistical physics. An efficient distributed algorithm is devised on the basis of insight gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
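The message-passing algorithm derived in the paper is not reproduced here; as a purely illustrative stand-in, the toy scheme below lets each node of a sparse graph repeatedly push a fraction of its surplus toward a randomly chosen neighbour. This conserves the total resource and drives the network toward a balanced allocation using only local information, which is the flavour of distributed operation the abstract describes:

```python
import random

def balance(graph, resource, steps=2000, rate=0.25, seed=0):
    """graph: {node: [neighbours]} (sparse, connected).
    resource: {node: capacity - demand} (positive = surplus, negative =
    deficit). Each step, every node compares itself with one random
    neighbour and, if it holds more, ships `rate` times the difference."""
    rng = random.Random(seed)
    r = dict(resource)
    for _ in range(steps):
        for v in graph:
            u = rng.choice(graph[v])   # purely local interaction
            diff = r[v] - r[u]
            if diff > 0:
                flow = rate * diff
                r[v] -= flow
                r[u] += flow
    return r
```

Each exchange shrinks the pairwise imbalance by a constant factor, so on a connected graph the node values contract toward the network average — a toy analogue of the convergence the statistical-physics analysis establishes for the real algorithm.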