973 results for Dynamic task allocation
Abstract:
This research used resource allocation theory to generate predictions regarding dynamic relationships between self-efficacy and task performance at two levels of analysis and specificity. Participants were given multiple trials of practice on an air traffic control task. Measures of task-specific self-efficacy and performance were taken at repeated intervals. The authors used multilevel analysis to demonstrate differential and dynamic effects. As predicted, task-specific self-efficacy was negatively associated with task performance at the within-person level. In contrast, average levels of task-specific self-efficacy were positively related to performance at the between-persons level and mediated the effect of general self-efficacy. The key findings concern dynamic effects: self-efficacy effects can change over time, but the direction and strength of those effects depend on the level of analysis and the specificity at which self-efficacy is conceptualized. These novel findings emphasize the importance of conceptualizing self-efficacy within a multilevel and multispecificity framework and make a significant contribution to understanding how this construct relates to task performance.
Abstract:
This research adopts a resource allocation theoretical framework to generate predictions regarding the relationship between self-efficacy and task performance at two levels of analysis and specificity. Participants were given multiple trials of practice on an air traffic control task. Measures of task-specific self-efficacy and performance were taken at repeated intervals. The authors used multilevel analysis to demonstrate dynamic main effects, dynamic mediation and dynamic moderation. As predicted, the positive effects of overall task-specific self-efficacy and general self-efficacy on task performance strengthened throughout practice. In line with these dynamic main effects, the effect of general self-efficacy was mediated by overall task-specific self-efficacy; however, this mediation pattern emerged over time. Finally, changes in task-specific self-efficacy were negatively associated with changes in performance at the within-person level; however, this effect only emerged towards the end of practice for individuals with high levels of overall task-specific self-efficacy. These novel findings emphasise the importance of conceptualising self-efficacy within a multi-level and multi-specificity framework and make a significant contribution to understanding the way this construct relates to task performance.
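The within-person versus between-persons distinction in the two abstracts above rests on decomposing each repeated measure into a person mean and a person-mean-centred deviation. A minimal sketch of that decomposition, assuming simple trial lists per participant (function and variable names are illustrative, not from the papers):

```python
def within_between(scores_by_person):
    """Decompose repeated measures into a between-persons component
    (each person's mean across trials) and a within-person component
    (trial scores centred at that mean). Illustrative sketch of the
    decomposition underlying the multilevel analyses described above."""
    between = {p: sum(xs) / len(xs) for p, xs in scores_by_person.items()}
    within = {p: [x - between[p] for x in xs]
              for p, xs in scores_by_person.items()}
    return between, within

# Self-efficacy ratings over three practice trials for two hypothetical people
between, within = within_between({"p1": [3, 4, 5], "p2": [6, 6, 6]})
```

The between-persons means carry the positive association with performance reported above, while the centred deviations carry the negative within-person association.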
Abstract:
Each disaster presents a unique set of characteristics that are hard to determine a priori. Disaster management tasks are therefore inherently uncertain, requiring knowledge sharing and quick decision making that involve coordination across different levels and collaborators. While there has been increasing interest among both researchers and practitioners in utilizing knowledge management to improve disaster management, little research has been reported on how to assess the dynamic nature of disaster management tasks, or on what kinds of knowledge sharing are appropriate for different dimensions of task uncertainty. Using a combination of qualitative and quantitative methods, this study developed dimensions and corresponding measures of the uncertain dynamic characteristics of disaster management tasks and tested the relationships between these dimensions and task performance through the moderating and mediating effects of knowledge sharing. The work conceptualized and assessed task uncertainty along three dimensions: novelty, unanalyzability, and significance; knowledge sharing along two dimensions: knowledge sharing purposes and knowledge sharing mechanisms; and task performance along two dimensions: task effectiveness and task efficiency. Analysis of survey data collected from Miami-Dade County emergency managers suggested that knowledge sharing purposes and knowledge sharing mechanisms moderate and mediate the relationship between uncertain dynamic disaster management tasks and task performance. Implications for research and practice, as well as directions for future research, are discussed.
Abstract:
In the half-duplex relay channel with the decode-and-forward protocol, the relay introduces energy into the channel over random time intervals, as observed at the destination. Consequently, during simulation the average signal power seen at the destination becomes known only at run-time. Therefore, in order to obtain specific performance measures at the signal-to-noise ratio (SNR) of interest, strategies are required to adjust the noise variance during the simulation run. These strategies must yield the same performance as measured under real-world conditions. This paper introduces three noise power allocation strategies and demonstrates their applicability using numerical and simulation results.
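The core run-time step behind any such strategy is mapping a measured average signal power to the noise variance that realises a target SNR. A minimal sketch of that step, under the assumption of linear-scale power and dB-valued SNR (the paper's three strategies differ in when and how the adjustment is applied; all names here are illustrative):

```python
def noise_variance_for_target_snr(rx_samples, snr_db):
    """Estimate the average signal power seen at the destination from
    the observed samples, then return the noise variance sigma^2 that
    yields the target SNR = P_signal / sigma^2 (SNR given in dB)."""
    p_signal = sum(abs(s) ** 2 for s in rx_samples) / len(rx_samples)
    return p_signal / (10 ** (snr_db / 10.0))

# Unit-power destination observations and a 10 dB target SNR
sigma2 = noise_variance_for_target_snr([1.0] * 1000, 10.0)
```

Because the relay transmits only over random intervals, `p_signal` is itself a run-time estimate, which is why the adjustment cannot be fixed before the simulation starts.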
Abstract:
Universidade Estadual de Campinas . Faculdade de Educação Física
Abstract:
Background: Detailed analysis of the dynamic interactions among biological, environmental, social, and economic factors that favour the spread of certain diseases is extremely useful for designing effective control strategies. Diseases like tuberculosis, which kills someone in the world every 15 seconds, require methods that take the disease dynamics into account to design truly efficient control and surveillance strategies. The usual, well-established statistical approaches provide insights into the cause-effect relationships that favour disease transmission, but they only estimate risk areas and spatial or temporal trends. Here we introduce a novel approach that reveals the dynamical behaviour of disease spreading. This information can subsequently be used to validate mathematical models of the dissemination process, from which the underlying mechanisms responsible for the spreading can be inferred. Methodology/Principal Findings: The method presented here is based on the analysis of the spread of tuberculosis in a Brazilian endemic city during five consecutive years. Detailed analysis of the spatio-temporal correlation of the yearly geo-referenced data, using different characteristic times of the disease evolution, allowed us to trace the temporal path of the aetiological agent, to locate the sources of infection, and to characterize the dynamics of disease spreading. The method also allowed for the identification of socio-economic factors that influence the process. Conclusions/Significance: The information obtained can contribute to more effective budget allocation, drug distribution, and recruitment of skilled human resources, as well as guiding the design of vaccination programs. We propose that this novel strategy can also be applied to the evaluation of other diseases as well as other social processes.
Abstract:
Studies concerning the processing of natural scenes using eye movement equipment have revealed that observers retain surprisingly little information from one fixation to the next. Other studies, in which fixation remained constant while elements within the scene were changed, have shown that, even without refixation, objects within a scene are surprisingly poorly represented. Although this effect has been studied in some detail in static scenes, there has been relatively little work on scenes as we would normally experience them, namely dynamic and ever-changing. This paper describes a comparable form of change blindness in dynamic scenes, in which detection is performed in the presence of simulated observer motion. The study also describes how change blindness is affected by the manner in which the observer interacts with the environment, by comparing the detection performance of an observer as the passenger or driver of a car. The experiments show that observer motion reduces the detection of orientation and location changes, and that the task of driving causes a concentration of object analysis on or near the line of motion, relative to passive viewing of the same scene.
Abstract:
A dissociation between two putative measures of resource allocation, skin conductance responding and secondary task reaction time (RT), has been observed during auditory discrimination tasks. Four experiments investigated the time course of this dissociation effect with a visual discrimination task. Participants were presented with circles and ellipses and instructed to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of the other shape (task-irrelevant). Concurrent with this task, participants made a speeded motor response to an auditory probe. Experiment 1 showed that skin conductance responses were larger during task-relevant stimuli than during task-irrelevant stimuli, whereas RT to probes presented 150 ms following shape onset was slower during task-irrelevant stimuli. Experiments 2 to 4 found slower RT during task-irrelevant stimuli for probes presented from 300 ms before shape onset until 150 ms following shape onset. For probes presented 3,000 and 4,000 ms following shape onset, probe RT was slower during task-relevant stimuli. The similarities between the observed time course and the so-called psychological refractory period (PRP) effect are discussed.
Abstract:
The effect that the difficulty of the discrimination between task-relevant and task-irrelevant stimuli has on the relationship between skin conductance orienting and secondary task reaction time (RT) was examined. Participants (N = 72) counted the number of longer-than-usual presentations of one shape (task-relevant) and ignored presentations of another shape (task-irrelevant). The difficulty of discriminating between the two shapes varied across three groups (low, medium, and high difficulty). Simultaneously with the primary counting task, participants performed a secondary RT task to acoustic probes presented 50, 150, and 2000 ms following shape onset. Skin conductance orienting was larger, and secondary RT at the 2000 ms probe position was slower, during task-relevant shapes than during task-irrelevant shapes in the low-difficulty group. This difference declined as the discrimination difficulty was increased, such that there was no difference in the high-difficulty group. Secondary RT was slower during task-irrelevant shapes than during task-relevant shapes only in the medium-difficulty group, and only at the 150 ms probe position in the first half of the experiment. The close relationship between autonomic orienting and secondary RT at the 2000 ms probe position suggests that orienting reflects the resource allocation that results from the number of matching features between a stimulus input and a mental representation primed as significant.
The acquisition of movement skills: Practice enhances the dynamic stability of bimanual coordination
Abstract:
During bimanual movements, two relatively stable inherent patterns of coordination (in-phase and anti-phase) are displayed (e.g., Kelso, Am. J. Physiol. 246 (1984) R1000). Recent research has shown that new patterns of coordination can be learned. For example, following practice a 90 degrees out-of-phase pattern can emerge as an additional, relatively stable, state (e.g., Zanone & Kelso, J. Exp. Psychol.: Human Perception and Performance 18 (1992) 403). On this basis, it has been concluded that practice leads to the evolution and stabilisation of the newly learned pattern and that this process of learning changes the entire attractor layout of the dynamic system. A general feature of such research has been to observe the changes in the targeted pattern's stability characteristics during training at a single movement frequency. The present study was designed to examine how practice affects the maintenance of a coordinated pattern as the movement frequency is scaled. Eleven volunteers were asked to perform a bimanual forearm pronation-supination task. Time to transition onset was used as an index of the subjects' ability to maintain two symmetrically opposite coordinated patterns (target task: 90 degrees out-of-phase; transfer task: 270 degrees out-of-phase). Their ability to maintain the target task and the transfer task was examined again after five practice sessions, each consisting of 15 trials of only the 90 degrees out-of-phase pattern. Concurrent performance feedback (a Lissajous figure) was available to the participants during each practice trial. A comparison of the time to transition onset showed that the target task was more stable after practice (p = 0.025). These changes were still observed one week (p = 0.05) and two months (p = 0.075) after the practice period. Changes in the stability of the transfer task were not observed until two months after practice (p = 0.025).
Notably, following practice, transitions from the 90 degrees pattern were generally to the anti-phase (180 degrees) pattern, whereas transitions from the 270 degrees pattern were to the 90 degrees pattern. These results suggest that practice does improve the stability of a 90 degrees pattern, and that such improvements transfer to the performance of the unpractised 270 degrees pattern. In addition, the anti-phase pattern remained more stable than the practised 90 degrees pattern throughout. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Users of wireless devices increasingly demand access to multimedia content with specific quality of service requirements. Users might tolerate different levels of service, or could be satisfied with different quality combinations. However, multimedia processing places heavy resource requirements on the client side. Our work addresses the growing demand on resources and performance by allowing wireless nodes to cooperate with each other to meet resource allocation requests and handle stringent constraints, opportunistically taking advantage of the local ad-hoc network that is created spontaneously as nodes move into range of each other, forming a temporary coalition for service execution. Coalition formation is necessary when a single node cannot execute a specific service, but it may also be beneficial when a group performs more efficiently than a single node would.
Abstract:
In this paper we survey the most relevant results for the priority-based schedulability analysis of real-time tasks, for both fixed and dynamic priority assignment schemes. We place emphasis on worst-case response time analysis in non-preemptive contexts, which is fundamental for communication schedulability analysis. We define an architecture to support priority-based scheduling of messages at the application process level of a specific fieldbus communication network, the PROFIBUS. The proposed architecture improves the worst-case message response times, overcoming the limitation of first-come-first-served (FCFS) PROFIBUS queue implementations.
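The worst-case response time analysis mentioned above is usually computed as a fixed-point iteration: a task's response time is its own computation time, plus a blocking term for non-preemptive lower-priority sections, plus the interference from every higher-priority task released before it finishes. A textbook sketch of that iteration (not the PROFIBUS-specific message analysis; all names are illustrative):

```python
import math

def response_time(C, T, i, B=0):
    """Worst-case response time of task i under fixed-priority scheduling,
    with tasks indexed in decreasing priority order (C = computation
    times, T = periods, deadlines assumed equal to periods). B is a
    blocking term covering non-preemptive lower-priority sections."""
    R = C[i] + B
    while True:
        # Interference from all higher-priority tasks released in [0, R)
        R_new = C[i] + B + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_new == R:
            return R          # fixed point reached: converged response time
        if R_new > T[i]:
            return None       # response time exceeds the deadline
        R = R_new
```

For example, with a high-priority task (C=1, T=4) and a low-priority task (C=2, T=6), the low-priority task's response time converges to 3: its own 2 units plus one preemption by the higher-priority task.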
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation, and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another randomly selected busy core as a queue, stealing threads only from the top, whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may be enqueued after lower-priority tasks; the lower-priority tasks then become the candidates when a stealing operation occurs, possibly leading to deadline misses. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core but, unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
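The difference from classic work-stealing can be sketched with a toy single-threaded model: owners still pop from the bottom of their own deques, but thieves pick the highest-priority non-empty deque in the system instead of a random victim. Lower numbers denote higher priority here; the class and method names are assumptions for illustration, not the authors' implementation, and a real scheduler would need concurrent (lock-free) deques:

```python
import collections

class PriorityWorkStealing:
    """Toy model of work-stealing with per-core priority deques:
    each core maps a priority level to its own deque of ready threads."""

    def __init__(self, n_cores):
        self.deques = [collections.defaultdict(collections.deque)
                       for _ in range(n_cores)]

    def push(self, core, priority, thread):
        """Owner pushes onto the bottom of its deque for that priority."""
        self.deques[core][priority].append(thread)

    def pop(self, core):
        """Owner pops from the bottom of its highest-priority deque (LIFO)."""
        for prio in sorted(self.deques[core]):
            if self.deques[core][prio]:
                return self.deques[core][prio].pop()
        return None

    def steal(self, thief):
        """Thief takes from the *top* of the highest-priority non-empty
        deque anywhere in the system, rather than from a random victim."""
        best = None
        for core, dq_map in enumerate(self.deques):
            if core == thief:
                continue
            for prio in sorted(dq_map):
                if dq_map[prio] and (best is None or prio < best[0]):
                    best = (prio, core)
        if best is None:
            return None
        return self.deques[best[1]][best[0]].popleft()

sched = PriorityWorkStealing(n_cores=2)
sched.push(0, 5, "render")     # low-priority thread on core 0
sched.push(0, 1, "control")    # high-priority thread on core 0
stolen = sched.steal(1)        # idle core 1 steals "control", not "render"
```

With a single non-priority deque, core 1 would have stolen whatever sat at the top regardless of priority; here the steal is always directed at the most urgent ready work.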
Abstract:
Wireless Sensor Networks (WSNs) are highly distributed systems in which resource allocation (bandwidth, memory) must be performed efficiently to provide a minimum acceptable Quality of Service (QoS) to the regions where critical events occur. In fact, if resources are statically assigned, independently of the location and instant of the events, these resources will definitely be misused. In other words, it is more efficient to dynamically grant more resources to sensor nodes affected by critical events, thus providing better network resource management and reducing end-to-end delays of event notification and tracking. In this paper, we discuss the use of a WSN management architecture based on the active network management paradigm to provide real-time tracking and reporting of dynamic events while ensuring efficient resource utilization. The active network management paradigm allows packets to transport not only data but also program scripts that are executed in the nodes to dynamically modify the operation of the network. This presumes the use of a runtime execution environment (middleware) in each node to interpret the scripts. We consider hierarchical (e.g. cluster-tree, two-tiered) WSN topologies, since these have been used to improve the timing performance of WSNs as they support deterministic medium access control protocols.
Abstract:
A QoS adaptation to dynamically changing system conditions that takes into consideration the user's constraints on the stability of service provisioning is presented. The goal is to allow the system to make QoS adaptation decisions in response to fluctuations in task traffic flow, under the control of the user. We pay special attention to the case where the stability period and resource load variation of Service Level Agreements for different types of services are monitored and used to dynamically adapt future stability periods, according to a feedback control scheme. The system's adaptation behaviour can be configured according to a desired confidence level on future resource usage. The viability of the proposed approach is validated by preliminary experiments.
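A feedback scheme of this kind can be illustrated with a simple proportional rule: lengthen the next stability period when the observed SLA violation rate stays below a target, and shorten it otherwise. The control law, parameter names, and bounds below are assumptions for illustration, not the paper's actual controller:

```python
def next_stability_period(current, violation_rate, target_rate,
                          gain=0.5, lo=1.0, hi=60.0):
    """Proportional feedback on the SLA stability period (in seconds).
    violation_rate and target_rate lie in [0, 1]; the period grows
    when violations fall below the target and shrinks when they
    exceed it, clamped to configured bounds [lo, hi]."""
    error = target_rate - violation_rate   # positive means "stable enough"
    period = current * (1.0 + gain * error)
    return max(lo, min(hi, period))
```

A larger `gain` makes the system react faster to load fluctuations at the cost of less stable provisioning, which mirrors the configurable confidence level mentioned above.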