811 results for Task complexity


Relevance: 30.00%

Abstract:

The present study was designed to examine the main and interactive effects of task demands, work control, and task information on levels of adjustment. Task demands, work control, and task information were manipulated in an experimental setting where participants completed a letter-sorting activity (N = 128). Indicators of adjustment included measures of positive mood, participants' perceptions of task performance, and task satisfaction. Results of the present study provided some support for the main effects of objective task demands, work control, and task information on levels of adjustment. At the subjective level of analysis, there was some evidence to suggest that work control and task information interacted in their effects on levels of adjustment. There was minimal support for the proposal that work control and task information would buffer the negative effects of task demands on adjustment. There was, however, some evidence to suggest that the stress-buffering role of subjective work control was more marked at high, rather than low, levels of subjective task information.

Relevance: 30.00%

Abstract:

C. L. Isaac and A. R. Mayes (1999a, 1999b) compared forgetting rates in amnesic patients and normal participants across a range of memory tasks. Although the results are complex, many of them appear to be replicable and there are several commendable features to the design and analysis. Nevertheless, the authors largely ignored 2 relevant literatures: the traditional literature on proactive inhibition/interference and the formal analyses of the complexity of the bindings (associations) required for memory tasks. It is shown how the empirical results and conceptual analyses in these literatures are needed to guide the choice of task, the design of experiments, and the interpretation of results for amnesic patients and normal participants.

Relevance: 30.00%

Abstract:

Two experiments tested predictions from a theory in which processing load depends on relational complexity (RC), the number of variables related in a single decision. Tasks from six domains (transitivity, hierarchical classification, class inclusion, cardinality, relative-clause sentence comprehension, and hypothesis testing) were administered to children aged 3-8 years. Complexity analyses indicated that the domains entailed ternary relations (three variables). Simpler binary-relation (two variables) items were included for each domain. Thus RC was manipulated with other factors tightly controlled. Results indicated that (i) ternary-relation items were more difficult than comparable binary-relation items, (ii) the RC manipulation was sensitive to age-related changes, (iii) ternary relations were processed at a median age of 5 years, (iv) cross-task correlations were positive, with all tasks loading on a single factor (RC), (v) RC factor scores accounted for 80% (88%) of age-related variance in fluid intelligence (compositionality of sets), (vi) binary- and ternary-relation items formed separate complexity classes, and (vii) the RC approach to defining cognitive complexity is applicable to different content domains. (C) 2002 Elsevier Science (USA). All rights reserved.

Relevance: 30.00%

Abstract:

While the earliest deadline first (EDF) algorithm is known to be optimal as a uniprocessor scheduling policy, its implementation comes at a cost in terms of complexity. Fixed task-priority algorithms, on the other hand, have lower complexity but a higher likelihood of task sets being declared unschedulable when compared to EDF. Various attempts have been undertaken to increase the chances of proving a task set schedulable at similarly low complexity. In some cases, this was achieved by modifying applications to limit preemptions, at the cost of flexibility. In this work, we explore several variants of a concept that limits interference by locking down the ready queue at certain instants. The aim is to increase the prospects of schedulability of a given task system, without compromising on complexity or flexibility, when compared to the regular fixed task-priority algorithm. As a final contribution, a new preemption threshold assignment algorithm is provided which is less complex and more straightforward than the previous method available in the literature.
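To make the EDF versus fixed-priority trade-off concrete, the sketch below contrasts the two classic uniprocessor schedulability tests for independent, implicit-deadline periodic tasks: EDF is feasible exactly when total utilization is at most 1, whereas rate-monotonic fixed priorities have the sufficient Liu and Layland bound n(2^(1/n) - 1). This is background illustration only; it does not implement the queue-locking variants or the preemption-threshold assignment described in the abstract, and the example task set is invented.

```python
# Sketch: classic uniprocessor schedulability tests, assuming independent,
# implicit-deadline periodic tasks (not the paper's queue-locking variants).

def utilization(tasks):
    """tasks: list of (wcet, period) pairs."""
    return sum(c / t for c, t in tasks)

def edf_schedulable(tasks):
    # EDF is optimal on a uniprocessor: feasible iff total utilization <= 1.
    return utilization(tasks) <= 1.0

def rm_schedulable_ll(tasks):
    # Sufficient (not necessary) Liu & Layland bound for rate-monotonic
    # fixed task-priority scheduling: U <= n * (2^(1/n) - 1).
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

taskset = [(1, 4), (2, 6), (1, 8)]   # invented example, U ~ 0.708
print(edf_schedulable(taskset), rm_schedulable_ll(taskset))
```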

Relevance: 30.00%

Abstract:

Due to the growing complexity and dynamism of many embedded application domains (including consumer electronics, robotics, automotive and telecommunications), it is increasingly difficult to react to load variations and adapt the system's performance in a controlled fashion within a useful and bounded time. This is particularly noticeable when intending to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand and tasks may exhibit unrestricted QoS inter-dependencies. This paper proposes a novel anytime adaptive QoS control policy in which the online search for the best set of QoS levels is combined with each user's personal preferences on their services' adaptation behaviour. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and to effectively optimise the rate at which the quality of the current solution improves as they are given more time to run, with minimal overhead when compared against their traditional versions.
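As an illustration of the anytime idea described above (quickly find a feasible solution, then keep improving it while time remains), the following sketch greedily upgrades per-service QoS levels under a resource budget until a decision deadline expires. The service names, cost/utility tables and the greedy utility-per-cost rule are invented for illustration; they are not the paper's actual control policy.

```python
# Illustrative sketch of an "anytime" QoS-level search: start from the lowest
# feasible configuration, then repeatedly apply the best single-service
# upgrade that still fits the resource budget, improving the solution until
# the decision deadline expires. All numbers below are invented.
import time

def anytime_qos(services, budget, time_limit):
    """services: {name: [(cost, utility), ...]} with levels sorted by cost."""
    levels = {s: 0 for s in services}                    # quick initial solution
    spent = sum(services[s][0][0] for s in services)
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        best = None
        for s, lvl in levels.items():
            if lvl + 1 < len(services[s]):
                d_cost = services[s][lvl + 1][0] - services[s][lvl][0]
                d_util = services[s][lvl + 1][1] - services[s][lvl][1]
                if d_cost > 0 and spent + d_cost <= budget and \
                        (best is None or d_util / d_cost > best[0]):
                    best = (d_util / d_cost, s, d_cost)
        if best is None:
            break                                        # no affordable upgrade left
        _, s, d_cost = best
        levels[s] += 1
        spent += d_cost
    return levels

print(anytime_qos({"video": [(1, 1), (2, 3), (4, 6)],
                   "audio": [(1, 1), (2, 2)]}, budget=6, time_limit=0.01))
```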

Relevance: 30.00%

Abstract:

Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) SA is guaranteed to find such an assignment, with the same restriction on task migration, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring how much faster the processors need to be (upper bounded by 1+α/2 for SA and 1+α for SA-P) for the algorithms to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require a significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case where no task utilization in the given task set can exceed one, and for this case we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art algorithm by requiring a much smaller processor speedup and by running orders of magnitude faster.
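A small sketch of the quantities named in the abstract: the parameter α (the largest task utilization that does not exceed 1) and the resulting speedup factors 1+α/2 for SA and 1+α for SA-P. It does not implement the SA or SA-P assignment algorithms themselves, and the example utilizations are invented.

```python
# Sketch: compute the task-set parameter alpha from the abstract (the maximum
# of all task utilizations that are no greater than 1) and the resulting
# speedup bounds for SA (1 + alpha/2) and SA-P (1 + alpha). This is not an
# implementation of the SA / SA-P assignment algorithms themselves.

def alpha(utilizations):
    eligible = [u for u in utilizations if u <= 1.0]
    return max(eligible) if eligible else 0.0

def speedup_bounds(utilizations):
    a = alpha(utilizations)
    return {"SA": 1 + a / 2, "SA-P": 1 + a}

print(speedup_bounds([0.3, 0.75, 0.5]))   # {'SA': 1.375, 'SA-P': 1.75}
```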

Relevance: 30.00%

Abstract:

An ever increasing need for extra functionality in a single embedded system demands extra Input/Output (I/O) devices, which are usually connected externally and are expensive in terms of energy consumption. To reduce their energy consumption, these devices are equipped with power saving mechanisms. While I/O device scheduling for real-time (RT) systems with such power saving features has been studied in the past, the use of energy resources by these scheduling algorithms may be improved. Technology enhancements in the semiconductor industry have allowed hardware vendors to reduce device transition and energy overheads. The decrease in the overhead of sleep transitions has opened new opportunities to further reduce device energy consumption. In this research effort, we propose an intra-task device scheduling algorithm for real-time systems that wakes up a device on demand and reduces its active time while ensuring system schedulability. This intra-task device scheduling algorithm is extended for devices with multiple sleep states to further minimise the overall device energy consumption of the system. The proposed algorithms have lower complexity when compared to the conservative inter-task device scheduling algorithms. The system model used relaxes some assumptions commonly made in the state of the art that restrict practical relevance. Apart from the aforementioned advantages, the proposed algorithms are shown to deliver substantial energy savings.
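For intuition about why cheaper sleep transitions matter, the sketch below applies a standard device power-management rule (not the paper's intra-task algorithm): among multiple sleep states, pick the one whose wake-up latency fits the predicted idle interval and whose transition energy is amortized over that interval, i.e., whose break-even time is covered. All state names, power figures and overheads are invented.

```python
# Illustrative sketch of a common device power-management rule (not the
# paper's intra-task algorithm): choose the sleep state whose wake-up
# latency fits the predicted idle interval and which yields the lowest
# energy, counting transition overheads. All numbers below are invented.

def pick_sleep_state(idle_time, states, active_power):
    """states: list of dicts with 'power', 'transition_time',
    'transition_energy', ordered from shallowest to deepest sleep."""
    best = None
    for s in states:
        if s["transition_time"] > idle_time:
            continue                          # cannot wake up in time
        # Energy if we sleep in this state for the idle interval:
        sleep_energy = (s["power"] * (idle_time - s["transition_time"])
                        + s["transition_energy"])
        if sleep_energy < active_power * idle_time and \
                (best is None or sleep_energy < best[1]):
            best = (s, sleep_energy)
    return best[0] if best else None          # None -> stay active

states = [
    {"name": "light", "power": 0.5, "transition_time": 0.001, "transition_energy": 0.01},
    {"name": "deep",  "power": 0.1, "transition_time": 0.050, "transition_energy": 0.50},
]
print(pick_sleep_state(idle_time=0.2, states=states, active_power=2.0)["name"])
```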

Relevance: 30.00%

Abstract:

Real-time systems are nowadays growing in both importance and complexity. With the move from uniprocessor to multiprocessor environments, work developed for the former is not fully applicable to the latter, since the level of complexity differs, mainly because of the presence of multiple processors in the system. It was soon realised that the complexity of the problem does not grow linearly as processors are added. In fact, this complexity stands as a barrier to scientific progress in an area that, for now, remains largely unexplored, and this is especially evident in the case of task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to accomplish work that would never be possible in the uniprocessor case, yielding new performance guarantees, lower monetary costs and lower energy consumption. Energy, in particular, emerged early on as perhaps the main barrier to developing new uniprocessor chips: as new processors reached the market offering ever higher performance, they also revealed a heat-dissipation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a single chip is expected to keep increasing and, naturally, new techniques for exploiting their inherent advantages have to be developed; scheduling algorithms are no exception. Over the years, different categories of multiprocessor scheduling algorithms have been developed to address this problem, the main ones being global, partitioned and semi-partitioned. The global approach assumes a single global queue accessible to all available processors. This makes task migration possible, i.e., the execution of a task can be stopped and resumed on a different processor. At any given instant, the m highest-priority tasks in the task set are selected for execution. This class promises high utilisation bounds, at the price of a large number of task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions and assign each partition to one of the available processors, i.e., each processor is assigned one partition. For this reason task migration is not possible, so the utilisation bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the two previous cases: some tasks are split so that they are executed exclusively by a group of processors, while others are assigned to a single processor. The result is a solution capable of distributing the workload in a more efficient and balanced way. Unfortunately, for all of these cases there is a gap between theory and practice, because assumptions end up being made that do not hold in real life. To address this problem, these scheduling algorithms need to be implemented in real operating systems and their applicability assessed so that, where it falls short, the necessary changes can be made, both at the theoretical and at the practical level.
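As an illustration of the partitioned category described above (tasks assigned statically to processors, no migration), the sketch below packs tasks onto processors with a first-fit decreasing heuristic by utilization, using the per-processor EDF condition that total utilization not exceed 1. This is a common textbook heuristic, not an algorithm taken from the text, and the example task set is invented.

```python
# Sketch of the partitioned approach: each task is statically assigned to one
# processor and never migrates. A first-fit decreasing heuristic packs tasks
# by utilization, using the per-processor EDF bound (sum of utilizations <= 1).

def partition_ffd(utilizations, num_processors):
    bins = [[] for _ in range(num_processors)]
    loads = [0.0] * num_processors
    for u in sorted(utilizations, reverse=True):
        for p in range(num_processors):
            if loads[p] + u <= 1.0:
                bins[p].append(u)
                loads[p] += u
                break
        else:
            return None                   # task set declared unschedulable
    return bins

print(partition_ffd([0.6, 0.5, 0.4, 0.3], num_processors=2))
# [[0.6, 0.4], [0.5, 0.3]]
```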

Relevance: 30.00%

Abstract:

We study the complexity of rationalizing choice behavior. We do so by analyzing two polar cases, and a number of intermediate ones. In our most structured case, where choice behavior is defined on universal choice domains and satisfies the "weak axiom of revealed preference," finding the complete preorder rationalizing choice behavior is a simple matter. In the polar case, where no restriction whatsoever is imposed, either on choice behavior or on the choice domain, finding the complete preorders that rationalize behavior turns out to be intractable. We show that the task of finding the rationalizing complete preorders is equivalent to a graph problem. This allows existing algorithms from the graph theory literature to be brought to bear on the rationalization of choice.
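One way to picture the graph formulation mentioned in the abstract: each observation "x chosen from a menu containing y" becomes an edge x -> y, and ranking alternatives by how many alternatives they can reach yields a complete preorder extending the transitive closure of revealed preference. The sketch below is illustrative only: whether that preorder actually rationalizes the data depends on the choice domain and axioms (such as the weak axiom) analyzed in the paper, and the toy observations are invented.

```python
# Illustrative sketch of the graph view: "x chosen from a menu containing y"
# becomes an edge x -> y in a revealed-preference digraph. Ranking
# alternatives by how many alternatives they can reach gives a complete
# preorder that extends the transitive closure of revealed preference;
# whether it rationalizes the data depends on the domain and axioms.

def revealed_preorder(observations):
    """observations: iterable of (menu, chosen) pairs, e.g. ({'a', 'b'}, 'a')."""
    nodes = set()
    succ = {}
    for menu, chosen in observations:
        nodes |= set(menu)
        succ.setdefault(chosen, set()).update(set(menu) - {chosen})

    def reach(x):                      # alternatives reachable from x (incl. x)
        seen, stack = {x}, [x]
        while stack:
            for y in succ.get(stack.pop(), ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    score = {x: len(reach(x)) for x in nodes}
    # Higher score = weakly preferred; equal scores read as indifference.
    return sorted(nodes, key=lambda x: -score[x]), score

order, score = revealed_preorder([({'a', 'b'}, 'a'), ({'b', 'c'}, 'b')])
print(order, score)    # ['a', 'b', 'c'] {'a': 3, 'b': 2, 'c': 1}
```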

Relevance: 30.00%

Abstract:

The effects of a complexly worded counterattitudinal appeal on laypeople's attitudes toward a legal issue were examined, using the Elaboration Likelihood Model (ELM) of persuasion as a theoretical framework. This model states that persuasion can result from the elaboration and scrutiny of the message arguments (i.e., central route processing), or can result from less cognitively effortful strategies, such as relying on source characteristics as a cue to message validity (i.e., peripheral route processing). One hundred and sixty-seven undergraduates (85 men and 81 women) listened to either a low status or high status source deliver a counterattitudinal speech on a legal issue. The speech was designed to contain strong or weak arguments. These arguments were worded in a simple and, therefore, easy to comprehend manner, or in a complex and, therefore, difficult to comprehend manner. Thus, there were three experimental manipulations: argument comprehensibility (easy to comprehend vs. difficult to comprehend), argument strength (weak vs. strong), and source status (low vs. high). After listening to the speech, participants completed a measure of their attitude toward the legal issue, a thought listing task, an argument recall task, manipulation checks, measures of motivation to process the message, and measures of mood. As a result of the failure of the argument strength manipulation, only the effects of the comprehensibility and source status manipulations were tested. There was, however, some evidence of more central route processing in the easy comprehension condition than in the difficult comprehension condition, as predicted. Significant correlations were found between attitude and favourable and unfavourable thoughts about the legal issue with easy to comprehend arguments; whereas, there was a correlation only between attitude and favourable thoughts toward the issue with difficult to comprehend arguments, suggesting, perhaps, that central route processing, which involves argument scrutiny and elaboration, occurred under conditions of easy comprehension to a greater extent than under conditions of difficult comprehension. The results also revealed, among other findings, several significant effects of gender. Men had more favourable attitudes toward the legal issue than did women, men recalled more arguments from the speech than did women, men were less frustrated while listening to the speech than were women, and men put more effort into thinking about the message arguments than did women. When the arguments were difficult to comprehend, men had more favourable thoughts and fewer unfavourable thoughts about the legal issue than did women. Men and women may have had different affective responses to the issue of plea bargaining (with women responding more negatively than men), especially in light of a local and controversial plea bargain that occurred around the time of this study. Such pre-existing gender differences may have led to the lower frustration, the greater effort, the greater recall, and more positive attitudes for men than for women. Results from this study suggest that current cognitive models of persuasion may not be very applicable to controversial issues which elicit strong emotional responses. Finally, these data indicate that affective responses, the controversial and emotional nature of the issue, gender and other individual differences are important considerations when experts are attempting to persuade laypeople toward a counterattitudinal position.

Relevance: 30.00%

Abstract:

In this paper, a time series complexity analysis of dense-array electroencephalogram (EEG) signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity of signals recorded from systems that can vary from the purely deterministic to the purely stochastic realm. The present analysis is conducted with the objective of gaining insight into complexity variations related to changing brain dynamics for EEG recorded in three conditions: a passive, eyes-closed state; a mental arithmetic task; and the same mental task carried out after a physical exertion task. It is observed that the statistic is a robust quantifier of complexity suited for short physiological signals such as the EEG, and that it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought about by the intervening fatigue-inducing exercise period. This enhances its utility in detecting subtle changes in brain state, with wide scope for application in EEG-based brain studies.
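For reference, Sample Entropy is commonly defined as the negative logarithm of the ratio of template matches of length m+1 to matches of length m within a tolerance r (often 0.2 times the signal's standard deviation). The sketch below is a plain NumPy rendering of that common definition, not the exact pipeline used in the study; the test signal is synthetic.

```python
# Sketch of Sample Entropy (SampEn): -ln(A/B), where B counts pairs of
# length-m templates within tolerance r (Chebyshev distance, self-matches
# excluded) and A counts the same for length m+1. Reference rendering of the
# usual definition, not the study's pipeline.
import numpy as np

def sampen(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(length):
        # Use the same number of templates (n - m) for both lengths and
        # count each unordered pair once.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
print(sampen(signal))
```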

Relevance: 30.00%

Abstract:

Four experiments consider some of the circumstances under which children follow two different rule pairs when sorting cards. Previous research has repeatedly found that 3-year-olds encounter substantial difficulties implementing the second of two conflicting rule sets, despite their knowledge of these rules. One interpretation of this phenomenon [Cognitive Complexity and Control (CCC) theory] is that 3-year-olds have problems establishing an appropriate hierarchical ordering for rules. The present data suggest an alternative account of children's card sorting behaviour, according to which the cognitive salience of test card features may be more important than inflexibility with respect to rule representation.

Relevance: 30.00%

Abstract:

This investigation moves beyond traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO's ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO's performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, which are exaggerated when derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.

Relevance: 30.00%

Abstract:

The overarching aim of the research reported here was to investigate the effects of task structure and storyline complexity of oral narrative tasks on second language task performance. Participants were 60 Iranian learners of English who performed six narrative tasks of varying degrees of structure and storyline complexity in an assessment setting. A number of detailed analytic measures were employed to examine whether there were any differences in the participants' performances elicited by the different tasks in terms of accuracy, fluency, syntactic complexity and lexical diversity. Results of the data analysis showed that performance in the more structured tasks was more accurate and, to a great extent, more fluent than that in the less structured tasks. The results further revealed that the syntactic complexity of L2 performance was related to storyline complexity, i.e. greater syntactic complexity was associated with narratives that had both foreground and background storylines. These findings strongly suggest that there is systematic variance in the participants' performance triggered by the different aspects of task design.

Relevance: 30.00%

Abstract:

This article reports on a detailed empirical study of the way narrative task design influences the oral performance of second-language (L2) learners. Building on previous research findings, two dimensions of narrative design were chosen for investigation: narrative complexity and inherent narrative structure. Narrative complexity refers to the presence of simultaneous storylines; in this case, we compared single-story narratives with dual-story narratives. Inherent narrative structure refers to the order of events in a narrative; we compared narratives where this was fixed to others where the events could be reordered without loss of coherence. Additionally, we explored the influence of learning context on performance by gathering data from two comparable groups of participants: 60 learners in a foreign language context in Teheran and 40 in an L2 context in London. All participants recounted two of four narratives from cartoon picture prompts, giving a between-subjects design for narrative complexity and a within-subjects design for inherent narrative structure. The results show clearly that for both groups, L2 performance was affected by the design of the task: Syntactic complexity was supported by narrative storyline complexity and grammatical accuracy was supported by an inherently fixed narrative structure. We reason that the task of recounting simultaneous events leads learners into attempting more hypotactic language, such as subordinate clauses introduced by, for example, while, although, at the same time as, etc. We reason also that a tight narrative structure allows learners to achieve greater accuracy in the L2 (within minutes of performing less accurately on a loosely structured narrative) because the tight ordering of events releases attentional resources that would otherwise be spent on finding connections between the pictures. The learning context was shown to have no effect on either accuracy or fluency but an unexpectedly clear effect on syntactic complexity and lexical diversity. The learners in London seem to have benefited from being in the target language environment by developing not more accurate grammar but a more diverse resource of English words and syntactic choices. In a companion article (Foster & Tavakoli, 2009) we compared their performance with native-speaker baseline data and saw that, in terms of nativelike selection of vocabulary and phrasing, the learners in London are closing in on native-speaker norms. The study provides empirical evidence that L2 performance is affected by task design in predictable ways. It also shows that living within the target language environment, and presumably using the L2 in a host of everyday tasks outside the classroom, confers a distinct lexical advantage, not a grammatical one.