980 results for Task Complexity


Relevance: 30.00%

Abstract:

Reuse of existing, carefully designed and tested software improves the quality of new software systems and reduces their development costs. Object-oriented frameworks provide an established means for software reuse at the levels of both architectural design and concrete implementation. Unfortunately, due to framework complexity that typically results from their flexibility and overall abstract nature, there are severe problems in using frameworks. Patterns are generally accepted as a convenient way of documenting frameworks and their reuse interfaces. In this thesis it is argued, however, that mere static documentation is not enough to solve the problems related to framework usage. Instead, proper interactive assistance tools are needed in order to enable systematic framework-based software production. This thesis shows how patterns that document a framework's reuse interface can be represented as dependency graphs, and how dynamic lists of programming tasks can be generated from those graphs to assist the process of using a framework to build an application. This approach to framework specialization combines the ideas of framework cookbooks and task-oriented user interfaces. Tasks provide assistance in (1) creating new code that complies with the framework reuse interface specification, (2) assuring the consistency between existing code and the specification, and (3) adjusting existing code to meet the terms of the specification. Besides illustrating how task-orientation can be applied in the context of using frameworks, this thesis describes a systematic methodology for modeling any framework reuse interface in terms of software patterns based on dependency graphs. The methodology shows how framework-specific reuse interface specifications can be derived from a library of existing reusable pattern hierarchies.
Since the methodology focuses on reusing patterns, it also alleviates the recognized problem of framework reuse interface specifications becoming complicated and unmanageable for frameworks of realistic size. The ideas and methods proposed in this thesis have been tested by implementing a framework specialization tool called JavaFrames. JavaFrames uses role-based patterns that specify the reuse interface of a framework to guide framework specialization in a task-oriented manner. This thesis reports the results of case studies in which JavaFrames and the hierarchical framework reuse interface modeling methodology were applied to the Struts web application framework and the JHotDraw drawing editor framework.

Relevance: 30.00%

Abstract:

We present two discriminative language modelling techniques for a Lempel-Ziv-Welch (LZW) based language identification (LID) system. The previous approach to LID using the LZW algorithm was to use the LZW pattern tables directly for language modelling. However, since the patterns in one language's pattern table are shared by other language pattern tables, confusability prevailed in the LID task. To overcome this, we present two pruning techniques: (i) language-specific (LS-LZW), in which patterns common to more than one pattern table are removed; and (ii) length-frequency product based (LF-LZW), in which patterns whose length-frequency product falls below a threshold are removed. These approaches reduce the classification score (compression ratio [LZW-CR] or the weighted discriminant score [LZW-WDS]) for non-native languages and increase LID performance considerably. The memory and computational requirements of these techniques are also much lower than those of the basic LZW techniques.
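The two pruning rules can be sketched concisely. The function names below, and the toy frequency-annotated LZW table builder, are hypothetical illustrations of the idea, not the paper's implementation:

```python
from collections import Counter

def lzw_patterns(seq):
    """Build an LZW-style pattern table with usage frequencies
    (toy builder: a pattern's count is how often it is emitted)."""
    table = {ch: 0 for ch in set(seq)}
    w = seq[0]
    for c in seq[1:]:
        wc = w + c
        if wc in table:
            w = wc
        else:
            table[w] += 1   # w is emitted as a code
            table[wc] = 0   # new pattern enters the table
            w = c
    table[w] += 1
    return table

def prune_lf(table, threshold):
    """LF-LZW: drop patterns whose length x frequency product
    falls below the threshold."""
    return {p: f for p, f in table.items() if len(p) * f >= threshold}

def prune_ls(tables):
    """LS-LZW: remove patterns shared by more than one language
    pattern table, keeping only language-specific patterns."""
    counts = Counter(p for t in tables for p in t)
    return [{p: f for p, f in t.items() if counts[p] == 1} for t in tables]
```

Classification would then score test text against each pruned table, so that shared, non-discriminative patterns no longer inflate the scores of other languages.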

Relevance: 30.00%

Abstract:

In this paper, the problem of assigning cooperating Uninhabited Aerial Vehicles (UAVs) to perform multiple tasks on multiple targets is posed as a combinatorial optimization problem. The multiple tasks of classifying, attacking, and verifying targets using UAVs are assigned using nature-inspired techniques: Artificial Immune System (AIS), Particle Swarm Optimization (PSO), and the Virtual Bee Algorithm (VBA). These nature-inspired techniques have an advantage over classical combinatorial optimization methods, which suffer prohibitive computational complexity on this NP-hard problem. Using these algorithms, we find the best sequence in which to attack and destroy the targets while minimizing either the total distance traveled or the maximum distance traveled by a UAV. The performance of the UAVs in classifying, attacking, and verifying the targets is evaluated using AIS, PSO, and VBA.
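As a minimal illustration of the optimization target, the sketch below finds the distance-minimizing visiting sequence by exhaustive search; the nature-inspired heuristics (AIS, PSO, VBA) approximate this optimum when exhaustive search over all orderings becomes intractable. The function name and the Euclidean-distance assumption are ours:

```python
from itertools import permutations
import math

def best_sequence(start, targets):
    """Exhaustively search all target orderings and return the one
    minimizing total distance travelled from `start` (2D points).
    Feasible only for small target counts; heuristics take over beyond."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best_order, best_cost = None, float("inf")
    for order in permutations(targets):
        cost, pos = 0.0, start
        for t in order:
            cost += dist(pos, t)
            pos = t
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost
```

With n targets this explores n! orderings, which is exactly the combinatorial explosion that motivates the swarm-based methods in the abstract.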

Relevance: 30.00%

Abstract:

A case study of an aircraft engine manufacturer is used to analyze the effects of management levers on the lead time and design errors generated in an iteration-intensive concurrent engineering process. The levers considered are amount of design-space exploration iteration, degree of process concurrency, and timing of design reviews. Simulation is used to show how the ideal combination of these levers can vary with changes in design problem complexity, which can increase, for instance, when novel technology is incorporated in a design. Results confirm that it is important to consider multiple iteration-influencing factors and their interdependencies to understand concurrent processes, because the factors can interact with confounding effects. The article also demonstrates a new approach to derive a system dynamics model from a process task network. The new approach could be applied to analyze other concurrent engineering scenarios. © The Author(s) 2012.

Relevance: 30.00%

Abstract:

A framework for adaptive and non-adaptive statistical compressive sensing is developed, where a statistical model replaces the standard sparsity model of classical compressive sensing. We propose within this framework optimal task-specific sensing protocols specifically and jointly designed for classification and reconstruction. A two-step adaptive sensing paradigm is developed, where online sensing is applied to detect the signal class in the first step, followed by a reconstruction step adapted to the detected class and the observed samples. The approach is based on information theory, here tailored for Gaussian mixture models (GMMs), where an information-theoretic objective relating the sensed signals to a representation of the specific task of interest is maximized. Experimental results using synthetic signals, Landsat satellite attributes, and natural images of different sizes and with different noise levels show the improvements achieved using the proposed framework when compared to more standard sensing protocols. The underlying formulation can be applied beyond GMMs, at the price of higher mathematical and computational complexity. © 1991-2012 IEEE.

Relevance: 30.00%

Abstract:

Nowadays, multi-touch devices (MTD) can be found in all kinds of contexts. In the learning context, MTD availability leads many teachers to use them in their classrooms, to support the use of the devices by students, or to assume that they will enhance learning processes. Despite the rising interest in MTD, few studies have examined their impact in terms of performance or the suitability of the technology for the learning context. Even though using touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web surfing), we notice that the use of MTD may lead to a less favourable outcome. The complexity of generating accurate finger gestures and the split attention this requires (a multi-tasking effect) make interacting with a touch-sensitive screen more difficult than traditional laptop use. More precisely, it is hypothesized that efficacy and efficiency decrease, as do the available cognitive resources, making the users' task engagement more difficult. Furthermore, the presented study takes into account the moderating effect of previous experience with MTD: two key factors from technology adoption theories, familiarity and self-efficacy with the technology, were included in the study. Sixty university students, invited to a usability lab, were asked to perform information search tasks on an online encyclopaedia. The tasks were created so as to exercise the most commonly used mouse actions (e.g. right click, left click, scrolling, zooming, keyword encoding…). Two conditions were created: (1) MTD use and (2) laptop use (with keyboard and mouse). The cognitive load, self-efficacy, familiarity, and task engagement scales were adapted to the MTD context.
Furthermore, eye-tracking measurement offers additional information about user behaviour and cognitive load. Our study aims to clarify some important aspects of MTD usage and its added value compared to a laptop in a student learning context. More precisely, the outcomes will shed light on the suitability of MTD for the processes at stake and the role of previous knowledge in the adoption process, and will provide some interesting insights into the user experience with such devices.

Relevance: 30.00%

Abstract:

Over the last decade, multi-touch devices (MTD) have spread across a range of contexts. In the learning context, MTD accessibility leads more and more teachers to use them in their classrooms, assuming that they will improve learning activities. Despite this growing interest, only a few studies have focused on the impact of MTD use in terms of performance and suitability in a learning context. Even though using touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web surfing), we notice that the use of MTD may lead to a less favorable outcome. More precisely, tasks that require users to generate complex and/or less common gestures may increase extraneous cognitive load and impair performance, especially for intrinsically complex tasks. It is hypothesized that task and gesture complexity will tax users' cognitive resources and decrease task efficacy and efficiency. Because MTD are supposed to be more appealing, it is assumed that they will also affect cognitive absorption. The present study also takes into account users' prior knowledge of MTD use and gestures by using experience with MTD as a moderator. Sixty university students were asked to perform information search tasks on an online encyclopedia. Tasks were set up so that users had to generate the most commonly used mouse actions (e.g. left/right click, scrolling, zooming, text encoding…). Two conditions were created, MTD use and laptop use (with mouse and keyboard), in order to compare the two devices. An eye-tracking device was used to measure users' attention and cognitive load. Our study sheds light on some important aspects of the use of MTD and its added value compared to a laptop in a student learning context.

Relevance: 30.00%

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
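To convey the flavour of randomized implicit election (this is a toy simulation of the common random-rank idea, not the paper's message-optimal algorithm), each node can draw a random rank and flood the maximum rank it has seen; after enough synchronous rounds, only the node holding the global maximum concludes that it is the leader, which is all the implicit variant requires:

```python
import random

def implicit_leader_election(adj, seed=0):
    """Toy synchronous simulation on a graph given as {node: [neighbors]}.
    Each node draws a random rank; in every round each node adopts the
    largest rank any neighbor has seen. After n rounds (>= diameter),
    exactly the holder of the global maximum knows it is the leader."""
    rng = random.Random(seed)
    ranks = {v: rng.random() for v in adj}   # distinct with prob. 1
    known = dict(ranks)
    for _ in range(len(adj)):                # n rounds suffice
        new = dict(known)
        for v in adj:
            for u in adj[v]:
                if known[u] > new[v]:
                    new[v] = known[u]
        known = new
    # only the node whose own rank equals the flooded maximum decides
    return [v for v in adj if ranks[v] == known[v]]
```

Note that this naive flooding costs far more messages than the O(m) bound discussed above; it only illustrates why randomness (distinct random ranks) makes symmetry breaking easy.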

Relevance: 30.00%

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.

Relevance: 30.00%

Abstract:

Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors—such a platform is referred to as two-type platform. We present two low degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) using SA, it is guaranteed to find such an assignment where the same restriction on task migration applies but given a platform in which processors are 1+α/2 times faster and (ii) SA-P succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative) but given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate average-case performance of both the algorithms by generating task sets randomly and measuring how much faster processors the algorithms need (which is upper bounded by 1+α/2 for SA and 1+α for SA-P) in order to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case where no task utilization in the given task set can exceed one and for this case, we (re-)prove the performance guarantees of SA and SA-P. We show, for both of the algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. 
For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring much smaller processor speedup and by running orders of magnitude faster.
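The speedup guarantees above hinge on the parameter α, defined in the abstract as the maximum of all task utilizations that are no greater than one. A minimal sketch of how α and the two bounds would be computed from a task set (the function name is ours):

```python
def speedup_bounds(utilizations):
    """Return (alpha, SA bound, SA-P bound) for a task set given as a
    list of utilizations. alpha is the largest utilization <= 1; the
    abstract's guarantees are processors 1 + alpha/2 times faster for
    SA (intra-migrative) and 1 + alpha times faster for SA-P
    (non-migrative). Assumes at least one utilization is <= 1."""
    alpha = max(u for u in utilizations if u <= 1)
    return alpha, 1 + alpha / 2, 1 + alpha
```

Since 0 < α ≤ 1, the theoretical speedup required by SA never exceeds 1.5, and that of SA-P never exceeds 2; the evaluations in the abstract report that the speedup actually needed is usually much smaller.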

Relevance: 30.00%

Abstract:

The effects of a complexly worded counterattitudinal appeal on laypeople's attitudes toward a legal issue were examined, using the Elaboration Likelihood Model (ELM) of persuasion as a theoretical framework. This model states that persuasion can result from the elaboration and scrutiny of the message arguments (i.e., central route processing), or can result from less cognitively effortful strategies, such as relying on source characteristics as a cue to message validity (i.e., peripheral route processing). One hundred and sixty-seven undergraduates (85 men and 81 women) listened to either a low status or high status source deliver a counterattitudinal speech on a legal issue. The speech was designed to contain strong or weak arguments. These arguments were worded in a simple and, therefore, easy to comprehend manner, or in a complex and, therefore, difficult to comprehend manner. Thus, there were three experimental manipulations: argument comprehensibility (easy to comprehend vs. difficult to comprehend), argument strength (weak vs. strong), and source status (low vs. high). After listening to the speech, participants completed a measure of their attitude toward the legal issue, a thought listing task, an argument recall task, manipulation checks, measures of motivation to process the message, and measures of mood. As a result of the failure of the argument strength manipulation, only the effects of the comprehensibility and source status manipulations were tested. There was, however, some evidence of more central route processing in the easy comprehension condition than in the difficult comprehension condition, as predicted.
Significant correlations were found between attitude and favourable and unfavourable thoughts about the legal issue with easy to comprehend arguments; whereas there was a correlation only between attitude and favourable thoughts toward the issue with difficult to comprehend arguments, suggesting, perhaps, that central route processing, which involves argument scrutiny and elaboration, occurred under conditions of easy comprehension to a greater extent than under conditions of difficult comprehension. The results also revealed, among other findings, several significant effects of gender. Men had more favourable attitudes toward the legal issue than did women, men recalled more arguments from the speech than did women, men were less frustrated while listening to the speech than were women, and men put more effort into thinking about the message arguments than did women. When the arguments were difficult to comprehend, men had more favourable thoughts and fewer unfavourable thoughts about the legal issue than did women. Men and women may have had different affective responses to the issue of plea bargaining (with women responding more negatively than men), especially in light of a local and controversial plea bargain that occurred around the time of this study. Such pre-existing gender differences may have led to the lower frustration, the greater effort, the greater recall, and more positive attitudes for men than for women. Results from this study suggest that current cognitive models of persuasion may not be very applicable to controversial issues which elicit strong emotional responses. Finally, these data indicate that affective responses, the controversial and emotional nature of the issue, gender, and other individual differences are important considerations when experts are attempting to persuade laypeople toward a counterattitudinal position.

Relevance: 30.00%

Abstract:

In this paper, a time series complexity analysis of dense array electroencephalogram (EEG) signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity in signals recorded from systems that can vary from the purely deterministic to the purely stochastic realm. The present analysis is conducted with the objective of gaining insight into complexity variations related to changing brain dynamics for EEG recorded in three cases: a passive, eyes-closed condition; a mental arithmetic task; and the same mental task carried out after a physical exertion task. It is observed that the statistic is a robust quantifier of complexity suited for short physiological signals such as the EEG, and it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought in by the intermediate fatigue-inducing exercise period. This enhances its utility in detecting subtle changes in the brain state and gives it wider scope for applications in EEG based brain studies.
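A common formulation of SampEn counts pairs of templates of length m and m+1 that match within a tolerance r, and reports -ln(A/B). The sketch below is a standard simplified implementation of that definition, not the authors' code:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn = -ln(A/B): B counts pairs of length-m templates within
    Chebyshev distance r, A counts pairs of length-(m+1) templates.
    Simplified version; r defaults to 0.2 * std, a common choice."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def matches(mm):
        # all overlapping templates of length mm
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B = matches(m)
    A = matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```

A regular signal (many repeating templates) yields low SampEn, while an irregular one yields high SampEn, which is exactly the contrast the abstract exploits between relaxed and mental-task EEG states.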

Relevance: 30.00%

Abstract:

Four experiments consider some of the circumstances under which children follow two different rule pairs when sorting cards. Previous research has repeatedly found that 3-year-olds encounter substantial difficulties implementing the second of two conflicting rule sets, despite their knowledge of these rules. One interpretation of this phenomenon [Cognitive Complexity and Control (CCC) theory] is that 3-year-olds have problems establishing an appropriate hierarchical ordering for rules. The present data suggest an alternative account of children's card sorting behaviour, according to which the cognitive salience of test card features may be more important than inflexibility with respect to rule representation.

Relevance: 30.00%

Abstract:

This investigation moves beyond the traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO's ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO's performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, constraints which are exaggerated when derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.

Relevance: 30.00%

Abstract:

The overarching aim of the research reported here was to investigate the effects of task structure and storyline complexity of oral narrative tasks on second language task performance. Participants were 60 Iranian learners of English who performed six narrative tasks of varying degrees of structure and storyline complexity in an assessment setting. A number of detailed analytic measures were employed to examine whether there were any differences in the participants' performances elicited by the different tasks in terms of their accuracy, fluency, syntactic complexity, and lexical diversity. Results of the data analysis showed that performance in the more structured tasks was more accurate and to a great extent more fluent than that in the less structured tasks. The results further revealed that the syntactic complexity of L2 performance was related to storyline complexity, i.e. more syntactic complexity was associated with narratives that had both foreground and background storylines. These findings strongly suggest that there is some systematic variance in the participants' performance triggered by the different aspects of task design.