Abstract:
We examine hypotheses for the neural basis of the profile of visual cognition in young children with Williams syndrome (WS). These are: (a) that it is a consequence of anomalies in sensory visual processing; (b) that it is a deficit of the dorsal relative to the ventral cortical stream; (c) that it reflects a deficit of frontal function, in particular of fronto-parietal interaction; (d) that it is related to impaired function in the right hemisphere relative to the left. The tests reported here are particularly relevant to (b) and (c). They form part of a more extensive programme investigating visual, visuospatial, and cognitive function in a large group of children with WS, aged 8 months to 15 years. To compare performance across tests, avoiding floor and ceiling effects, we have measured performance in children with WS in terms of the 'age equivalence' for typically developing children. In this paper the relation between dorsal and ventral function was tested by motion and form coherence thresholds respectively. We confirm the presence of a subgroup of children with WS who perform particularly poorly on the motion (dorsal) task. However, such performance is also characteristic of normally developing children up to 5 years: thus the WS performance may reflect an overall persisting immaturity of visuospatial processing which is particularly evident in the dorsal stream. Looking at the performance of the entire WS group on the global coherence tasks, we find that there is also a subgroup who have both high form and high motion coherence thresholds, relative to the performance of children of the same chronological age and verbal age on the BPVS, suggesting a more general global processing deficit. Frontal function was tested by a counterpointing task, the ability to retrieve a ball from a 'detour box', and the Stroop-like 'day-night' task, all of which require inhibition of a familiar response. When considered in relation to overall development as indexed by vocabulary, the day-night task shows little specific impairment, the detour box shows a significant delay relative to controls, and the counterpointing task shows a marked and persistent deficit in many children. We conclude that frontal control processes show most impairment in WS when they are associated with spatially directed responses, reflecting a deficit of fronto-parietal processing. However, children with WS may successfully reduce the effect of this impairment by verbally mediated strategies. On all these tasks we find a range of difficulties across individual children, and a small subset of children with WS who show very good performance, equivalent to chronological age norms of typically developing children. Overall, we conclude that children with WS have specific processing difficulties with tasks involving fronto-parietal circuits within the spatial domain. However, some children with WS can achieve performance similar to that of typically developing children on some tasks involving the dorsal stream, although the strategies and processing may differ between the two groups.
Abstract:
In this paper we report on our attempts to fit the optimal data selection (ODS) model (Oaksford & Chater, 1994; Oaksford, Chater, & Larkin, 2000) to the selection task data reported in Feeney and Handley (2000) and Handley, Feeney, and Harper (2002). Although Oaksford (2002b) reports good fits to the data described in Feeney and Handley (2000), the model does not adequately capture the data described in Handley et al. (2002). Furthermore, across all six of the experiments modelled here, the ODS model does not predict participants' behaviour at the level of selection rates for individual cards. Finally, when people's probability estimates are used in the modelling exercise, the model adequately captures only 1 out of the 18 conditions described in Handley et al. We discuss the implications of these results for models of the selection task and claim that they support deductive, rather than probabilistic, accounts of the task.
Abstract:
The interpretations people attach to line drawings reflect shape-related processes in human vision. Their divergences from the expectations embodied in related machine vision traditions are summarized and used to suggest how human vision decomposes the task of interpretation. A model called IO implements this idea. It first identifies geometrically regular, local fragments. Initial decisions fix edge orientations, and this information constrains decisions about other properties. Relations between fragments are then explored, beginning with weak consistency checks and moving to fuller ones. IO's output captures multiple distinctive characteristics of human performance, suggesting that steady progress towards understanding shape-related visual processes is possible.
Abstract:
This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, like ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing that they correspond to a rather well-behaved non-monotonic logic. We also show that with the addition of a preference relation on inductive theories we can characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation) to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from arguments used in the communication between them. We show that the inductive theories achieved by multiagent induction plus argumentation are sound, i.e. they are precisely the same as the inductive theories built by a single agent with all the data.
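As a loose illustration of the kind of consequence relation described here (not the authors' formalism), the Python sketch below treats a hypothesis as inducible from a set of labelled examples iff it covers every positive example and no negative one; the attribute-dictionary representation and the names `covers` and `inducible` are invented for illustration.

```python
# Minimal sketch of inducibility as a consequence relation for
# inductive concept learning. Representations here are hypothetical.

def covers(hypothesis, instance):
    """A hypothesis maps attributes to required values ('?' = any)."""
    return all(v == '?' or instance.get(a) == v
               for a, v in hypothesis.items())

def inducible(hypothesis, examples):
    """examples: list of (instance, label) pairs with boolean labels."""
    return all(covers(hypothesis, x) == label for x, label in examples)

examples = [({'shape': 'round', 'colour': 'red'},   True),
            ({'shape': 'round', 'colour': 'green'}, True),
            ({'shape': 'square', 'colour': 'red'},  False)]

h = {'shape': 'round', 'colour': '?'}
print(inducible(h, examples))  # True: covers both positives, no negatives
```

The defeasible character the paper formalizes shows up when a new example arrives: adding a negative round example to the list defeats `h`, so the set of inducible hypotheses shrinks non-monotonically.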
Abstract:
Cognitive and neurophysiological correlates of arithmetic calculation, concepts, and applications were examined in 41 adolescents, ages 12-15 years. Psychological and task-related EEG measures that correctly distinguished children who scored low vs. high (using a median split) in each arithmetic subarea were interpreted as indicative of the processes involved. Calculation was related to visual-motor sequencing, spatial visualization, theta activity measured during visual-perceptual and verbal tasks at right- and left-hemisphere locations, and right-hemisphere alpha activity measured during a verbal task. Performance on arithmetic word problems was related to spatial visualization and perception, vocabulary, and right-hemisphere alpha activity measured during a verbal task. Results suggest a complex interplay of spatial and sequential operations in arithmetic performance, consistent with processing model concepts of lateralized brain function.
Abstract:
This open learning zone article examines the cardiac cycle and the interpretation of cardiac rhythm strips. The article begins with a brief revision of the related physiology, followed by a description of normal sinus rhythm and the main cardiac rhythm abnormalities. It concludes with easy-to-follow steps for interpreting cardiac rhythm strips, with practice examples presented in the CPD task section.
Abstract:
NiTi alloys have been widely used in micro-electro-mechanical systems (MEMS), which often involve precise and complex motion control. However, when using NiTi alloys in MEMS applications, the main problem to be considered is the degradation of their functional properties during cyclic loading. This also stresses the importance of accurate prediction of the functional behavior of NiTi alloys. In the last two decades, a large number of constitutive models have been proposed to achieve this task. A portion of them focused on the deformation behavior of NiTi alloys under cyclic loading, which is a practical and non-negligible situation. Despite the scale of modeling studies in this field, two experimental observations under uniaxial tension loading have not received proper attention. First, a deviation from linearity well before the stress-induced martensitic transformation (SIMT) has not been modeled; recent experiments confirmed that it is caused by the formation of stress-induced R phase. Second, the influence of the well-known localized Lüders-like SIMT on the macroscopic behavior of NiTi alloys, in particular the residual strain during cyclic loading, has not been addressed. In response, we develop a 1-D phenomenological constitutive model for NiTi alloys with two novel features: the formation of stress-induced R phase and the explicit modeling of the localized Lüders-like SIMT. The derived constitutive relations are simple and at the same time sufficient to describe the behavior of NiTi alloys. The accumulation of residual strain caused by the R phase under different loading schemes is accurately described by the proposed model. Also, the residual strain caused by irreversible SIMT at different maximum loading strains under cyclic tension loading in individual samples can be explained by, and fitted into, a single equation in the proposed model. These results show that the proposed model successfully captures the behavior of the R phase and the essence of localized SIMT.
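The proposed model itself is not reproduced in the abstract. Purely as a generic illustration of the 1-D phenomenology involved, the Python sketch below assembles a toy monotonic tensile stress-strain law with a reduced-modulus R-phase segment before a flat Lüders-like SIMT plateau; all moduli, onset stresses, and the plateau strain are invented placeholders, not fitted parameters from the paper.

```python
# Toy 1-D loading curve for a shape-memory alloy: elastic austenite,
# reduced-modulus R-phase segment, flat Lüders-like SIMT plateau, then
# elastic response of the transformed phase. All parameter values are
# illustrative placeholders, not the paper's fitted model.

E_A   = 60e3   # austenite modulus, MPa (placeholder)
E_R   = 25e3   # tangent modulus after R-phase onset, MPa (placeholder)
S_R   = 200.0  # R-phase onset stress, MPa (placeholder)
S_MS  = 400.0  # SIMT plateau stress, MPa (placeholder)
EPS_T = 0.04   # transformation strain spanned by the plateau (placeholder)

def stress(eps):
    """Monotonic tensile loading only; unloading/hysteresis omitted."""
    e_r  = S_R / E_A                  # strain at R-phase onset
    e_ms = e_r + (S_MS - S_R) / E_R   # strain at plateau start
    if eps <= e_r:
        return E_A * eps                       # linear austenite
    if eps <= e_ms:
        return S_R + E_R * (eps - e_r)         # R-phase: deviation from linearity
    if eps <= e_ms + EPS_T:
        return S_MS                            # Lüders-like SIMT plateau
    return S_MS + E_A * (eps - e_ms - EPS_T)   # post-plateau (same modulus for simplicity)

for e in (0.002, 0.008, 0.03, 0.06):
    print(f"strain {e:.3f} -> stress {stress(e):6.1f} MPa")
```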
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors because of the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a mode of strong scaling because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of precisely identifying all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel taking into account the dependences specified in the task graph.
Several papers report important overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output, and in/out annotations on task arguments, as well as commutative in/out annotations and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for their efficient management. We then present three schemes to manage task graphs, building on graph representations, hypergraphs, and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
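To make the version idea concrete, here is a minimal hypothetical Python sketch (not the scheme evaluated in the paper) of dependence detection from per-argument in/out annotations: each datum carries an implicit version, readers of the current version depend on its last writer, and a writer starts a new version after synchronizing with the previous version's readers.

```python
# Minimal sketch of dependence detection from in/out task annotations.
# Each datum tracks its last writer and the readers of the current
# version; an 'out' access starts a new version.

from collections import defaultdict

class TaskGraph:
    def __init__(self):
        self.edges = defaultdict(set)    # task -> successor tasks
        self.last_writer = {}            # datum -> writer of current version
        self.readers = defaultdict(set)  # datum -> readers of current version

    def add_task(self, task, ins=(), outs=()):
        for d in ins:                    # read-after-write dependence
            if d in self.last_writer:
                self.edges[self.last_writer[d]].add(task)
            self.readers[d].add(task)
        for d in outs:                   # write starts a new version of d
            for r in self.readers[d]:    # write-after-read dependences
                if r != task:
                    self.edges[r].add(task)
            if d in self.last_writer and not self.readers[d]:
                self.edges[self.last_writer[d]].add(task)  # write-after-write
            self.last_writer[d] = task
            self.readers[d] = set()

g = TaskGraph()
g.add_task('t1', outs=['a'])
g.add_task('t2', ins=['a'], outs=['b'])  # t2 depends on t1
g.add_task('t3', ins=['a'])              # t3 depends on t1, parallel to t2
g.add_task('t4', outs=['a'])             # t4 waits for readers t2 and t3
for t, succs in sorted(g.edges.items()):
    print(t, '->', sorted(succs))
```

An edge-less scheme in the spirit of the fourth alternative would replace the explicit `edges` sets with per-task counters of unsatisfied predecessors, decremented as producers complete.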
Abstract:
The assessment of parenting capacity continues to engender public concern in cases of suspected harm to children. This paper outlines a model for approaching this task based on the application of three key domains of knowledge in social work relating to facts, theory and practice wisdom. The McMaster Model of Family Assessment is identified through this process and reworked to give it a sharper focus on parenting roles and responsibilities. Seven formative dimensions of parenting are then elicited and combined with an analytical process of identifying strengths, concerns, prospects for growth and impact on child outcomes. The resulting assessment framework, it is argued, adds rigour to professional judgements about parenting capacity and enhances formulations on risk in child protection.
Abstract:
We demonstrate a model for stoichiometric and reduced titanium dioxide, intended for use in molecular dynamics and other atomistic simulations, based on polarizable ion tight binding theory. This extends the model introduced in two previous papers from molecular and liquid applications into the solid state, thus completing the task of providing a comprehensive and unified scheme for studying chemical reactions, particularly aimed at problems in catalysis and electrochemistry. As before, experimental results are given priority over theoretical ones in selecting targets for model fitting, for which we used the crystal parameters and band gaps of the titania bulk polymorphs rutile and anatase. The model is applied to six low-index titania surfaces, with and without oxygen vacancies and adsorbed water molecules, in both dissociated and non-dissociated states. Finally, we present the results of a molecular dynamics simulation of an anatase cluster with a number of adsorbed water molecules and discuss the role of edge and corner atoms of the cluster.
Abstract:
We introduce a task-based programming model and runtime system that exploit the observation that not all parts of a program are equally significant for the accuracy of the end result, in order to trade off the quality of program outputs for increased energy efficiency. This is done in a structured and flexible way, allowing for easy exploitation of different points in the quality/energy space without adversely affecting application performance. The runtime system can apply a number of different policies to decide whether it will execute less significant tasks accurately or approximately.
The experimental evaluation indicates that our system can achieve an energy reduction of up to 83% compared with a fully accurate execution and up to 35% compared with an approximate version employing loop perforation. At the same time, our approach always results in graceful quality degradation.
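The runtime API is not given in the abstract. As a purely hypothetical Python sketch of the idea, tasks below carry a significance level and an optional approximate variant, and a simple ratio-driven policy decides which version runs; all names (`run_tasks`, the task tuples, the perforated variant) are invented for illustration.

```python
# Hypothetical sketch of significance-driven task execution: a policy
# runs the accurate version of the most significant fraction of tasks
# and the cheap approximate variant of the rest.

def run_tasks(tasks, ratio=0.5):
    """tasks: list of (significance, accurate_fn, approx_fn_or_None).
    ratio: fraction of tasks (by significance rank) run accurately."""
    if not tasks:
        return []
    ranked = sorted(tasks, key=lambda t: -t[0])
    k = max(1, int(len(ranked) * ratio))   # how many run accurately
    cutoff = ranked[k - 1][0]
    results = []
    for sig, accurate, approx in tasks:
        if approx is None or sig >= cutoff:
            results.append(accurate())     # significant: full accuracy
        else:
            results.append(approx())       # less significant: cheap version
    return results

# Example: sum blocks of data, approximating less significant blocks by
# sampling every 4th element (a loop-perforation-like variant).
blocks = [list(range(100)), list(range(50)), list(range(10))]
tasks = [(len(b),                          # significance ~ block size
          lambda b=b: sum(b),
          lambda b=b: sum(b[::4]) * 4)     # perforated approximation
         for b in blocks]
print(run_tasks(tasks, ratio=0.34))        # [4950, 1248, 48]
```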
Abstract:
Cost-effective semantic description and annotation of shared knowledge resources has always been of great importance for digital libraries and large-scale information systems in general. With the emergence of the Social Web and Web 2.0 technologies, a more effective semantic description and annotation of digital library contents, e.g., folksonomies, is envisioned to take place in collaborative and personalised environments. However, there is a lack of foundation and mathematical rigour for coping with contextualised management and retrieval of semantic annotations throughout their evolution, as well as with diversity in users and user communities. In this paper, we propose an ontological foundation for semantic annotations of digital libraries in terms of flexonomies. The proposed theoretical model relies on a high-dimensional space with algebraic operators for contextualised access to semantic tags and annotations. The set of proposed algebraic operators is an adaptation of the set-theoretic operators of database theory: selection, projection, difference, intersection, and union. To this extent, the proposed model is meant to lay the ontological foundation for a Digital Library 2.0 project in terms of geometric spaces rather than logic (description) based formalisms, as a more efficient and scalable solution to the semantic annotation problem at large scale.
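As a toy illustration (not the paper's algebra), the Python sketch below treats an annotation store as a mapping from resources to tag sets and implements the database-style operators the abstract names; the store layout and function names are assumptions made for the example.

```python
# Toy tag-annotation store with database-style operators, illustrating
# (not reproducing) the algebraic flavour described in the abstract.

def select(store, predicate):
    """Selection: keep resources whose tag set satisfies a predicate."""
    return {r: tags for r, tags in store.items() if predicate(tags)}

def project(store, vocabulary):
    """Projection: restrict every tag set to a given vocabulary."""
    return {r: tags & vocabulary for r, tags in store.items()}

def union(a, b):
    return {r: a.get(r, set()) | b.get(r, set()) for r in a.keys() | b.keys()}

def intersection(a, b):
    return {r: a[r] & b[r] for r in a.keys() & b.keys()}

def difference(a, b):
    return {r: a[r] - b.get(r, set()) for r in a}

papers    = {'doc1': {'ontology', 'web2.0'}, 'doc2': {'folksonomy', 'tagging'}}
community = {'doc1': {'semantics'}, 'doc3': {'tagging'}}

print(select(papers, lambda tags: 'tagging' in tags))  # doc2 only
print(union(papers, community))                        # tags merged per resource
```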
Abstract:
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, five broad selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. After identifying the criteria, a survey was prepared and companies were contacted in order to understand which factors carry more weight in their decisions when choosing partners. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or Value Analysis. The goal of the paper is to provide a reference selection model that can serve as a guideline for decision making in the supplier/partner selection process.
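The paper's weights and sub-criteria are not given in the abstract. The hypothetical Python sketch below shows only the linear-weighting step itself: criterion scores per supplier are combined with normalized weights into a single ranking score; all numbers are invented placeholders, not survey results.

```python
# Hypothetical linear-weighting sketch for supplier ranking: each
# supplier gets a weighted sum of criterion scores. Weights and scores
# are illustrative placeholders only.

criteria = ['quality', 'financial', 'synergies', 'cost', 'production']
weights  = {'quality': 0.30, 'financial': 0.15, 'synergies': 0.10,
            'cost': 0.25, 'production': 0.20}          # sums to 1.0

suppliers = {
    'A': {'quality': 8, 'financial': 6, 'synergies': 7, 'cost': 5, 'production': 9},
    'B': {'quality': 7, 'financial': 8, 'synergies': 5, 'cost': 9, 'production': 6},
}

def score(scores):
    """Weighted sum of a supplier's criterion scores."""
    return sum(weights[c] * scores[c] for c in criteria)

for s in sorted(suppliers, key=lambda s: score(suppliers[s]), reverse=True):
    print(s, round(score(suppliers[s]), 2))   # B 7.25, A 7.05
```

In a full AHP treatment the weights themselves would be derived from pairwise comparison matrices rather than assigned directly, but the aggregation step stays the same weighted sum.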
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with a limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms is optimising energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states, by trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption, and sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown in previous work to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
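The paper's algorithm is not spelled out in the abstract. As a much-simplified hypothetical Python sketch, phase 1 below greedily places each task on the core where its dynamic energy is lowest (subject to a utilization bound), and phase 2 tries to vacate the least-loaded core so it can sleep, accepting the moves when the dynamic-energy increase is outweighed by an assumed leakage saving; all parameters and the data layout are invented for illustration.

```python
# Simplified two-phase task-to-core allocation sketch (hypothetical).
# Phase 1 minimizes dynamic energy per task; phase 2 vacates a lightly
# loaded core to save leakage energy.

CAP = 1.0             # utilization capacity per core
LEAKAGE_SAVING = 5.0  # assumed energy saved when a core sleeps (placeholder)

def phase1(tasks, cores):
    """tasks: {t: {'util': u, 'energy': {core: e_dyn}}}. Assumes a
    feasible core always exists."""
    alloc, load = {}, {c: 0.0 for c in cores}
    for t, info in tasks.items():
        feasible = [c for c in cores if load[c] + info['util'] <= CAP]
        best = min(feasible, key=lambda c: info['energy'][c])
        alloc[t], load[best] = best, load[best] + info['util']
    return alloc, load

def phase2(tasks, alloc, load):
    """Try to empty the least-loaded core if the leakage saving wins.
    Single-pass check; per-move load updates omitted for brevity."""
    victim = min(load, key=load.get)
    moved, extra = {}, 0.0
    for t in [t for t, c in alloc.items() if c == victim]:
        targets = [c for c in load if c != victim
                   and load[c] + tasks[t]['util'] <= CAP]
        if not targets:
            return alloc                 # cannot vacate; keep phase-1 result
        dest = min(targets, key=lambda c: tasks[t]['energy'][c])
        moved[t] = dest
        extra += tasks[t]['energy'][dest] - tasks[t]['energy'][victim]
    if extra < LEAKAGE_SAVING:           # worth sleeping the core
        alloc.update(moved)
    return alloc

tasks = {'t1': {'util': 0.5, 'energy': {'big': 4, 'little': 2}},
         't2': {'util': 0.3, 'energy': {'big': 3, 'little': 1}},
         't3': {'util': 0.2, 'energy': {'big': 2, 'little': 3}}}
alloc, load = phase1(tasks, ['big', 'little'])
print(phase2(tasks, alloc, load))  # all tasks on 'little'; 'big' can sleep
```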
Abstract:
Optimal challenge occurs when an individual perceives the challenge of a task to be matched by his or her own skill level (Csikszentmihalyi, 1990). The purpose of this study was to test the impact of the OPTIMAL model on physical education students' motivation and perceptions of optimal challenge across four games categories (i.e., target, batting/fielding, net/wall, invasion). Enjoyment, competence, student goal orientation, and activity level were examined in relation to the OPTIMAL model. A total of 22 students (17 M; 5 F) and their parents provided informed consent to take part in the study, and the students were taught four OPTIMAL lessons and four non-OPTIMAL lessons ranging across the four games categories by their own teacher. All students completed the Task and Ego Orientation in Sport Questionnaire (TEOSQ; Duda & Whitehead, 1998), the Intrinsic Motivation Inventory (IMI; McAuley, Duncan, & Tammen, 1987), and the Children's Perception of Optimal Challenge Instrument (CPOCI; Mandigo, 2001). Sixteen students (two per lesson) were observed using the System for Observing Fitness Instruction Time tool (SOFIT; McKenzie, 2002). They also participated in a structured interview that took place after each lesson was completed. Quantitative results showed no overall significant difference in motivational outcomes between OPTIMAL and non-OPTIMAL lessons. However, when the lessons were broken down into games categories, significant differences emerged. Levels of perceived competence were higher in non-OPTIMAL batting/fielding lessons than in OPTIMAL lessons, whereas levels of enjoyment and perceived competence were higher in OPTIMAL invasion lessons than in non-OPTIMAL invasion lessons. Qualitative results revealed significant effects on feelings of skill/challenge balance, enjoyment, and competence in the OPTIMAL lessons. Moreover, the percentage of active movement time in OPTIMAL lessons was practically twice that of non-OPTIMAL lessons.