983 results for simple timing task
Abstract:
When switching tasks, if stimuli are presented that contain features cueing two of the tasks in the set (i.e., bivalent stimuli), performance slowing is observed on all tasks. This generalized slowing extends to tasks in the set that have no features in common with the bivalent stimulus and is referred to as the bivalency effect. In previous work, the bivalency effect was invoked by presenting occasionally occurring bivalent stimuli; therefore, the possibility that the generalized slowing is simply due to surprise (as opposed to bivalency) had not yet been ruled out. This question was addressed in two task-switching experiments in which the occasionally occurring stimuli were either bivalent (bivalent version) or merely surprising (surprising version). In both experiments, the generalized slowing was much greater in the bivalent version, demonstrating that the magnitude of this effect is greater than can be accounted for by simple surprise. This set of results suggests that slowing task execution when encountering bivalent stimuli may be fundamental to efficient task switching, as adaptive tuning of response style may serve to prepare the cognitive system for possible future high-conflict trials.
Abstract:
The purpose of the present study was to investigate whether amnesic patients show a bivalency effect. The bivalency effect refers to the performance slowing that occurs when switching tasks and bivalent stimuli appear occasionally among univalent stimuli. According to the episodic context binding account, bivalent stimuli create a conflict-loaded context that is re-activated on subsequent trials; the effect is thus assumed to depend on memory binding processes. Given the profound memory deficit in amnesia, we hypothesized that the bivalency effect would be largely reduced in amnesic patients. We tested sixteen severely amnesic patients and a control group with a paradigm requiring predictable alternations between three simple cognitive tasks, with bivalent stimuli occasionally occurring on one of these tasks. The results showed the typical bivalency effect for the control group, that is, a generalized slowing on each task. In contrast, amnesic patients showed only a short-lived slowing, present on the task that followed immediately after a bivalent stimulus, indicating that the binding between tasks and context was impaired in amnesic patients.
Abstract:
Implicit task sequence learning (TSL) can be considered an extension of implicit sequence learning, which is typically tested with the classical serial reaction time task (SRTT). By design, in the SRTT there is a correlation between the sequence of stimuli to which participants must attend and the sequence of motor movements/key presses with which participants must respond. The TSL paradigm makes it possible to disentangle this correlation and to separately manipulate the presence/absence of a sequence of tasks, a sequence of responses, and even other streams of information such as stimulus locations or stimulus-response mappings. Here I review the state of TSL research, which points to the critical role of the presence of correlated streams of information in implicit sequence learning. On a more general level, I propose that beyond correlated streams of information, a simple statistical learning mechanism may also be involved in implicit sequence learning, and that the relative contributions of these two explanations differ according to task requirements. With this differentiation, conflicting results can be integrated into a coherent framework.
Abstract:
This paper introduces an extended hierarchical task analysis (HTA) methodology devised to evaluate and compare user interfaces on volumetric infusion pumps. The pumps were studied along the dimensions of overall usability and propensity for generating human error. With HTA as our framework, we analyzed six pumps on a variety of common tasks using Norman's action theory. The introduced method of evaluation divides the problem space between the external world of the device interface and the user's internal cognitive world, allowing predictions of potential user errors at the human-device level. In this paper, one detailed analysis is provided as an example, comparing two different pumps on two separate tasks. The results demonstrate the inherent variation, often the cause of usage errors, found among infusion pumps used in hospitals today. The reported methodology is a useful tool for evaluating human performance and predicting potential user errors with infusion pumps and other simple medical devices.
Abstract:
The ability to represent time is an essential component of cognition, but its neural basis is unknown. Although extensively studied both behaviorally and electrophysiologically, a general theoretical framework describing the elementary neural mechanisms the brain uses to learn temporal representations is lacking. It is commonly believed that the underlying cellular mechanisms reside in higher-order cortical regions, but recent studies show sustained neural activity in primary sensory cortices that can represent the timing of expected reward. Here, we show that local cortical networks can learn temporal representations through a simple framework predicated on reward-dependent expression of synaptic plasticity. We assert that temporal representations are stored in the lateral synaptic connections between neurons and demonstrate that reward-modulated plasticity is sufficient to learn these representations. We implement our model numerically to explain reward-time learning in the primary visual cortex (V1), demonstrate experimental support, and suggest additional experimentally verifiable predictions.
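The reward-dependent plasticity the abstract describes can be caricatured in a few lines. The sketch below shows a generic three-factor (reward-modulated Hebbian) rule, not the authors' actual model; all variable names and parameter values are assumptions:

```python
def simulate(reward_delivered, steps=50, eta=0.1, tau_e=10.0):
    """Toy three-factor rule: coincident pre/post activity leaves a
    decaying eligibility trace, which is converted into a lasting
    weight change only when a reward signal arrives later."""
    w = 0.5   # lateral synaptic weight
    e = 0.0   # eligibility trace
    for t in range(steps):
        pre = 1.0 if t < 5 else 0.0    # brief presynaptic burst
        post = 1.0 if t < 5 else 0.0   # coincident postsynaptic burst
        e += -e / tau_e + pre * post   # trace decays, boosted by coincidence
        r = 1.0 if (reward_delivered and t == 20) else 0.0  # delayed reward
        w += eta * r * e               # plasticity gated by reward
    return w

w_rewarded = simulate(True)     # weight potentiated at reward time
w_unrewarded = simulate(False)  # no reward, no lasting change
```

The point of the gating term is that the same pre/post coincidence leaves the weight untouched unless reward follows within the lifetime of the eligibility trace.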
Abstract:
The purpose of this study was to investigate the generality and temporal endurance of the bivalency effect in task switching. This effect refers to the slowing on univalent stimuli that occurs when bivalent stimuli appear occasionally. We used a paradigm involving predictable switches between 3 simple tasks, with bivalent stimuli occasionally occurring on one of the tasks. The generality of the bivalency effect was investigated by using different tasks and different types of bivalent stimuli, and the endurance of this effect was investigated across different intertrial intervals (ITIs) and across the univalent trials that followed trials with bivalent stimuli. In 3 experiments, the results showed a general, robust, and enduring bivalency effect for all ITI conditions. Although the effect declined across trials, it remained significant for about 4 trials following one with a bivalent stimulus. Our findings emphasise the importance of top–down processes in task-switching performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Abstract:
In this study we investigated whether synesthetic color experiences have effects similar to those of real colors in cognitive conflict adaptation. We tested 24 synesthetes and two yoke-matched control groups in a task-switching experiment that involved regular switches between three simple decision tasks (a color decision, a form decision, and a size decision). In most trials the stimuli were univalent, that is, specific to each task. Occasionally, however, black graphemes were presented for the size decisions, and we tested whether they would trigger synesthetic color experiences and thus turn into bivalent stimuli. The results confirmed this expectation. We were also interested in their effect on subsequent performance (i.e., the bivalency effect). The results showed that for synesthetic colors the bivalency effect was not as pronounced as for real colors. The latter result may be related to differences between synesthetes and controls in coping with color conflict.
Abstract:
INTRODUCTION Optimising the use of blood has become a core task of transfusion medicine. Because no general guidelines are available in Switzerland, we analysed the effects of introducing a guideline on red blood cell (RBC) transfusion for elective orthopaedic surgery. METHODS Prospective, multicentre, before-and-after study comparing the use of RBCs in adult elective hip or knee replacement before and after the implementation, in 10 Swiss hospitals, of a guideline developed together with all participants. RESULTS We included 2,134 patients: 1,238 in the 7 months before and 896 in the 6 months after the intervention. Fifty-seven patients (34, or 2.7%, before; 23, or 2.6%, after) were lost to follow-up. The mean number of transfused RBC units decreased from 0.5 to 0.4 per patient (difference 0.1, 95% CI 0.08-0.2; p = 0.014), the proportion of transfused patients from 20.9% to 16.9% (difference 4%, 95% CI 0.7-7.4%; p = 0.02), and the pre-transfusion haemoglobin from 82.6 to 78.2 g/l (difference 4.4 g/l, 95% CI 2.15-6.62 g/l; p < 0.001). We did not observe any statistically significant changes in in-hospital mortality (0.4% vs. 0%), morbidity (4.1% vs. 4.0%), median hospital length of stay (9 vs. 9 days), follow-up mortality (0.4% vs. 0.2%), or follow-up morbidity (6.9% vs. 6.0%). CONCLUSIONS The introduction of a simple transfusion guideline reduces and standardises the use of RBCs by lowering the haemoglobin transfusion trigger, without negative effects on patient outcome. Local support, training, and monitoring of the effects are requirements for programmes optimising the use of blood.
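The headline effect on transfusion rates can be reproduced from the figures given in the abstract. A minimal sketch, assuming a standard unpooled Wald normal-approximation interval (the study's exact method is not stated here):

```python
from math import sqrt

# Figures reported in the abstract: the proportion of transfused
# patients fell from 20.9% (of 1,238) to 16.9% (of 896).
p1, n1 = 0.209, 1238   # before the guideline
p2, n2 = 0.169, 896    # after the guideline

diff = p1 - p2                                      # about 0.040
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # unpooled Wald SE
ci = (diff - 1.96 * se, diff + 1.96 * se)           # about (0.007, 0.073)
```

The result matches the reported 4% difference with 95% CI 0.7-7.4% up to rounding.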
Abstract:
This study evaluated the administration-time-dependent effects of a stimulant (Dexedrine 5 mg), a sleep inducer (Halcion 0.25 mg) and placebo (control) on human performance. The investigation was conducted on 12 diurnally active (0700-2300) male adults (23-38 yrs) using a double-blind, randomized, six-way-crossover, three-treatment, two-timepoint (0830 vs 2030) design. Performance tests were conducted hourly during sleepless 13-hour studies using a computer-generated, -controlled and -scored multi-task cognitive performance assessment battery (PAB) developed at the Walter Reed Army Institute of Research. Specific tests were Simple and Choice Reaction Time, Serial Addition/Subtraction, Spatial Orientation, Logical Reasoning, Time Estimation, Response Timing and the Stanford Sleepiness Scale. The major index of performance was "Throughput", a combined measure of speed and accuracy. For the placebo condition, single and group cosinor analysis documented circadian rhythms in cognitive performance for the majority of tests, both for individuals and for the group. Performance was best around 1830-2030 and most variable around 0530-0700, when sleepiness was greatest (0300). Morning Dexedrine dosing marginally enhanced performance, by an average of 3% relative to the corresponding-in-time control level, and increased alertness by 10% over the AM control. Evening Dexedrine failed to improve performance relative to the corresponding PM control baseline. Comparing AM and PM Dexedrine administrations, AM performance was 6% better, with subjects 25% more alert. Morning Halcion administration caused a 7% performance decrement and a 16% increase in sleepiness; evening administration caused a 13% decrement and a 10% increase, compared with corresponding-in-time control data.
Performance was 9% worse and sleepiness 24% greater after evening versus morning Halcion administration. These results suggest that for evening Halcion dosing, the overnight sleep deprivation coinciding with the circadian nadir in performance combines with the drug's CNS-depressant effects to degrade performance. For Dexedrine, morning administration produced only marginal performance enhancement, and evening administration was less effective, suggesting that the 5-mg dose may be too low to counteract the partial sleep deprivation and the nocturnal nadir in performance.
Abstract:
The aim of the present work is to examine the differences between two groups of fencers at different levels of competition, elite and medium level. The timing parameters of the reaction response were compared, together with the kinetic variables that determine the sequence of segmented participation used during the lunge with a change of target during movement. A total of 30 male sword fencers participated, 13 elite and 17 medium level. Two force platforms recorded the horizontal component of the force and the start of the movement. One system filmed the movement in 3D, recording the spatial positions of 11 markers, while another system projected a mobile target onto a screen. For synchronisation, an electronic signal enabled all the systems to be started simultaneously. Among the timing parameters of the reaction response, the choice reaction time (CRT) to the target change during the lunge was measured. The results revealed differences between the groups in flight time, horizontal velocity at the end of the acceleration phase, and the length of the lunge, all higher for the elite group, as well as in other variables related to the temporal sequence of movement. No significant differences were found in simple reaction time or in CRT. According to the literature, CRT appears to improve with sports practice, although this factor did not differentiate the elite from the medium-level fencers. The coordination of fencing movements, that is, correct technique, constitutes a factor that differentiates elite fencers from medium-level ones.
Abstract:
In this paper, we study a robot swarm that has to perform task allocation in an environment that features periodic properties. In this environment, tasks appear in different areas following periodic temporal patterns. The swarm has to reallocate its workforce periodically, performing a temporal task allocation that must be synchronized with the environment to be effective. We tackle temporal task allocation using methods and concepts that we borrow from the signal processing literature. In particular, we propose a distributed temporal task allocation algorithm that synchronizes robots of the swarm with the environment and with each other. In this algorithm, robots use only local information and a simple visual communication protocol based on light blinking. Our results show that a robot swarm that uses the proposed temporal task allocation algorithm performs considerably more tasks than a swarm that uses a greedy algorithm.
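The synchronization idea can be illustrated with a generic phase-coupling model. This is a Kuramoto-style sketch, not the authors' distributed algorithm: it assumes all-to-all observation rather than the local blink-based protocol, and all parameter values are made up:

```python
import cmath
import math
import random

def synchronize(n=20, steps=400, dt=0.1, k=0.5, seed=1):
    """Each robot nudges its phase toward the phases it observes, so
    the swarm converges on a common rhythm whose frequency matches the
    period imposed by the environment."""
    random.seed(seed)
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = 1.0  # common angular frequency set by the periodic environment
    for _ in range(steps):
        phases = [
            (p + dt * (omega + k * sum(math.sin(q - p) for q in phases) / n))
            % (2.0 * math.pi)
            for p in phases
        ]
    # Kuramoto order parameter: 1.0 means a perfectly synchronized swarm
    return abs(sum(cmath.exp(1j * p) for p in phases)) / n

r = synchronize()
```

With identical frequencies and sine coupling, the swarm settles into near-perfect synchrony from almost any initial phase distribution, which is the property the temporal task allocation relies on.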
Abstract:
The Internet is evolving toward what is known as the Live Web. In this new stage of its evolution, a multitude of social data streams are put at the service of users. Thanks to these data sources, users have gone from browsing static web pages to interacting with applications that offer personalized content based on their preferences. Each user interacts daily with multiple applications that issue notifications and alerts; in this sense every user is a source of events, and users often feel overwhelmed, unable to process all of that on-demand information. To deal with this overload, numerous tools have appeared that automate the most common tasks, ranging from inbox managers and social-network alert managers to complex CRMs or smart-home hubs. The drawback is that although they solve common problems, they cannot adapt to the needs of each user with a personalized solution. Task Automation Services (TAS) entered the scene from 2012 onward to overcome this limitation. Given their resemblance, these services can also be regarded as a new, user-centred approach to mash-up technology. Users of these platforms can interconnect services, sensors and other Internet-connected devices, designing the automations that fit their needs. The proposal has been widely accepted by users, which has prompted a multitude of platforms offering TAS to appear. As this is a new field of research, this thesis presents the main characteristics of TAS, describes their components, and identifies the fundamental dimensions that define them and allow their classification.
This work coins the term Task Automation Service (TAS), giving a formal description of these services and their components (called channels), and provides a reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a common model, embodied in the EWE (Evented WEb) ontology. This model makes it possible to compare and align channels and automations from different TASs, a considerable contribution to the portability of user automations between platforms. Likewise, given the semantic character of the model, automations can include elements from external sources, such as Linked Open Data, over which to reason. Using this model, a dataset of channels and automations has been generated from data harvested from some of the TASs on the market. As a final step toward a common model for describing TAS, an algorithm has been developed to learn ontologies automatically from the dataset, which favours the discovery of new channels and reduces the maintenance cost of the model, now updated semi-automatically. In conclusion, the main contributions of this thesis are: i) describing the state of the art in task automation and coining the term Task Automation Service; ii) developing an ontology for modelling TAS components and automations; iii) populating a dataset of channels and automations, used to develop an automatic ontology-learning algorithm; and iv) designing an agent architecture to assist users in creating automations.
ABSTRACT The new stage in the evolution of the Web (the Live Web or Evented Web) puts a multitude of social data streams at the service of users, who no longer browse static web pages but interact with applications that present them with contextual, relevant experiences. Given that each user is a potential source of events, a typical user often gets overwhelmed. To deal with that huge amount of data, multiple automation tools have emerged, ranging from simple social media managers or notification aggregators to complex CRMs or smart-home hubs and apps. As a downside, they cannot be tailored to the needs of every single user. As a natural response to this shortcoming, Task Automation Services broke onto the Internet. They may be seen as a new model of mash-up technology for combining social streams, services and connected devices from an end-user perspective: end-users are empowered to connect those streams however they want, designing the automations they need. The number of such platforms shot up early on, and the number following this approach keeps growing fast. Being a novel field, this thesis aims to shed light on it, presenting and exemplifying the main characteristics of Task Automation Services, describing their components, and identifying several dimensions along which to classify them. This thesis coins the term Task Automation Service (TAS) by providing a formal definition of these services and their components (called channels), as well as a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a theoretical common model of TAS and formalizes it as the EWE ontology. This model makes it possible to compare channels and automations from different TASs, which has a high impact on interoperability, and enhances automations by providing a mechanism to reason over external sources such as Linked Open Data.
Based on this model, a dataset of TAS components was built, harvesting data from the websites of actual TASs. Going a step further towards this common model, an algorithm for categorizing these components was designed, enabling their discovery across different TASs. Thus, the main contributions of the thesis are: i) surveying the state of the art on task automation and coining the term Task Automation Service; ii) providing a semantic common model for describing TAS components and automations; iii) populating a categorized dataset of TAS components, used to learn ontologies of particular domains from the TAS perspective; and iv) designing an agent architecture for assisting users in setting up automations, one that is aware of their context and acts accordingly.
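The channel/automation pattern underlying TAS rules is essentially trigger-action wiring: an event on one channel fires an action on another. A minimal sketch with hypothetical names (`Rule` and `TaskAutomationService` are illustrative, not the EWE ontology's actual vocabulary):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    """One automation: when `event` fires on `trigger_channel`,
    run `action` (an operation offered by some other channel)."""
    trigger_channel: str
    event: str
    action: Callable[[dict], None]

class TaskAutomationService:
    """Minimal event bus wiring channel events to channel actions."""
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def add_rule(self, rule: Rule) -> None:
        self.rules.append(rule)

    def publish(self, channel: str, event: str, payload: dict) -> None:
        # Dispatch the payload to every rule triggered by this event.
        for rule in self.rules:
            if rule.trigger_channel == channel and rule.event == event:
                rule.action(payload)

notifications = []
tas = TaskAutomationService()
tas.add_rule(Rule("weather", "rain_forecast",
                  lambda p: notifications.append(f"Take an umbrella in {p['city']}")))
tas.publish("weather", "rain_forecast", {"city": "Madrid"})
# notifications == ["Take an umbrella in Madrid"]
```

A semantic model such as EWE plays the role that the plain strings play here: it lets rules written against one platform's channels be compared with, and ported to, another's.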
Abstract:
The usual aim when a structural problem is posed is to obtain the best possible solution, where "best" means the solution that, while fulfilling the structural, usage and other requirements, has the lowest physical cost. To a first approximation, the physical cost can be represented by the self-weight of the structure, which lets the search for the best solution be framed as the search for the lightest one. From a practical point of view, obtaining good solutions—that is, solutions whose cost is only slightly greater than that of the best solution—is as important a task as obtaining absolute optima, which is in general hardly tractable. To have a measure of efficiency that makes comparison between solutions possible, the following definition of structural efficiency is proposed: the ratio between the useful load to be supported and the total load to be accounted for (the sum of the useful load and the self-weight). Structural form can be considered to comprise four concepts which, together with the material, define a structure: size, schema, proportion and thickness. Galileo (1638) proposed the existence of an insurmountable size for every structural problem—the size at which self-weight alone exhausts a structure of a given schema and proportion. That size, or structural scope, differs for each material used; the only information about the material needed to determine it is the ratio between its strength and its specific weight, a magnitude we call the scope of the material. For structures very small relative to their structural scope, the above definition of efficiency is useless. In this case—structures of "null size", in which self-weight is negligible compared with the useful load—we propose as a measure of cost the dimensionless magnitude we call the Michell number, derived from the "quantity" introduced by A. G. M.
Michell in his seminal 1904 article, developed from an 1870 lemma of J. C. Maxwell. At the end of the last century, R. Aroca combined the theories of Galileo and of Maxwell and Michell, proposing an easily applied design rule (the GA rule) that allows the scope and the efficiency of a structural form to be estimated. The present work studies the efficiency of truss-like structures in bending problems, taking the influence of size into account. On the one hand, for structures of null size, near-optimal schemas are explored by means of several minimization methods, aiming at forms whose cost (measured by their Michell number) is very close to that of the absolute optimum while being considerably less complex. On the other hand, a method is presented for determining the structural scope of truss-like structures (accounting for the local effect of bending in their elements); its results are compared with those of the GA rule, showing the conditions under which the rule applies. Finally, lines of future research are identified: the measurement of complexity, the accounting of the cost of foundations, and the extension of the minimization methods when self-weight is taken into account. ABSTRACT When a structural problem is posed, the intention is usually to obtain the best solution, understanding this as the solution that, fulfilling the different requirements (structural, use, etc.), has the lowest physical cost. In a first approximation, the physical cost can be represented by the self-weight of the structure; this allows the search for the best solution to be considered as the search for the one with the lowest self-weight. But, from a practical point of view, obtaining good solutions—i.e.
solutions with a higher though comparable physical cost than the optimum—can be as important as finding the optimal ones, because the latter is, in general, not an affordable task. In order to have a measure of the efficiency that allows the comparison between different solutions, a definition of structural efficiency is proposed: the ratio between the useful load and the total load—i.e. the useful load plus the self-weight resulting from the structural sizing. The structural form can be considered to be formed by four concepts which, together with its material, completely define a particular structure. These are: Size, Schema, Slenderness or Proportion, and Thickness. Galileo (1638) postulated the existence of an insurmountable size for structural problems—the size for which a structure with a given schema and a given slenderness is only able to resist its self-weight. Such a size, or structural scope, will be different for every material used; the only information about the material needed to determine it is the ratio between its allowable stress and its specific weight: a characteristic length that we name the material structural scope. The definition of efficiency given above is not useful for structures that are small in comparison with the insurmountable size. In this case—structures of null size, in which the self-weight is negligible in comparison with the useful load—we use as a measure of cost the dimensionless magnitude that we call Michell's number, an amount derived from the "quantity" introduced by A. G. M. Michell in his seminal article published in 1904, developed out of a result from J. C. Maxwell of 1870. R. Aroca joined the theories of Galileo and the theories of Maxwell and Michell, obtaining design rules of direct application (that we denominate the "GA rule"), which allow the estimation of the structural scope and the efficiency of a structural schema.
In this work the efficiency of truss-like structures resolving bending problems is studied, taking into consideration the influence of size. On the one hand, in the case of structures of null size, near-optimal layouts are explored using several minimization methods, in order to obtain forms with a cost near the absolute optimum but with a significant reduction in complexity. On the other hand, a method for determining the insurmountable size of truss-like structures is shown, taking local bending effects into account. The results are checked against the GA rule, showing the conditions in which it is applicable. Finally, some directions for future research are proposed: the measure of complexity, the cost of foundations, and the extension of the optimization methods to take the self-weight into account.
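Both the efficiency measure and the material structural scope defined above are simple ratios. An illustrative calculation, using rough textbook values for mild steel rather than figures from the thesis:

```python
def structural_efficiency(useful_load, self_weight):
    """Efficiency as defined above: useful load / total load carried
    (the total load is the useful load plus the self-weight)."""
    return useful_load / (useful_load + self_weight)

def material_scope(allowable_stress, specific_weight):
    """Material structural scope: allowable stress / specific weight,
    a characteristic length bounding the attainable size (Galileo)."""
    return allowable_stress / specific_weight

# Illustrative figures for mild steel (assumptions, not from the thesis):
# allowable stress ~235 MPa, specific weight ~77 kN/m^3
scope_steel = material_scope(235e6, 77e3)   # a length of roughly 3,000 m
eff = structural_efficiency(100e3, 25e3)    # 100 kN useful on 25 kN self-weight
```

The scope comes out as a length of a few kilometres, which is why self-weight is negligible ("null size") for ordinary steel structures but dominant as the insurmountable size is approached.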
Abstract:
We studied the performance of young and senior subjects on a well known working memory task, the Operation Span. This is a dual-task in which subjects perform a memory task while simultaneously verifying simple equations. Positron-emission tomography scans were taken during performance. Both young and senior subjects demonstrated a cost in accuracy and latency in the Operation Span compared with performing each component task alone (math verification or memory only). Senior subjects were disproportionately impaired relative to young subjects on the dual-task. When brain activation was examined for senior subjects, we found regions in prefrontal cortex that were active in the dual-task, but not in the component tasks. Similar results were obtained for young subjects who performed relatively poorly on the dual-task; however, for young subjects who performed relatively well in the dual-task, we found no prefrontal regions that were active only in the dual-task. Results are discussed as they relate to the executive component of task switching.
Abstract:
We analyzed the effect of short-term water deficits at different periods of sunflower (Helianthus annuus L.) leaf development on the spatial and temporal patterns of tissue expansion and epidermal cell division. Six water-deficit periods were imposed with similar and constant values of soil water content, predawn leaf water potential and [ABA] in the xylem sap, and with negligible reduction of the rate of photosynthesis. Water deficit did not affect the duration of expansion and division. Regardless of their timing, deficits reduced relative expansion rate by 36% and relative cell division rate by 39% (cells blocked at the G0-G1 phase) in all positions within the leaf. However, reductions in final leaf area and cell number in a given zone of the leaf largely differed with the timing of deficit, with a maximum effect for earliest deficits. Individual cell area was only affected during the periods when division slowed down. These behaviors could be simulated in all leaf zones and for all timings by assuming that water deficit affects relative cell division rate and relative expansion rate independently, and that leaf development in each zone follows a stable three-phase pattern in which duration of each phase is stable if expressed in thermal time (C. Granier and F. Tardieu [1998b] Plant Cell Environ 21: 695–703).