941 results for Memory systems
Abstract:
It is well established that memory functioning deteriorates with advancing age. However, research indicates that the magnitude of age-related memory deficits varies across different types of memory, and broad individual differences can be observed in the rate and timing of memory aging. The general aim of this study was to investigate the selectivity and variability of memory functioning in relation to anxiety. First, memory effectiveness was assessed with episodic memory tasks using reality-monitoring and external source-monitoring paradigms, semantic memory tasks tapping general knowledge and word fluency, and a perceptual priming task reflected in word completion. Based on scores on the trait version of the STAI, high-trait and low-trait anxious subjects were screened from young and old participants matched for educational level. Second, building on the results of the first part, concurrent primary and secondary tasks with a probe technique assessing spare processing capacity were used to explore the relation between memory efficiency and anxiety. The first set of main findings was that: (a) there were no age-related differences in semantic memory assessed by general knowledge and PRS, whereas age effects were observed in episodic memory and in semantic memory assessed by word fluency under stringent time constraints; (b) the pattern of age-related deficits in source versus item memory was not related to the presentation format or encoding effort for source, but was affected by the type of source. Specifically, source memory was more sensitive to aging than item memory in external source-monitoring processes involved in discriminating two external sources (i.e., female vs. male voices), but not in reality-monitoring processes discriminating between internal and external sources (i.e., acting vs. listening).
The second set of main findings was that: (a) anxiety had no effect on the effectiveness or efficiency of semantic memory in recall of general knowledge and PRS, but impaired both in word fluency; (b) the effects of anxiety on episodic memory differed between the old and the young. Both the effectiveness and the efficiency of episodic memory in the old were adversely affected by anxiety. More importantly, source recall in external source-monitoring processes was more vulnerable to anxiety than item memory. The effectiveness of episodic memory in the young was relatively unrelated to anxiety, although anxiety may have had an adverse effect on their memory efficiency. These results indicate that: first, the selectivity of age-related memory deficits exists not only between memory systems but also within the episodic memory system; the tendency to forget the source even when the fact is retained in external source monitoring is suggested to be a specific feature of cognitive aging. Second, anxiety adversely affects individual differences in memory aging and partially mediates age-related differences in episodic memory performance.
Abstract:
Speculative service implies that a client's request for a document is serviced by sending, in addition to the document requested, a number of other documents (or pointers thereto) that the server speculates will be requested by the client in the near future. This speculation is based on statistical information that the server maintains for each document it serves. The notion of speculative service is analogous to prefetching, which is used to improve cache performance in distributed/parallel shared memory systems, with the exception that servers (not clients) control when and what to prefetch. Using trace simulations based on the logs of our departmental HTTP server http://cs-www.bu.edu, we show that both server load and service time can be reduced considerably if speculative service is used. This is above and beyond what is currently achievable using client-side caching [3] and server-side dissemination [2]. We identify a number of parameters that could be used to fine-tune the level of speculation performed by the server.
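A minimal sketch of how a server might maintain such per-document statistics and select documents to piggyback. The class, method, and parameter names here are illustrative assumptions, not the paper's actual implementation; the "level of speculation" parameters mentioned in the abstract are modeled as a top-k cutoff and a minimum follow probability:

```python
from collections import defaultdict

class SpeculativeServer:
    """Sketch: per-document follow statistics drive server-side speculation."""

    def __init__(self, top_k=2, threshold=0.2):
        # follow_counts[a][b] = how often b was requested right after a
        self.follow_counts = defaultdict(lambda: defaultdict(int))
        self.top_k = top_k          # max documents to piggyback per request
        self.threshold = threshold  # minimum estimated follow probability
        self.last_request = {}      # client id -> last document served

    def serve(self, client, doc):
        # Update follow statistics using this client's previous request.
        prev = self.last_request.get(client)
        if prev is not None:
            self.follow_counts[prev][doc] += 1
        self.last_request[client] = doc
        # Speculate: piggyback documents likely to be requested next.
        counts = self.follow_counts[doc]
        total = sum(counts.values())
        hints = []
        if total:
            ranked = sorted(counts.items(), key=lambda kv: -kv[1])
            hints = [d for d, c in ranked[:self.top_k]
                     if c / total >= self.threshold]
        return doc, hints
```

Raising `threshold` or lowering `top_k` trades speculation accuracy against extra bandwidth, which is the tuning knob the abstract alludes to.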
Abstract:
Parallel computing is now widely used in numerical simulation, particularly for application codes based on finite difference and finite element methods. A popular and successful technique employed to parallelize such codes onto large distributed memory systems is to partition the mesh into sub-domains that are then allocated to processors. The code then executes in parallel, using the SPMD methodology, with message passing for inter-processor interactions. In order to improve the parallel efficiency of an imbalanced structured mesh CFD code, a new dynamic load balancing (DLB) strategy has been developed in which the processor partition range limits of just one of the partitioned dimensions use non-coincidental limits, as opposed to coincidental limits. The ‘local’ partition limit change allows greater flexibility in obtaining a balanced load distribution, as a workload increase or decrease on a processor is no longer restricted by the ‘global’ (coincidental) limit change. The automatic implementation of this generic DLB strategy within an existing parallel code is presented in this chapter, along with some preliminary results.
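A minimal sketch of the kind of range-limit recomputation such a DLB strategy performs. The function and its inputs are hypothetical: it assumes per-slice costs measured along the partitioned dimension, and applying it independently within each strip of the other dimension would yield the non-coincidental ‘local’ limits the abstract describes:

```python
def rebalance_limits(cell_costs, nprocs):
    """Recompute partition range limits along one dimension so that each
    processor receives an approximately equal share of the measured cost.
    cell_costs[i] is the measured cost of mesh slice i; returns the new
    upper limits (exclusive slice indices) for each processor's range."""
    total = sum(cell_costs)
    target = total / nprocs
    limits, acc, proc = [], 0.0, 1
    for i, c in enumerate(cell_costs):
        acc += c
        # Close the current processor's range once its cumulative cost share
        # is reached, keeping at least one slice per remaining processor.
        if acc >= proc * target and len(cell_costs) - i - 1 >= nprocs - proc:
            limits.append(i + 1)
            proc += 1
            if proc == nprocs:
                break
    limits.append(len(cell_costs))
    return limits
```

For a strip whose second half is twice as expensive as the first, the split point moves toward the cheap end so both processors carry equal cost.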
Abstract:
Code parallelization using OpenMP for shared memory systems is considerably easier than using message passing for distributed memory systems. Despite this, it remains a challenge to use OpenMP to parallelize application codes in a way that yields effective, scalable performance when executed on a shared memory parallel system. We describe an environment that assists the programmer in the various tasks of code parallelization, in a greatly reduced time frame and with a lower level of skill required. The parallelization environment includes a number of tools that address the main tasks of parallelism detection, OpenMP source code generation, debugging and optimization. These tools include a high quality, fully interprocedural dependence analysis with user interaction capabilities to facilitate the generation of efficient parallel code, an automatic relative debugging tool to identify erroneous user decisions in that interaction, and performance profiling to identify bottlenecks. Finally, experiences of parallelizing some NASA application codes are presented to illustrate some of the benefits of using the evolving environment.
Abstract:
Power, and consequently energy, has recently attained first-class system resource status, on par with conventional metrics such as CPU time. To reduce energy consumption, many hardware- and OS-level solutions have been investigated. However, application-level information - which can provide the system with valuable insights unattainable otherwise - has been considered in only a handful of cases. We introduce OpenMPE, an extension to OpenMP designed for power management. OpenMP is the de facto standard for programming parallel shared memory systems, but does not yet provide any support for power control. Our extension exposes (i) per-region multi-objective optimization hints and (ii) application-level adaptation parameters, in order to create energy-saving opportunities for the whole system stack. We have implemented OpenMPE support in a compiler and runtime system, and empirically evaluated its performance on two architectures, mobile and desktop. Our results demonstrate the effectiveness of OpenMPE, with geometric-mean energy savings of 15% across 9 use cases while maintaining full quality of service.
Abstract:
A prominent hypothesis states that specialized neural modules within the human lateral frontopolar cortices (LFPCs) support “relational integration” (RI), the solving of complex problems using inter-related rules. However, it has been proposed that LFPC activity during RI could reflect the recruitment of additional “domain-general” resources when processing more difficult problems in general as opposed to RI specifically. Moreover, theoretical research with computational models has demonstrated that RI may be supported by dynamic processes that occur throughout distributed networks of brain regions as opposed to within a discrete computational module. Here, we present fMRI findings from a novel deductive reasoning paradigm that controls for general difficulty while manipulating RI demands. In accordance with the domain-general perspective, we observe an increase in frontoparietal activation during challenging problems in general as opposed to RI specifically. Nonetheless, when examining frontoparietal activity using analyses of phase synchrony and psychophysiological interactions, we observe increased network connectivity during RI alone. Moreover, dynamic causal modeling with Bayesian model selection identifies the LFPC as the effective connectivity source. Based on these results, we propose that during RI an increase in network connectivity and a decrease in network metastability allow rules that are coded throughout working memory systems to be dynamically bound. This change in connectivity state is top-down propagated via a hierarchical system of domain-general networks with the LFPC at the apex. In this manner, the functional network perspective reconciles key propositions of the globalist, modular, and computational accounts of RI within a single unified framework.
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal
Abstract:
The aim of the present study was to explore functional modulations, in the putamen, of the combined magnetic resonance spectroscopy (MRS) signal of glutamate and glutamine (Glx), as well as of γ-aminobutyric acid (GABA), in relation to motor sequence learning. We hypothesized that Glx concentrations would be specifically increased during and after practice of such a task, compared with a simple motor execution condition designed to minimize learning. The finger-tapping task used is known to induce motor learning that evolves in phases, with initially rapid progress during the first training session (fast phase) followed by slow progress during subsequent sessions (slow phase). This learning is also thought to depend on "on-line" (during practice) acquisition processes and "off-line" (between practice periods) consolidation of the memory trace of the motor skill. A large body of data implicates the glutamatergic neurotransmission system, mainly through the action of its N-methyl-D-aspartate (NMDAR) and metabotropic (mGluR) receptors, in a multitude of memory domains. Some of these studies suggest that this relationship also applies to motor or striatum-dependent memories. Moreover, animal work shows that an increase in glutamate and glutamine concentrations can be associated with the acquisition and/or consolidation of a memory trace. Our MRS measurements at 3.0 Tesla, whose quality proved satisfactory only for Glx, demonstrate that such a modulation of Glx concentrations is indeed detectable in the putamen after performance of a motor task.
They do not, however, allow us to dissociate this effect, putatively attributable to putaminal plasticity associated with motor sequence learning, from that of simple neuronal activation caused by motor execution. The interpretation of the non-significant interaction, which showed greater modulation by the simple motor task, nevertheless leads to the alternative hypothesis that the detected glutamatergic plasticity is potentially more specific to the slow phase of learning, suggesting that a second experiment oriented accordingly, using an MRS method more sensitive to Glx, would have a better chance of yielding conclusive results.
Abstract:
Sleep is indispensable for physical and mental recovery and for processes such as memory consolidation, attention, and language. Sleep deprivation (SD) affects attention and concentration. SD is inherent to medical training, but the role of night shifts for students is unclear, since they serve no academic objective yet are associated with decreased health and productivity, accidents, and impairments in various activities. The impact of SD on learning capacity and on aspects such as mood and interpersonal relationships has been described. METHODS: An observational, analytical, longitudinal cohort study was carried out, with three measurement stages, on 180 medical students at the Universidad del Rosario, assessing selective attention and concentration through the d2 test, internationally validated for this purpose. RESULTS: 180 students were studied, 115 women and 65 men, aged 18 to 26 years (mean 21). At the start of the study they slept an average of 7.9 hours, a figure that fell to 5.8 and 6.3 hours in the second and third stages, respectively. The average number of hours of nocturnal sleep decreased at the second and third time points (p<0.001). In addition, the d2 test revealed a weak but significant direct correlation between average hours of sleep and average test performance (r=0.168, p=0.029). CONCLUSIONS: SD, with sleep periods shorter than 7.2 hours, has a substantial impact on selective attention and concentration.
Abstract:
Neurofeedback is a non-invasive technique that aims to correct, through operant conditioning, brain waves that appear altered in the electroencephalogram. Since 1967, numerous investigations have been conducted on the effects of the technique in the treatment of psychological disorders. To date, however, there are no systematic reviews covering the topics addressed here. The contribution of this work is a review of 56 articles published between 1995 and 2013, together with a methodological evaluation of 29 of the studies included in the review. The search was restricted to the effectiveness of neurofeedback in the treatment of depression, anxiety, obsessive-compulsive disorder (OCD), anger, and fibromyalgia. The findings show that neurofeedback has had positive results in the treatment of these disorders; however, it is a technique still under development, with theoretical foundations that are not yet well established, and its results require methodologically more robust designs to confirm their validity.
Abstract:
A chapter was written describing the neurological examination as the main tool in the approach to the patient with neurological pathology.
Abstract:
One of the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored largely in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested. Two approaches are based on a static partitioning of the data set and a third solution incorporates a dynamic load balancing policy.
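For context, the basic data-parallel decomposition that distributed k-Means formulations build on can be sketched as follows. This is a hedged illustration with hypothetical function names: each node processes its static partition of the points and a reduction combines the partial results; the KD-Tree filtering refinement and the load balancing policies are omitted:

```python
import numpy as np

def local_kmeans_step(points, centroids):
    """One assignment step on a node's local partition of the data:
    returns per-cluster partial sums and counts for a global reduction."""
    k, d = centroids.shape
    sums = np.zeros((k, d))
    counts = np.zeros(k, dtype=int)
    # Assign each local point to its nearest centroid (squared distances).
    dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    for j in range(k):
        mask = labels == j
        sums[j] = points[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def global_update(partials):
    """The reduction an MPI Allreduce would perform: combine partial
    sums/counts from all nodes and recompute the centroids."""
    sums = sum(p[0] for p in partials)
    counts = sum(p[1] for p in partials)
    return sums / np.maximum(counts, 1)[:, None]
```

The load imbalance the abstract targets arises when a KD-Tree variant replaces the dense assignment step, since pruning makes per-partition work data-dependent rather than proportional to partition size.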
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
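The decentralised averaging at the heart of such an epidemic scheme can be illustrated with a push-sum gossip sketch. This is a standard technique shown here under simplifying assumptions (synchronous rounds, uniform random peers, simulated in one process); the function name and parameters are illustrative, not the paper's actual protocol:

```python
import random

def gossip_average(values, rounds=60, seed=0):
    """Push-sum gossip: each node repeatedly splits its (sum, weight) pair
    with a randomly chosen peer; every ratio sum/weight converges to the
    global mean without any central coordinator."""
    rng = random.Random(seed)
    n = len(values)
    sums = list(values)
    weights = [1.0] * n
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            # Keep half of our (sum, weight) mass, push the other half to j.
            half_s, half_w = sums[i] / 2, weights[i] / 2
            sums[i], weights[i] = half_s, half_w
            sums[j] += half_s
            weights[j] += half_w
    return [s / w for s, w in zip(sums, weights)]
```

Because total mass is conserved, a node or link failure only slows convergence rather than corrupting the estimate, which is the fault-tolerance property the abstract emphasises.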
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems.
This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.