980 results for memory systems
Abstract:
Objective: Neuroimaging studies have highlighted important issues related to structural and functional brain changes found in sufferers of psychological trauma that may influence their ability to synthesize, categorize, and integrate traumatic memories. Methods: Literature review with critical analysis and synthesis. Results: Traumatic memories are diagnostic symptoms of post-traumatic stress disorder (PTSD), and the dual representation theory posits separate memory systems subserving vivid re-experiencing (non-hippocampally dependent) versus declarative autobiographical memories of trauma (hippocampally dependent). But the psychopathological signs of trauma are not static over time, nor is the expression of traumatic memories. Multiple memory systems are activated simultaneously and in parallel on various occasions. The interaction between neural circuits is a crucial consideration in developing a psychotherapeutic approach that may favour an integrative translation of the sensory fragments of the traumatic memory into a declarative memory system. Conclusion: The relationship between neuroimaging findings and psychological approaches is discussed with a view to greater efficacy in the treatment of psychologically traumatized patients.
Abstract:
The multiple memory systems theory proposes that the hippocampus and the dorsolateral striatum are the core structures of the spatial/relational and stimulus-response (S-R) memory systems, respectively. This theory is supported by double dissociation studies showing that the spatial and cue (S-R) versions of the Morris water maze are impaired by lesions in the dorsal hippocampus and dorsal striatum, respectively. In the present study we further investigated whether adult male Wistar rats bearing double, bilateral electrolytic lesions in the dorsal hippocampus and dorsolateral striatum were as impaired as rats bearing single lesions in just one of these structures in learning both versions of the water maze. Such a prediction, based on the multiple memory systems theory, was not confirmed. Compared to the controls, the animals with double lesions exhibited no improvement at all in the spatial version and learned the cued version very slowly. These results suggest that, instead of independent systems competing for control over navigational behaviour, the hippocampus and dorsal striatum both play critical roles in navigation based on spatial or cue-based strategies.
Abstract:
Textbooks divide human memory systems according to consciousness: the hippocampus is thought to support only conscious encoding, while the neocortex supports both conscious and unconscious encoding. We tested whether processing modes, not consciousness, divide memory systems in three neuroimaging experiments with 11 amnesic patients (mean age = 45.55 years, standard deviation = 8.74, range = 23-60) and 11 matched healthy control subjects. The examined processing modes were single-item versus relational encoding, with only relational encoding hypothesized to depend on the hippocampus. Participants encoded and later retrieved either single words or new relations between words. Consciousness of encoding was excluded by subliminal (invisible) word presentation. Amnesic patients and controls performed equally well on the single-item task, which activated prefrontal cortex. But only the controls succeeded on the relational task, which activated the hippocampus, while amnesic patients failed as a group. Hence, unconscious relational encoding, but not unconscious single-item encoding, depended on the hippocampus. Yet three patients performed normally on unconscious relational encoding in spite of their amnesia, capitalizing on spared hippocampal tissue and connections to language cortex. This pattern of results suggests that processing modes divide memory systems, while consciousness divides levels of function within a memory system.
Abstract:
Humans are consciously aware of some memories and can make verbal reports about these memories. Other memories cannot be brought to consciousness, even though they influence behavior. This conspicuous difference in access to memories is central in taxonomies of human memory systems but has been difficult to document in animal studies, suggesting that some forms of memory may be unique to humans. Here I show that rhesus macaque monkeys can report the presence or absence of memory. Although it is probably impossible to document subjective, conscious properties of memory in nonverbal animals, this result objectively demonstrates an important functional parallel with human conscious memory. Animals able to discern the presence and absence of memory should improve accuracy if allowed to decline memory tests when they have forgotten, and should decline tests most frequently when memory is attenuated experimentally. One of two monkeys examined unequivocally met these criteria under all test conditions, whereas the second monkey met them in all but one case. Probe tests were used to rule out “cueing” by a wide variety of environmental and behavioral stimuli, leaving detection of the absence of memory per se as the most likely mechanism underlying the monkeys' abilities to selectively decline memory tests when they had forgotten.
Abstract:
J.L., then a 25-year-old physiotherapist, became densely amnesic following herpes simplex encephalitis. She displayed severe retrograde amnesia, category-specific semantic memory loss, and a profound anterograde amnesia affecting both verbal and visual memory. Her working memory systems were relatively spared, as were most of her cognitive problem-solving abilities, but her social functioning was grossly impaired. She was able to demonstrate several previously learned physiotherapy skills, but was unable to modify her application of these procedures in accordance with patient response. She showed no memory of theoretical or propositional knowledge, and could neither plan treatment nor reason clinically. Three years later, J.L. had profound impairment of anterograde and retrograde declarative memory, with relative sparing of working memory for problem solving and long-term memory of procedural skills. The theoretical and practical implications of her amnesic syndrome are discussed.
Abstract:
Unstructured mesh based codes for the modelling of continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Such codes have the potential to provide high performance on parallel platforms for a small investment in programming. The critical parameters for success are to minimise changes to the code so that it remains maintainable, while providing high parallel efficiency, scalability to large numbers of processors and portability to a wide range of platforms. The paradigm of domain decomposition with message passing has for some time been demonstrated to provide a high level of efficiency, scalability and portability across shared and distributed memory systems without the need to re-author the code into a new language. This paper addresses these issues in the parallelisation of a complex three-dimensional unstructured mesh Finite Volume multiphysics code and discusses the implications of automating the parallelisation process.
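As a concrete illustration of the paradigm described above, the following sketch shows domain decomposition with message passing on a toy 1-D field using mpi4py (an assumption of this example; the paper's code is a 3-D unstructured-mesh Finite Volume solver, which is not reproduced here).

```python
# Minimal SPMD halo-exchange sketch (illustrative only, not the multiphysics
# code discussed above). Each rank owns a slice of a 1-D field, exchanges
# one-cell halos with its neighbours, then updates only the cells it owns --
# the owner-computes pattern behind domain decomposition with message passing.
# Run with e.g. `mpiexec -n 4 python halo.py` (requires mpi4py and NumPy).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                                # cells owned by this rank
u = np.full(n_local + 2, float(rank))      # plus two ghost cells

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(10):                        # a few Jacobi-style sweeps
    # swap halos: send boundary cells, receive neighbours' into ghost cells
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # purely local update over owned cells; no further communication needed
    u[1:-1] = 0.5 * (u[:-2] + u[2:])

print(f"rank {rank}: mean of owned cells = {u[1:-1].mean():.3f}")
```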
Abstract:
The difficulties encountered in implementing large-scale CM codes on multiprocessor systems are now fairly well understood. Despite the claims of shared memory architecture manufacturers to provide effective parallelizing compilers, these have not proved adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem and communicates through the exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared and distributed memory systems, without the need to re-author the code into a new language or even to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework supporting 3D, unstructured mesh, continuum mechanics modeling. In PHYSICA, six inspectors are used; part of the challenge in automating the parallelization is proving the equivalence of inspectors so that they can be merged into as few as possible.
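The inspector idea mentioned above can be illustrated with a toy sketch (hypothetical; the actual PHYSICA inspectors are not shown here): an inspector examines a loop's indirection array to determine which off-processor values must be fetched before the loop runs, and two loops whose inspectors yield the same request set can share one merged communication schedule.

```python
# Illustrative inspector sketch (hypothetical, not the PHYSICA implementation).
def inspect(element_to_node, owned_nodes):
    """Return the set of off-processor node ids referenced by the loop."""
    owned = set(owned_nodes)
    needed = {n for elem in element_to_node for n in elem}
    return needed - owned                  # halo nodes to request

# toy data: this rank owns nodes 0-4; its elements also touch nodes 5 and 7
element_to_node = [(0, 1, 5), (1, 2, 3), (3, 4, 7)]
halo_a = inspect(element_to_node, range(5))

# a second loop over the same connectivity yields an identical request set,
# so the two inspectors can be merged into a single data exchange
halo_b = inspect(element_to_node, range(5))
assert halo_a == halo_b
print(sorted(halo_a))                      # -> [5, 7]
```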
Abstract:
A dissertation submitted in fulfillment of the requirements for the degree of Master in Computer Science and Computer Engineering
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone, and must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by moving the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously-integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the Co-Design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improving this work in the future.
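A rough back-of-envelope illustration (with assumed numbers, not figures from the dissertation) of why stacking raises temperature: in a simple series thermal-resistance model, every die's power must cross the same heat-removal path, and the die farthest from the heat sink also accumulates the resistance of each layer between it and the sink.

```python
# Back-of-envelope illustration with assumed parameters (not from the work above).
power_per_die_w = 50.0          # hypothetical per-die power (W)
r_sink = 0.30                   # heat sink + spreader resistance (K/W), assumed
r_layer = 0.15                  # per-die/bond-layer resistance (K/W), assumed
ambient_c = 45.0

# single die: one layer between junction and sink
t_single = ambient_c + power_per_die_w * (r_sink + r_layer)

# two stacked dies cooled from one side: both dies' power crosses the sink,
# and the die farther from the sink also heats across the intermediate layer
t_near = ambient_c + 2 * power_per_die_w * r_sink + power_per_die_w * r_layer
t_far = t_near + power_per_die_w * r_layer

print(f"single die junction  ~ {t_single:.1f} C")
print(f"stacked: near die    ~ {t_near:.1f} C, far die ~ {t_far:.1f} C")
```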
Abstract:
A cross-sectional study was carried out to examine the pattern of changes in the capacity to coordinate attention between two simultaneously performed tasks in a group of 570 volunteers, from 5 to 17 years old. Results: The ability to coordinate attention increases with age, reaching adult values by age 15 years. These results were also compared with the performance of healthy elderly people and Alzheimer disease (AD) patients on the same dual task, found in a previous study: the analysis indicated that AD patients showed a lower dual-tasking capacity than 5-year-old children, whereas the elderly presented a significantly higher ability than 5-year-old children and no significant differences with respect to young adults. Conclusion: These findings may suggest the presence of a working memory mechanism that enables the division of attention, which is strengthened by the maturation of the prefrontal cortex and impaired in AD. (J. of Att. Dis. 2016; 20(2): 87-95)
Abstract:
The recent trend of chip architectures towards higher numbers of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots and progress in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
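As a rough illustration of the kind of calculation involved (a simplifying assumption of this sketch, not the formula derived in the paper), the number of snapshots needed for wait-free readers can be related to how many updates may commit while the longest read-only transaction is still in flight.

```python
# Illustrative version-count estimate (an assumed model, not the paper's formula):
# if updates to a shared object can commit as often as the shortest writer
# period, and a snapshot must remain available for the full span of the longest
# read-only transaction that may have started just before it was superseded,
# then at least ceil(longest_reader_span / shortest_writer_period) + 1
# versions are needed for readers to proceed wait-free.
from math import ceil

def versions_needed(longest_reader_span_us, shortest_writer_period_us):
    return ceil(longest_reader_span_us / shortest_writer_period_us) + 1

# hypothetical task set: readers hold a snapshot for up to 400 us,
# the fastest updater may commit every 150 us
print(versions_needed(400, 150))   # -> 4
```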
Abstract:
Contention on the memory bus in COTS-based multicore systems is becoming a major determining factor in the execution time of a task. Analyzing this extra execution time is non-trivial because (i) bus arbitration protocols in such systems are often undocumented and (ii) the times at which the memory bus is requested are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. We present a method for finding an upper bound on the extra execution time of a task due to contention on the memory bus in COTS-based multicore systems. This method makes no assumptions about the bus arbitration protocol (other than that it is work-conserving).
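The flavour of such a bound can be sketched as follows (an illustrative, simplified model, not necessarily the analysis presented in the paper): if the arbiter is work-conserving, transactions are non-preemptive, and each other core can have at most one bus request in flight at a time, then every cache miss of the task under analysis waits behind at most one request per other core.

```python
# Illustrative contention bound under the simplifying assumptions stated above:
#   extra_time <= own_misses * (num_cores - 1) * max_request_service_time
def bus_contention_bound(own_misses, num_cores, max_service_ns):
    return own_misses * (num_cores - 1) * max_service_ns

# hypothetical task: 20,000 cache misses on a quad-core, 60 ns per bus request
extra_ns = bus_contention_bound(20_000, 4, 60)
print(f"extra execution time <= {extra_ns / 1e6:.2f} ms")  # -> 3.60 ms
```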
Abstract:
The foreseen evolution of chip architectures towards higher numbers of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as an alternative to lock-based synchronisation. However, STM relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we defend the role of the transaction contention manager in reducing the number of transaction retries and in helping the real-time scheduler to assure schedulability. For this purpose, the contention management policy should be aware of on-line scheduling information.
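One possible form such a scheduling-aware contention manager could take (purely illustrative; the paper's specific policy is not reproduced here) is to resolve each conflict in favour of the transaction whose task has the earlier absolute deadline, pushing retries onto the task with more slack.

```python
# A hypothetical deadline-aware conflict-resolution policy (not the paper's).
from dataclasses import dataclass

@dataclass
class Txn:
    task: str
    abs_deadline_us: int    # absolute deadline of the task running the txn

def resolve_conflict(attacker: Txn, victim: Txn) -> Txn:
    """Return the transaction allowed to proceed; the other one retries."""
    return attacker if attacker.abs_deadline_us <= victim.abs_deadline_us else victim

winner = resolve_conflict(Txn("tau_1", 1_200), Txn("tau_2", 900))
print(f"{winner.task} proceeds, the other transaction retries")  # -> tau_2
```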
Abstract:
The doctoral dissertation in political science by VTT Jouni Meriluoto, Systems between information and knowledge : in a memory management model of an extended enterprise, was examined on 21 June 2011 at the University of Helsinki.
Abstract:
In this Thesis various aspects of memory effects in the dynamics of open quantum systems are studied. We develop a general theoretical framework for open quantum systems beyond the Markov approximation which allows us to investigate different sources of memory effects and to develop methods for harnessing them in order to realise controllable open quantum systems. In the first part of the Thesis a characterisation of non-Markovian dynamics in terms of information flow is developed and applied to study different sources of memory effects. Namely, we study nonlocal memory effects which arise due to initial correlations between two local environments, and further the memory effects induced by initial correlations between the open system and the environment. The last part focuses on describing two all-optical experiments in which, through selective preparation of the initial environment states, the information flow between the system and the environment can be controlled. In the first experiment the system is driven from the Markovian to the non-Markovian regime and the degree of non-Markovianity is determined. In the second experiment we observe the nonlocal nature of the memory effects and provide a novel method to experimentally quantify frequency correlations in photonic environments via polarisation measurements.
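The information-flow characterisation referred to above is commonly quantified through the trace distance between two evolving system states, which can only decrease under Markovian dynamics; the following minimal numerical sketch (a toy dephasing-qubit model, not the photonic experiments described) shows how a revival of the trace distance witnesses memory effects.

```python
# Minimal numerical sketch of an information-flow (trace-distance) witness.
import numpy as np

def trace_distance(r1, r2):
    # D(r1, r2) = (1/2) * sum of |eigenvalues| of (r1 - r2)
    eig = np.linalg.eigvalsh(r1 - r2)
    return 0.5 * np.sum(np.abs(eig))

def dephased(rho, k):
    """Pure dephasing: off-diagonal elements are scaled by the factor k(t)."""
    out = rho.copy()
    out[0, 1] *= k
    out[1, 0] *= k
    return out

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)     # |+><+|
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)   # |-><-|

ts = np.linspace(0, 4, 200)
k = np.exp(-ts) * np.abs(np.cos(2 * ts))    # toy decoherence function with revivals
D = [trace_distance(dephased(plus, kt), dephased(minus, kt)) for kt in k]

backflow = np.any(np.diff(D) > 1e-9)        # any increase signals memory effects
print("information backflow detected:", bool(backflow))
```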