961 results for Memory models
Abstract:
The authors address the 4 main points in S. M. Monroe and S. Mineka's (2008) comment. First, the authors show that the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; American Psychiatric Association, 2000) posttraumatic stress disorder (PTSD) diagnosis includes an etiology and that it is based on a theoretical model with a distinguished history in psychology and psychiatry. Two tenets of this theoretical model are that voluntary (strategic) recollections of the trauma are fragmented and incomplete while involuntary (spontaneous) recollections are vivid and persistent and yield privileged access to traumatic material. Second, the authors describe differences between their model and other cognitive models of PTSD. They argue that these other models share the same 2 tenets as the diagnosis and show that these 2 tenets are largely unsupported by empirical evidence. Third, the authors counter arguments about the strength of the evidence favoring the mnemonic model. Fourth, they show that concerns about the causal role of memory in PTSD are based on views of causality that are generally inappropriate for the explanation of PTSD in the social and biological sciences. © 2008 American Psychological Association.
Abstract:
The spacing effect in list learning occurs because identical massed items suffer encoding deficits and because spaced items benefit from retrieval and increased time in working memory. Requiring the retrieval of identical items produced a spacing effect for recall and recognition, both for intentional and incidental learning. Not requiring retrieval produced spacing only for intentional learning because intentional learning encourages retrieval. Once-presented words provided baselines for these effects. Next, massed and spaced word pairs were judged for matches on their first three letters, forcing retrieval. The words were not identical, so there was no encoding deficit. Retrieval could and did cause spacing only for the first word of each pair; time in working memory, only for the second.
Abstract:
In recent years, increased focus has been placed on the role of intrauterine infection and inflammation in the pathogenesis of fetal brain injury leading to neurodevelopmental disorders such as cerebral palsy. At present, the mechanisms by which inflammatory processes during pregnancy cause this effect on the fetus are poorly understood. Our previous work has indicated an association between experimentally induced intrauterine infection, increased proinflammatory cytokines, and increased white matter injury in the guinea pig fetus. In order to further elucidate the pathways by which inflammation in the maternal system or the fetal membranes leads to fetal impairment, a number of studies investigating aspects of the disease process have been performed. These studies represent a body of work encompassing novel research and results in a number of human and animal studies. Using a guinea pig model of inflammation, increased amniotic fluid proinflammatory cytokines and fetal brain injury were found after a maternal inflammatory response was initiated using endotoxin. In order to more closely monitor the fetal response to chorioamnionitis, a model using the chronically catheterized ovine fetus was employed. This study demonstrated the adverse effects on fetal white matter after intrauterine exposure to bacterial inoculation, though the physiological parameters of the fetus were relatively stable throughout the experimental protocol, even when challenged with intermittent hypoxic episodes. The placenta is an important mediator between mother and fetus during gestation, though its role in the inflammatory process is largely undefined. Studies on the placental role in the inflammatory process were undertaken, and the limited ability of proinflammatory cytokines and endotoxin to cross the placenta is detailed herein.
Neurodevelopmental disorders can be monitored in animal models in order to determine effective disease models for characterization of injury and use in therapeutic strategies. Our characterizations of postnatal behaviour in the guinea pig model using motility monitoring and spatial memory testing have shown small but significant differences in pups exposed to inflammatory processes in utero. The data presented herein contributes a breadth of knowledge to the ongoing elucidation of the pathways by which fetal brain injury occurs. Determining the pathway of damage will lead to discovery of diagnostic criteria, while determining the vulnerabilities of the developing fetus is essential in formulating therapeutic options.
Abstract:
Temporal distinctiveness models of memory retrieval claim that memories are organised partly in terms of their positions along a temporal dimension, and suggest that memory retrieval involves temporal discrimination. According to such models the retrievability of memories should be related to the discriminability of their temporal distances at the time of retrieval. This prediction is tested directly in three pairs of experiments that examine (a) memory retrieval and (b) identification of temporal durations that correspond to the temporal distances of the memories. Qualitative similarities between memory retrieval and temporal discrimination are found in probed serial recall (Experiments 1 and 2), immediate and delayed free recall (Experiments 3 and 4) and probed serial recall of grouped lists (Experiments 5 and 6). The results are interpreted as consistent with the suggestion that memory retrieval is indeed akin to temporal discrimination. (C) 2008 Elsevier Inc. All rights reserved.
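The claimed parallel between retrieval and temporal discrimination can be sketched with a toy ratio-rule model in the spirit of temporal-distinctiveness accounts such as SIMPLE: an item's retrievability is its self-similarity divided by its summed similarity to all list items in log-transformed time. The temporal distances and the discriminability parameter `c` below are illustrative assumptions, not values fitted in these experiments.

```python
import math

def retrieval_probabilities(temporal_distances, c=1.5):
    """Each item's retrievability is its self-similarity (1.0) divided by its
    summed similarity to every list item, with similarity falling off
    exponentially with separation in log-compressed time (a Luce choice ratio)."""
    logs = [math.log(t) for t in temporal_distances]
    probs = []
    for li in logs:
        sims = [math.exp(-c * abs(li - lj)) for lj in logs]
        probs.append(1.0 / sum(sims))
    return probs

# Items studied 5..1 s ago: the most recent item is the most temporally
# distinct (fewest close neighbours in log-time), hence most retrievable.
p = retrieval_probabilities([5, 4, 3, 2, 1])
```

The sketch reproduces the qualitative signatures the experiments probe: a strong recency advantage and a mild edge advantage for the oldest item, both driven purely by discriminability of temporal distances.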
Abstract:
We apply an autobiographical memory framework to the study of regret. Focusing on the distinction between regrets for specific and general events, we argue that the temporal profile of regret, usually explained in terms of the action-inaction distinction, is predicted by models of autobiographical memory. In two studies involving participants in their sixties we demonstrate a reminiscence bump for general, but not for specific, regrets. Recent regrets were more likely to be specific than general in nature. Coding regrets as actions/inactions revealed that general regrets were significantly more likely to be due to inaction, while specific regrets were as likely to be due to action as to inaction. In Study 2 we also generalised all of these findings to a group of participants in their 40s. We re-interpret existing accounts of the temporal profile of regret within the autobiographical memory framework, and outline the practical and theoretical advantages of our memory-based distinction over traditional decision-making approaches to the study of regret. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
People tend to attribute more regret to a character who has decided to take action and experienced a negative outcome than to one who has decided not to act and experienced a negative outcome. For some decisions, however, this finding is not observed in a between-participants design and thus appears to rely on comparisons between people's representations of action and their representations of inaction. In this article, we outline a mental models account that explains findings from studies that have used within- and between-participants designs, and we suggest that, for decisions with uncertain counterfactual outcomes, information about the consequences of a decision to act causes people to flesh out their representation of the counterfactual states of affairs for inaction. In three experiments, we confirm our predictions about participants' fleshing out of representations, demonstrating that an action effect occurs only when information about the consequences of action is available to participants as they rate the nonactor and when this information about action is informative with respect to judgments about inaction. It is important to note that the action effect always occurs when the decision scenario specifies certain counterfactual outcomes. These results suggest that people sometimes base their attributions of regret on comparisons among different sets of mental models.
Abstract:
Hardware synthesis from dataflow graphs of signal processing systems is a growing research area as focus shifts to high-level design methodologies. For data-intensive systems, dataflow-based synthesis can lead to inefficient usage of memory due to the restrictive nature of synchronous dataflow and its inability to easily model data reuse. This paper explores how dataflow graph changes can be used to drive both the on-chip and off-chip memory organisation, and how these memory architectures can be mapped to a hardware implementation. By exploiting the data reuse inherent to many image processing algorithms and by creating memory hierarchies, off-chip memory bandwidth can be reduced by a factor of a thousand from the original dataflow graph level specification of a motion estimation algorithm, with a minimal increase in memory size. This analysis is verified using results gathered from an implementation of the motion estimation algorithm on a Xilinx Virtex-4 FPGA, where the delay between the memories and processing elements drops from 14.2 ns to 1.878 ns through the refinement of the memory architecture. Care must be taken when modeling these algorithms, however, as inefficiencies in these models can easily translate into overuse of hardware resources.
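A back-of-envelope model shows where a thousandfold bandwidth reduction can come from in full-search block-matching motion estimation: without reuse, every candidate match refetches its reference block from off-chip memory; with an on-chip search-window buffer, each reference pixel crosses the off-chip boundary once per frame. The frame size, block size, and search range below are generic illustrative values, not the paper's configuration.

```python
# Off-chip accesses for full-search block-matching motion estimation.
# All parameters are illustrative assumptions.
W, H = 720, 576        # frame dimensions (pixels)
B = 16                 # macroblock size
S = 16                 # search range (+/- S pixels)

blocks = (W // B) * (H // B)
candidates = (2 * S + 1) ** 2        # candidate positions per macroblock

# No reuse: every candidate comparison refetches a BxB reference block.
naive = blocks * candidates * B * B

# On-chip reuse buffer: each reference pixel is fetched off-chip once per
# frame and served from the memory hierarchy thereafter.
buffered = W * H

reduction = naive / buffered
```

With these numbers the reduction works out to roughly three orders of magnitude, consistent with the scale of improvement the abstract reports for memory-hierarchy refinement.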
Abstract:
A conceptual model is described for generating distributions of grazing animals, according to their searching behavior, to investigate the mechanisms animals may use to achieve their distributions. The model simulates behaviors ranging from random diffusion, through taxis and cognitively aided navigation (i.e., using memory), to the optimization extreme of the Ideal Free Distribution. These behaviors are generated from simulation of biased diffusion that operates at multiple scales simultaneously, formalizing ideas of multiple-scale foraging behavior. It uses probabilistic bias to represent decisions, allowing multiple search goals to be combined (e.g., foraging and social goals) and the representation of suboptimal behavior. By allowing bias to arise at multiple scales within the environment, each weighted relative to the others, the model can represent different scales of simultaneous decision-making and scale-dependent behavior. The model also allows different constraints to be applied to the animal's ability (e.g., applying food-patch accessibility and information limits). Simulations show that foraging-decision randomness and spatial scale of decision bias have potentially profound effects on both animal intake rate and the distribution of resources in the environment. Spatial variograms show that foraging strategies can differentially change the spatial pattern of resource abundance in the environment to one characteristic of the foraging strategy.
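The continuum from random diffusion to taxis can be illustrated with a minimal biased random walk: a single bias parameter interpolates between pure diffusion (bias = 0) and near-deterministic gradient following. This one-dimensional, single-scale sketch is far simpler than the multi-scale model described above; the grid, resource field, and bias values are invented for illustration.

```python
import random

def forage(resource, steps=2000, bias=0.8, seed=1):
    """Biased random walk on a 1-D ring: with probability `bias` the animal
    steps toward the neighbouring cell with more resource (taxis); otherwise
    it diffuses at random. Returns visit counts per cell."""
    rng = random.Random(seed)
    n = len(resource)
    pos = n // 2
    visits = [0] * n
    for _ in range(steps):
        left, right = (pos - 1) % n, (pos + 1) % n
        if rng.random() < bias and resource[left] != resource[right]:
            pos = left if resource[left] > resource[right] else right
        else:
            pos = rng.choice([left, right])
        visits[pos] += 1
    return visits

# A single resource peak: a taxis-like searcher concentrates its visits near
# the peak, whereas a pure diffuser spreads its visits across the grid.
resource = [0] * 20 + [5, 9, 5] + [0] * 20
taxis = forage(resource, bias=0.9)
diffuse = forage(resource, bias=0.0)
```

Comparing the two visit distributions is the one-dimensional analogue of the variogram analysis: the searching strategy, not the resource field alone, determines the resulting spatial pattern of use.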
Abstract:
Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
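The selection step such schemes rely on can be sketched as a search over (thread count, frequency) pairs, scoring each by predicted energy = predicted power × predicted time. The closed-form Amdahl-style time model and cubic-in-frequency power model below are illustrative stand-ins; the paper derives its predictors from statistical analysis of profiled runs.

```python
# DCT + DVFS configuration search: predict time and power for each
# (threads, frequency) pair and pick the minimum-energy configuration.
# Both analytical models are illustrative assumptions.

def predict_time(threads, freq_ghz, serial_frac=0.1, work=100.0):
    # Amdahl-style scaling: serial part plus parallel part, scaled by clock.
    return (serial_frac + (1 - serial_frac) / threads) * work / freq_ghz

def predict_power(threads, freq_ghz, base=20.0, per_core=5.0):
    # Dynamic power grows with active cores and roughly cubically with
    # frequency (normalised to a 2.0 GHz reference point).
    return base + per_core * threads * freq_ghz ** 3 / 2.0 ** 3

def best_config(thread_opts=(1, 2, 4, 8, 16), freq_opts=(1.2, 1.6, 2.0, 2.4)):
    energy = {}
    for t in thread_opts:
        for f in freq_opts:
            energy[(t, f)] = predict_power(t, f) * predict_time(t, f)  # joules
    return min(energy, key=energy.get)

best = best_config()
```

With these particular parameters the search favours wide, low-frequency execution, because the cubic power cost of frequency outweighs its linear speedup; real applications shift this balance, which is exactly why the paper predicts per-application.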
Abstract:
The two critical forms of dementia are Alzheimer's disease (AD) and vascular dementia (VD). The alterations of Ca2+/calmodulin/CaMKII/CaV1.2 signaling in AD and VD have not been well elucidated. Here we have demonstrated changes in the levels of CaV1.2, calmodulin, p-CaMKII, p-CREB and BDNF proteins by Western blot analysis and the co-localization of p-CaMKII/CaV1.2 by double-labeling immunofluorescence in the hippocampus of APP/PS1 mice and VD gerbils. Additionally, expression of these proteins and intracellular calcium levels were examined in cultured neurons treated with Aβ1–42. The expression of CaV1.2 protein was increased in VD gerbils and in cultured neurons but decreased in APP/PS1 mice; the expression of calmodulin protein was increased in APP/PS1 mice and VD gerbils; levels of p-CaMKII, p-CREB and BDNF proteins were decreased in AD and VD models. The number of neurons in which p-CaMKII and CaV1.2 were co-localized was decreased in the CA1 and CA3 regions in the two models. Intracellular calcium was increased in the cultured neurons treated with Aβ1–42. Collectively, our results suggest that the alterations in CaV1.2, calmodulin, p-CaMKII, p-CREB and BDNF can be reflective of an involvement in the impairment in memory and cognition in AD and VD models.
Abstract:
This paper investigates sub-integer implementations of the adaptive Gaussian mixture model (GMM) for background/foreground segmentation to allow the deployment of the method on low cost/low power processors that lack a Floating Point Unit (FPU). We propose two novel integer computer arithmetic techniques to update Gaussian parameters. Specifically, the mean value and the variance of each Gaussian are updated by a redefined and generalised "round" operation that emulates the original updating rules for a large set of learning rates. Weights are represented by counters that are updated following stochastic rules to allow a wider range of learning rates, and the weight trend is approximated by a line or a staircase. We demonstrate that the memory footprint and computational cost of GMM are significantly reduced, without significantly affecting the performance of background/foreground segmentation.
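The problem the generalised "round" addresses can be seen in the standard mean update, mean += α·(x − mean): with integer storage and a small α, plain truncation discards small differences and the mean stops adapting. The shift-based, round-to-nearest sketch below is an illustrative reconstruction of this idea for α = 1/2^SHIFT, not the paper's exact operator.

```python
# Integer-only update of a Gaussian mean, emulating mean += alpha*(x - mean)
# with alpha = 1/2**SHIFT using shifts plus a round-to-nearest correction,
# so no FPU is required. Illustrative reconstruction, not the paper's rule.

SHIFT = 4  # alpha = 1/16

def update_mean(mean, x):
    diff = x - mean
    # Round to nearest rather than truncating toward zero, so moderate
    # differences are not systematically lost.
    if diff >= 0:
        delta = (diff + (1 << (SHIFT - 1))) >> SHIFT
    else:
        delta = -((-diff + (1 << (SHIFT - 1))) >> SHIFT)
    return mean + delta

# Feeding a constant observation drives the integer mean toward it.
mean = 100
for _ in range(60):
    mean = update_mean(mean, 140)
```

Even with rounding, a deadband of up to 2^(SHIFT−1) − 1 around the target remains, since smaller differences round to zero; quantization effects of this kind are what a carefully redefined rounding rule has to control across a range of learning rates.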
Abstract:
This paper presents a scalable, statistical ‘black-box’ model for predicting the performance of parallel programs on multi-core non-uniform memory access (NUMA) systems. We derive a model with low overhead, by reducing data collection and model training time. The model can accurately predict the behaviour of parallel applications in response to changes in their concurrency, thread layout on NUMA nodes, and core voltage and frequency. We present a framework that applies the model to achieve significant energy and energy-delay-squared (ED²) savings (9% and 25%, respectively) along with performance improvement (10% mean) on an actual 16-core NUMA system running realistic application workloads. Our prediction model proves substantially more accurate than previous efforts.
Abstract:
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and for an implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system whereby different stages and the synchronization operations between them alternate between a distributed memory address space and a shared memory address space, to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× over a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data processing bandwidth, where HyMR demonstrates improvements by factors of 3.1× and 3.2×, respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and assess its performance to be superior to that of message passing.
Abstract:
Power capping is an essential function for efficient power budgeting and cost management on modern server systems. Contemporary server processors operate under power caps by using dynamic voltage and frequency scaling (DVFS). However, these processors are often deployed in non-uniform memory access (NUMA) architectures, where thread allocation between cores may significantly affect performance and power consumption. This paper proposes a method which maximizes performance under power caps on NUMA systems by dynamically optimizing two knobs: DVFS and thread allocation. The method selects the optimal combination of the two knobs with models based on an artificial neural network (ANN) that captures the nonlinear effect of thread allocation on performance. We implement the proposed method as a runtime system and evaluate it with twelve multithreaded benchmarks on a real AMD Opteron based NUMA system. The evaluation results show that our method outperforms a naive technique optimizing only DVFS by up to 67.1% under a power cap.
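The selection step reduces to a constrained search: among candidate (frequency, thread-allocation) pairs, discard those whose predicted power exceeds the cap, then pick the best predicted performance. The prediction table below stands in for the paper's ANN models, and its numbers are invented for illustration.

```python
# Knob selection under a power cap. The (perf, power) predictions are
# hypothetical stand-ins for ANN model outputs.
POWER_CAP = 90.0  # watts

# (freq_ghz, thread allocation) -> (predicted perf score, predicted power W)
predictions = {
    (2.4, "spread"): (1.00, 118.0),
    (2.4, "packed"): (0.88, 101.0),
    (2.0, "spread"): (0.86,  92.0),
    (2.0, "packed"): (0.78,  84.0),
    (1.6, "spread"): (0.70,  74.0),
    (1.6, "packed"): (0.64,  69.0),
}

# Keep only configurations that respect the cap, then maximize performance.
feasible = {k: v for k, v in predictions.items() if v[1] <= POWER_CAP}
best = max(feasible, key=lambda k: feasible[k][0])
```

Note how the winner pairs a packed allocation with a higher frequency than the best feasible spread allocation: thread placement trades power against memory locality nonlinearly, which is why a DVFS-only optimizer leaves performance on the table.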
Abstract:
Non-Volatile Memory (NVM) technology holds promise to replace SRAM and DRAM at various levels of the memory hierarchy. The interest in NVM is motivated by the difficulty faced in scaling DRAM beyond 22 nm and, in the long term, by a lower cost per bit. While offering higher density and negligible static power (leakage and refresh), NVM suffers increased latency and energy per memory access. This paper develops energy and performance models of memory systems and applies them to understand the energy efficiency of replacing or complementing DRAM with NVM. Our analysis focusses on the application of NVM in main memory. We demonstrate that NVM such as STT-RAM and RRAM is energy-efficient for memory sizes commonly employed in servers and high-end workstations, but PCM is not. Furthermore, the model is well suited to quickly evaluate the impact of changes to the model parameters, which may be achieved through optimization of the memory architecture, and to determine the key parameters that impact system-level energy and performance.
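The static-versus-dynamic trade-off described above can be captured by a first-order energy model, E = P_static·T + N_access·E_access. The technology parameters below are rough illustrative values chosen to exhibit the crossover, not the paper's calibrated numbers.

```python
# First-order main-memory energy model: static power integrated over time
# plus per-access dynamic energy. Parameter values are assumed, not measured.

def memory_energy(p_static_w, e_access_nj, accesses_per_s, seconds=1.0):
    return p_static_w * seconds + accesses_per_s * seconds * e_access_nj * 1e-9

# DRAM: non-trivial static power (refresh/leakage), cheaper accesses.
# STT-RAM: near-zero static power, costlier accesses.
dram_static, dram_access = 1.0, 2.0   # W, nJ per access (assumed)
stt_static, stt_access = 0.05, 6.0    # W, nJ per access (assumed)

light = 1e6   # accesses/s: mostly idle memory
heavy = 5e8   # accesses/s: access-intensive workload

dram_light = memory_energy(dram_static, dram_access, light)
stt_light = memory_energy(stt_static, stt_access, light)
dram_heavy = memory_energy(dram_static, dram_access, heavy)
stt_heavy = memory_energy(stt_static, stt_access, heavy)
```

Under these assumptions NVM wins when static power dominates (large, lightly used memories) and DRAM wins under intense access streams, illustrating why the conclusion depends on memory size and access rate and why a parametric model is useful for sweeping such design points.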