992 results for program memory tracing
Abstract:
Studies revealing transfer effects of working memory (WM) training on non-trained cognitive performance of children hold promising implications for scholastic learning. However, the results of existing training studies are not consistent and provoke debates about the potential and limitations of cognitive enhancement. Examining the influence of individual differences on training outcomes is a promising approach for finding causes for such inconsistencies. In this study, we implemented WM training in an elementary school setting. The aim was to investigate near and far transfer effects on cognitive abilities and academic achievement and to examine the moderating effects of a dispositional and a regulative temperament factor, neuroticism and effortful control. Ninety-nine second-graders were randomly assigned to 20 sessions of computer-based adaptive WM training, computer-based reading training, or a no-contact control group. For the WM training group, our analyses reveal near transfer on a visual WM task, far transfer on a vocabulary task as a proxy for crystallized intelligence, and a trend toward increased academic achievement in reading and math. Considering individual differences in temperament, we found that effortful control predicts larger training mean and gain scores and that both temperament factors moderate post-training improvement: the WM training condition predicted higher post-training gains compared to both control conditions only in children with high effortful control or low neuroticism. Our results suggest that a short but intensive WM training program can enhance cognitive abilities in children, but that sufficient self-regulative abilities and emotional stability are necessary for WM training to be effective.
Abstract:
Finding useful sharing information between instances in object-oriented programs has recently been the focus of much research. The applications of such static analysis are multiple: by knowing which variables share in memory we can apply conventional compiler optimizations, find coarse-grained parallelism opportunities, or, more importantly, verify certain correctness aspects of programs even in the absence of annotations. In this paper we introduce a framework for deriving precise sharing information based on abstract interpretation for a Java-like language. Our analysis achieves precision in various ways. The analysis is multivariant, which allows separating different contexts. We propose a combined Set Sharing + Nullity + Classes domain which captures which instances share and which ones do not or are definitively null, and which uses the classes to refine the static information when inheritance is present. Carrying the domains in a combined way facilitates the interaction among the domains in the presence of multivariance in the analysis. We show that both the set sharing part of the domain and the combined domain provide more accurate information than previous work based on pair sharing domains, at reasonable cost.
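To make the notion of sharing concrete, the following is a minimal Java sketch (the classes and variables are hypothetical, not taken from the paper) of the kind of facts a combined Set Sharing + Nullity domain is meant to capture: which references can reach a common heap object and which are definitely null.

```java
// Hypothetical illustration of the sharing and nullity properties such an analysis infers.
class Node {
    Node next;
    int value;
}

public class SharingExample {
    public static void main(String[] args) {
        Node a = new Node();      // a points to a fresh object
        Node b = new Node();      // b points to a different fresh object
        Node c = a;               // c and a now share: they reach the same object
        b.next = a;               // b also shares with a (and c) through the field b.next
        Node d = null;            // d is definitively null

        // A set-sharing domain would report {a, b, c} as one sharing group and d as
        // definitely null, so a compiler could, for example, run code that only
        // touches d's data in parallel with updates made through a, b, or c.
        System.out.println(c == a);          // true: same object
        System.out.println(b.next == a);     // true: shared via a field
        System.out.println(d == null);       // true: nullity information
    }
}
```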
Abstract:
The program PECET (Boundary Element Program in Three-Dimensional Elasticity) is presented in this paper. This program, written in FORTRAN V and implemented on a UNIVAC 1100, has more than 10,000 statements and 96 routines and offers a wide range of capabilities, which are explained in more detail below. The object of the program is the analysis of 3-D piecewise heterogeneous elastic domains, using a subregionalization process and 3-D parabolic isoparametric boundary elements. The program uses a special database management scheme, described below, and the modularity followed in writing it gives great flexibility to the package. The method of analysis includes an adaptive integration process, an original treatment of boundary conditions, a complete treatment of body forces, the use of a Modified Conjugate Gradient Method of solution, and an original storage process which makes it possible to save a great deal of memory.
Abstract:
Although long-term memory is thought to require a cellular program of gene expression and increased protein synthesis, the identity of proteins critical for associative memory is largely unknown. We used RNA fingerprinting to identify candidate memory-related genes (MRGs), which were up-regulated in the hippocampus of water maze-trained rats, a brain area that is critically involved in spatial learning. Two of the original 10 candidate genes implicated by RNA fingerprinting, the rat homolog of the ryanodine receptor type-2 and glutamate dehydrogenase (EC 1.4.1.3), were further investigated by Northern blot analysis, reverse transcription–PCR, and in situ hybridization and confirmed as MRGs with distinct temporal and regional expression. Successive RNA screening as illustrated here may help to reveal a spectrum of MRGs as they appear in distinct domains of memory storage.
Abstract:
The storage of long-term memory is associated with a cellular program of gene expression, altered protein synthesis, and the growth of new synaptic connections. Recent studies of a variety of memory processes, ranging in complexity from those produced by simple forms of implicit learning in invertebrates to those produced by more complex forms of explicit learning in mammals, suggest that part of the molecular switch required for consolidation of long-term memory is the activation of a cAMP-inducible cascade of genes and the recruitment of cAMP response element binding protein-related transcription factors. This conservation of steps in the mechanisms for learning-related synaptic plasticity suggests the possibility of a molecular biology of cognition.
Abstract:
Torch Press.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
What actors and processes, at what levels of analysis and through what mechanisms, have pushed Iran's nuclear program (INP) towards being designated as a proliferation threat (securitization)? What actors and processes, at what levels of analysis and through what mechanisms, have pushed Iran's nuclear program away from being designated as an existential threat (de-securitization)? What has been the overall balance of power and interaction dynamics of these opposing forces over the last half-century, and what is their most likely future trajectory? Iran's nuclear story can be told as the unfolding of constant interaction between state and non-state forces of "nuclear securitization" and "nuclear de-securitization." Tracking the crisscrossing interaction between these different securitizing and de-securitizing actors in a historical context constitutes the central task of this project. A careful tracing of "security events" on different analytical levels reveals the broad contours of the evolutionary trajectory of the INP and its possible future path(s). Out of this theoretically conscious historical narrative, one can make informed observations about the overall thrust of the INP along the securitization–de-securitization continuum. The main contributions of this work are threefold. First, it brings a fresh theoretical perspective to Iran's proliferation behavior by utilizing securitization theory, tracing the initial indications of the threat designation of the INP all the way back to the mid-1970s. Second, it gives a solid and thematically grounded historical texture to the INP by providing an intimate engagement with the persons, processes, and events of Tehran's nuclear pursuit over half a century. Third, it demonstrates how the INP has interacted with and at times even transformed the NPT as the keystone of the non-proliferation regime, and how it has affected and injected urgency into the international discourse on nuclear proliferation, specifically in the Middle East.
Abstract:
Background: Many school-based interventions are being delivered in the absence of evidence of effectiveness (Snowling & Hulme, 2011, Br. J. Educ. Psychol., 81, 1).
Aim: This study sought to address this oversight by evaluating the effectiveness of the commonly used Lexia Reading Core5 intervention with 4- to 6-year-old pupils in Northern Ireland.
Sample: A total of 126 primary school pupils in year 1 and year 2 were screened on the Phonological Assessment Battery 2nd Edition (PhAB-2). Children were recruited from the year groups equivalent to Reception and Year 1 in England and Wales, and to Pre-kindergarten and Kindergarten in North America.
Methods: A total of 98 below-average pupils were randomized (T0) to either an 8-week block (M = 647.51 min, SD = 158.21) of daily access to Lexia Reading Core5 (n = 49) or a waiting-list control group (n = 49). Assessment of phonological skills was completed at post-intervention (T1) and at 2-month follow-up (T2) for the intervention group only.
Results: Analysis of covariance controlling for baseline scores found that the Lexia Reading Core5 intervention group made significantly greater gains in blending, F(1, 95) = 6.50, p = .012, partial η2 = .064 (small effect size), and non-word reading, F(1, 95) = 7.20, p = .009, partial η2 = .070 (small effect size). Analysis of the 2-month follow-up of the intervention group found that all treatment gains were maintained. However, improvements were not uniform within the intervention group, with 35% failing to make progress despite access to support. Post-hoc analysis revealed that higher T0 phonological working memory scores predicted improvements made in phonological skills.
Conclusions: An early-intervention, computer-based literacy program can be effective in boosting the phonological skills of 4- to 6-year-olds, particularly if these literacy difficulties are not linked to phonological working memory deficits.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation for the interconnection between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness which exists naturally in connected objects that contain a root object and its reachable set ('rooted sub-graphs'). Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects in the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite; in addition, the evaluation involves the widely used SPECjbb benchmark, the Neo4J graph database Java benchmark, and an artificial benchmark. The results of the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average performance improvement of 15%, and this gain is shown to be the result of improved NUMA memory access in the ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: it relies on outdated assumptions and generates a constant thread count, yet the production version of the Hotspot JVM still uses it. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique which uses heuristics derived from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average improvement of 21% in garbage collection performance for the DaCapo benchmarks.
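A rough sketch of the node-local tracing and same-node work stealing described above is shown below; all class and method names are hypothetical, and this is an illustration of the policy rather than the Hotspot implementation.

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedDeque;

// Hypothetical sketch: each GC worker drains roots on its own NUMA node first,
// then steals only from sibling workers bound to the same node.
final class RootTask {
    final int numaNode;                 // node where the root (and most of its reachable set) lives
    RootTask(int numaNode) { this.numaNode = numaNode; }
    void traceReachableSet() { /* mark objects reachable from this root (omitted) */ }
}

final class GcWorker {
    final int numaNode;
    final ConcurrentLinkedDeque<RootTask> localRoots = new ConcurrentLinkedDeque<>();
    GcWorker(int numaNode) { this.numaNode = numaNode; }

    void run(List<GcWorker> allWorkers) {
        RootTask task;
        while ((task = nextTask(allWorkers)) != null) {
            task.traceReachableSet();   // mostly node-local memory accesses
        }
    }

    private RootTask nextTask(List<GcWorker> allWorkers) {
        RootTask t = localRoots.pollFirst();
        if (t != null) return t;
        // Steal only from workers on the same NUMA node to preserve locality.
        for (GcWorker sibling : allWorkers) {
            if (sibling != this && sibling.numaNode == this.numaNode) {
                RootTask stolen = sibling.localRoots.pollLast();
                if (stolen != null) return stolen;
            }
        }
        return null;                    // no same-node work left; a real collector would fall back to remote stealing
    }
}

public final class NumaGcSketch {
    public static void main(String[] args) {
        GcWorker w0 = new GcWorker(0);
        GcWorker w1 = new GcWorker(0);  // sibling worker on the same node
        w0.localRoots.add(new RootTask(0));
        w1.localRoots.add(new RootTask(0));
        w1.localRoots.add(new RootTask(0));
        List<GcWorker> workers = List.of(w0, w1);
        w0.run(workers);                // drains its own root, then steals one from w1
        w1.run(workers);                // traces whatever is left
        System.out.println("tracing complete");
    }
}
```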
Abstract:
The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g., photos, identities, genomes) at risk. To protect users' data privacy, there is growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden, even from computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g., secure multiparty computation) and trusted hardware (e.g., secure processors) to instantiate a “secure” abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g., memory accesses, timing, and termination) in such a “secure” abstract machine may leak highly sensitive information, including cryptographic keys that form the root of trust for the secure systems. This thesis broadly expands the investigation of a research direction called trace oblivious computation, in which programming language techniques are employed to prevent side-channel information leakage. We demonstrate the feasibility of trace oblivious computation by formalizing and building several systems, including GhostRider, a hardware-software co-design that provides a hardware-based trace oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that makes it easier for programmers to develop applications. All of these systems enjoy formal security guarantees while demonstrating better performance than prior systems, by one to several orders of magnitude.
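As a toy illustration of the secret-dependent control flow that trace-oblivious techniques aim to eliminate (this is not code from GhostRider, SCVM, or ObliVM, and a JIT compiler is not guaranteed to preserve constant-time behavior), compare a leaky branch with a branchless, data-oblivious selection:

```java
// Hypothetical example of removing a branch whose direction depends on a secret.
public class ObliviousSelect {
    // Leaky: the branch makes the instruction and memory trace depend on 'secretBit'.
    static int selectLeaky(int secretBit, int a, int b) {
        return secretBit == 1 ? a : b;
    }

    // Data-oblivious: the same instructions execute regardless of 'secretBit'
    // (assuming the arithmetic itself is constant time).
    static int selectOblivious(int secretBit, int a, int b) {
        int mask = -(secretBit & 1);      // 0xFFFFFFFF if secretBit == 1, else 0
        return (a & mask) | (b & ~mask);
    }

    public static void main(String[] args) {
        System.out.println(selectLeaky(1, 7, 9));      // 7
        System.out.println(selectOblivious(1, 7, 9));  // 7
        System.out.println(selectOblivious(0, 7, 9));  // 9
    }
}
```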
Abstract:
Previous research has shown that crotamine, a toxin isolated from the venom of Crotalus durissus terrificus, induces the release of acetylcholine and dopamine in the central nervous system of rats. Notably, these neurotransmitters are important modulators of memory processes. Therefore, in this study we investigated the effects of crotamine infusion on the persistence of memory in rats. We verified that intrahippocampal infusion of crotamine (1 μg/μl; 1 μl/side) improved the persistence of object recognition and aversive memory. On the other hand, intrahippocampal infusion of the toxin did not alter locomotor and exploratory activities, anxiety, or pain threshold. These results point to the future prospect of using crotamine as a potential pharmacological tool to treat diseases involving memory impairment, although more research is still necessary to better elucidate the effects of crotamine on the hippocampus and memory.
Abstract:
Ca(2+)/calmodulin-dependent protein kinase II (CaMKII) functions both in regulation of insulin secretion and in neurotransmitter release through common downstream mediators. Therefore, we hypothesized that pancreatic β-cells acquire and store the information contained in calcium pulses as a form of metabolic memory, just as neurons store cognitive information. To test this hypothesis, we developed a novel paradigm of pulsed exposure of β-cells to intervals of high glucose, followed by a 24-h consolidation period to eliminate any acute metabolic effects. Strikingly, β-cells exposed to this high-glucose pulse paradigm exhibited significantly stronger insulin secretion. This metabolic memory was entirely dependent on CaMKII. Metabolic memory was reflected at the protein level by increased expression of proteins involved in glucose sensing and Ca(2+)-dependent vesicle secretion, and by elevated levels of the key β-cell transcription factor MAFA. In summary, like neurons, human and mouse β-cells are able to acquire and retrieve information.
Abstract:
The aim of this study was to investigate the effects of a specific protocol of undulatory physical resistance training on maximal strength gains in elderly type 2 diabetics. The study included 48 subjects, aged between 60 and 85 years, of both genders. They were divided into two groups: Untrained Diabetic Elderly (n=19), comprising those who did not undergo physical training, and Trained Diabetic Elderly (n=29), comprising those who underwent undulatory physical resistance training. The participants were evaluated on several types of resistance training equipment before and after the training protocol, using the one-repetition maximum test. The subjects performed undulatory resistance training three times per week for a period of 16 weeks. The overload used in the undulatory resistance training was equivalent to 50% and 70% of the one-repetition maximum, alternating weekly. Statistical analysis revealed significant differences (p<0.05) between pre-test and post-test over the period of 16 weeks. The average gains in strength were 43.20% (knee extension), 65.00% (knee flexion), 27.80% (seated chest press machine), 31.00% (seated row), 43.90% (biceps pulley), and 21.10% (triceps pulley). Undulatory resistance training with different weekly overloads was effective in providing significant gains in maximal strength in elderly type 2 diabetic individuals.