946 results for Memory-based
Abstract:
Our growing understanding of the human mind and cognition, together with the development of neurotechnology, has triggered debate around cognitive enhancement in neuroethics. The dissertation examines the normative issues of memory enhancement and focuses on two questions: (1) the distinction between memory treatment and enhancement; and (2) how the issue of authenticity bears on memory interventions, including memory treatments and enhancements.

The first part consists of a conceptual analysis of the concepts required for normative considerations. First, the representational nature and the function of memory are discussed. Memory is regarded as a special form of self-representation resulting from a constructive process. Next, the concepts of selfhood, personhood, and identity are examined, and a conceptual tool—the autobiographical self-model (ASM)—is introduced. An ASM is a collection of mental representations of the system’s relations with its past and potential future states. Third, the debate between objectivist and constructivist views of health is considered. I argue for a phenomenological account of health, based on the primacy of illness and negative utilitarianism.

The second part presents a synthesis of the relevant normative issues based on the conceptual tools developed. I argue that memory enhancement can be distinguished from memory treatment by a demarcation concerning the existence of memory-related suffering. That is, memory enhancements are interventions that aim to manipulate memory function in the self-interest of the individual, under standard circumstances and without any unwanted suffering, or potential suffering, resulting from the alteration of memory functions. I then consider the issue of authenticity, namely whether memory intervention or enhancement endangers “one’s true self”.
By analyzing two conceptions of authenticity—authenticity as self-discovery and authenticity as self-creation—I propose that authenticity should be understood in terms of the satisfaction of the functional constraints of an ASM: synchronic coherence, diachronic coherence, and global veridicality. This framework provides clearer criteria for weighing the relevant concerns and allows us to examine the moral value of authenticity.
Abstract:
Following the internationalization of contemporary higher education, academic institutions based in non-English-speaking countries are increasingly urged to produce content in English to address prospective international students and personnel, as well as to increase their attractiveness. The demand for English translations in the institutional academic domain is consequently increasing at a rate exceeding the capacity of the translation profession. Resources for assisting non-native authors and translators in the production of appropriate texts in L2 are therefore required in order to help academic institutions and professionals streamline their translation workload. Some of these resources include: (i) parallel corpora to train machine translation systems and multilingual authoring tools; and (ii) translation memories for computer-aided translation tools. The purpose of this study is to create and evaluate reference resources like the ones mentioned in (i) and (ii) through the automatic sentence alignment of a large set of Italian and English as a Lingua Franca (ELF) institutional academic texts given as equivalent but not necessarily parallel (i.e. translated). In this framework, a set of aligning algorithms and alignment tools is examined in order to identify the most profitable one(s) in terms of accuracy and time- and cost-effectiveness. In order to determine the text pairs to align, a sample is selected according to document length similarity (characters) and subsequently evaluated in terms of extent of noisiness/parallelism, alignment accuracy, and content leverageability. The results of these analyses serve as the basis for the creation of an aligned bilingual corpus of academic course descriptions, which is eventually used to create a translation memory in TMX format.
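The document-length selection criterion described above can be sketched as follows; the function names and the 0.8 similarity threshold are illustrative assumptions, not values taken from the study:

```python
def length_similarity(text_a: str, text_b: str) -> float:
    """Character-length ratio in [0, 1]; 1.0 means identical lengths."""
    la, lb = len(text_a), len(text_b)
    if la == 0 or lb == 0:
        return 0.0
    return min(la, lb) / max(la, lb)

def select_candidate_pairs(pairs, threshold=0.8):
    """Keep only IT/EN document pairs whose lengths are similar enough
    to plausibly be (near-)parallel; the rest are set aside as noisy."""
    return [(it, en) for it, en in pairs if length_similarity(it, en) >= threshold]

pairs = [
    ("Il corso introduce i fondamenti della programmazione.",
     "The course introduces the fundamentals of programming."),
    ("Testo molto breve.",
     "A completely different and much longer description that clearly diverges."),
]
kept = select_candidate_pairs(pairs)
```

Pairs surviving this filter would then go on to the alignment-accuracy and leverageability evaluation.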
Abstract:
Three distinct categories of marginal zone lymphomas (MZLs) are currently recognized, principally based on their site of occurrence. They are thought to represent unique entities, but the relationship of one subtype with another is poorly understood. We investigated 17 non-splenic MZLs (seven nodal, 10 extranodal) by gene expression profiling to distinguish between subtypes and determine their cell of origin. Our findings suggest biological inter-relatedness of these entities despite occurrence at different locations and associations with possibly different aetiologies. Furthermore, the expression profiles of non-splenic MZL were similar to memory B cells.
Abstract:
CD8 T cells play a key role in mediating protective immunity against selected pathogens after vaccination. Understanding the mechanism of this protection is dependent upon definition of the heterogeneity and complexity of cellular immune responses generated by different vaccines. Here, we identify previously unrecognized subsets of CD8 T cells based upon analysis of gene-expression patterns within single cells and show that they are differentially induced by different vaccines. Three prime-boost vector combinations encoding HIV Env stimulated antigen-specific CD8 T-cell populations of similar magnitude, phenotype, and functionality. Remarkably, however, analysis of single-cell gene-expression profiles enabled discrimination of a majority of central memory (CM) and effector memory (EM) CD8 T cells elicited by the three vaccines. Subsets of T cells could be defined based on their expression of Eomes, Cxcr3, and Ccr7, or Klrk1, Klrg1, and Ccr5 in CM and EM cells, respectively. Of CM cells elicited by DNA prime-recombinant adenoviral (rAd) boost vectors, 67% were Eomes(-) Ccr7(+) Cxcr3(-), in contrast to only 7% and 2% stimulated by rAd5-rAd5 or rAd-LCMV, respectively. Of EM cells elicited by DNA-rAd, 74% were Klrk1(-) Klrg1(-)Ccr5(-) compared with only 26% and 20% for rAd5-rAd5 or rAd5-LCMV. Definition by single-cell gene profiling of specific CM and EM CD8 T-cell subsets that are differentially induced by different gene-based vaccines will facilitate the design and evaluation of vaccines, as well as enable our understanding of mechanisms of protective immunity.
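The marker-combination subset definitions above can be illustrated with a small sketch; the per-cell boolean expression calls below are hypothetical data, and `subset_fraction` is an assumed helper name, not part of the study:

```python
# Hypothetical single-cell marker calls: each cell is a dict of marker -> expressed?
cm_cells = [
    {"Eomes": False, "Ccr7": True,  "Cxcr3": False},
    {"Eomes": False, "Ccr7": True,  "Cxcr3": False},
    {"Eomes": True,  "Ccr7": False, "Cxcr3": True},
]

def subset_fraction(cells, positive, negative):
    """Fraction of cells expressing every marker in `positive`
    and none of the markers in `negative`."""
    hits = [c for c in cells
            if all(c[m] for m in positive) and not any(c[m] for m in negative)]
    return len(hits) / len(cells)

# Fraction of CM cells with the Eomes(-) Ccr7(+) Cxcr3(-) phenotype.
frac = subset_fraction(cm_cells, positive=["Ccr7"], negative=["Eomes", "Cxcr3"])
```

Comparing such fractions across vaccine groups is, in essence, how the subset percentages reported above discriminate the three regimens.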
Abstract:
Community-based studies suggest that cannabis products that are high in Δ⁹-tetrahydrocannabinol (THC) but low in cannabidiol (CBD) are particularly hazardous for mental health. Laboratory-based studies are ideal for clarifying this issue because THC and CBD can be administered in pure form, under controlled conditions. In a between-subjects design, we tested the hypothesis that pre-treatment with CBD inhibited THC-elicited psychosis and cognitive impairment. Healthy participants were randomised to receive oral CBD 600 mg (n=22) or placebo (n=26), 210 min ahead of intravenous (IV) THC (1.5 mg). Post-THC, there were lower PANSS positive scores in the CBD group, but this did not reach statistical significance. However, clinically significant positive psychotic symptoms (defined a priori as increases ≥ 3 points) were less likely in the CBD group compared with the placebo group, odds ratio (OR)=0.22 (χ²=4.74, p<0.05). In agreement, post-THC paranoia, as rated with the State Social Paranoia Scale (SSPS), was less in the CBD group compared with the placebo group (t=2.28, p<0.05). Episodic memory, indexed by scores on the Hopkins Verbal Learning Task-revised (HVLT-R), was poorer, relative to baseline, in the placebo pre-treated group (-10.6 ± 18.9%) compared with the CBD group (-0.4% ± 9.7 %) (t=2.39, p<0.05). These findings support the idea that high-THC/low-CBD cannabis products are associated with increased risks for mental health.
Abstract:
In this report, we describe a short peptide, containing a T helper- and a B-cell epitope, located in the Gag protein of the caprine arthritis encephalitis virus (CAEV). This T-cell epitope is capable of inducing a robust T-cell proliferative response in vaccinated goats with different genetic backgrounds and of providing help for a strong antibody response to the B-cell epitope, indicating that it may function as a universal antigen carrier for goat vaccines. The primary immune response of goats homozygous for MHC class I and II genes showed an MHC-dependent partitioning into rapid-high and slow-low responders, whereas the memory immune response was strong in both groups, demonstrating that a vaccine based on this immunodominant T helper epitope is capable of overcoming genetic differences.
Abstract:
We performed a Rey visual design learning test (RVDLT) in 17 subjects and measured intervoxel coherence (IC) by DTI as an indication of connectivity, to investigate whether visual memory performance depends on white matter structure in healthy persons. IC considers the orientation of the adjacent voxels and has a better signal-to-noise ratio than the commonly used fractional anisotropy index. Voxel-based t-test analysis of the IC values was used to identify neighboring voxel clusters with significant differences between 7 low and 10 high test performers. We detected 9 circumscribed significant clusters (p < .01) with lower IC values in low performers than in high performers, with centers of gravity located in the left and right superior temporal regions, corpus callosum, left superior longitudinal fascicle, and left optic radiation. Using non-parametric correlation analysis, IC and memory performance were significantly correlated in each of the 9 clusters (r = .61 to .81; df = 15, p < .01 to p < .0001). The findings provide in vivo evidence for the contribution of white matter structure to visual memory in healthy people.
Abstract:
INTRODUCTION: Cognitive complaints, such as poor concentration and memory deficits, are frequent after whiplash injury and play an important role in disability. The origin of these complaints is controversial. Some authors postulate brain lesions as a consequence of whiplash injuries. Potential diffuse axonal injury (DAI) with subsequent atrophy of the brain and ventricular expansion is of particular interest, as focal brain lesions have not been documented so far in whiplash injury. OBJECTIVE: To investigate whether traumatic brain injury can be identified using a magnetic resonance (MR)-based quantitative analysis of normalized ventricle-brain ratios (VBR) in chronic whiplash patients with subjective cognitive impairment that cannot be objectively confirmed by neuropsychological testing. MATERIALS AND METHODS: MR examination was performed in 21 patients with whiplash injury and symptom persistence for 9 months on average, and in 18 matched healthy controls. Conventional MR imaging (MRI) was used to assess the volumes of grey and white matter and of the ventricles. The normalized VBR was calculated. RESULTS: The values of normalized VBR did not differ in whiplash patients when compared with those in healthy controls (F = 0.216, P = 0.645). CONCLUSIONS: This study does not support loss of brain tissue following whiplash injury as measured by VBR. On this basis, traumatic brain injury with subsequent DAI does not seem to be the underlying mechanism for persistent concentration and memory deficits that are subjectively reported but not objectively verifiable as neuropsychological deficits.
Abstract:
As the performance gap between microprocessors and memory continues to increase, main memory accesses incur long latencies that limit system performance. Previous studies show that main memory access streams contain significant locality and that SDRAM devices provide parallelism through multiple banks and channels. This locality and parallelism have not been exploited thoroughly by conventional memory controllers. In this thesis, SDRAM address mapping techniques and memory access reordering mechanisms are studied and applied to memory controller design with the goal of reducing observed main memory access latency. The proposed bit-reversal address mapping attempts to distribute main memory accesses evenly in the SDRAM address space to enable bank parallelism. As memory accesses to distinct banks are interleaved, the access latencies are partially hidden and therefore reduced. With cache conflict misses taken into account, bit-reversal address mapping is able to direct potential row conflicts to different banks, further improving performance. The proposed burst scheduling is a novel access reordering mechanism that creates bursts by clustering accesses directed to the same rows of the same banks. Subject to a threshold, reads are allowed to preempt writes, and qualified writes are piggybacked at the end of bursts. A sophisticated access scheduler selects accesses based on priorities and interleaves accesses to maximize SDRAM data bus utilization. Consequently, burst scheduling reduces the row conflict rate, increasing and exploiting the available row locality. Using revised SimpleScalar and M5 simulators, both techniques are evaluated and compared with existing academic and industrial solutions. With the SPEC CPU2000 benchmarks, bit-reversal reduces execution time by 14% on average over traditional page interleaving address mapping.
Burst scheduling also achieves a 15% reduction in execution time over conventional in-order bank scheduling. Working constructively together, bit-reversal and burst scheduling achieve a 19% speedup across the simulated benchmarks.
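The bit-reversal idea can be sketched as follows; the address widths and the exact bit fields are illustrative assumptions, one plausible form of the mapping rather than the thesis's actual controller design:

```python
def bit_reverse(value: int, width: int) -> int:
    """Reverse the low `width` bits of `value`."""
    result = 0
    for i in range(width):
        if value & (1 << i):
            result |= 1 << (width - 1 - i)
    return result

def map_address(addr: int, addr_bits: int = 16, bank_bits: int = 3):
    """Bit-reversal address mapping (sketch): reverse the address bits and
    take the low bits of the reversed address as the bank index.  Addresses
    that differ only in their high-order bits -- which collide in the same
    cache set and, under page interleaving, in the same bank -- are thereby
    spread across different banks."""
    rev = bit_reverse(addr, addr_bits)
    return rev >> bank_bits, rev & ((1 << bank_bits) - 1)  # (row, bank)
```

Two addresses that conflict in the cache (same low bits, different high bits) now select different banks, so their row activations can proceed in parallel.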
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size, or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically. Conversely, over-allocation wastes memory resources. Meanwhile, a VM’s memory demand may vary significantly. As a result, effective memory resource management calls for a dynamic memory balancer which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand, and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing, and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results.
When there is a sufficient amount of physical memory on the host, it balances its memory resources locally among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
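The tiered response to memory pressure described above can be sketched as a small policy function; all names and the sustain threshold are illustrative assumptions, not the dissertation's parameters:

```python
def choose_strategy(free_host_mem_mb: int, demand_increase_mb: int,
                    predicted_pressure_secs: float,
                    sustain_threshold_secs: float = 60.0) -> str:
    """Tiered response to a VM's rising memory demand (sketch):
      1. enough free memory on the host -> rebalance locally,
      2. sustained predicted pressure   -> live-migrate a VM away,
      3. otherwise (transient pressure) -> absorb it with a remote cache."""
    if free_host_mem_mb >= demand_increase_mb:
        return "local_balance"
    if predicted_pressure_secs >= sustain_threshold_secs:
        return "live_migration"
    return "remote_cache"
```

The ordering encodes the cost hierarchy in the text: local ballooning is cheapest, live migration is expensive and only worthwhile for sustained pressure, and the remote cache covers transients.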
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
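For reference, the definition of reuse distance given above admits a naive exact implementation, useful as a baseline against which a hardware-derived approximate profile could be compared (a sketch, not the report's method):

```python
def reuse_distances(trace):
    """For each access, the number of distinct addresses touched since the
    previous access to the same address (None for first-time accesses).
    Naive O(n^2) reference implementation of the definition above."""
    last_seen = {}
    out = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            out.append(len(set(trace[last_seen[addr] + 1:i])))
        else:
            out.append(None)
        last_seen[addr] = i
    return out

trace = ["a", "b", "c", "b", "a"]
dists = reuse_distances(trace)
```

Production profilers replace the quadratic scan with a tree over the LRU stack, but the output contract is the same, which is what a hardware-based approximation would need to match.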
Abstract:
In the memory antisaccade task, subjects are instructed to look at an imaginary point precisely opposite a peripheral visual stimulus presented a short time previously. To perform this task accurately, the visual vector, i.e., the distance between a central fixation point and the peripheral stimulus, must be inverted from one visual hemifield to the other. Recent data in humans and monkeys suggest that the posterior parietal cortex (PPC) might be critically involved in the process of visual vector inversion. In the present study, we investigated the temporal dynamics of visual vector inversion in the human PPC using transcranial magnetic stimulation (TMS). In six healthy subjects, single-pulse TMS was applied over the right PPC during a memory antisaccade task at four different time intervals: 100 ms, 217 ms, 333 ms, or 450 ms after target onset. The results indicate that for rightward antisaccades, i.e., when the visual target was presented in the left screen-half, TMS had a significant effect on saccade gain when applied 100 ms after target onset, but not later. For leftward antisaccades, i.e., when the visual target was presented in the right screen-half, a significant TMS effect on gain was found for the 333 ms and 450 ms conditions, but not for the earlier ones. This double dissociation of saccade gain suggests that the initial process of vector inversion can be disrupted 100 ms after onset of the visual stimulus and that TMS interfered with motor saccade planning based on an inverted vector signal at 333 ms and 450 ms after stimulus onset.
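The vector inversion at the heart of the task is simply a mirroring of the target about fixation; sketched in one dimension with hypothetical coordinates (degrees of visual angle, positive = rightward):

```python
def antisaccade_goal(fixation_x: float, target_x: float) -> float:
    """Invert the visual vector: the required gaze endpoint lies at the same
    eccentricity as the target but in the opposite hemifield."""
    visual_vector = target_x - fixation_x
    return fixation_x - visual_vector

# A target 10 deg to the right of fixation demands a saccade 10 deg to the left.
goal = antisaccade_goal(0.0, 10.0)
```

It is this sign flip of the stored vector, rather than the saccade itself, that the early (100 ms) TMS pulses are argued to disrupt.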
Abstract:
In power-electronics-based microgrids, the computational requirements needed to implement an optimized online control strategy can be prohibitive. The work presented in this dissertation proposes a generalized method for deriving geometric manifolds in a dc microgrid that is based on the a priori computation of the optimal reactions and trajectories for classes of events in the dc microgrid. The proposed states are the stored energies in all the energy storage elements of the dc microgrid and the power flowing into them. It is anticipated that calculating a large enough set of dissimilar transient scenarios will also span many scenarios not specifically used to develop the surface. These geometric manifolds are then used as reference surfaces in any type of controller, such as a sliding-mode hysteretic controller. The presence of switched power converters in microgrids involves different control actions for different system events. The control of the switch states of the converters is essential for steady-state and transient operation. A digital memory-lookup-based controller that uses a hysteretic sliding-mode control strategy is an effective technique for generating the proper switch states for the converters. An example dc microgrid with three dc-dc boost converters and resistive loads is considered in this work. The geometric manifolds are successfully generated for transient events, such as step changes in the loads and the sources. The surfaces corresponding to a specific case of step changes in the loads are then used as reference surfaces in an EEPROM for experimentally validating the control strategy. The required switch states corresponding to this specific transient scenario are programmed into the EEPROM as a memory table. This controls the switching of the dc-dc boost converters and drives the system states to the reference manifold.
In this work, it is shown that this strategy effectively controls the system for a transient condition such as step changes in the loads for the example case.
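The memory-table controller can be illustrated with a software analogue of the EEPROM lookup; the quantization scheme, the (energy, power) normalization, and the table contents below are purely hypothetical, standing in for the precomputed manifold data:

```python
def quantize(value: float, lo: float, hi: float, bits: int) -> int:
    """Map a measurement onto a `bits`-wide index, saturating at the range ends."""
    levels = (1 << bits) - 1
    value = min(max(value, lo), hi)
    return round((value - lo) / (hi - lo) * levels)

def controller_step(energy: float, power: float, table, bits: int = 4) -> int:
    """Concatenate the quantized (energy, power) state into an address and
    look up the precomputed switch state -- the software analogue of the
    EEPROM memory table driving the dc-dc boost converters."""
    idx = (quantize(energy, 0.0, 1.0, bits) << bits) | quantize(power, 0.0, 1.0, bits)
    return table[idx]

# Hypothetical table: command the switch ON whenever the stored energy is below
# mid-range, regardless of power (a real table would come from the manifolds).
table = [1 if (idx >> 4) < 8 else 0 for idx in range(256)]
```

Because each control step is a single indexed read, the scheme trades memory for the online computation the dissertation identifies as prohibitive.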
Abstract:
Concurrency control is mostly based on locks and is therefore notoriously difficult to use. Even though some programming languages provide high-level constructs, these add complexity and potentially hard-to-detect bugs to the application. Transactional memory is an attractive mechanism that does not have the drawbacks of locks; however, the underlying implementation is often difficult to integrate into an existing language. In this paper we show how we have introduced transactional semantics into Smalltalk by using the reflective facilities of the language. Our approach is based on method annotations, incremental parse tree transformations, and an optimistic commit protocol. The implementation does not depend on modifications to the virtual machine and can therefore be changed at the language level. We report on a practical case study, benchmarks, and further ongoing work.
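An optimistic commit protocol of the kind mentioned can be sketched language-neutrally (here in Python rather than Smalltalk); the versioned read/write-set design below is an illustrative assumption, not the paper's implementation:

```python
class STM:
    """Shared store for a minimal optimistic STM sketch: transactions buffer
    writes and record the versions they read; commit validates that nothing
    read has changed in the meantime."""
    def __init__(self):
        self.values, self.versions = {}, {}

class Transaction:
    def __init__(self, stm):
        self.stm, self.reads, self.writes = stm, {}, {}

    def read(self, key):
        if key in self.writes:            # read-your-own-writes
            return self.writes[key]
        self.reads[key] = self.stm.versions.get(key, 0)
        return self.stm.values.get(key)

    def write(self, key, value):
        self.writes[key] = value          # buffered until commit

    def commit(self):
        # Validate: abort if any value we read was updated since we read it.
        for key, version in self.reads.items():
            if self.stm.versions.get(key, 0) != version:
                return False
        # Publish buffered writes, bumping versions.
        for key, value in self.writes.items():
            self.stm.values[key] = value
            self.stm.versions[key] = self.stm.versions.get(key, 0) + 1
        return True

stm = STM()
t1, t2 = Transaction(stm), Transaction(stm)
t1.write("x", (t1.read("x") or 0) + 1)
t2.write("x", (t2.read("x") or 0) + 10)
ok1 = t1.commit()   # first committer wins
ok2 = t2.commit()   # conflicting read detected -> aborted
```

An aborted transaction would simply be retried; no locks are held while user code runs, which is the property that makes the approach attractive compared with lock-based constructs.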
Abstract:
Conscious events interact with memory systems in learning, rehearsal and retrieval (Ebbinghaus 1885/1964; Tulving 1985). Here we present hypotheses that arise from the IDA computational model (Franklin, Kelemen and McCauley 1998; Franklin 2001b) of global workspace theory (Baars 1988, 2002). Our primary tool for this exploration is a flexible cognitive cycle employed by the IDA computational model and hypothesized to be a basic element of human cognitive processing. Since cognitive cycles are hypothesized to occur five to ten times a second and to include interaction between conscious contents and several of the memory systems, they provide the means for an exceptionally fine-grained analysis of various cognitive tasks. We apply this tool to the small effect size of subliminal learning compared to supraliminal learning, to process dissociation, to implicit learning, to recognition vs. recall, and to the availability heuristic in recall. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. In most cases, memory is hypothesized to interact with conscious events for its normal functioning. The methodology of the paper is unusual in that the hypotheses and explanations presented are derived from an empirically based, but broad and qualitative, computational model of human cognition.