986 results for orthogonal memory patterns
Abstract:
As computers approach the physical limits of information storable in memory, new methods will be needed to further improve information storage and retrieval. We propose a quantum-inspired, vector-based approach, which offers a contextually dependent mapping from subsymbolic to symbolic representations of information. If implemented computationally, this approach would provide exceptionally high density of information storage without the traditionally required physical increase in storage capacity. The approach is inspired by the structure of human memory and incorporates elements of Gärdenfors' Conceptual Space approach and Humphreys et al.'s matrix model of memory.
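A minimal sketch of the flavour of vector-based storage the abstract alludes to, assuming the superposition-of-outer-products style of Humphreys et al.'s matrix model (the dimensionality and random vectors below are illustrative choices, not the authors' construction): several item-cue associations are superposed in one matrix, and a nearly orthogonal cue retrieves its associated item.

import numpy as np

rng = np.random.default_rng(1)
d = 256                                            # illustrative dimensionality
cues = rng.standard_normal((3, d)) / np.sqrt(d)    # high-dimensional random vectors are nearly orthogonal
items = rng.standard_normal((3, d)) / np.sqrt(d)

# superpose item-cue associations as a sum of outer products (a matrix memory)
M = sum(np.outer(item, cue) for item, cue in zip(items, cues))

retrieved = M @ cues[1]              # probe the memory with the second cue
similarities = items @ retrieved     # compare the noisy echo against all stored items
print(int(np.argmax(similarities)))  # 1: the item bound to that cue dominates

Because the cues are close to orthogonal, the cross-talk terms stay small and capacity grows with dimensionality rather than with physical storage slots, which is roughly the intuition the abstract trades on.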
Abstract:
Oscillations of neural activity may bind widespread cortical areas into a neural representation that encodes disparate aspects of an event. To test this theory, we have turned to data collected from complex partial epilepsy (CPE) patients with chronically implanted depth electrodes. Data from regions critical to word and face information processing were analyzed using spectral coherence measurements. Similar analyses of intracranial EEG (iEEG) during seizure episodes display hippocampal formation (HCF) to neocortical (NC) spectral coherence patterns that are characteristic of specific seizure stages (Klopp et al. 1996). We are now building a computational memory model to examine whether spatio-temporal patterns of human iEEG spectral coherence emerge in a computer simulation of HCF cellular distribution, membrane physiology, and synaptic connectivity. Once the model is reasonably scaled, it will be used as a tool to explore neural parameters that are critical to memory formation and epileptogenesis.
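For readers unfamiliar with the measure, the sketch below computes magnitude-squared spectral coherence between two synthetic channels that share an 8 Hz component, using SciPy; the sampling rate, frequency, noise level, and channel labels are invented for illustration and are not taken from the study.

import numpy as np
from scipy.signal import coherence

fs = 256.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 8 * t)            # common 8 Hz component across channels
hcf = shared + 0.5 * np.random.default_rng(0).standard_normal(t.size)  # "HCF" channel
nc = shared + 0.5 * np.random.default_rng(1).standard_normal(t.size)   # "NC" channel

f, cxy = coherence(hcf, nc, fs=fs, nperseg=512)
print(f[np.argmax(cxy)])                      # coherence peaks near 8 Hz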
Abstract:
Working memory-related brain activation has been widely studied, and impaired activation patterns have been reported for several psychiatric disorders. We investigated whether variation in N-back working memory brain activation is genetically influenced in 60 twin pairs (29 monozygotic (MZ), 31 dizygotic (DZ); mean age 24.4 ± 1.7 years). Task-related brain response (BOLD percent signal difference of 2-back minus 0-back) was measured in three regions of interest. Although statistical power was low due to the small sample size, for the middle frontal gyrus, angular gyrus, and supramarginal gyrus the MZ correlations were, in general, approximately twice those of the DZ pairs, with non-significant heritability estimates (14-30%) in the low-to-moderate range. Task performance was strongly influenced by genes (57-73%) and highly correlated with cognitive ability (0.44-0.55). This study, which will be expanded over the next 3 years, provides the first support that individual variation in working memory-related brain activation is to some extent influenced by genes.
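The link between "MZ correlations roughly twice those of DZ pairs" and a heritability estimate can be made explicit with the classical Falconer approximation h^2 ≈ 2(r_MZ - r_DZ); the study itself fits formal genetic models rather than this shortcut, and the correlations below are hypothetical numbers used only to show the arithmetic.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Classical Falconer estimate of heritability from twin correlations."""
    return 2.0 * (r_mz - r_dz)

# hypothetical twin correlations, chosen only to illustrate the calculation
print(falconer_h2(0.30, 0.16))   # 0.28, i.e. roughly 28% of variance attributed to genes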
Abstract:
Although key to understanding individual variation in task-related brain activation, the genetic contribution to these individual differences remains largely unknown. Here we report voxel-by-voxel genetic model fitting in a large sample of 319 healthy, young adult, human identical and fraternal twins (mean ± SD age, 23.6 ± 1.8 years) who performed an n-back working memory task during functional magnetic resonance imaging (fMRI) at a high magnetic field (4 tesla). Patterns of task-related brain response (BOLD signal difference of 2-back minus 0-back) were significantly heritable, with the highest estimates (40-65%) in the inferior, middle, and superior frontal gyri, left supplementary motor area, precentral and postcentral gyri, middle cingulate cortex, superior medial gyrus, angular gyrus, superior parietal lobule, including precuneus, and superior occipital gyri. Furthermore, high test-retest reliability for a subsample of 40 twins indicates that nongenetic variance in the fMRI brain response is largely due to unique environmental influences rather than measurement error. Individual variations in activation of the working memory network are therefore significantly influenced by genetic factors. By establishing the heritability of cognitive brain function in a large sample that affords good statistical power, and using voxel-by-voxel analyses, this study provides the necessary evidence for task-related brain activation to be considered as an endophenotype for psychiatric or neurological disorders, and represents a substantial new contribution to the field of neuroimaging genetics. These genetic brain maps should facilitate discovery of gene variants influencing cognitive brain function through genome-wide association studies, potentially opening up new avenues in the treatment of brain disorders.
Abstract:
To investigate potentially dissociable recognition memory responses in the hippocampus and perirhinal cortex, fMRI studies have often used confidence ratings as an index of memory strength. Confidence ratings, although correlated with memory strength, also reflect other sources of variability, including task-irrelevant item effects and differences both within and across individuals in how decision criteria are applied to separate weak from strong memories. We presented words one, two, or four times at study in each of two conditions, focused and divided attention, and then conducted separate fMRI analyses of correct old responses on the basis of subjective confidence ratings or estimates from single- versus dual-process recognition memory models. Overall, focusing attention on spaced repetitions at study enhanced recognition memory performance. Confidence- versus model-based analyses revealed disparate patterns of hippocampal and perirhinal cortex activity at both study and test, both within and across hemispheres. The failure to observe equivalent patterns of activity indicates that fMRI signals associated with subjective confidence ratings reflect additional sources of variability. The results are consistent with predictions of single-process models of recognition memory.
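As context for the single-process view, memory strength in an equal-variance signal-detection model is summarized by d', computed from hit and false-alarm rates; the sketch below is only an illustration of that model class with made-up rates, not the authors' model-based estimation procedure.

from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance signal-detection index of memory strength."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# hypothetical hit and false-alarm rates, for illustration only
print(round(d_prime(0.80, 0.20), 2))   # 1.68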
Abstract:
Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory, yet in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is the retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows for the storage of large representations for long time periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, such as temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings; thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. Another study demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid, visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized, because access is too slow.
These findings imply a change in thinking about the design of interfaces. Several novel principles of design are presented, based on the idea of supporting the deployment of LTWM in the main task.
Abstract:
The microcommands constituting the microprogram of the control memory of a microprogrammed processor can be partitioned into a number of disjoint sets. Some of these sets are then encoded to minimize the word width of the ROM storing the microprogram. A further reduction in the width of the ROM words can be achieved by a technique known as bit steering, in which one or more bits are shared by two or more sets of microcommands; these sets are called the steerable sets. This correspondence presents a simple method for the detection and encoding of steerable sets. It has been shown that the concurrency matrix of two steerable sets exhibits definite patterns of clusters which can be easily recognized. A relation "connection" has been defined which helps in the detection of three-set steerability. Once steerable sets are identified, their encoding becomes a straightforward procedure following the location of the identifying clusters on the concurrency matrix or matrices.
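A rough sketch of the kind of object the correspondence analyzes, under assumed data structures (the cluster-recognition and encoding steps themselves are not reproduced here): the concurrency matrix of two microcommand sets has a 1 in entry [i][j] whenever microcommand i of one set and microcommand j of the other are active in the same microinstruction, and clustered 1-entries are the signature of steerable sets.

def concurrency_matrix(microprogram, set_a, set_b):
    """Entry [i][j] = 1 if set_a[i] and set_b[j] ever occur in the same microinstruction."""
    m = [[0] * len(set_b) for _ in set_a]
    for instr in microprogram:                 # each microinstruction: the set of active microcommands
        for i, a in enumerate(set_a):
            for j, b in enumerate(set_b):
                if a in instr and b in instr:
                    m[i][j] = 1
    return m

set_a = ["a1", "a2", "a3"]
set_b = ["b1", "b2"]
microprogram = [{"a1", "b1"}, {"a2", "b1"}, {"a3", "b2"}, {"a3"}]
for row in concurrency_matrix(microprogram, set_a, set_b):
    print(row)   # [1, 0] / [1, 0] / [0, 1]: the 1-entries cluster, hinting at steerability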
Abstract:
Understanding the molecular mechanisms of immunological memory is important for vaccine design. We had earlier hypothesized a mechanism for the maintenance of immunological memory through the operation of a network of idiotypic and anti-idiotypic antibodies (Ab2). Peptides derived from an internal-image-carrying anti-idiotypic antibody are hypothesized to facilitate the perpetuation of antigen-specific T cell memory through similarity in peptide-MHC binding to that of the antigenic peptide. In the present work, the existence of such peptidomimics of the antigen in the Ab2 variable region and the similarity of their MHC-I binding were examined by bioinformatics approaches. The analysis, employing three known viral antigens and one tumor-associated antigen, shows that peptidomimics from Ab2 variable regions have MHC-I binding patterns structurally similar to those of the antigenic peptides, indicating a structural basis for memory perpetuation.
Abstract:
The variety of electron diffraction patterns arising from the decagonal phase has been explored using a stereographic analysis that generates the important zone axes as intersection points corresponding to important reciprocal lattice vectors (relvectors). An indexing scheme employing a set of five vectors and an orthogonal vector has been followed. A systematic tilting from the decagonal axis to one of the twofold axes has been adopted to generate a set of experimental diffraction patterns that agree excellently with the patterns expected from the stereographic analysis.
Abstract:
Template matching is concerned with measuring the similarity between the patterns of two objects. This paper proposes a memory-based reasoning approach to pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database, and some binary image recognition problems inherently need large template sets, such as the recognition of Chinese characters, which needs thousands of templates. The proposed algorithm is based on the Connection Machine, the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of templates and the input image pattern. For a given binary image, it scans the template pyramid searching for the matching template. Our algorithm matches a binary image of N × N pixels in O(log N) time, independent of the number of templates. Implementation of the proposed scheme is described in detail.
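A compact sketch of coarse-to-fine matching with image pyramids, not the Connection Machine implementation (the template size, 2x2 OR-pooling, and pruning policy are assumptions): each level halves the resolution, and only candidates that survive a cheap comparison at a coarse level are re-examined at the finer levels.

import numpy as np

def pyramid(img, levels):
    """Return progressively downsampled binary images, finest first."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        block = pyr[-1][: h // 2 * 2, : w // 2 * 2]
        pyr.append(block.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3)))  # 2x2 OR-pooling
    return pyr

def match(image, templates, levels=3, keep=4):
    """Coarse-to-fine search for the index of the best-matching template."""
    img_pyr = pyramid(image, levels)
    tmpl_pyrs = [pyramid(t, levels) for t in templates]
    candidates = list(range(len(templates)))
    for level in range(levels - 1, -1, -1):            # coarsest -> finest
        scored = sorted((int(np.sum(img_pyr[level] != tmpl_pyrs[k][level])), k)
                        for k in candidates)
        candidates = [k for _, k in scored[:keep]]     # prune before refining
    return candidates[0]

rng = np.random.default_rng(0)
templates = [rng.integers(0, 2, (16, 16)) for _ in range(8)]
print(match(templates[5].copy(), templates))           # 5: the exact template wins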
Abstract:
A single-source network is said to be memory-free if the internal nodes (all nodes except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received on their incoming edges out on their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source, based on the network code, in order to correct network errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval that depends on the convolutional code selected. Bounds on this interval and on the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network error correcting codes (CNECCs) designed for unit-delay networks using simulations of CNECCs on an example network under a probabilistic error model.
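The memory-free, linear-combination setting can be seen on the classic butterfly network, sketched here over GF(2) with XOR as the linear operation; this illustrates coherent linear network coding for multicast only, not the convolutional error-correcting construction the paper develops.

def butterfly(a, b):
    """Both sinks of the butterfly network recover (a, b), although each sees only two edges."""
    mix = a ^ b                                  # the bottleneck node forwards a linear combination,
                                                 # using no memory beyond its current inputs
    sink1 = (a, mix)                             # sink 1 sees the direct edge carrying a, plus the mix
    sink2 = (b, mix)                             # sink 2 sees the direct edge carrying b, plus the mix
    decoded1 = (sink1[0], sink1[0] ^ sink1[1])   # a, then a XOR (a XOR b) = b
    decoded2 = (sink2[0] ^ sink2[1], sink2[0])   # b XOR (a XOR b) = a, then b
    return decoded1, decoded2

for a in (0, 1):
    for b in (0, 1):
        assert butterfly(a, b) == ((a, b), (a, b))
print("both sinks recover (a, b) for every input pair")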
Abstract:
Software transactional memory (STM) has been proposed as a promising programming paradigm for shared-memory multi-threaded programs, as an alternative to conventional lock-based synchronization primitives. Typical STM implementations employ a conflict detection scheme that works with uniform access granularity, tracking shared data accesses either at the word/cache-line level or at the object level. It is well known that a single fixed access-tracking granularity cannot meet the conflicting goals of reducing false conflicts without adversely impacting concurrency. A fine granularity, while improving concurrency, can hurt performance due to lock aliasing, lock validation overheads, and additional cache pressure; a coarse granularity can hurt performance due to reduced concurrency. Thus, in general, a fixed or uniform granularity access tracking (UGAT) scheme is application-unaware and rarely matches the access patterns of an individual application or of parts of an application, leading to sub-optimal performance for different parts of the application(s). To mitigate the disadvantages of the UGAT scheme, we propose a Variable Granularity Access Tracking (VGAT) scheme in this paper. We propose a compiler-based approach wherein the compiler uses inter-procedural whole-program static analysis to select the access-tracking granularity for different shared data structures of the application, based on the application's data access patterns. We describe our prototype VGAT scheme, using TL2 as our STM implementation. Our experimental results reveal that the VGAT-STM scheme can improve the performance of STAMP benchmark applications by between 1.87% and 21.2%.
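A toy sketch of what access-tracking granularity means in practice, assuming a TL2-style table of lock stripes (the addresses, stripe count, and hash below are made up, and this is not the paper's VGAT compiler analysis): word-level tracking gives each aligned word its own stripe, while object-level tracking funnels every field of an object through a single stripe, which is cheaper but reports false conflicts between unrelated fields.

NUM_STRIPES = 1 << 20              # size of the versioned-lock stripe table (assumed)

def word_stripe(addr: int) -> int:
    """Word granularity: each aligned 8-byte word maps to its own stripe."""
    return (addr >> 3) % NUM_STRIPES

def object_stripe(obj_base: int) -> int:
    """Object granularity: every field of the object maps to one stripe."""
    return obj_base % NUM_STRIPES

base = 0x7F000000                  # hypothetical object base address
field_a, field_b = base + 8, base + 16

# One transaction writes field_a while another reads field_b:
print(word_stripe(field_a) == word_stripe(field_b))    # False: no conflict at word granularity
print(object_stripe(base) == object_stripe(base))      # True: both accesses hit the object's
                                                       # stripe, so a false conflict is reported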
Abstract:
We present external memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: given a set of weighted points in R^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The aggregates we consider in this paper include COUNT, SUM, and MAX. First, we develop a structure for answering two-dimensional range-COUNT queries that uses O(N/B) disk blocks and answers a query in O(log_B N) I/Os, where N is the number of input points and B is the disk block size. The structure can be extended to obtain a near-linear-size structure for answering range-SUM queries using O(log_B N) I/Os, and a linear-size structure for answering range-MAX queries in O(log_B^2 N) I/Os. Our structures can be made dynamic and extended to higher dimensions.
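As an in-memory analogue of the two-dimensional range-COUNT structure (a sketch only: the paper's structure is an external-memory design measured in disk blocks and I/Os, while this toy ignores blocking entirely), the code below builds a segment tree over x in which each node stores its subtree's y values in sorted order, so an orthogonal range count takes O(log^2 N) comparisons.

import bisect

def build(points):
    """Return (xs, tree): x-coordinates sorted, plus a segment tree of sorted y lists."""
    pts = sorted(points)                       # sort by x (then y)
    xs = [x for x, _ in pts]
    n = len(pts)
    tree = [None] * (4 * n)

    def rec(node, lo, hi):
        if lo == hi:
            tree[node] = [pts[lo][1]]
            return
        mid = (lo + hi) // 2
        rec(2 * node, lo, mid)
        rec(2 * node + 1, mid + 1, hi)
        tree[node] = sorted(tree[2 * node] + tree[2 * node + 1])  # merge children's y lists

    rec(1, 0, n - 1)
    return xs, tree

def range_count(xs, tree, x1, x2, y1, y2):
    """Count points with x1 <= x <= x2 and y1 <= y <= y2."""
    n = len(xs)
    lo = bisect.bisect_left(xs, x1)
    hi = bisect.bisect_right(xs, x2) - 1
    if lo > hi:
        return 0

    def rec(node, nlo, nhi):
        if hi < nlo or nhi < lo:
            return 0
        if lo <= nlo and nhi <= hi:
            ys = tree[node]
            return bisect.bisect_right(ys, y2) - bisect.bisect_left(ys, y1)
        mid = (nlo + nhi) // 2
        return rec(2 * node, nlo, mid) + rec(2 * node + 1, mid + 1, nhi)

    return rec(1, 0, n - 1)

pts = [(1, 5), (2, 3), (4, 8), (6, 1), (7, 7)]
xs, tree = build(pts)
print(range_count(xs, tree, 2, 7, 2, 8))   # 3: counts (2, 3), (4, 8), and (7, 7)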
Quantitative, Time-Resolved Proteomic Analysis Using Bio-Orthogonal Non-Canonical Amino Acid Tagging
Abstract:
Bio-orthogonal non-canonical amino acid tagging (BONCAT) is an analytical method that allows the selective analysis of the subset of newly synthesized cellular proteins produced in response to a biological stimulus. In BONCAT, cells are treated with the non-canonical amino acid L-azidohomoalanine (Aha), which is utilized in protein synthesis in place of methionine by wild-type translational machinery. Nascent, Aha-labeled proteins are selectively ligated to affinity tags for enrichment and subsequently identified via mass spectrometry. The work presented in this thesis exhibits advancements in and applications of the BONCAT technology that establish it as an effective tool for analyzing proteome dynamics with time-resolved precision.
Chapter 1 introduces the BONCAT method and serves as an outline for the thesis as a whole. I discuss the motivations behind the methodological advancements in Chapter 2 and the biological applications in Chapters 3 and 4.
Chapter 2 presents methodological developments that make BONCAT a proteomic tool capable not only of identifying newly synthesized proteins but also of accurately quantifying rates of protein synthesis. I demonstrate that this quantitative BONCAT approach can measure proteome-wide patterns of protein synthesis at time scales inaccessible to alternative techniques.
In Chapter 3, I use BONCAT to study the biological function of the small RNA regulator CyaR in Escherichia coli. I correctly identify previously known CyaR targets, and validate several new CyaR targets, expanding the functional roles of the sRNA regulator.
In Chapter 4, I use BONCAT to measure the proteomic profile of the quorum sensing bacterium Vibrio harveyi during the time-dependent transition from individual- to group-behaviors. My analysis reveals new quorum-sensing-regulated proteins with diverse functions, including transcription factors, chemotaxis proteins, transport proteins, and proteins involved in iron homeostasis.
Overall, this work describes how to use BONCAT to perform quantitative, time-resolved proteomic analysis and demonstrates that these measurements can be used to study a broad range of biological processes.
Abstract:
We investigated the migration and behavior of young Pacific bluefin tuna (Thunnus orientalis) using archival tags that measure environmental variables, record them in memory, and estimate daily geographical locations from measured light levels. Swimming depth, ambient water temperature, and feeding are described in a companion paper. Errors of the tag location estimates that could be checked were -0.54° ± 0.75° (mean ± SD) in longitude and -0.12° ± 3.06° in latitude. Latitude estimated automatically by the tag was problematic, but latitude estimated by comparing recorded sea-surface temperatures with a sea-surface temperature map was satisfactory. We concluded that the archival tag is a reliable tool for estimating location on a scale of about one degree, which is sufficient for a bluefin tuna migration study. After release, tagged fish showed a normal swimming behavioral pattern within one day and normal feeding frequency within one month. In addition, fish with an archival tag maintained weight-at-length similar to that of wild fish; however, their growth rate was lower than that of wild fish. Of 166 fish released in the East China Sea with implanted archival tags, 30 were recovered, including one that migrated across the Pacific Ocean. Migration of young Pacific bluefin tuna appears to consist of two phases: a residency phase comprising more than 80% of all days, and a traveling phase. One individual was observed to cover 7600 km in a single traveling phase that lasted more than two months (part of this phase was a trans-Pacific migration completed within two months). Many features of behavior in the traveling phase were similar to those in the residency phase; however, the difference between visceral and ambient temperature was larger, feeding was slightly more frequent, and dives to deeper water were more frequent.