19 results for Metadata schema
Abstract:
Theoretical and computational frameworks for synaptic plasticity and learning have a long and cherished history, with few parallels in the literature on plasticity of voltage-gated ion channels. In this study, we derive rules for plasticity in the hyperpolarization-activated cyclic nucleotide-gated (HCN) channels, and assess the synergy between synaptic and HCN channel plasticity in establishing stability during synaptic learning. To do this, we employ a conductance-based model for the hippocampal pyramidal neuron, and incorporate synaptic plasticity through the well-established Bienenstock-Cooper-Munro (BCM)-like rule for synaptic plasticity, wherein the direction and strength of the plasticity depend on the level of calcium influx. Under this framework, we derive a rule for HCN channel plasticity to establish homeostasis in synaptically driven firing rate, and incorporate such plasticity into our model. In demonstrating that this rule for HCN channel plasticity helps maintain firing rate homeostasis after bidirectional synaptic plasticity, we observe a linear relationship between synaptic plasticity and HCN channel plasticity for maintaining firing rate homeostasis. Motivated by this linear relationship, we derive a calcium-dependent rule for HCN-channel plasticity, and demonstrate that firing rate homeostasis is maintained in the face of synaptic plasticity when moderate and high levels of cytosolic calcium influx induce depression and potentiation of the HCN-channel conductance, respectively. Additionally, we show that such synergy between synaptic and HCN-channel plasticity enhances the stability of synaptic learning through metaplasticity in the BCM-like synaptic plasticity profile. Finally, we demonstrate that the synergistic interaction between synaptic and HCN-channel plasticity preserves robustness of information transfer across the neuron under a rate-coding schema.
Our results establish specific physiological roles for experimentally observed plasticity in HCN channels accompanying synaptic plasticity in hippocampal neurons, and uncover potential links between HCN-channel plasticity and calcium influx, dynamic gain control, and stable synaptic learning.
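The calcium-dependent, BCM-like logic described in the abstract can be caricatured in a few lines. This is a toy sketch, not the paper's conductance-based model: the thresholds `theta_d` and `theta_p`, the learning rate `eta`, and all function and variable names are illustrative assumptions.

```python
def plasticity_update(ca, theta_d=0.3, theta_p=0.6, eta=0.01):
    """Toy BCM-like rule: no change at low calcium, depression at
    moderate calcium, potentiation at high calcium.  All thresholds
    are illustrative, not taken from the paper."""
    if ca < theta_d:
        return 0.0                    # low calcium: no plasticity
    if ca < theta_p:
        return -eta * (ca - theta_d)  # moderate calcium: depression
    return eta * (ca - theta_p)       # high calcium: potentiation

# The derived HCN rule shares the same sign profile, so a synaptic
# potentiation (which raises firing rate) is accompanied by an increase
# in HCN conductance (which lowers excitability), restoring homeostasis.
w_syn, g_hcn = 1.0, 1.0
ca = 0.9                              # a high-calcium event
w_syn += plasticity_update(ca)        # synaptic potentiation
g_hcn += plasticity_update(ca)        # accompanying HCN potentiation
```

The point of the sketch is only the shared sign profile between the two rules, which is what lets the HCN conductance counteract the excitability change produced by synaptic plasticity.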
Abstract:
Maintaining metadata consistency is a critical issue in designing a filesystem. Although satisfactory solutions are available for filesystems residing on magnetic disks, these solutions may not give adequate performance for filesystems residing on flash devices. Prabhakaran et al. have designed a metadata consistency mechanism specifically for flash chips, called Transactional Flash [1]. It uses a cyclic commit mechanism to provide transactional abstractions. Although a significant improvement over conventional journaling techniques, this mechanism has certain drawbacks, such as a complex protocol and the need to read the whole flash during recovery, which slows down the recovery process. In this paper we propose adding a thin journaling layer on top of Transactional Flash to simplify the protocol and speed up the recovery process. The simplified protocol, named Quick Recovery Cyclic Commit (QRCC), uses a journal stored on NOR flash for recovery. Our evaluations on an actual raw flash card show that journal writes add negligible penalty compared to the original Transactional Flash's write performance, while the journal enables quick recovery in case of failures.
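The recovery-speed argument can be illustrated with a toy model (a sketch under assumed names; the actual QRCC protocol operates on flash pages and cyclic-commit metadata, not Python dictionaries):

```python
class QrccToy:
    """Toy model of the idea behind QRCC: bulk data goes to simulated
    NAND flash, while a thin journal of committed transaction ids goes
    to simulated NOR flash.  Recovery replays the small journal instead
    of scanning the whole flash.  All names here are invented for
    illustration."""
    def __init__(self):
        self.nand = {}          # page -> (txn_id, data)
        self.nor_journal = []   # committed transaction ids

    def commit(self, txn_id, writes):
        for page, data in writes.items():
            self.nand[page] = (txn_id, data)  # bulk data writes to NAND
        self.nor_journal.append(txn_id)       # one small write to NOR

    def recover(self):
        committed = set(self.nor_journal)     # O(journal), not O(flash)
        return {page: data
                for page, (txn, data) in self.nand.items()
                if txn in committed}

fs = QrccToy()
fs.commit(1, {"a": "x", "b": "y"})
fs.nand["c"] = (2, "z")   # txn 2 crashed before its journal write
state = fs.recover()      # only txn 1's pages survive
```

The design point is that recovery cost scales with the journal, which is small, rather than with the size of the flash device.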
Abstract:
The tonic is a fundamental concept in Indian art music. It is the base pitch that an artist chooses in order to construct the melodies during a rāg(a) rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rāg(a) recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts, such as the presence/absence of additional metadata, the quality of the audio data, the duration of the audio data, the music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average), and are robust across the aforementioned contexts compared to the approaches based on expert knowledge. In addition, we show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.
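To make the task concrete, a naive pitch-class-histogram baseline can be sketched. This is a toy illustration, far simpler than the multi-pitch machine-learning approaches the paper evaluates; the function name, reference frequency, and binning are assumptions.

```python
import math
from collections import Counter

def tonic_pitch_class(pitches_hz, ref_hz=55.0, bins_per_octave=12):
    """Fold per-frame pitch estimates into octave-folded pitch classes
    and return the most frequent class as the tonic candidate.
    Real systems use multi-pitch salience and learned classifiers."""
    classes = [
        round(bins_per_octave * math.log2(p / ref_hz)) % bins_per_octave
        for p in pitches_hz if p > 0
    ]
    return Counter(classes).most_common(1)[0][0]

# A drone-heavy excerpt: the tonic (class 0 relative to ref_hz)
# dominates the frame-level pitch estimates; zeros mark unvoiced frames.
frames = [220.0] * 10 + [330.0] * 3 + [0.0] * 2
```

Octave folding matters here: the drone and the voice may present the tonic in different octaves, but they collapse to the same pitch class.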
Abstract:
In this paper, we present Bi-Modal Cache - a flexible stacked DRAM cache organization which simultaneously achieves several objectives: (i) improved cache hit ratio, (ii) moving the tag storage overhead to DRAM, (iii) lower cache hit latency than tags-in-SRAM, and (iv) reduction in off-chip bandwidth wastage. The Bi-Modal Cache addresses the miss rate versus off-chip bandwidth dilemma by organizing the data in a bi-modal fashion - blocks with high spatial locality are organized as large blocks and those with little spatial locality as small blocks. By adaptively selecting the right granularity of storage for individual blocks at run-time, the proposed DRAM cache organization is able to make judicious use of the available DRAM cache capacity as well as reduce the off-chip memory bandwidth consumption. The Bi-Modal Cache improves cache hit latency, despite moving the metadata to DRAM, by means of a small SRAM-based Way Locator. Further, by leveraging the tremendous internal bandwidth and capacity that stacked DRAM organizations provide, the Bi-Modal Cache enables efficient concurrent accesses to tags and data to reduce hit time. Through detailed simulations, we demonstrate that the Bi-Modal Cache achieves overall performance improvements (in terms of Average Normalized Turnaround Time (ANTT)) of 10.8%, 13.8% and 14.0% in 4-core, 8-core and 16-core workloads, respectively.
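The run-time granularity decision described above can be caricatured as a utilization test. This is a toy sketch, not the paper's mechanism; the parameter names, window size, and threshold are assumptions.

```python
def choose_granularity(words_touched, large_block_words=8, threshold=0.5):
    """Toy bi-modal policy: if a region's observed spatial locality
    (fraction of distinct words touched within a large-block window)
    is high, cache it as a large block; otherwise use a small block
    to avoid wasting off-chip bandwidth on unused data."""
    utilization = len(set(words_touched)) / large_block_words
    return "large" if utilization >= threshold else "small"

streaming = choose_granularity([0, 1, 2, 3, 4, 5])  # high spatial locality
pointer_chasing = choose_granularity([3])           # low spatial locality
```

The trade-off the policy navigates is the one named in the abstract: large blocks improve hit ratio for dense access patterns, while small blocks avoid fetching words that will never be used.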