941 results for Memory systems


Relevance:

30.00%

Publisher:

Abstract:

An accurate switched-current (SI) memory cell suitable for low-voltage low-power (LVLP) applications is proposed. Information is memorized as the gate voltage of the input transistor in a tunable gain-boosting triode transconductor. Additionally, four-quadrant multiplication between the input voltage to the transconductor regulation amplifier (X operand) and the stored voltage (Y operand) is provided. A simplified 2 × 2 memory array was prototyped in a standard 0.8 μm n-well CMOS process with a 1.8 V supply. The measured current-reproduction error is less than 0.26% for 0.25 μA ≤ I_SAMPLE ≤ 0.75 μA. Standby consumption is 6.75 μW per cell at I_SAMPLE = 0.75 μA. At room temperature, the leakage rate is 1.56 nA/ms. Four-quadrant multiplier (4QM) full-scale operands are 2x_max = 320 mV_pp and 2y_max = 448 mV_pp, yielding a maximum output swing of 0.9 μA_pp. The 4QM worst-case nonlinearity is 7.9%.

Relevance:

30.00%

Publisher:

Abstract:

A CMOS memory cell for dynamic storage of analog data, suitable for LVLP applications, is proposed. Information is memorized as the gate voltage of the input transistor of a gain-boosting triode transconductor. The enhanced output resistance improves the accuracy of reading out the sampled currents. Additionally, four-quadrant multiplication between the input to the regulation amplifier of the transconductor and the stored voltage is provided. The design complies with a low-voltage 1.2 μm N-well CMOS fabrication process. For a 1.3 V supply, C_CELL = 3.6 pF and the sampled-current range is 0.25 μA ≤ I_SAMPLE ≤ 0.75 μA. The specified retention time is 1.28 ms and corresponds to a charge variation of 1% due to junction leakage at 75°C. A range of simulations confirms circuit performance. The absolute read-out error is below 0.40%, while the four-quadrant multiplier nonlinearity at full scale is 8.2%. Maximum stand-by consumption is 3.6 μW/cell.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the dynamical response of coupled oscillators is investigated, taking into consideration the nonlinear behavior of an SMA spring coupling the two oscillators. Due to the nonlinear coupling terms, the system exhibits both regular and chaotic motions. Poincaré sections for different sets of coupling parameters are examined. © 2011 World Scientific Publishing Company.
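
The kind of computation behind a Poincaré section can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' model: two unit masses joined by a polynomial (SMA-like) spring with force k*d + k3*d^3, the first mass driven sinusoidally, integrated with semi-implicit Euler and sampled stroboscopically once per forcing period. All parameter values are assumptions chosen for the sketch.

```python
import math

def poincare_section(k_lin=1.0, k_cubic=0.8, damping=0.05,
                     forcing=0.3, omega=1.0,
                     n_periods=200, steps_per_period=500):
    """Stroboscopic Poincare section of two unit masses coupled by a
    polynomial (SMA-like) spring, the first mass driven sinusoidally.
    Illustrative parameters only."""
    period = 2.0 * math.pi / omega
    dt = period / steps_per_period
    x1, v1, x2, v2, t = 0.1, 0.0, -0.1, 0.0, 0.0
    points = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            d = x2 - x1
            f = k_lin * d + k_cubic * d ** 3      # nonlinear coupling force
            a1 = f - damping * v1 + forcing * math.cos(omega * t)
            a2 = -f - damping * v2
            v1 += a1 * dt; x1 += v1 * dt          # semi-implicit Euler step
            v2 += a2 * dt; x2 += v2 * dt
            t += dt
        points.append((x1, v1))   # sample once per forcing period
    return points
```

Plotting the returned (x1, v1) points reveals whether the motion settles on a few fixed points (periodic), a closed curve (quasi-periodic), or a fractal-looking cloud (chaotic).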

Relevance:

30.00%

Publisher:

Abstract:

Shape memory alloys (SMAs) provide compact and effective actuation for a variety of mechanical systems. In this paper, a numerical simulation study of a three-degree-of-freedom airfoil, subjected to two-dimensional incompressible inviscid flow and actuated by SMA wires, is presented. SMA wire actuators are used to control the flap movement of a wing section. Using the thermo-mechanical constitutive equation of SMAs proposed by Brinson, we numerically simulate the behavior of a double SMA wire actuator. Two SMA wires are used: one to move the flap down and the other to move the flap up. The numerical results of the present study demonstrate the behavior, characteristics, and effectiveness of the two-wire SMA actuator. In conclusion, this paper shows the feasibility of successfully using SMA wire actuators for flap movement.

Relevance:

30.00%

Publisher:

Abstract:

We show how to construct a topological Markov map of the interval whose invariant probability measure is the stationary law of a given stochastic chain of infinite order. In particular we characterize the maps corresponding to stochastic chains with memory of variable length. The problem treated here is the converse of the classical construction of the Gibbs formalism for Markov expanding maps of the interval.
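
For context, a stochastic chain with memory of variable length (a standard notion going back to Rissanen's context trees, stated here for orientation rather than taken from the paper) is one whose transition probabilities depend on a data-dependent suffix of the past:

```latex
P\bigl(X_n = a \mid X_{n-1}, X_{n-2}, \dots\bigr)
  = p\bigl(a \mid X_{n-\ell}, \dots, X_{n-1}\bigr),
\qquad \ell = \ell(X_{n-1}, X_{n-2}, \dots),
```

where the context length \ell is itself a function of the observed past. Unlike a fixed-order Markov chain, only a variable-length suffix of the history determines the law of the next symbol.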

Relevance:

30.00%

Publisher:

Abstract:

Classical Pavlovian fear conditioning to painful stimuli has provided the generally accepted view of a core system, centered in the central amygdala, that organizes fear responses. Ethologically based models using other sources of threat likely to be encountered in a natural environment, such as predators or aggressive dominant conspecifics, have challenged this concept of a unitary core circuit for fear processing. We discuss here what the ethologically based models have told us about the neural systems organizing fear responses. We explore the concept that parallel paths process different classes of threats, and that these different paths influence distinct regions in the periaqueductal gray, a critical element for the organization of all kinds of fear responses. Despite this parallel processing of different kinds of threats, we also discuss an interesting emerging view that common cortical-hippocampal-amygdalar paths seem to be engaged in fear conditioning to painful stimuli, to predators and, perhaps, to aggressive dominant conspecifics as well. Overall, the aim of this review is to bring into focus a more global and comprehensive view of the systems organizing fear responses.

Relevance:

30.00%

Publisher:

Abstract:

Somatostatin is a multifunctional molecule credited with neurotransmitter, neuromodulator, and (neuro)hormone properties. In line with its ubiquitous distribution across tissues, it influences metabolic and developmental processes as well as learning and memory performance. These effects result from the local and temporal interplay of one ligand and five G-protein-coupled receptors (SSTR1-5). To characterize the biological significance of the somatostatin system in the whole organism, a mutational analysis of individual system components was carried out. It comprised the inactivation of the genes for the somatostatin prepropeptide and for the receptors SSTR3 and SSTR4 by gene targeting. The resulting knockout mutations demonstrate that neither receptors 3 and 4 nor somatostatin itself is necessary for the survival of the organism under standard housing conditions. The corresponding mouse lines show no immediately apparent limitations in their biology. The somatostatin null mouse became the main subject of a detailed investigation because of the ligand's superordinate position in the signaling cascade and the available evidence on its function. After thorough analysis, the following conclusions could be drawn: loss of the somatostatin gene results in elevated plasma concentrations of growth hormone (GH). This is consistent with somatostatin's role as an inhibitory factor of growth-hormone release, a role that is abolished in the mutant. The somatostatin null mouse also made clear that somatostatin acts as an essential link between the growth and stress axes. Permanently elevated corticosterone levels in the mutants imply a negative tonic influence on glucocorticoid secretion in vivo. The knockout mouse thus shows that somatostatin normally functions as a decisive inhibitory control element of steroid release.
Behavioral experiments revealed a deficit in motor learning. Somatostatin null mice lag behind their conspecifics in the rotarod learning paradigm without being generally impaired in motor function or coordination. These motor learning processes depend on a functioning cerebellum. Since somatostatin and its receptors are scarcely present in the adult cerebellum but are expressed in the developing one, this result demonstrates a function for neuropeptides transiently expressed during development, a long-standing hypothesis that had not previously been proven experimentally. Examination of further physiological parameters and behavioral categories under standard laboratory conditions revealed no visible deviations from wild-type mice. A mouse model is thus now available for further somatostatin research: in endocrinological, electrophysiological, and behavioral experiments, effects can now be correlated directly and selectively with the somatostatin peptide or with receptors 3 and 4, and also with combinations of the knockout mutations after appropriate crosses.

Relevance:

30.00%

Publisher:

Abstract:

Cost, performance, and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Companies that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter and at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; and (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry.
The integration of those constituents into a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and it is more easily amenable to advanced timing analysis by construction, regardless of the system's scale and complexity.
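
To make point (iii) concrete, here is a toy greedy code-layout pass, an illustrative sketch under assumed parameters, not the thesis's method: functions are placed sequentially in memory, but a function's start address is padded by one cache line whenever it would map to the same cache sets as a function it frequently interleaves with (a "hot pair"), removing that source of conflict-miss jitter.

```python
def conflict_free_layout(funcs, hot_pairs, num_sets, line_size=64):
    """Greedy layout sketch: funcs is a list of (name, size_in_bytes);
    hot_pairs lists function pairs that should not share cache sets.
    Returns {name: (start_address, size)}. Toy model of a direct-mapped
    instruction cache with num_sets sets of line_size bytes."""
    def sets_of(start, size):
        first = (start // line_size) % num_sets
        nlines = (size + line_size - 1) // line_size
        return {(first + i) % num_sets for i in range(nlines)}

    placement, addr = {}, 0
    for name, size in funcs:
        rivals = {b for a, b in hot_pairs if a == name} | \
                 {a for a, b in hot_pairs if b == name}
        tried = 0
        while tried < num_sets:
            mine = sets_of(addr, size)
            clash = any(mine & sets_of(*placement[r])
                        for r in rivals if r in placement)
            if not clash:
                break
            addr += line_size          # pad by one line and retry
            tried += 1
        placement[name] = (addr, size)
        addr += size
    return placement
```

The padding trades a little memory for a layout whose cache behavior no longer depends on incidental link order, which is what makes incremental development compatible with timing analysis.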

Relevance:

30.00%

Publisher:

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers' responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. The second part of the thesis focuses on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude improvements in speed and energy efficiency compared to the "pure software" version. However, three main issues arise, namely (i) platform design complexity, (ii) architectural scalability, and (iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system.
Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.

Relevance:

30.00%

Publisher:

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validation of design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate virtual platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-threshold computing (NTC) is a key research area in the ultra-low-power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold-voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture.
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.

Relevance:

30.00%

Publisher:

Abstract:

Our growing understanding of the human mind and cognition and the development of neurotechnology have triggered debate around cognitive enhancement in neuroethics. The dissertation examines the normative issues of memory enhancement and focuses on two issues: (1) the distinction between memory treatment and enhancement; and (2) how the issue of authenticity concerns memory interventions, including memory treatments and enhancements.

The first part consists of a conceptual analysis of the concepts required for normative considerations. First, the representational nature and the function of memory are discussed. Memory is regarded as a special form of self-representation resulting from a constructive process. Next, the concepts of selfhood, personhood, and identity are examined and a conceptual tool, the autobiographical self-model (ASM), is introduced. An ASM is a collection of mental representations of the system's relations with its past and potential future states. Third, the debate between objectivist and constructivist views of health is considered. I argue for a phenomenological account of health, which is based on the primacy of illness and negative utilitarianism.

The second part presents a synthesis of the relevant normative issues based on the conceptual tools developed. I argue that memory enhancement can be distinguished from memory treatment using a demarcation regarding the existence of memory-related suffering. That is, memory enhancements are interventions that aim to manipulate memory function based on the self-interests of the individual, under standard circumstances and without any unwilling suffering or potential suffering resulting from the alteration of memory functions. I then consider the issue of authenticity, namely whether memory intervention or enhancement endangers "one's true self".

By analyzing two conceptions of authenticity, authenticity as self-discovery and authenticity as self-creation, I propose that authenticity should be understood in terms of the satisfaction of the functional constraints of an ASM: synchronic coherence, diachronic coherence, and global veridicality. This framework provides clearer criteria for considering the relevant concerns and allows us to examine the moral values of authenticity.

Relevance:

30.00%

Publisher:

Abstract:

Following the internationalization of contemporary higher education, academic institutions based in non-English speaking countries are increasingly urged to produce contents in English to address international prospective students and personnel, as well as to increase their attractiveness. The demand for English translations in the institutional academic domain is consequently increasing at a rate exceeding the capacity of the translation profession. Resources for assisting non-native authors and translators in the production of appropriate texts in L2 are therefore required in order to help academic institutions and professionals streamline their translation workload. Some of these resources include: (i) parallel corpora to train machine translation systems and multilingual authoring tools; and (ii) translation memories for computer-aided tools. The purpose of this study is to create and evaluate reference resources like the ones mentioned in (i) and (ii) through the automatic sentence alignment of a large set of Italian and English as a Lingua Franca (ELF) institutional academic texts given as equivalent but not necessarily parallel (i.e. translated). In this framework, a set of aligning algorithms and alignment tools is examined in order to identify the most profitable one(s) in terms of accuracy and time- and cost-effectiveness. In order to determine the text pairs to align, a sample is selected according to document length similarity (characters) and subsequently evaluated in terms of extent of noisiness/parallelism, alignment accuracy and content leverageability. The results of these analyses serve as the basis for the creation of an aligned bilingual corpus of academic course descriptions, which is eventually used to create a translation memory in TMX format.
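
Length-based sentence alignment of the kind the study evaluates can be illustrated with a toy dynamic-programming aligner in the spirit of Gale and Church; this is a hedged sketch (simplified cost function, 1-1/1-0/0-1 moves only, illustrative `mismatch_penalty`), not one of the tools examined in the study.

```python
def align_sentences(src, tgt, mismatch_penalty=3.0):
    """Toy length-based sentence aligner: dynamic programming over
    1-1 matches and skipped sentences, with a cost based on the
    character-length ratio of the candidate pair. Returns a list of
    (src_index, tgt_index) pairs for the 1-1 matches."""
    INF = float("inf")
    n, m = len(src), len(tgt)

    def cost(a, b):
        la, lb = len(a), len(b)
        return abs(la - lb) / max(la, lb, 1)   # 0 for equal lengths

    # dp[i][j] = minimal cost of aligning src[:i] with tgt[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == INF:
                continue
            if i < n and j < m:                # 1-1 match
                c = dp[i][j] + cost(src[i], tgt[j])
                if c < dp[i + 1][j + 1]:
                    dp[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j, "1-1")
            if i < n:                          # source sentence unmatched
                c = dp[i][j] + mismatch_penalty
                if c < dp[i + 1][j]:
                    dp[i + 1][j], back[i + 1][j] = c, (i, j, "1-0")
            if j < m:                          # target sentence unmatched
                c = dp[i][j] + mismatch_penalty
                if c < dp[i][j + 1]:
                    dp[i][j + 1], back[i][j + 1] = c, (i, j, "0-1")
    # backtrack to recover the 1-1 alignment pairs
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj, kind = back[i][j]
        if kind == "1-1":
            pairs.append((pi, pj))
        i, j = pi, pj
    return list(reversed(pairs))
```

The same skeleton underlies production aligners; the study's noisiness/parallelism evaluation matters because the skip penalty only behaves well when most sentence pairs really are translations of each other.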

Relevance:

30.00%

Publisher:

Abstract:

Dual-systems theorists posit distinct modes of reasoning. The intuition system reasons automatically and its processes are unavailable to conscious introspection. The deliberation system reasons effortfully while its processes recruit working memory. The current paper extends the application of such theories to the study of Obsessive-Compulsive Disorder (OCD). Patients with OCD often retain insight into their irrationality, implying dissociable systems of thought: intuition produces obsessions and fears that deliberation observes and attempts (vainly) to inhibit. To test the notion that dual-systems theory can adequately describe OCD, we obtained speeded and unspeeded risk judgments from OCD patients and non-anxious controls in order to quantify the differential effects of intuitive and deliberative reasoning. As predicted, patients deemed negative events to be more likely than controls. Patients also took more time in producing judgments than controls. Furthermore, when forced to respond quickly patients' judgments were more affected than controls'. Although patients did attenuate judgments when given additional time, their estimates never reached the levels of controls'. We infer from these data that patients have genuine difficulty inhibiting their intuitive cognitive system. Our dual-systems perspective is compatible with current theories of the disorder. Similar behavioral tests may prove helpful in better understanding related anxiety disorders. (C) 2013 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size, and improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically; conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. Effective memory resource management therefore calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand, and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing, and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results.
When there is a sufficient amount of physical memory on the host, it locally balances its memory resource among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to last for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the hot host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
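
The LRU-based MRC construction that the dissertation optimizes can be sketched with the classic Mattson stack algorithm: for each memory access, the stack (reuse) distance is the number of distinct pages touched since the last access to the same page, and a cache of size s misses exactly when that distance is at least s. The plain-list version below is the naive O(n)-per-access baseline; the dissertation's AVL-based LRU organization replaces these list operations with O(log n) ones.

```python
def lru_miss_ratio_curve(trace, max_size):
    """Build an LRU miss ratio curve from a page-access trace.
    Returns {cache_size_in_pages: miss_ratio}. Naive Mattson stack
    sketch for illustration; not the dissertation's optimized scheme."""
    stack = []        # most recently used page at index 0
    distances = []    # stack distance per access (None = cold miss)
    for page in trace:
        if page in stack:
            d = stack.index(page)   # pages above it = reuse distance
            stack.pop(d)
        else:
            d = None                # first touch: compulsory miss
        stack.insert(0, page)
        distances.append(d)

    total = len(trace)
    mrc = {}
    for size in range(1, max_size + 1):
        misses = sum(1 for d in distances if d is None or d >= size)
        mrc[size] = misses / total
    return mrc
```

A single pass over the trace yields the whole curve, which is why the MRC gives the balancer not just the current WSS but the predicted miss ratio at every candidate allocation size.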