941 results for Memory systems
Abstract:
Concurrency control is mostly based on locks and is therefore notoriously difficult to use. Even though some programming languages provide high-level constructs, these add complexity and potentially hard-to-detect bugs to the application. Transactional memory is an attractive mechanism that does not have the drawbacks of locks; however, the underlying implementation is often difficult to integrate into an existing language. In this paper we show how we have introduced transactional semantics into Smalltalk by using the reflective facilities of the language. Our approach is based on method annotations, incremental parse tree transformations and an optimistic commit protocol. The implementation does not depend on modifications to the virtual machine and therefore can be changed at the language level. We report on a practical case study, benchmarks, and further ongoing work.
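The optimistic commit protocol mentioned above can be sketched as a minimal, hypothetical read/write-set design (in Python rather than Smalltalk; all names are illustrative, not the paper's API): transactions buffer writes, record the versions of the values they read, and at commit time validate the read set before publishing the write set.

```python
# Hypothetical sketch of an optimistic commit protocol: transactions
# buffer writes and validate their read set against a versioned store
# at commit time. Names are illustrative, not the paper's API.

class Store:
    def __init__(self):
        self.values = {}
        self.versions = {}   # key -> monotonically increasing version

    def read(self, key):
        return self.values.get(key), self.versions.get(key, 0)

class Transaction:
    def __init__(self, store):
        self.store = store
        self.read_set = {}   # key -> version observed at read time
        self.write_set = {}  # key -> buffered new value

    def read(self, key):
        if key in self.write_set:          # read-your-own-writes
            return self.write_set[key]
        value, version = self.store.read(key)
        self.read_set[key] = version
        return value

    def write(self, key, value):
        self.write_set[key] = value

    def commit(self):
        # Validate: abort if any key we read changed since we read it.
        for key, seen in self.read_set.items():
            if self.store.versions.get(key, 0) != seen:
                return False
        # Apply buffered writes, bumping versions.
        for key, value in self.write_set.items():
            self.store.values[key] = value
            self.store.versions[key] = self.store.versions.get(key, 0) + 1
        return True
```

A transaction that committed against a stale read set simply fails validation and can be retried, which is what makes the scheme lock-free from the programmer's point of view.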
Abstract:
People with synaesthesia show an enhanced memory relative to demographically matched controls. The most obvious explanation for this is that the ‘extra’ perceptual experiences lead to richer encoding and retrieval opportunities of stimuli which induce synaesthesia (typically verbal stimuli). Although there is some evidence for this, it is unlikely to be the whole explanation. For instance, not all stimuli which trigger synaesthesia are better remembered (e.g., digit span) and some stimuli which do not trigger synaesthesia are better remembered. In fact, synaesthetes tend to have better visual memory than verbal memory. We suggest that enhanced memory in synaesthesia is linked to wider changes in cognitive systems at the interface of perception and memory and link this to recent findings in the neuroscience of memory.
Abstract:
Multiple interlinked positive feedback loops shape the stimulus responses of various biochemical systems, such as the cell cycle or intracellular Ca2+ release. Recent studies with simplified models have identified two advantages of coupling fast and slow feedback loops. This dual-time structure enables a fast response while enhancing the resistance of responses and of bistability to stimulus noise. We now find that (1) the dual-time structure similarly confers resistance to internal noise due to molecule-number fluctuations, and (2) model variants with altered coupling, which better represent some specific biochemical systems, share all the above advantages. We also develop a similar bistable model coupling a fast autoactivation loop to a slow loop. This model's topology was suggested by positive feedback proposed to play a role in long-term synaptic potentiation (LTP). The advantages of fast response and noise resistance are also present in this autoactivation model. Empirically, LTP develops resistance to reversal over approximately 1 h. The model suggests this resistance may result from increased amounts of synaptic kinases involved in positive feedback.
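The dual-time idea can be illustrated with a toy simulation (hypothetical equations and parameter values, not the paper's model): a fast autoactivating species u is coupled to a slow positive feedback variable v, and a transient stimulus latches the system into its high state, which then persists after the stimulus is removed.

```python
# Illustrative Euler simulation of a bistable model coupling a fast
# autoactivation loop (u, with a Hill-type self-activation term) to a
# slow positive feedback loop (v). Parameters are hypothetical, chosen
# only to exhibit bistability and the fast/slow timescale separation.

def simulate(stim_duration, t_end=200.0, dt=0.01,
             tau_fast=1.0, tau_slow=20.0, k=0.6, s_on=0.5):
    u = v = 0.0  # fast and slow species activities
    t = 0.0
    while t < t_end:
        s = s_on if t < stim_duration else 0.0
        hill = u * u / (1.0 + u * u)          # fast autoactivation
        du = (s + k * v + hill - u) / tau_fast
        dv = (u - v) / tau_slow               # slow loop tracks u
        u += du * dt
        v += dv * dt
        t += dt
    return u
```

With these parameters a 40-time-unit stimulus drives u high and the slow loop then holds it there; without any stimulus the system remains in its low state.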
Abstract:
Visual working memory (VWM) involves maintaining and processing visual information, often for the purpose of making immediate decisions. Neuroimaging experiments of VWM provide evidence in support of a neural system mainly involving a fronto-parietal neuronal network, but the role of specific brain areas is less clear. A proposal that has recently generated considerable debate suggests that a dissociation of object and location VWM occurs within the prefrontal cortex, in dorsal and ventral regions, respectively. However, re-examination of the relevant literature presents a more robust distribution suggestive of a general caudal-rostral dissociation from occipital and parietal structures, caudally, to prefrontal regions, rostrally, corresponding to location and object memory, respectively. The purpose of the present study was to identify a dissociation of location and object VWM across two imaging methods (magnetoencephalography, MEG, and functional magnetic resonance imaging, fMRI). These two techniques provide complementary results due to the high temporal resolution of MEG and the high spatial resolution of fMRI. Identical location and object change-detection tasks were used across techniques and reported for the first time. Moreover, this study is the first to use matched stimulus displays across location and object VWM conditions. The results from these two imaging methods provided convergent evidence of a location and object VWM dissociation favoring a general caudal-rostral rather than the more common prefrontal dorsal-ventral view. Moreover, neural activity across techniques was correlated with behavioral performance for the first time and provided convergent results. This novel approach of combining imaging tools to study memory resulted in robust evidence suggesting a novel interpretation of location and object memory.
Accordingly, this study presents a novel context within which to explore the neural substrates of WM across imaging techniques and populations.
Abstract:
Purpose: Results from previous studies indicate that children with brain tumors (BT) might present with cognitive problems at diagnosis and thus before the start of any medical treatment. The question remains whether these problems are due to the underlying tumor itself or due to the high level of emotional and physical stress involved at diagnosis of a malignant disorder. All children with a de novo oncological diagnosis not involving the central nervous system (CNS) are usually exposed to a comparable level of distress. However, patients with cancer not involving the CNS are not expected to show disease-related cognitive problems. Thus they serve as a well-balanced control group (CG) to help distinguish between the probable causes of the effect. Method: In a pilot study we analyzed an array of cognitive functions in 16 children with BT and 17 control patients. In both groups, tests were administered in-patient at diagnosis, before any therapeutic intervention such as surgery, chemotherapy or irradiation. Results: Performance of children with BT was comparable to that of CG patients in the areas of intelligence, perceptual reasoning, verbal comprehension, working memory, and processing speed. In contrast, however, BT patients performed significantly worse in verbal memory and attention. Conclusion: Memory and attention seem to be the most vulnerable functions affected by BT, with other functions being preserved at the time of diagnosis. It is to be expected that this vulnerability might exacerbate the cognitive decline after chemotherapy and radiation treatment, which are known to impair intellectual performance. The findings highlight the need for early cognitive assessments in children with BT in order to introduce cognitive training as early as possible to minimize or even prevent long-term cognitive sequelae. This might improve the long-term academic and professional outcome of these children and, especially, help their return to school after hospitalization.
Abstract:
Patients suffering from bipolar affective disorder show deficits in working memory functions. In a previous functional magnetic resonance imaging study, we observed an abnormal hyperactivity of the amygdala in bipolar patients during articulatory rehearsal in verbal working memory. In the present study, we investigated the dynamic neurofunctional interactions between the right amygdala and the brain systems that underlie verbal working memory in both bipolar patients and healthy controls. In total, 18 euthymic bipolar patients and 18 healthy controls performed a modified version of the Sternberg item-recognition (working memory) task. We used the psychophysiological interaction approach in order to assess functional connectivity between the right amygdala and the brain regions involved in verbal working memory. In healthy subjects, we found significant negative functional interactions between the right amygdala and multiple cortical brain areas involved in verbal working memory. In comparison with the healthy control subjects, bipolar patients exhibited significantly reduced functional interactions of the right amygdala particularly with the right-hemispheric, i.e., ipsilateral, cortical regions supporting verbal working memory. Together with our previous finding of amygdala hyperactivity in bipolar patients during verbal rehearsal, the present results suggest that a disturbed right-hemispheric “cognitive–emotional” interaction between the amygdala and cortical brain regions underlying working memory may be responsible for amygdala hyperactivation and affects verbal working memory (deficits) in bipolar patients.
Abstract:
This manuscript deals with the adaptation of quartz microfabrics to changing physical deformation conditions, and discusses their preservation potential during subsequent retrograde deformation. Using microstructural analysis, a sequence of recrystallization processes in quartz, ranging from Grain-Boundary Migration Recrystallization (GBM) through Subgrain-Rotation Recrystallization (SGR) to Bulging Nucleation (BLG), is detected for the Simplon fault zone (SFZ) from the low-strain rim towards the internal high-strain part of the large-scale shear zone. Based on (i) the retrograde cooling path, (ii) estimates of deformation temperatures, and (iii) the spatial variation of dynamic recrystallization processes and different microstructural characteristics, continuous strain localization with decreasing temperature is inferred. In contrast to the recrystallization microstructures, crystallographic preferred orientations (CPO) have a longer memory. CPO patterns indicative of prism and rhomb glide systems in mylonitic quartz veins, overprinted at low temperatures (≤400 °C), suggest inheritance of a high-temperature deformation. In this way, microstructural, textural and geochemical analyses provide information for several million years of the deformation history. The reason for such incomplete resetting of the rock texture is that strain localization is caused by changes in effective viscosity contrasts related to temporal large- and small-scale temperature changes during the evolution of such a long-lived shear zone. The spatially resolved, quantitative investigation of quartz microfabrics and associated recrystallization processes therefore provides great potential for an improved understanding of the geodynamics of large-scale shear zones.
Abstract:
There have been numerous attempts to reveal the neurobiological basis of schizophrenia spectrum disorders. Results, however, remain as heterogeneous as the schizophrenia spectrum disorders themselves. Therefore, one aim of this thesis was to divide patients affected by this disorder into subgroups in order to homogenize the results of future studies. In a first study it is suggested that psychopathological rating scales should focus on symptom clusters that may have a common neurophysiological background. The Bern Psychopathology Scale (BPS) presented here proposes that alterations in three well-known brain systems (motor, language, and affective) largely lead to the communication failures observable on a behavioral level, but also - as repeatedly hypothesized - to dysconnectivity within and between brain systems in schizophrenia spectrum disorders. In a second study, the external validity of the motor domain of the BPS was tested against the objective measure of 24-hour wrist actigraphy. The subjective, the quantitative, as well as the global rating of the degree of motor disorders in this patient group showed significant correlations with the acquired motor activity. This result confirmed, in a first step, the practicability of the motor domain of the BPS, but needs further validation regarding pathological brain alterations. Finally, in a third study (independent from the two other studies), two cerebral Resting State Networks frequently altered in schizophrenia were investigated for the first time using simultaneous EEG/fMRI: the well-known default mode network and the left working memory network. Besides the changes in these fMRI-based networks, there are well-documented findings that patients exhibit alterations in EEG spectra compared to healthy controls.
However, only through the multimodal approach was it possible to discover that patients with schizophrenia spectrum disorders have a slower driving frequency of the Resting State Networks compared to matched healthy controls. Such a dysfunctional coupling between neuronal frequency and functional brain organization could explain in a uni- or multifactorial way (dysfunctional cross-frequency coupling, maturational effects, vigilance fluctuations, task-related suppression) how the typical psychotic symptoms might occur. To conclude, the major contributions presented in this thesis were, on the one hand, the development of a psychopathology rating scale that is based on the assumption of dysfunctional brain networks and, on the other, the new evidence of a dysfunctional triggering frequency of Resting State Networks from the simultaneous EEG/fMRI study in patients affected by a schizophrenia spectrum disorder.
Abstract:
Considerable evidence suggests that central cholinergic neurons participate in either acquisition, storage or retrieval of information. Experiments were designed to evaluate information processing in mice following either reversible or irreversible impairment in central cholinergic activity. The cholinergic receptor antagonists, atropine and methylatropine, were used to reversibly inhibit cholinergic transmission. Irreversible impairment in central cholinergic function was achieved by central administration of the cholinergic-specific neurotoxins, N-ethyl-choline aziridinium (ECA) and N-ethyl-acetylcholine aziridinium (EACA). ECA and EACA appear to act by irreversible inhibition of high-affinity choline uptake (the proposed rate-limiting step in acetylcholine synthesis). Intraventricular administration of ECA or EACA produced persistent reduction in hippocampal choline acetyltransferase activity. Other neuronal systems and brain regions showed no evidence of toxicity. Mice treated with either ECA or EACA showed behavioral deficits associated with cholinergic dysfunction. Passive avoidance behavior was significantly impaired by cholinotoxin treatment. Radial arm maze performance was also significantly impaired in cholinotoxin-treated animals. Deficits in radial arm maze performance were transient, however, such that rapid and apparently complete behavioral recovery was seen during retention testing. The centrally active cholinergic receptor antagonist atropine also caused significant impairment in radial arm maze behavior, while equivalent doses of methylatropine were without effect. The relative effects of cholinotoxin and receptor antagonist treatment on short-term (working) memory and long-term (reference) memory in radial arm maze behavior were examined. Maze rotation studies indicated that there were at least two different response strategies which could result in accurate maze performance.
One strategy involved the use of response algorithms and was considered to be a function of reference memory. Another strategy appeared to be primarily dependent on spatial working memory. However, all behavioral paradigms with multiple trials have reference memory requirements (i.e., information useful over all trials). Performance was similarly affected following either cholinotoxin or anticholinergic treatment, regardless of the response strategy utilized. In addition, rates of behavioral recovery following cholinotoxin treatment were similar between response groups. It was concluded that both cholinotoxin and anticholinergic treatment primarily resulted in impaired reference memory processes.
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the necessity of demanding processing power and fault tolerance, motivated the interconnection of electronic devices; many of these communication mechanisms can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts are executed on several nodes that interact with each other via a communication network. Java’s popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), which is a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define those capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem. Currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability of the timing behavior and of resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations. Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
Abstract:
In this work we propose a method to accelerate time-dependent numerical solvers of systems of PDEs that require a high cost in computational time and memory. The method is based on the combined use of such a numerical solver with a proper orthogonal decomposition (POD), from which we identify modes, a Galerkin projection (that provides a reduced system of equations) and the integration of the reduced system, studying the evolution of the modal amplitudes. We integrate the reduced model until our a priori error estimator indicates that the approximation is no longer accurate. At this point we again use our original numerical code over a short time interval to adapt the POD manifold, and then continue with the integration of the reduced model. Application will be made to two model problems: the Ginzburg-Landau equation in transient chaos conditions and the two-dimensional pulsating cavity problem, which describes the motion of liquid in a box whose upper wall moves back and forth in a quasi-periodic fashion. Finally, we discuss a way of improving the performance of the method using experimental data or information from numerical simulations.
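The snapshot/projection step can be sketched in a few lines of NumPy (an illustrative linear toy problem, not the paper's code): collect snapshots with the full solver, extract POD modes by SVD, Galerkin-project the operator, and integrate the modal amplitudes.

```python
# Minimal POD-Galerkin sketch on a linear toy problem du/dt = A u.
# Snapshots from a full-order Euler solver are compressed by SVD,
# the operator is projected onto the leading modes, and the reduced
# system is integrated in the modal amplitudes. Hypothetical setup.

import numpy as np

def pod_galerkin(A, u0, dt, n_steps, n_modes):
    # Full-order Euler integration to collect snapshots.
    snapshots = [u0]
    u = u0.copy()
    for _ in range(n_steps):
        u = u + dt * (A @ u)
        snapshots.append(u.copy())
    S = np.column_stack(snapshots)

    # POD modes = leading left singular vectors of the snapshot matrix.
    phi, _, _ = np.linalg.svd(S, full_matrices=False)
    phi = phi[:, :n_modes]

    # Galerkin projection gives the reduced operator.
    A_r = phi.T @ A @ phi

    # Integrate the reduced system in the modal amplitudes.
    a = phi.T @ u0
    for _ in range(n_steps):
        a = a + dt * (A_r @ a)

    return phi @ a, u  # reconstructed reduced state vs. full state
```

When the trajectory lies (approximately) in a low-dimensional subspace, the reduced integration reproduces the full-order solution at a fraction of the cost; in the adaptive method the full solver is only re-invoked when the error estimator flags the POD basis as stale.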
Abstract:
SRAM-based FPGAs are sensitive to radiation effects. Soft errors can appear and accumulate, potentially defeating mitigation strategies deployed at the application layer. Therefore, configuration memory scrubbing is required to improve the radiation tolerance of such FPGAs in space applications. Virtex FPGAs allow runtime scrubbing by means of dynamic partial reconfiguration. Even with scrubbing, intra-FPGA TMR systems are subject to common-mode errors affecting more than one design domain. This is solved in inter-FPGA TMR systems, at the expense of higher cost, power and mass. In this context, a self-reference scrubber for device-level TMR systems based on Xilinx Virtex FPGAs is presented. This scrubber allows fast SEU/MBU detection and correction by peer frame comparison, without needing to access a golden configuration memory.
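The peer-comparison idea can be sketched abstractly (a hypothetical model, not the actual Virtex frame interface): each configuration frame exists in the three devices of the TMR system, so a corrupted copy can be detected and repaired by outvoting it with its two peers, with no golden memory involved.

```python
# Hypothetical model of self-reference scrubbing by peer frame
# comparison in a device-level TMR system. Each configuration frame
# is modeled as an integer; the three list entries are the copies of
# that frame held by the three FPGAs.

def scrub_frame(frames):
    """Repair a single frame by majority vote over its two peers."""
    repaired = list(frames)
    for i in range(3):
        peers = [frames[j] for j in range(3) if j != i]
        # An upset copy disagrees with both peers; if the peers
        # agree with each other, they outvote and rewrite it.
        if frames[i] != peers[0] and frames[i] != peers[1]:
            if peers[0] == peers[1]:
                repaired[i] = peers[0]
    return repaired
```

In the real scrubber this comparison would run frame by frame over the configuration memory via readback and partial reconfiguration; the sketch only captures the voting logic that removes the need for a golden copy.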
Abstract:
Within the membrane computing research field, there are many papers about software simulations and a few about hardware implementations. In both cases, the algorithms implemented for realizing membrane systems in software and hardware try to take advantage of massive parallelism. P-systems are parallel, nondeterministic systems which simulate the behavior of membranes when processing information. This paper presents software techniques based on the proper utilization of the virtual memory of a computer, including a study of how much virtual memory is necessary to host a membrane model. This method improves performance in terms of time.
Abstract:
We present the design and implementation of the and-parallel component of ACE. ACE is a computational model for the full Prolog language that simultaneously exploits both or-parallelism and independent and-parallelism. A high-performance implementation of the ACE model has been realized, and its performance is reported in this paper. We discuss how some of the standard problems which appear when implementing and-parallel systems are solved in ACE. We then propose a number of optimizations aimed at reducing the overheads and the increased memory consumption which occur in such systems when using previously proposed solutions. Finally, we present results from an implementation of ACE which includes the proposed optimizations. The results show that ACE exploits and-parallelism with high efficiency and high speedups. Furthermore, they also show that the proposed optimizations, which are applicable to many other and-parallel systems, significantly decrease memory consumption and increase speedups and absolute performance, both in forward execution and during backtracking.
Abstract:
Incorporating the possibility of attaching attributes to variables in a logic programming system has been shown to allow the addition of general constraint-solving capabilities to it. This approach is very attractive in that, by adding a few primitives, any logic programming system can be turned into a generic constraint logic programming system in which constraint solving can be user-defined at source level - an extreme example of the "glass box" approach. In this paper we propose a different and novel use for the concept of attributed variables: developing a generic parallel/concurrent (constraint) logic programming system, using the same "glass box" flavor. We argue that a system which implements attributed variables and a few additional primitives can be easily customized at source level to implement many of the languages and execution models of parallelism and concurrency currently proposed, in both shared-memory and distributed systems. We illustrate this through examples and report on an implementation of our ideas.
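The attributed-variable mechanism can be sketched in Python for illustration (the papers in this area target Prolog; all names here are hypothetical): a variable carries per-module attributes, and binding it consults user-defined hooks that may veto the unification - the "glass box" on which constraint solving, or communication between concurrent agents, can be built.

```python
# Illustrative sketch of attributed variables: a variable may carry
# attributes, and binding it triggers user-defined hooks that can veto
# the binding. Names are hypothetical, not any Prolog system's API.

class AttrVar:
    def __init__(self):
        self.value = None
        self.bound = False
        self.attributes = {}  # module name -> (attribute data, hook)

    def put_attr(self, module, data, hook):
        """Attach attribute data plus a unification hook for a module."""
        self.attributes[module] = (data, hook)

    def bind(self, value):
        """Bind the variable; every attribute hook may veto."""
        for data, hook in self.attributes.values():
            if not hook(data, value):
                return False  # hook vetoed: unification fails
        self.value, self.bound = value, True
        return True

# Example: a finite-domain constraint expressed purely via the hook.
def in_domain(domain, value):
    return value in domain
```

Because the hook is ordinary user code consulted at binding time, the same mechanism that implements a domain constraint here could equally implement suspension, message sending, or other concurrency primitives, which is the customization-at-source-level point the abstract makes.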