980 results for memory systems
Abstract:
Software transactional memory is a promising programming model that adapts many concepts borrowed from the database world to control concurrent accesses to main memory (RAM) locations. This paper discusses how to support apparently irreversible operations, such as memory allocation and deallocation, within software libraries that will be used in (software memory) transactional contexts, and proposes a generic and elegant approach based on a handler system, which provides the means to create and execute compensation actions at key moments during the lifetime of a transaction.
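The abstract does not give the handler API, so the following Python sketch only illustrates the general idea (the Transaction class, txn_alloc and txn_free are our own illustrative names, not the paper's): an operation with an irreversible effect registers a compensation action to run on abort, and defers truly irreversible effects until commit.

```python
class Transaction:
    """Minimal sketch of a handler system for compensation actions."""
    def __init__(self):
        self._on_abort = []    # compensation actions, run if we roll back
        self._on_commit = []   # deferred irreversible actions

    def register(self, on_commit=None, on_abort=None):
        if on_commit: self._on_commit.append(on_commit)
        if on_abort:  self._on_abort.append(on_abort)

    def commit(self):
        for action in self._on_commit:
            action()

    def abort(self):
        # Undo in reverse registration order.
        for action in reversed(self._on_abort):
            action()

# Transactional allocation: free the block if the transaction aborts.
def txn_alloc(txn, size, malloc, free):
    block = malloc(size)
    txn.register(on_abort=lambda: free(block))
    return block

# Transactional deallocation: defer the real free until commit, so an
# aborted transaction leaves the block untouched.
def txn_free(txn, block, free):
    txn.register(on_commit=lambda: free(block))
```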
Abstract:
Experimental optoelectronic characterization of a p-i'(a-SiC:H)-n/pi(a-Si:H)-n heterostructure with low-conductivity doped layers shows the feasibility of tailoring channel bandwidth and wavelength by optical bias through back- and front-side illumination. Front background illumination enhances the light-to-dark sensitivity of the long and medium wavelength ranges and strongly quenches the others. Back violet background illumination enhances the magnitude in the short wavelength range and reduces the others. Experiments have three distinct programmed time slots: control, hibernation and data. Throughout the control time slot, steady light wavelengths illuminate either or both sides of the device, followed by the hibernation slot without any background illumination. The third time slot allows a programmable sequence of different wavelengths with an impulse frequency of 6000 Hz to shine upon the sensor. Results show that the control time slot illumination influences the data time slot, which is used as a volatile memory with set and reset logical functions. © IFIP International Federation for Information Processing 2015.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática).
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To use multi-core platforms efficiently for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis, so as to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent stage takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on that availability, the arbiter-independent stage determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparison with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
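As a rough illustration of the two-stage split for a TDM arbiter (the function names and the simplistic one-request-per-slot interference rule below are our assumptions, not the paper's analysis), the arbiter-dependent stage derives when the bus is available to a core, and the arbiter-independent stage turns that availability into a worst-case interference bound:

```python
def tdm_availability(core_slot, n_cores, slot_len, horizon):
    """Arbiter-dependent stage: the bus-availability windows that a
    TDM arbiter grants to one core over a time horizon (in cycles)."""
    period = n_cores * slot_len
    start = core_slot * slot_len
    return [(t + start, t + start + slot_len)
            for t in range(0, horizon, period)]

def worst_case_interference(n_requests, n_cores, slot_len):
    """Arbiter-independent stage (simplified): if one request is served
    per slot, each request may wait for all other cores' slots in the
    worst case, so interference grows linearly with the request count."""
    return n_requests * (n_cores - 1) * slot_len

# e.g. 40 memory requests, 4 cores, 10-cycle slots
print(worst_case_interference(40, 4, 10))  # -> 1200 cycles
```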
Abstract:
Accepted at the 13th IEEE Symposium on Embedded Systems for Real-Time Multimedia (ESTIMedia 2015), Amsterdam, Netherlands.
Abstract:
While fractional calculus (FC) is as old as integer calculus, its application has been mainly restricted to mathematics. However, many real systems are better described by FC equations than by integer-order models. FC is a suitable tool for describing systems characterised by their fractal nature, long-term memory and chaotic behaviour. It is a promising methodology for failure analysis and modelling, since the behaviour of a failing system depends on factors that increase the model's complexity. This paper explores the proficiency of FC in modelling complex behaviour by tuning only a few parameters. This work proposes a novel two-step strategy for diagnosis: first, modelling common failure conditions and, second, comparing these models with real machine signals and using the difference to feed a computational classifier. Our proposal is validated using an electrical motor coupled with a mechanical gear reducer.
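The abstract does not reproduce the paper's concrete models; as a minimal sketch of the long-memory operator FC provides, here is a Grünwald-Letnikov approximation of a fractional derivative of order alpha (a standard discretisation, not necessarily the one used by the authors):

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha for
    samples f taken at uniform spacing h."""
    n = len(f)
    # Weights (-1)^k * C(alpha, k), computed by the stable recursion
    # w[k] = w[k-1] * (k - 1 - alpha) / k.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    # D^alpha f(t_j) ~ h^-alpha * sum_k w[k] * f[j-k]: every past
    # sample contributes, which is the long-term memory effect.
    d = np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(n)])
    return d / h**alpha
```

For alpha = 1 the weights reduce to (1, -1, 0, ...) and the formula collapses to the usual first difference; for fractional alpha the slowly decaying weights give every past sample an influence, which is precisely the long-term memory the abstract refers to.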
Abstract:
This paper analyzes several natural and man-made complex phenomena from the perspective of dynamical systems. Such phenomena are often characterized by the absence of a characteristic length-scale, long-range correlations and persistent memory, which are features also associated with fractional-order systems. For each system, the output, interpreted as a manifestation of the system dynamics, is analyzed by means of the Fourier transform. The amplitude spectrum is approximated by a power-law function, and the parameters are interpreted as an underlying signature of the system dynamics. The complex systems under analysis are then compared in a global perspective in order to unveil and visualize hidden relationships among them.
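A minimal sketch of the described procedure in Python (assuming a uniformly sampled output signal; the paper's fitting details may differ): compute the amplitude spectrum with the FFT and fit a power law |X(f)| ≈ c·f^(−b) by least squares in log-log coordinates, taking the pair (c, b) as the system's signature.

```python
import numpy as np

def power_law_signature(x, fs):
    """Fit |X(f)| ~ c * f**(-b) to the amplitude spectrum of signal x
    sampled at fs Hz; the pair (c, b) acts as a signature of the
    underlying dynamics."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)[1:]   # drop the DC bin
    amp = np.abs(np.fft.rfft(x))[1:]
    # log|X| = log c - b log f, so a straight-line fit in log-log
    # coordinates recovers both parameters.
    slope, intercept = np.polyfit(np.log(freqs), np.log(amp), 1)
    return np.exp(intercept), -slope              # c, b
```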
Abstract:
The decomposition of a fractional linear system is discussed in this paper. It is shown that such a system can be decomposed into an integer-order part, corresponding to any existing poles, and a fractional part. The first and second parts are responsible for the short- and long-memory behaviors of the system, respectively, which are known to be characteristic of fractional systems.
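As a worked example of our own (not taken from the paper), a transfer function with one integer-order pole and one fractional term separates cleanly into the two parts the abstract describes:

```latex
G(s) = \underbrace{\frac{1}{s+1}}_{\text{integer-order part}}
     + \underbrace{\frac{1}{s^{\alpha}}}_{\text{fractional part}},
\qquad 0 < \alpha < 1,
\qquad
g(t) = e^{-t} + \frac{t^{\alpha-1}}{\Gamma(\alpha)} .
```

The pole at s = −1 contributes the exponentially decaying term e^{−t} (short memory), while the fractional term decays only as the power law t^{α−1}, so its influence persists indefinitely (long memory).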
Abstract:
Recent embedded processor architectures containing multiple heterogeneous cores and non-coherent caches have renewed attention to the use of Software Transactional Memory (STM) as a building block for developing parallel applications. STM promises to ease concurrent and parallel software development, but relies on the possibility of aborting conflicting transactions to maintain data consistency, which in turn affects the execution time of tasks carrying transactions. As a result, the timing behaviour of the task set may not be predictable, so it is crucial to limit the execution-time overheads resulting from aborts. In this paper we formalise a FIFO-based algorithm to order the sequence of commits of concurrent transactions. Then, we propose and evaluate two non-preemptive scheduling strategies and one SRP-based fully-preemptive strategy, in order to avoid transaction starvation.
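A minimal Python sketch of a FIFO commit-ordering rule of the kind the abstract describes (the class and its API are illustrative; the paper's formal algorithm and scheduling strategies are not reproduced here): each transaction draws a ticket when it starts and may commit only when its ticket reaches the head of the queue, which rules out starvation of older transactions.

```python
import itertools
import threading

class FifoCommitOrder:
    """Sketch: commit concurrent transactions in start order."""
    def __init__(self):
        self._ticket = itertools.count()
        self._now_serving = 0
        self._cond = threading.Condition()

    def begin(self):
        # Each transaction draws a ticket when it starts.
        with self._cond:
            return next(self._ticket)

    def commit(self, ticket, apply_writes):
        # A transaction commits only when its ticket is at the head
        # of the FIFO, so an older transaction can never starve.
        with self._cond:
            while ticket != self._now_serving:
                self._cond.wait()
            apply_writes()
            self._now_serving += 1
            self._cond.notify_all()
```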
Abstract:
Dissertation submitted to obtain the degree of Master in Materials Engineering (Engenharia de Materiais).
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering (Engenharia Informática).
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering (Engenharia Informática).
Abstract:
Current computer systems have evolved from featuring only a single processing unit and limited RAM, in the order of kilobytes or a few megabytes, to include several multicore processors, offering in the order of several tens of concurrent execution contexts, and main memory in the order of several tens to hundreds of gigabytes. This makes it possible to keep all data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either database systems or applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master, while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer from performance loss, when compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers performance improvements under all workloads, when compared to the standalone engine, although scalability remains limited to read-only workloads. Next, we address the scalability limitations for update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads, with scalability limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems, by studying how the order of operations inside transactions influences database performance. We then propose a Read-before-Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads; moreover, it allowed our modified engine to scale on multicores almost up to the total number of cores, while enforcing strong isolation.
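A minimal sketch of the master-slave routing idea behind MacroDB (class and method names are illustrative assumptions, not the dissertation's API): update transactions run on the master replica and their write sets propagate to the slaves, while read-only transactions are spread across the slaves to reduce contention.

```python
import itertools

class MacroDBRouter:
    """Sketch of master-slave routing over replicated engines."""
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves
        self._rr = itertools.cycle(range(len(slaves)))

    def execute(self, txn):
        if txn.is_read_only:
            # Spread read-only transactions across slaves in
            # round-robin fashion to reduce contention.
            return self.slaves[next(self._rr)].run(txn)
        # Update transactions execute on the master; their write
        # sets are then applied to every slave replica.
        result = self.master.run(txn)
        for slave in self.slaves:
            slave.apply(txn.writeset)
        return result
```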
Abstract:
It is a difficult task to avoid the "smart systems" topic when discussing smart prevention and, similarly, it is difficult to address smart systems without focusing on their ability to learn. Following the same line of thought, in the current reality it seems a Herculean task (or an irreparable omission) to approach the topic of certified occupational health and safety management systems (OHSMS) without discussing integrated management systems (IMSs). The available data suggest that an OHSMS seldom operates as the single management system (MS) in a company, so any statement concerning OHSMS should mainly be interpreted from an integrated perspective. A major distinction can be drawn between generic systems that learn, i.e., systems that have "memory", and those that do not. The former are often depicted as adaptive, since they take past events into account to deal with novel, similar and future events, modifying their structure to enable success in their environment. These systems often present nonlinear behavior and huge uncertainty in the forecasting of some events. This paper seeks to portray, for the first time to the best of our knowledge, IMSs as complex adaptive systems (CASs), by listing their properties and dissecting the features that enable them to evolve and self-organize in order to, holistically, fulfil the requirements of different stakeholders and thus thrive by assuring the successful sustainability of a company. Based on the literature review carried out, this is the first time that IMSs are presented as CASs, which may develop fruitful synergies both for the MS and CAS communities. By performing a thorough literature review, and based on some concepts embedded in the "DNA" of the subsystem implementation standards, it is intended, specifically, to identify, determine and discuss the properties of a generic IMS that should be considered in order to classify it as a CAS.
Abstract:
Objective: Immunosenescence and cognitive decline are common markers of the aging process. Taking into consideration the heterogeneity observed in aging processes and the recently described link between lymphocytes and cognition, we herein explored the possibility of an association between alterations in lymphocytic populations and cognitive performance. Methods: In a cohort of cognitively healthy adults (n = 114), previously characterized by diverse neurocognitive/psychological performance patterns, detailed peripheral blood immunophenotyping of both the innate and adaptive immune systems was performed by flow cytometry. Results: Better cognitive performance was associated with lower numbers of effector memory CD4(+) T cells and higher numbers of naive CD8(+) T cells and B cells. Furthermore, effector memory CD4(+) T cells were found to be predictors of general and executive function and memory, even when factors known to influence cognitive performance in older individuals (e.g., age, sex, education, and mood) were taken into account. Conclusions: This is the first study in humans associating specific phenotypes of the immune system with distinct cognitive performance in healthy aging.