861 results for MULTIPLE MEMORY-SYSTEMS
Abstract:
In this paper we outline initial concepts for an immune-inspired algorithm to evaluate price time series data. The proposed solution dynamically evolves a short-term pool of trackers through a process of proliferation and mutation, with each member attempting to map to trends in price movements. Successful trackers feed into a long-term memory pool that can generalise across repeating trend patterns. Tests examine the algorithm's ability to identify trends in a small data set, and the influence of the long-term memory pool is then examined. We find that the algorithm identifies the presented price trends successfully and efficiently.
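One plausible reading of the proliferation/mutation life-cycle sketched above, in code. The affinity measure, pool sizes, and promotion rule here are illustrative assumptions, not the authors' actual operators:

```python
import random

def affinity(tracker, trend):
    """Affinity: negative mean absolute difference between tracker and trend."""
    return -sum(abs(a - b) for a, b in zip(tracker, trend)) / len(tracker)

def evolve(prices, pool_size=20, tracker_len=4, promote_after=3, seed=0):
    """Evolve a short-term tracker pool over price deltas; repeatedly
    successful trackers are promoted to a long-term memory pool."""
    rng = random.Random(seed)
    short_term = [[rng.uniform(-1, 1) for _ in range(tracker_len)]
                  for _ in range(pool_size)]
    successes = [0] * pool_size
    long_term = []
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    for t in range(tracker_len, len(deltas) + 1):
        trend = deltas[t - tracker_len:t]
        # rank trackers by affinity to the current trend window
        ranked = sorted(range(pool_size),
                        key=lambda i: affinity(short_term[i], trend),
                        reverse=True)
        best, worst = ranked[0], ranked[-1]
        successes[best] += 1
        if successes[best] >= promote_after:
            long_term.append(list(short_term[best]))   # promote a copy
            successes[best] = 0
        # proliferation with mutation: mutated clone of the best replaces the worst
        short_term[worst] = [g + rng.gauss(0, 0.1) for g in short_term[best]]
        successes[worst] = 0
    return short_term, long_term
```

A tracker that keeps winning is both reinforced in the short-term pool and, after repeated success, generalised into long-term memory, mirroring the two-pool structure the abstract describes.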
Abstract:
Accurate immunological models offer the possibility of performing high-throughput experiments in silico that can predict, or at least suggest, in vivo phenomena. In this chapter, we compare various models of immunological memory. We first validate an experimental immunological simulator, developed by the authors, by simulating several theories of immunological memory with known results. We then use the same system to evaluate the predicted effects of a theory of immunological memory. The resulting model has not been explored before in artificial immune systems research, and we compare the simulated in silico output with in vivo measurements. Although the theory appears valid, we suggest a common set of reasons why immunological memory models are a useful support tool rather than conclusive in themselves.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access-latency overhead and potential bandwidth saturation on the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies locality richness that exists naturally in connected objects comprising a root object and its reachable set ('rooted sub-graphs'). Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks of the established real-world DaCapo benchmark suite, the widely used SPECjbb benchmark, a Neo4J graph database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture, the NUMA-aware garbage collector shows an average 15% performance improvement.
Furthermore, this performance gain is shown to be a result of improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: it relies on outdated assumptions and generates a constant thread count, yet the Hotspot JVM still uses it in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring this optimal number yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance for the DaCapo benchmarks.
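The same-node stealing policy described in the third contribution can be sketched as follows. This is an illustrative model, not Hotspot code; the `GCThread` class, queue representation, and victim-selection heuristic are assumptions for the sake of the example:

```python
from collections import deque

class GCThread:
    """A GC worker pinned to a NUMA node, with a work queue of references."""
    def __init__(self, tid, node):
        self.tid, self.node = tid, node
        self.queue = deque()

def pick_victim(thief, threads):
    """NUMA-aware stealing: prefer a sibling thread on the same NUMA node,
    so stolen references are likely to point at node-local objects; only
    fall back to a remote node when every sibling queue is empty."""
    siblings = [t for t in threads
                if t is not thief and t.node == thief.node and t.queue]
    if siblings:
        return max(siblings, key=lambda t: len(t.queue))
    remote = [t for t in threads if t is not thief and t.queue]
    return max(remote, key=lambda t: len(t.queue)) if remote else None
```

Even when a remote thread holds more work, a same-node sibling is chosen first, trading perfect load balance for cheaper local memory accesses during tracing.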
Abstract:
There are enormous benefits for any organisation from practising sound records management. In the context of a public university, the importance of good records management includes: facilitating the achievement of the university's mandate; enhancing the efficiency of the university; maintaining a reliable institutional memory; promoting trust; responding to an audit culture; enhancing university competitiveness; supporting the university's fiduciary duty; demonstrating transparency and accountability; and fighting corruption. Records scholars and commentators posit that effective recordkeeping is an essential underpinning of good governance. Despite this apparent positive correlation, recordkeeping struggles to receive the same attention as governance. Evidence abounds of cases of neglect of recordkeeping in universities and other institutions in Sub-Saharan Africa. The apparent absence of sound recordkeeping provided a rationale for revisiting some universities in South Africa and Malawi in order to critically explore the place of recordkeeping in an organisation's strategy and to develop an alternative framework for managing records and documents in an era where good governance is a global agenda. The research is a collective case study in which multiple cases are used to critically explore the relationship between recordkeeping and governance. As qualitative research in the interpretive tradition of enquiry, it is not meant to suggest prescriptive solutions to general recordkeeping problems but rather to provide an understanding of the challenges and opportunities that arise in managing records and documents in the world of governance, audit and risk: what goes on in the workplace; what the problems are; and what alternative approaches might address any existing problem situations.
Research findings show that some institutions are making good use of their governance structures and other drivers for recordkeeping to put in place sound recordkeeping systems. Key governance structures and other drivers for recordkeeping identified include: laws and regulations; governing bodies; audit; risk; technology; reforms; and workplace culture. Other institutions are not managing their records and documents well despite efforts to improve their governance systems. They lack recordkeeping capacity. Areas that determine recordkeeping capacity include: availability of a records management policy; capacity for digital records; availability of a records management unit; senior management support; level of education and training of records management staff; and systems and procedures for storage, retrieval and disposition of records. Although this research reveals that the overall recordkeeping in the selected countries has slightly improved compared with the situation other researchers found a decade ago, it remains unsatisfactory and disjointed from governance. The study therefore proposes governance recordkeeping as an approach to managing records and documents in the world of governance, audit and risk. The governance recordkeeping viewpoint considers recordkeeping as a governance function that should be treated in the same manner as other governance functions such as audit and risk management. Additionally, recordkeeping and governance should be considered symbiotic elements of a strategy; a strategy that neglects recordkeeping may not fulfil the organisation's objectives effectively.
Abstract:
We outline initial concepts for an immune-inspired algorithm to evaluate and predict oil price time series data. The proposed solution dynamically evolves a short-term pool of trackers, with each member attempting to map trends and anticipate future price movements. Successful trackers feed into a long-term memory pool that can generalise across repeating trend patterns. The resulting sequence of trackers, ordered in time, can be used as a forecasting tool. Examination of the pool of evolving trackers also provides valuable insight into the properties of the crude oil market.
Abstract:
Studies of non-equilibrium current fluctuations enable assessing the correlations involved in quantum transport through nanoscale conductors. Beyond the mean current, they provide information on charge statistics and on the presence of coherence, dissipation, disorder, or entanglement. Shot noise, being a temporal integral of the current autocorrelation function, reveals dynamical information. In particular, it detects the presence of non-Markovian dynamics, i.e., memory, within open systems, which has been the subject of many recent theoretical studies. We report on low-temperature shot noise measurements of electronic transport through InAs quantum dots in the Fermi-edge singularity regime and show that it exhibits strong memory effects caused by quantum correlations between the dot and the fermionic reservoirs. Our work, apart from addressing noise in an archetypical strongly correlated system of prime interest, discloses a generic quantum dynamical mechanism occurring at interacting resonant Fermi edges.
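The "temporal integral of the current autocorrelation function" invoked above is conventionally written (in standard notation, not taken from this abstract) as the zero-frequency noise power

```latex
S(0) = 2 \int_{-\infty}^{\infty} \langle \delta I(t)\, \delta I(0) \rangle \, dt ,
\qquad \delta I(t) = I(t) - \langle I \rangle ,
```

with deviations of the Fano factor $F = S(0)/2e\langle I \rangle$ from the Poissonian value $F = 1$ signalling correlations such as the memory effects reported here.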
Abstract:
Liquid crystals (LCs) have revolutionized display and communication technologies. Doping LCs with inorganic nanoparticles such as carbon nanotubes, gold nanoparticles and ferroelectric nanoparticles has garnered the interest of the research community, as such dopants help improve electro-optic performance. In this thesis, we examine a hybrid nanocomposite comprising the liquid crystal 5CB and block copolymer (BCP) functionalized barium titanate ferroelectric nanoparticles. This hybrid system exhibits a giant soft-memory effect: the spontaneous polarization of the ferroelectric nanoparticles couples synergistically with the radially aligned BCP chains to create nanoscopic domains that can be rotated electromechanically and locked in place even after the removal of the applied electric field. The resulting non-volatile memory is several times larger than in the non-functionalized sample and provides insight into the role of non-covalent polymer functionalization. We also present the latest results from a dielectric and spectroscopic study of the field-assisted alignment of gold nanorods.
Abstract:
Nitrous oxide (N2O) is a potent greenhouse gas; the majority of N2O emissions result from agricultural management, particularly the application of N fertilizers to soils. The relationship of N2O emissions to varying sources of N (manures, mineral fertilizers, and cover crops) has not been well evaluated. Here we discuss a novel methodology for estimating precipitation-induced pulses of N2O using flux measurements; results indicated that short-term intensive time-series sampling methods can adequately describe the magnitude of these pulses. We also evaluated the annual N2O emissions from corn-cover crop (Zea mays; cereal rye [Secale cereale], hairy vetch [Vicia villosa], or a biculture) production systems fertilized with multiple rates of subsurface-banded poultry litter, as compared with tillage incorporation or mineral fertilizer. N2O emissions increased exponentially with total N rate; tillage decreased emissions following cover crops with legume components, while the effect of mineral fertilizer was mixed across cover crops.
Abstract:
This paper presents a new tuning methodology of the main controller of an internal model control structure for n×n stable multivariable processes with multiple time delays based on the centralized inverted decoupling structure. Independently of the system size, very simple general expressions for the controller elements are obtained. The realizability conditions are provided and the specification of the closed-loop requirements is explained. A diagonal filter is added to the proposed control structure in order to improve the disturbance rejection without modifying the nominal set-point response. The effectiveness of the method is illustrated through different simulation examples in comparison with other works.
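The paper's specific controller expressions are not reproduced in the abstract; the generic internal model control relations they build on are standard. With plant $G(s)$, plant model $\tilde{G}(s)$, and IMC parameter $Q(s)$ (a filtered approximate inverse of $\tilde{G}$), the feedback-equivalent controller and nominal closed loop are

```latex
C(s) = Q(s)\,\bigl[I - \tilde{G}(s)\,Q(s)\bigr]^{-1},
\qquad
y = G(s)\,Q(s)\,r + \bigl[I - G(s)\,Q(s)\bigr]\,d
\quad (\text{for } \tilde{G} = G),
```

so a diagonal filter acting on the disturbance path, as proposed above, can shape the term $[I - GQ]\,d$ without altering the nominal set-point response $GQ\,r$.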
Abstract:
In energy harvesting communications, users transmit messages using energy harvested from nature. In such systems, transmission policies of the users need to be carefully designed according to the energy arrival profiles. When the energy management policies are optimized, the resulting performance of the system depends only on the energy arrival profiles. In this dissertation, we introduce and analyze the notion of energy cooperation in energy harvesting communications, where users can share a portion of their harvested energy with other users via wireless energy transfer. This energy cooperation enables us to control and optimize the energy arrivals at users to the extent possible. In the classical setting of cooperation, users help each other in the transmission of their data by exploiting the broadcast nature of wireless communications and the resulting overheard information. In contrast to this usual notion of cooperation, which is at the signal level, the energy cooperation we introduce here is at the battery energy level. In a multi-user setting, energy may be abundant at one user, in which case the loss incurred by transferring it to another user may be less than the gain it yields for that user. It is this cooperation that we explore in this dissertation for several multi-user scenarios, where energy can be transferred from one user to another through a separate wireless energy transfer unit. We first consider the offline optimal energy management problem for several basic multi-user network structures with energy harvesting transmitters and one-way wireless energy transfer. In energy harvesting transmitters, energy arrivals in time impose energy causality constraints on the transmission policies of the users. In the presence of wireless energy transfer, energy causality constraints take a new form: energy can flow in time from the past to the future for each user, and from one user to the other at each time.
This requires a careful joint management of energy flow in two separate dimensions, and different management policies are required depending on how users share the common wireless medium and interact over it. In this context, we analyze several basic multi-user energy harvesting network structures with wireless energy transfer. To capture the main trade-offs and insights that arise due to wireless energy transfer, we focus our attention on simple two- and three-user communication systems, such as the relay channel, multiple access channel and the two-way channel. Next, we focus on the delay minimization problem for networks. We consider a general network topology of energy harvesting and energy cooperating nodes. Each node harvests energy from nature and all nodes may share a portion of their harvested energies with neighboring nodes through energy cooperation. We consider the joint data routing and capacity assignment problem for this setting under fixed data and energy routing topologies. We determine the joint routing of energy and data in a general multi-user scenario with data and energy transfer. Next, we consider the cooperative energy harvesting diamond channel, where the source and two relays harvest energy from nature and the physical layer is modeled as a concatenation of a broadcast and a multiple access channel. Since the broadcast channel is degraded, one of the relays has the message of the other relay. Therefore, the multiple access channel is an extended multiple access channel with common data. We determine the optimum power and rate allocation policies of the users in order to maximize the end-to-end throughput of this system. Finally, we consider the two-user cooperative multiple access channel with energy harvesting users. The users cooperate at the physical layer (data cooperation) by establishing common messages through overheard signals and then cooperatively sending them. 
For this channel model, we investigate the effect of intermittent data arrivals at the users. We find the optimal offline transmit power and rate allocation policy that maximizes the departure region. When the users can further cooperate at the battery level (energy cooperation), we find the jointly optimal offline transmit power and rate allocation policy, together with the energy transfer policy, that maximizes the departure region.
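The energy causality constraints described above have a well-known single-user offline consequence: optimal transmit power is constant over segments and non-decreasing over time, with each segment's power set by the tightest cumulative-energy constraint. A minimal sketch of that staircase policy (the function name and data layout are illustrative, not from the dissertation):

```python
from itertools import accumulate

def optimal_powers(energy):
    """Offline single-user transmission under energy causality: in each slot
    the transmitter may only spend energy that has already arrived. The
    optimal powers form non-decreasing constant segments, each set by the
    smallest average power available from the current slot onward."""
    cum = list(accumulate(energy))   # cumulative harvested energy
    powers, used, i = [], 0.0, 0
    while i < len(energy):
        # find the binding constraint: the slot j minimizing average
        # available power over slots i..j
        best_j, best_p = i, float("inf")
        for j in range(i, len(energy)):
            p = (cum[j] - used) / (j - i + 1)
            if p < best_p:
                best_j, best_p = j, p
        powers.extend([best_p] * (best_j - i + 1))
        used, i = cum[best_j], best_j + 1
    return powers
```

For harvested energies `[4, 0, 2]` the policy spreads the early surplus evenly as `[2, 2, 2]`, never spending energy before it arrives; throughput over the slots is then obtained by summing a rate function such as `0.5 * log2(1 + p)`.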
Abstract:
Part 5: Service Orientation in Collaborative Networks
Abstract:
The ongoing quest to find a treatment for the memory loss seen in aging and in many neurological and neurodegenerative diseases has so far been unsuccessful, and memory enhancers are seen as a potential remedy for this brain dysfunction. Recently, we showed that the gene encoding a 414-amino-acid protein called regulator of G-protein signaling 14 (RGS14414) is a robust memory enhancer (Lopez-Aranda et al. 2009: Science). RGS14414 treatment in area V2 of the visual cortex enhanced memory to such an extent that it converted short-term object recognition memory (ORM) of 45 min into long-lasting long-term memory that could be traced even after many months. Now, through targeting of multiple receptors and molecules known to be involved in memory processing, we found that the GluR2 subunit of the AMPA receptor might be key to memory enhancement in RGS-animals. RGS14-animals showed a progressive increase in GluR2 protein expression while processing object information, which reached its highest level after 60 min of object exposure, the time period required for conversion of short-term ORM into long-term memory in our laboratory set-up. Normal rats could retain object information in the brain for 45 min (short-term) but not for 60 min, whereas RGS-treated rats were able to retain the same information for 24 h or longer (long-term). Therefore, the peak expression of the GluR2 subunit seen at 60 min suggests that this protein might be key to memory enhancement and conversion to long-term memory in RGS-animals. In addition, we will also discuss the implication of Hebbian plasticity and the interaction of brain circuits in memory enhancement.
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they are actually experiencing, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
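The abstract does not specify the network architecture or input features used; as an illustration of the modeling idea, here is a minimal single-hidden-layer regressor (standard library only) that maps hypothetical resource shares, e.g. CPU and memory, to a normalized performance score:

```python
import math
import random

def train_perf_model(samples, hidden=8, epochs=500, lr=0.05, seed=1):
    """Fit a tiny neural network: samples is a list of ((cpu, mem), perf)
    pairs with features and targets normalized to [0, 1]. Returns a
    predictor function mapping a (cpu, mem) pair to estimated performance."""
    rng = random.Random(seed)
    # one hidden layer of tanh units, scalar linear output
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
             for wi, bi in zip(w1, b1)]
        return h, sum(wj * hj for wj, hj in zip(w2, h)) + b2

    for _ in range(epochs):
        for x, y in samples:
            h, yhat = forward(x)
            err = yhat - y
            # stochastic gradient descent on squared error
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                for k in range(2):
                    w1[j][k] -= lr * grad_h * x[k]
                b1[j] -= lr * grad_h
            b2 -= lr * err

    return lambda x: forward(x)[1]
```

In the thesis's setting, such a learned mapping from allocations to performance is what enables both VM sizing (invert the model to find the cheapest allocation meeting an SLA) and revenue-driven allocation across VMs.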
Abstract:
After a crime has occurred, one of the most pressing objectives for investigators is to identify and interview any eyewitnesses who can provide information about the crime. Depending on his or her training, the investigative interviewer will use (to varying degrees) mostly yes/no questions and some cued and multiple-choice questions, with few open-ended questions. When the witness cannot generate any more details about the crime, one assumes the eyewitness's memory for the critical event has been exhausted. However, given what we know about memory, is this a safe assumption? In line with the extant literature on human cognition, if one assumes (a) an eyewitness has more memories of the crime available than he or she can access and (b) only explicit probes have been used to elicit information, then one can argue this eyewitness may still be able to provide additional information via implicit memory tests. In accordance with these notions, the present study had two goals: to demonstrate that (1) eyewitnesses can reveal memory implicitly for a detail-rich event and (2) particularly for brief crimes, eyewitnesses can implicitly reveal memory for event details that were inaccessible when probed for explicitly. Undergraduates (N = 227) participated in a psychological experiment in exchange for research credit. Participants were presented with one of three stimulus videos (brief crime vs. long crime vs. irrelevant video). Then, participants either completed a series of implicit memory tasks or worked on a puzzle for 5 minutes. Lastly, participants were interviewed explicitly about the previous video via free recall and recognition tasks. Findings indicated that participants who viewed the brief crime provided significantly more crime-related details implicitly than those who viewed the long crime. The data also showed that participants who viewed the long crime provided marginally more accurate details during free recall than participants who viewed the brief crime.
Furthermore, participants who completed the implicit memory tasks provided significantly less accurate information during the explicit interview than participants who were not given implicit memory tasks. This study was the first to investigate implicit memory for eyewitnesses of a crime. To determine its applied value, additional empirical work is required.