980 results for memory systems
Abstract:
This thesis studies performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities for performance management. Benchmarking methodologies in many performance studies focus on scaling the load injection without considering the impact of the functional capabilities of CEP systems. This thesis proposes evaluating the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems: CEPBen. The CEPBen benchmark platform exercises the fundamental functional operations of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating their performance. Studies of factors and new metrics are carried out with the CEPBen platform on Esper. Different measurement points for response time in CEP performance management are discussed, and the response time of a targeted event is proposed as a quality-of-service metric to be used alongside the traditional response time. Maximum query load is proposed as a capacity indicator with respect to query complexity, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a factor that influences CEP system performance.
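The "response time of a targeted event" metric above can be illustrated with a minimal sketch (the event schema, threshold, and function names here are invented for illustration, not CEPBen's API): time each event's passage through a filter operation so that per-event latencies are available for quality-of-service analysis.

```python
import time

def filter_events(events, predicate):
    """Apply a CEP-style filter and record a per-event response time."""
    results, latencies = [], []
    for event in events:
        start = time.perf_counter()
        if predicate(event):
            results.append(event)
        latencies.append(time.perf_counter() - start)
    return results, latencies

# Hypothetical stream of temperature readings; "targeted" events exceed 30.
stream = [{"sensor": "s1", "temp": t} for t in (18, 25, 31, 22, 40)]
matched, lat = filter_events(stream, lambda e: e["temp"] > 30)
print(len(matched))  # -> 2 targeted events passed the filter
```

A real harness would measure from event arrival to result emission inside the engine; the point is only that latency is recorded per event, so percentiles for targeted events can be reported separately from the overall response time.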
Abstract:
The problems of constructing self-structuring memory systems for intelligent information processing tools are discussed: systems that form associative links in memory, organize and classify information hierarchically, and generate concepts as information is input. Principles and methods for realizing such self-structuring systems on the basis of a special class of hierarchical network structures, growing pyramidal networks, are studied. Algorithms for building, training and recognition on network structures of this type are proposed, and examples of practical application are demonstrated.
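The "growing" idea can be sketched in a toy form (the class, rule, and names below are illustrative assumptions, not the algorithm studied in the abstract): attribute combinations shared between input objects spawn new concept nodes above them, building the pyramid as information arrives.

```python
# Toy sketch: when two presented objects share a combination of attributes,
# a new concept node is created for that combination.
class PyramidalNetwork:
    def __init__(self):
        self.concepts = {}   # frozenset of attributes -> concept name

    def present(self, attributes):
        """Feed one object (a set of attribute values) into the network."""
        key = frozenset(attributes)
        for known in list(self.concepts):
            shared = key & known
            if len(shared) >= 2 and shared not in self.concepts:
                # a repeated attribute combination grows a new concept node
                self.concepts[shared] = f"concept_{len(self.concepts)}"
        if key not in self.concepts:
            self.concepts[key] = f"concept_{len(self.concepts)}"

net = PyramidalNetwork()
net.present({"wings", "feathers", "beak"})
net.present({"wings", "feathers", "fur"})  # shares {wings, feathers}
print(len(net.concepts))  # -> 3: two objects plus one generated concept
```

This captures only the concept-generation aspect; associative links and recognition would need further machinery.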
Abstract:
This article demonstrates the use of embedded fibre Bragg gratings as a vector bending sensor to monitor two-dimensional shape deformation of a shape memory polymer plate. The plate was made from thermally responsive epoxy-based shape memory polymer, and the two fibre Bragg grating sensors were embedded orthogonally, one on the top and the other on the bottom layer of the plate, in order to measure the strain distribution in the longitudinal and transverse directions separately and also to provide a temperature reference. When the plate was bent at different angles, the Bragg wavelengths of the embedded gratings showed a red-shift of 50 pm/° caused by the bend-induced tensile strain on the plate surface. The finite element method was used to analyse the stress distribution over the whole shape recovery process. The strain transfer rate between the shape memory polymer and the optical fibre, calculated from the finite element model and confirmed by experimental results, was around 0.25. During the experiment, the embedded gratings showed very high temperature sensitivity due to the high thermal expansion coefficient of the shape memory polymer: around 108.24 pm/°C below the glass transition temperature (Tg) and 47.29 pm/°C above Tg. Therefore, the orthogonal arrangement of the two sensors can provide temperature compensation, as one grating measures only the temperature while the other is also subjected to the directional deformation. © The Author(s) 2013.
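The temperature-compensation scheme reduces to simple arithmetic with the coefficients reported above (50 pm per degree of bend, 108.24 pm/°C below Tg); the function below is a back-of-the-envelope sketch, not the authors' calibration procedure.

```python
# Temperature-compensated bend reading from two FBGs, using the reported
# sensitivities: 50 pm/deg of bend, 108.24 pm/degC below Tg.
K_BEND = 50.0    # pm per degree of bend (sensing FBG)
K_TEMP = 108.24  # pm per degC (common to both FBGs below Tg)

def bend_angle(shift_sensing_pm, shift_reference_pm):
    """Subtract the common thermal shift seen by the reference FBG,
    then convert the residual Bragg shift to a bend angle."""
    thermal_pm = shift_reference_pm  # reference FBG sees temperature only
    return (shift_sensing_pm - thermal_pm) / K_BEND

# Example: a 2 degC rise shifts both gratings by 216.48 pm; the sensing
# grating shows an extra 500 pm from bending.
angle = bend_angle(216.48 + 500.0, 216.48)
print(angle)  # -> 10.0 degrees of bend
```

Because the thermal term (over 100 pm/°C) dwarfs the bend term (50 pm/°), this subtraction is essential for any usable bend reading.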
Abstract:
Accelerated graft rejection can be used to determine immune memory in the gorgonian coral Swiftia exserta. The persistence of immune memory was determined in this experiment using replicate sets time-elapsed by 1, 3, and 6 months. Although corals lack circulatory systems, which can be a component of adaptive systemic immunity, this study also attempted to determine whether this gorgonian coral is capable of transmitting immune information throughout its colonial body. Results showed that at each of the time points (one, three, and six months) the secondary response group and the primary response group were significantly different (p = 0.001), demonstrating long-term immune memory. While the primary response group and the third-party specificity response group were similar, both were significantly different (p = 0.001) from the secondary response group, which shows the response to be specific, with memory applicable to the original antigen. Systemic immunity was not found to be present at 15 cm and one week after initial sensitization.
Abstract:
The neurotransmitter dopamine (DA) plays an essential role in reward-related incentive learning, whereby neutral stimuli gain the ability to elicit approach and other responses. In an incentive learning paradigm called conditioned activity, animals receive a stimulant drug in a specific environment over the course of several days. When then placed in that environment drug-free, they generally display a conditioned hyperactive response. Modulating DA transmission at different time points during the paradigm has been shown to disrupt or enhance conditioning effects. For instance, blocking DA D2 receptors before sessions generally impedes the acquisition of conditioned activity. To date, no studies have examined the role of D2 receptors in the consolidation phase of conditioned activity; this phase occurs immediately after acquisition and involves the stabilization of memories for long-term storage. To investigate this possible role, I trained Wistar rats (N = 108) in the conditioned activity paradigm produced by amphetamine (2.0 mg/kg, intraperitoneally) to examine the effects of the D2 antagonist haloperidol (doses 0.10, 0.25, 0.50, 0.75, 1.0, & 2.0 mg/kg, intraperitoneally) administered 5 min after conditioning sessions. Two positive control groups received haloperidol 1 h before conditioning sessions (doses 1.0 mg/kg and 2.0 mg/kg). The results revealed that post-session haloperidol at all doses tested did not disrupt the consolidation of conditioned activity, while pre-session haloperidol at 2.0 mg/kg prevented acquisition, with the 1.0 mg/kg group trending toward a block. Additionally, post-session haloperidol did not diminish activity during conditioning days, unlike pre-session haloperidol. One possible reason for these findings is that the consolidation phase may have begun earlier than when haloperidol was administered, since the conditioned activity paradigm uses longer learning sessions than those generally used in consolidation studies. 
Future studies may test if conditioned activity can be achieved with shorter sessions; if so, haloperidol would then be re-tested at an earlier time point. D2 receptor second messenger systems may also be investigated in consolidation. Since drug-related incentive stimuli can evoke cravings in those with drug addiction, a better understanding of the mechanisms of incentive learning may lead to the development of solutions for these individuals.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This article examines two American vampire narratives that depict the perspective and memories of a main character who is turned into a vampire in the US in the nineteenth century: Jewelle Gomez’s novel The Gilda Stories (1991), and the first season of Alan Ball’s popular TV series True Blood (2008). In both narratives, the relationship between the past and the present, embodied by the main vampire character, is of utmost importance, but the two narratives use vampire conventions as well as representations of and references to the nineteenth century in different ways that comment on, revise, or reinscribe generic and socio-historical assumptions about race, gender, class, and sexuality.
Abstract:
Young children often experience relational memory failures, which are thought to be due to underdeveloped recollection processes. Manipulations with adults, however, have suggested that relational memory tasks can be accomplished with familiarity, a process that is fully developed during early childhood. The goal of the present study was to determine whether relational memory performance could be improved in early childhood by teaching children a memory strategy (i.e., unitization) shown to increase familiarity in adults. Six- and eight-year-old children were taught to use visualization strategies that either unitized or did not unitize pictures and colored borders. Analysis revealed inconclusive results regarding differences in familiarity between the two conditions, suggesting that the unitization memory strategy did not improve the contribution of familiarity as it has been shown to do in adults. Based on these findings, it cannot be concluded that unitization strategies increase the contribution of familiarity in childhood.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
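The multi-hop neighborhoods that NSCALE operates on can be sketched with a plain breadth-first search (the function and graph below are invented for illustration, not NSCALE's API): gather each query vertex's k-hop neighborhood so a user program can run over the subgraph rather than over one vertex at a time.

```python
from collections import deque

def k_hop_neighborhood(adj, source, k):
    """BFS out to k hops; returns the vertex set of the ego network."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                frontier.append((w, depth + 1))
    return seen

# Tiny example graph as an adjacency dict.
graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": ["e"]}
print(sorted(k_hop_neighborhood(graph, "a", 2)))  # -> ['a', 'b', 'c', 'd']
```

In a vertex-centric framework this extraction would require k rounds of message passing per neighborhood, which is the overhead the neighborhood-centric model avoids.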
Abstract:
In this paper we outline initial concepts for an immune-inspired algorithm to evaluate price time series data. The proposed solution evolves a short-term pool of trackers dynamically through a process of proliferation and mutation, with each member attempting to map to trends in price movements. Successful trackers feed into a long-term memory pool that can generalise across repeating trend patterns. Tests are performed to examine the algorithm’s ability to identify trends in a small data set. The influence of the long-term memory pool is then examined. We find the algorithm is able to identify the presented price trends successfully and efficiently.
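The proliferate-and-mutate loop described above can be sketched as follows (the tracker representation, fitness measure, and parameters are assumptions for illustration, not the authors' algorithm): trackers are candidate slopes, the best matches to the window's trend survive, and survivors are cloned with small mutations.

```python
import random

def evolve_trackers(prices, pool_size=20, generations=30, seed=1):
    """Evolve slope 'trackers' toward the trend of a price window."""
    rng = random.Random(seed)
    trend = (prices[-1] - prices[0]) / (len(prices) - 1)  # target slope
    pool = [rng.uniform(-1.0, 1.0) for _ in range(pool_size)]
    for _ in range(generations):
        pool.sort(key=lambda s: abs(s - trend))  # best match first
        survivors = pool[: pool_size // 2]       # selection
        # proliferation with mutation: clone survivors with small noise
        clones = [s + rng.gauss(0.0, 0.05) for s in survivors]
        pool = survivors + clones
    best = min(pool, key=lambda s: abs(s - trend))
    return best, trend

# A steadily rising price series; the fittest tracker approximates slope 1.
best, trend = evolve_trackers([100, 101, 102, 103, 104])
```

Because survivors are retained unchanged, the best tracker never degrades across generations; a long-term memory pool would additionally store trackers that succeed across repeated windows.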
Abstract:
Accurate immunological models offer the possibility of performing high-throughput experiments in silico that can predict, or at least suggest, in vivo phenomena. In this chapter, we compare various models of immunological memory. We first validate an experimental immunological simulator, developed by the authors, by simulating several theories of immunological memory with known results. We then use the same system to evaluate the predicted effects of a theory of immunological memory. The resulting model has not been explored before in artificial immune systems research, and we compare the simulated in silico output with in vivo measurements. Although the theory appears valid, we suggest that immunological memory models are best seen as a useful support tool, not conclusive in themselves.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system’s distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness that exists naturally in connected objects comprising a root object and its reachable set, termed 'rooted sub-graphs'. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite. In addition, the evaluation involves the widely used SPECjbb benchmark and the Neo4j graph database Java benchmark, as well as an artificial benchmark. The results of the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average of 15% performance improvement.
Furthermore, this performance gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy uses outdated assumptions and generates a constant thread count. In fact, the Hotspot JVM still uses this policy in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number of garbage collection threads yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average of 21% improvement in garbage collection performance for the DaCapo benchmarks.
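The flavour of a per-cycle heuristic for sizing the GC thread count can be sketched as follows (the inputs, weights, and names are invented for illustration; the dissertation's actual model may differ): derive the thread count from observed collection work rather than using a constant.

```python
import os

def gc_threads(live_objects, objects_per_thread=2_000_000, hw_threads=None):
    """Pick a GC thread count proportional to the live-object count,
    clamped to [1, number of hardware threads]."""
    if hw_threads is None:
        hw_threads = os.cpu_count() or 1
    wanted = -(-live_objects // objects_per_thread)  # ceiling division
    return max(1, min(wanted, hw_threads))

print(gc_threads(5_000_000, hw_threads=16))  # -> 3 threads for 5M live objects
print(gc_threads(100_000, hw_threads=16))    # -> 1 thread for a small heap
```

The point of recomputing this each cycle is that live-object counts change as the application runs, so a constant thread count is either wasteful for small collections or a bottleneck for large ones.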
Abstract:
We outline initial concepts for an immune-inspired algorithm to evaluate and predict oil price time series data. The proposed solution evolves a short-term pool of trackers dynamically, with each member attempting to map trends and anticipate future price movements. Successful trackers feed into a long-term memory pool that can generalise across repeating trend patterns. The resulting sequence of trackers, ordered in time, can be used as a forecasting tool. Examination of the pool of evolving trackers also provides valuable insight into the properties of the crude oil market.
Abstract:
Studies of non-equilibrium current fluctuations enable assessing the correlations involved in quantum transport through nanoscale conductors. They provide information beyond the mean current on charge statistics and on the presence of coherence, dissipation, disorder, or entanglement. Shot noise, being a temporal integral of the current autocorrelation function, reveals dynamical information. In particular, it detects the presence of non-Markovian dynamics, i.e., memory, within open systems, which has been the subject of many recent theoretical studies. We report on low-temperature shot noise measurements of electronic transport through InAs quantum dots in the Fermi-edge singularity regime and show that it exhibits strong memory effects caused by quantum correlations between the dot and the fermionic reservoirs. Our work, apart from addressing noise in an archetypal strongly correlated system of prime interest, discloses a generic quantum dynamical mechanism occurring at interacting resonant Fermi edges.
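Shot noise measurements of this kind are conventionally summarized by the Fano factor F = S / (2eI), the ratio of the measured noise power to the Poissonian value 2eI: F = 1 for uncorrelated transport, while correlations of the kind described above push F away from 1. A minimal computation (the function name and example values are illustrative):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, in coulombs

def fano_factor(noise_a2_per_hz, current_a):
    """Ratio of measured shot noise power spectral density (A^2/Hz)
    to the Poissonian value 2eI for a given mean current (A)."""
    return noise_a2_per_hz / (2 * E_CHARGE * current_a)

# Example: a 1 nA current exhibiting exactly the Poissonian noise 2eI.
poissonian = 2 * E_CHARGE * 1e-9
print(fano_factor(poissonian, 1e-9))  # -> 1.0
```

Sub- or super-Poissonian values (F below or above 1) are the experimental signature from which correlation and memory effects are inferred.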