999 results for memory disk


Relevance:

100.00%

Publisher:

Abstract:

Two novel read-only memory (ROM) disks, one with an AgOx mask layer and the other with an AgInSbTe mask layer, are proposed and studied. The AgOx and AgInSbTe films sputtered on premastered substrates with pit depths of 50 nm and pit lengths (spaces) of 380 nm are studied by atomic force microscopy. Disk readout measurements are carried out using a dynamic setup with a laser wavelength of 632.8 nm and an objective lens numerical aperture (NA) of 0.40. Results show that the superresolution effect occurs only at a suitable oxygen flow ratio for the AgOx ROM disk. The best superresolution readout is achieved at an oxygen flow ratio of 0.5, which also gives the smoothest film surface. Compared with the AgOx ROM disk, the AgInSbTe ROM disk has a much smoother film surface and a better superresolution effect. A carrier-to-noise ratio (CNR) above 40 dB can be obtained at an appropriate readout power and readout velocity. The readout CNR of both the AgOx and AgInSbTe ROM disks has a nonlinear dependence on the readout power. The superresolution readout mechanisms of these ROM disks are analyzed and compared as well. (c) 2005 Society of Photo-Optical Instrumentation Engineers.

Relevance:

100.00%

Publisher:

Abstract:

A novel read-only memory (ROM) disk with an AgOx mask layer was proposed and studied in this letter. The AgOx films sputtered on premastered substrates, with pit depths of 50 nm and pit lengths of 380 nm, were studied by atomic force microscopy. The transmittances of these AgOx films were also measured with a spectrophotometer. Disk measurements were carried out on a dynamic setup with a laser wavelength of 632.8 nm and an objective lens numerical aperture (NA) of 0.40. The readout resolution limit of this setup was λ/(4NA), about 400 nm. Results showed that super-resolution readout occurred only when the oxygen flow ratios were at suitable values for these disks. The best super-resolution performance was achieved at an oxygen flow ratio of 0.5, which also gave the smoothest film surface. The super-resolution readout mechanism of these ROM disks was analyzed as well.
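
For reference, the quoted 400 nm limit follows directly from the readout wavelength and numerical aperture given above; a quick check of the arithmetic:

```latex
% Diffraction-limited readout resolution of the dynamic setup
\frac{\lambda}{4\,\mathrm{NA}} = \frac{632.8\ \text{nm}}{4 \times 0.40} \approx 396\ \text{nm} \approx 400\ \text{nm}
```

The 380 nm pit length sits below this limit, which is why reading these pits relies on the super-resolution effect of the mask layer.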

Relevance:

40.00%

Publisher:

Abstract:

"February 1984."

Relevance:

30.00%

Publisher:

Abstract:

We present external-memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: given a set of weighted points in R^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The aggregates we consider in this paper include COUNT, SUM, and MAX. First, we develop a structure for answering two-dimensional range-COUNT queries that uses O(N/B) disk blocks and answers a query in O(log_B N) I/Os, where N is the number of input points and B is the disk block size. The structure can be extended to obtain a near-linear-size structure for answering range-SUM queries using O(log_B N) I/Os, and a linear-size structure for answering range-MAX queries in O(log_B^2 N) I/Os. Our structures can be made dynamic and extended to higher dimensions. (C) 2012 Elsevier B.V. All rights reserved.
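
To make the problem statement concrete, here is a naive in-memory sketch of the range-aggregate query (illustrative only; the paper's contribution is the external-memory structures with the I/O bounds quoted above, and the names below are ours):

```python
from typing import List, Tuple

Point = Tuple[Tuple[float, ...], float]  # (coordinates in R^d, weight)

def range_aggregate(points: List[Point],
                    lo: Tuple[float, ...],
                    hi: Tuple[float, ...],
                    agg: str = "COUNT") -> float:
    """Aggregate the weights of the points inside the orthogonal box [lo, hi]."""
    inside = [w for coords, w in points
              if all(l <= c <= h for c, l, h in zip(coords, lo, hi))]
    if agg == "COUNT":
        return len(inside)
    if agg == "SUM":
        return sum(inside)
    if agg == "MAX":
        return max(inside) if inside else float("-inf")
    raise ValueError(f"unsupported aggregate: {agg}")

# Example: three weighted points in the plane, query rectangle [0,2] x [0,2].
pts = [((1.0, 1.0), 5.0), ((3.0, 1.0), 2.0), ((0.5, 1.5), 7.0)]
print(range_aggregate(pts, (0.0, 0.0), (2.0, 2.0), "SUM"))  # 12.0
```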

Relevance:

30.00%

Publisher:

Abstract:

Fast content-addressable data access mechanisms have compelling applications in today's systems. Many of these exploit the powerful wildcard matching capabilities provided by ternary content addressable memories (TCAMs). For example, TCAM-based implementations of important data mining algorithms have been developed in recent years; these achieve an order-of-magnitude speedup over prevalent techniques. However, large hardware TCAMs are still prohibitively expensive in terms of power consumption and cost per bit. This has been a barrier to extending their exploitation beyond niche and special-purpose systems. We propose an approach to overcome this barrier by extending the traditional virtual memory hierarchy to scale up the user-visible capacity of TCAMs while mitigating the power consumption overhead. By exploiting the notion of content locality (as opposed to spatial locality), we devise a novel combination of software and hardware techniques to provide an abstraction of a large virtual ternary content-addressable space. In the long run, such abstractions enable applications to disassociate considerations of spatial locality and contiguity from the way data is referenced. If successful, ideas for making content addressability a first-class abstraction in computing systems can open up a radical shift in the way applications are optimized for memory locality, just as storage-class memories are soon expected to shift applications away from the way they are typically optimized for disk access locality.
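
As a rough software illustration of the wildcard matching that a TCAM performs in hardware (the function and entry format below are our own, not any particular device's interface; a real TCAM compares the key against all stored entries in parallel and reports the highest-priority match):

```python
from typing import List, Optional

def tcam_lookup(key: str, entries: List[str]) -> Optional[int]:
    """Return the index of the highest-priority (lowest-index) entry matching key.

    Entries are strings over {'0', '1', 'x'}, where 'x' is a "don't care"
    bit that matches either key bit.
    """
    for idx, pattern in enumerate(entries):
        if len(pattern) == len(key) and all(
                p == 'x' or p == k for p, k in zip(pattern, key)):
            return idx
    return None

# Example: the wildcard entry "10xx" matches the key 1011.
print(tcam_lookup("1011", ["0000", "10xx", "1111"]))  # 1
```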

Relevance:

30.00%

Publisher:

Abstract:

The temperature dependence of the thermal properties of Ag8In14Sb55Te23 phase-change memory materials in the crystalline and amorphous states was measured and analyzed. The results show that in the crystalline state the thermal properties decrease monotonically with temperature and exhibit clear crystalline-semiconductor characteristics. The heat capacity, thermal diffusivity, and thermal conductivity decrease from 0.35 J/g K, 1.85 mm²/s, and 4.0 W/m K at 300 K to 0.025 J/g K, 1.475 mm²/s, and 0.25 W/m K at 600 K, respectively. In the amorphous state, the dependence of the thermal properties on temperature does not change significantly, and the material retains glass-like thermal characteristics. Within the temperature range from 320 K to 440 K, the heat capacity fluctuates between 0.27 J/g K and 0.075 J/g K, the thermal diffusivity stays essentially constant at 0.525 mm²/s, and the thermal conductivity decreases from 1.02 W/m K at 320 K to 0.2 W/m K at 440 K. In both the crystalline and amorphous states, Ag8In14Sb55Te23 is more thermally active than Ge2Sb2Te5; that is, the Ag8In14Sb55Te23 composites exhibit stronger thermal conduction and diffusion than the Ge2Sb2Te5 phase-change memory materials.

Relevance:

30.00%

Publisher:

Abstract:

In-Memory Databases (IMDBs), such as SAP HANA, enable new levels of database performance by removing the disk bottleneck and by compressing data in memory. This improved performance means that reports and analytic queries can now be processed on demand, so the goal is to provide near real-time responses to compute- and data-intensive analytic queries. To facilitate this, much work has investigated the use of acceleration technologies within the database context. While current research into the application of these technologies has yielded positive results, it has tended to focus on single database tasks or on isolated single-user requests. This paper uses SHEPARD, a framework for managing accelerated tasks across shared heterogeneous resources, to introduce acceleration into an IMDB. Results show how, using SHEPARD, multiple simultaneous user queries all receive speed-ups from a shared pool of accelerators. Results also show that offloading analytic tasks onto accelerators can have indirect benefits for other database workloads by reducing contention for CPU resources.
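
A minimal sketch of the shared-pool idea described above (the device names and queue-based dispatch are our own simplification, not SHEPARD's actual API): concurrent queries draw accelerators from one shared pool, and offloaded tasks free the CPU for other database work.

```python
import queue
import threading

# Hypothetical shared pool of accelerator handles available to all user queries.
accelerators = queue.Queue()
for name in ("fpga0", "gpu0", "gpu1"):
    accelerators.put(name)

def run_analytic_task(task_id: int) -> None:
    device = accelerators.get()          # block until an accelerator is free
    try:
        # The analytic kernel would be offloaded to `device` here, leaving
        # CPU cores available for other database workloads in the meantime.
        print(f"query task {task_id} running on {device}")
    finally:
        accelerators.put(device)         # return the accelerator to the pool

threads = [threading.Thread(target=run_analytic_task, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```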

Relevance:

30.00%

Publisher:

Abstract:

Current computer systems have evolved from featuring only a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to several multicore processors, offering on the order of several tens of concurrent execution contexts, and main memory on the order of several tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database systems or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master while read-only transactions execute on the slaves, as sketched below. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer a performance loss when compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers performance improvements under all workloads when compared to the standalone engine, while scalability is limited to read-only workloads. Next, we address the scalability limitations for update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads; scalability is limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems by studying how the order of operations inside transactions influences database performance. We then propose a Read-before-Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allows TPC-C to achieve scalable performance on our modified engine for all workloads. Additionally, the RbW pattern allows our modified engine to achieve scalable performance on multicores, almost up to the total number of cores, while enforcing strong isolation.
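
A minimal sketch of the MacroDB routing idea (the engine handles and the SELECT-based read-only test are placeholders; the actual system replicates a full database engine per replica and keeps the replicas consistent): update transactions go to the master, read-only transactions are spread over the slaves.

```python
import itertools

class MacroDBRouter:
    """Toy router: updates go to the master, reads round-robin over the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = itertools.cycle(slaves)

    def route(self, statement: str):
        read_only = statement.lstrip().upper().startswith("SELECT")
        return next(self.slaves) if read_only else self.master

# Example with placeholder engine names.
router = MacroDBRouter("master-engine", ["slave-1", "slave-2"])
print(router.route("SELECT balance FROM accounts"))     # slave-1
print(router.route("UPDATE accounts SET balance = 0"))  # master-engine
```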

Relevance:

30.00%

Publisher:

Abstract:

GraphChi is the first reported disk-based graph engine that can efficiently handle billion-scale graphs on a single PC. GraphChi is able to execute several advanced data mining, graph mining, and machine learning algorithms on very large graphs. With its novel parallel sliding windows (PSW) technique for loading subgraphs from disk to memory to update vertices and edges, it can achieve data-processing performance close to, and even better than, that of mainstream distributed graph engines. The GraphChi authors note that its memory is not effectively utilized with large datasets, which leads to suboptimal computation performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve GraphChi's performance. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode is implemented with only about 40 lines of code added to the original GraphChi engine. Extensive experiments are performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach effectively reduces GraphChi's running time by up to 60% for the PageRank algorithm. Interestingly, pinning a larger portion of data in memory does not always lead to better performance when the whole dataset cannot fit in memory; there is an optimal portion of data to keep in memory to achieve the best computational performance.
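
An illustrative sketch of the Part-in-memory idea (ours, not GraphChi's actual C++ implementation, which works on shards of edges loaded by PSW): a fixed portion of the data is pinned in memory for the entire computation, and only the remaining part is read back from disk when needed.

```python
import pickle

class PartInMemoryStore:
    """Toy store that pins a fixed fraction of the values in memory.

    Pinned keys live in a dict for the whole run; every other key is read
    from the pickle file on access, standing in for the per-interval disk
    loads of the PSW engine.
    """

    def __init__(self, path: str, keys: list, pinned_fraction: float = 0.4):
        self.path = path
        cutoff = int(len(keys) * pinned_fraction)
        with open(path, "rb") as f:
            data = pickle.load(f)
        self.pinned = {k: data[k] for k in keys[:cutoff]}  # stays in memory

    def get(self, key):
        if key in self.pinned:                 # served from memory
            return self.pinned[key]
        with open(self.path, "rb") as f:       # otherwise read from disk
            return pickle.load(f)[key]

# Example: pin 40% of ten values, read the rest from disk on demand.
values = {v: v * v for v in range(10)}
with open("values.pkl", "wb") as f:
    pickle.dump(values, f)
store = PartInMemoryStore("values.pkl", list(values), pinned_fraction=0.4)
print(store.get(2), store.get(9))   # 4 comes from memory, 81 from disk
```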

Relevance:

30.00%

Publisher:

Abstract:

Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary, resulting in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas, such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems. Second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy. Finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.