144 results for caching


Relevance:

10.00%

Publisher:

Abstract:

A 3D geographic information system (GIS) is data- and computation-intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth. Data reduction and performance optimization techniques are therefore of critical importance in quality of service (QoS) management for online 3D GIS. In this research, QoS management issues regarding distributed 3D GIS presentation were studied to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation.

To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level-of-detail (LOD) control, and mesh simplification algorithms were proposed to effectively reduce terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm was designed as a hybrid that combines edge straightening and quad-tree compression to reduce mesh complexity by removing geometrically redundant vertices. The main advantage of this mesh simplification algorithm is that the grid mesh can be processed directly in parallel without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, predictive data retrieval, and caching, were also proposed.

A prototype of the proposed 3D TerraFly implemented in this research demonstrates the effectiveness of our QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
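
The adaptive LOD control described above selects one of up to three detail levels per terrain sub-region from the viewing distance and dynamic quality measurements. The following is a minimal sketch of that idea only, not the TerraFly implementation; the thresholds, the frame-time measure, and all names are illustrative assumptions.

```python
# Minimal sketch of distance-based adaptive LOD selection with a dynamic
# quality feedback term. Thresholds and names are illustrative assumptions,
# not the TerraFly code.

def select_lod(view_distance_m: float, frame_time_ms: float,
               near: float = 500.0, far: float = 2000.0) -> int:
    """Return 0 (finest), 1, or 2 (coarsest) for one terrain sub-region."""
    # Base the decision on viewing distance.
    if view_distance_m < near:
        lod = 0
    elif view_distance_m < far:
        lod = 1
    else:
        lod = 2
    # Degrade detail further when the renderer misses its frame budget.
    if frame_time_ms > 33.0:          # roughly below 30 fps
        lod = min(lod + 1, 2)
    return lod

# Example: a nearby region rendered while the frame rate is low drops one level.
print(select_lod(view_distance_m=300.0, frame_time_ms=40.0))   # -> 1
```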

Relevance:

10.00%

Publisher:

Abstract:

With the exponentially increasing demand for GIS data visualization systems in areas such as urban planning, environmental and climate change monitoring, weather simulation, and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, current web GIS techniques are only suitable for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these open problems: how to provide a large-scale dynamic vector and raster data visualization service with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make that dynamic visualization service as rapid as a static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed, and implemented. The contributions include: quadtree-based indexing and parallel map tiling, the Legend String, vector data visualization with dynamic layer overlaying, vector data time series visualization, an algorithm for vector data rendering, an algorithm for raster data re-projection, an algorithm for eliminating superfluous levels of detail, an algorithm for vector data gridding and re-grouping, and server-side cluster caching of vector and raster data.
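
The quadtree-based indexing of map tiles mentioned above can be illustrated with a standard quadkey encoding, which maps a tile's column, row, and zoom level to a single quadtree path. This is a generic sketch of the technique, not necessarily the dissertation's exact scheme.

```python
# Sketch of quadtree-based tile indexing: encode a tile's column/row at a given
# zoom level as a quadkey string, one quadrant digit per zoom level.

def tile_quadkey(x: int, y: int, zoom: int) -> str:
    """Encode tile column x and row y at a zoom level as a quadtree key."""
    key = []
    for level in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (level - 1)
        if x & mask:
            digit += 1      # right half of the parent tile
        if y & mask:
            digit += 2      # bottom half of the parent tile
        key.append(str(digit))
    return "".join(key)

# Example: the tile at column 3, row 5, zoom 3 gets the key "213".
print(tile_quadkey(3, 5, 3))
```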

Relevance:

10.00%

Publisher:

Abstract:

Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through the use of memory caches, at the price of some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest.

The most important decision about managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose what data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, this translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. Finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees at the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and the contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and eliminating the space wasted by contention. Finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
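
One of the ideas above, removing duplicated data from a cache shared by consolidated workloads, can be sketched by indexing cached blocks by a content hash so that identical blocks from different workloads occupy a single slot. The class below is an illustrative toy with LRU eviction, not the dissertation's system; all names are assumptions.

```python
# Toy sketch of a deduplicating buffer cache: blocks are indexed by a content
# hash, so identical data cached by different workloads is stored only once.

import hashlib
from collections import OrderedDict

class DedupCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.by_hash = OrderedDict()    # content hash -> block data, in LRU order
        self.by_key = {}                # (workload, block id) -> content hash

    def put(self, workload: str, block_id: int, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.by_key[(workload, block_id)] = digest
        if digest in self.by_hash:
            self.by_hash.move_to_end(digest)   # duplicate block: no extra space used
            return
        if len(self.by_hash) >= self.capacity:
            self.by_hash.popitem(last=False)   # evict the least recently used block
        self.by_hash[digest] = data

    def get(self, workload: str, block_id: int):
        digest = self.by_key.get((workload, block_id))
        if digest is None or digest not in self.by_hash:
            return None                        # cache miss
        self.by_hash.move_to_end(digest)
        return self.by_hash[digest]
```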

Relevance:

10.00%

Publisher:

Abstract:

Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade, and energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary, which results in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers but also in mobile devices, where battery-based energy is a scarce resource; reducing the energy consumption of these devices allows them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem, because improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems. Second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy. Finally, we identify and explore solutions to the page fetch-before-update problem in caching systems, which can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
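
The second contribution, a flash-based cache that increases disk idleness, rests on a simple idea: serve reads and absorb writes in flash, and wake the disk only on a flash miss or when accumulated dirty data must be flushed. The sketch below illustrates that idea only; the thresholds, names, and spin-up accounting are assumptions, not the thesis design.

```python
# Simplified sketch of a flash cache in front of a spun-down disk: the disk is
# woken only on a flash miss or when the dirty data in flash must be flushed.

class FlashFrontedDisk:
    def __init__(self, flash_capacity: int, flush_threshold: float = 0.9):
        self.flash = {}                       # block id -> data held in flash
        self.dirty = set()                    # blocks not yet written to disk
        self.capacity = flash_capacity
        self.flush_threshold = flush_threshold
        self.disk_spinups = 0                 # crude proxy for energy cost

    def write(self, block_id: int, data: bytes) -> None:
        self.flash[block_id] = data
        self.dirty.add(block_id)
        # Only spin the disk up when enough dirty data has accumulated.
        if len(self.dirty) >= self.flush_threshold * self.capacity:
            self._spin_up_and_flush()

    def read(self, block_id: int) -> bytes:
        if block_id in self.flash:
            return self.flash[block_id]       # hit: the disk stays idle
        self.disk_spinups += 1                # miss: the disk must be woken
        data = b""                            # placeholder for a real disk read
        self.flash[block_id] = data
        return data

    def _spin_up_and_flush(self) -> None:
        self.disk_spinups += 1
        self.dirty.clear()                    # pretend dirty blocks reached the disk
```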

Relevance:

10.00%

Publisher:

Abstract:

Unequal improvements in processor and I/O speeds have made many applications, such as databases and operating systems, increasingly I/O bound. Many schemes, such as disk caching and disk mirroring, have been proposed to address this problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure-mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors were proposed to address the write problem, and interleaved declustering to address the load-balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes. In addition, we also study traditional mirroring to provide a common basis for comparison.
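
As a point of reference for the comparison above, traditional mirroring can be sketched as follows: every write updates both physical disks, reads are balanced between them, and after a failure all requests fall through to the survivor. This toy model is illustrative only, not any of the schemes studied in the thesis.

```python
# Toy model of traditional disk mirroring: writes hit both disks (the write
# penalty), reads are balanced across them, and a failure leaves one survivor.

import random

class MirroredVolume:
    def __init__(self):
        self.disks = [dict(), dict()]   # two physical disk images
        self.failed = [False, False]

    def write(self, block_id: int, data: bytes) -> None:
        # The write cost of mirroring: both copies must be updated.
        for i, disk in enumerate(self.disks):
            if not self.failed[i]:
                disk[block_id] = data

    def read(self, block_id: int) -> bytes:
        # Reads can be balanced across the two disks; here we pick at random.
        candidates = [i for i in range(2) if not self.failed[i]]
        return self.disks[random.choice(candidates)][block_id]

    def fail(self, disk_index: int) -> None:
        # During failure-mode operation, the survivor services all requests.
        self.failed[disk_index] = True
```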

Relevance:

10.00%

Publisher:

Abstract:

Bioturbation in marine sediments has two main aspects of interest for palaeo-environmental studies. First, the traces left by burrowing organisms reflect the prevailing environmental conditions at the seafloor and can thus be used to reconstruct the ecological and palaeoceanographic situation; traces have the advantage over other proxies of practically always being preserved in situ. Second, for high-resolution stratigraphy, bioturbation is a nuisance because the stirring and mixing processes destroy the stratigraphic record. In order to evaluate the applicability of biogenic traces as palaeoenvironmental indicators, a number of gravity cores from the Portuguese continental slope, covering the period from the last glacial to the present, were investigated through X-ray radiographs. In addition, physical and chemical parameters were determined to define the environmental niche in each core interval. A number of traces could be recognized, the most important being Thalassinoides, Planolites, Zoophycos, Chondrites, Scolicia, Palaeophycus, Phycosiphon, and the generally pyritized traces Trichichnus and Mycellia. The shifts between the different ichnofabrics agree strikingly well with the variations in ocean circulation caused by the changing climate. On the upper and middle slope, variations in current intensity and oxygenation of the Mediterranean Outflow Water were responsible for shifts in the ichnofabric: larger traces such as Planolites and Thalassinoides dominated in coarse, well-oxygenated intervals, while small traces such as Chondrites and Trichichnus dominated in fine-grained, poorly oxygenated intervals. In contrast, on the lower slope, where calm and steady sedimentation conditions prevail, changes in sedimentation rate and nutrient flux controlled variations in the distribution of larger traces such as Planolites, Thalassinoides, and Palaeophycus. Additionally, distinct layers with abundant Chondrites correspond to Heinrich events 1, 2, and 4 and are interpreted as a response to incursions of nutrient-rich, oxygen-depleted Antarctic waters during phases of reduced thermohaline circulation. The results clearly show that not a single factor but a combination of several factors is necessary to explain the changes in ichnofabric. Furthermore, the large variations in the extent and type of bioturbation and tiering between different settings show that a more detailed knowledge of the factors governing bioturbation is necessary if we are to fully comprehend how proxy records are disturbed. A first attempt to automate part of the recognition and quantification of the ichnofabric was made using the DIAna image analysis program on digitized X-ray radiographs. The results show that an enhanced abundance of pyritized microburrows appears to be coupled to organic-rich sediments deposited under dysoxic conditions, while coarse-grained sediments inhibit the formation of pyritized burrows. However, even small changes in the program settings controlling the grey-scale threshold and the sensitivity resulted in large shifts in the number of detected burrows; therefore, this method can only be considered semi-quantitative. Through AMS-14C dating of sample pairs from the Zoophycos spreiten and the surrounding host sediment, age reversals of up to 3,320 years could be demonstrated for the first time: the spreiten material is always several thousand years younger than the surrounding host sediment.
Together with detailed X-ray radiograph studies, this shows that the trace maker collects the material on the seafloor and then transports it downwards, by more than one meter in places, into the underlying sediment, where it is deposited in distinct structures termed spreiten. Age reversals of several thousand years can therefore be expected whenever Zoophycos is unknowingly sampled. These results also render the ethological models hitherto proposed for Zoophycos largely implausible. A combination of detritus feeding, short-term caching, and hibernation, possibly combined with gardening, is therefore suggested as an explanation for this complicated burrow.

Relevance:

10.00%

Publisher:

Abstract:

Content Centric Network (CCN) is a proposed future Internet architecture based on the concept of content names instead of the host names used in the traditional Internet architecture. The CCN architecture may introduce changes to the existing Internet architecture or replace it completely. In this paper, we present modifications to the existing Domain Name System (DNS) based on the CCN architecture requirements, without changing the existing routing architecture. The proposed solution thus achieves the benefits of both CCN and the existing network infrastructure (i.e., content-based routing independent of host location, caching, and content delivery protocols).
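
The general idea, resolving a content name to candidate hosts through a DNS-like lookup and caching the answer so that repeated requests for the same content skip resolution, can be sketched as below. This is not the paper's exact DNS modification; the zone data, TTL handling, and all names are assumptions.

```python
# Illustrative sketch: map a content name to candidate hosts via a DNS-like
# lookup and cache the answer with a TTL, so repeat lookups are served locally.

import time

AUTHORITATIVE = {                     # stand-in for DNS zone data (assumption)
    "/videos/lecture1": ["cdn1.example.net", "cdn2.example.net"],
}

class ContentNameResolver:
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self.cache = {}               # content name -> (hosts, expiry time)

    def resolve(self, content_name: str):
        entry = self.cache.get(content_name)
        if entry and entry[1] > time.time():
            return entry[0]                          # answered from the cache
        hosts = AUTHORITATIVE.get(content_name, [])  # simulated "DNS" lookup
        self.cache[content_name] = (hosts, time.time() + self.ttl)
        return hosts

resolver = ContentNameResolver()
print(resolver.resolve("/videos/lecture1"))
```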

Relevance:

10.00%

Publisher:

Abstract:

One of the current challenges in model-driven engineering is enabling effective collaborative modelling. Two common approaches are either storing the models in a central repository, or keeping them under a traditional file-based version control system and building a centralized index for model-wide queries. Either way, special attention must be paid to the nature of these repositories and indexes as networked services: they should remain responsive even with an increasing number of concurrent clients. This paper presents an empirical study on the impact of certain key decisions on the scalability of concurrent model queries, using an Eclipse Connected Data Objects model repository and a Hawk model index. The study evaluates the impact of the network protocol, the API design, and the internal caching mechanisms, and analyzes the reasons for their varying performance.
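
One kind of server-side caching decision such a study evaluates can be sketched as a thread-safe memoization of query results that is invalidated whenever the indexed models change. The class below is a generic illustration; it does not use the CDO or Hawk APIs, and all names are assumptions.

```python
# Generic sketch of a server-side query-result cache shared by many concurrent
# clients: results are memoized per query string and dropped on model updates.

import threading

class QueryResultCache:
    def __init__(self):
        self.results = {}                 # query string -> cached result
        self.lock = threading.Lock()      # shared by many concurrent clients

    def run(self, query: str, execute):
        """Return a cached result, or execute the query and cache its result."""
        with self.lock:
            if query in self.results:
                return self.results[query]
        result = execute(query)           # potentially expensive model-wide query
        with self.lock:
            self.results[query] = result
        return result

    def invalidate(self):
        """Drop all cached results when the underlying models are updated."""
        with self.lock:
            self.results.clear()
```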

Relevance:

10.00%

Publisher:

Abstract:

Concerns have been raised in the past several years that introducing new transport protocols on the Internet has become increasingly difficult, not least because there is no agreed-upon way for a source end host to find out if a transport protocol is supported all the way to a destination peer. A solution to a similar problem (finding out support for IPv6) has been proposed and is currently being deployed: the Happy Eyeballs (HE) mechanism. HE has also been proposed as an efficient way for an application to select an appropriate transport protocol. Still, there are few, if any, performance evaluations of transport HE. This paper demonstrates that transport HE could indeed be a feasible solution to the transport support problem. The paper evaluates HE between TCP and SCTP using TLS-encrypted and unencrypted traffic, and shows that although there is indeed a cost in terms of CPU load to introduce HE, the cost is relatively small, especially in comparison with the cost of using TLS encryption. Moreover, our results suggest that HE has a marginal impact on memory usage. Finally, by introducing caching of previous connection attempts, the additional cost of transport HE could be significantly reduced.
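
The caching of previous connection attempts mentioned at the end can be sketched as follows: race the candidate transports, keep whichever connects first, and remember the winner per destination so that later connections skip the race. The attempt functions are placeholders (real SCTP support is platform-specific), the cancellation call assumes Python 3.9 or later, and the whole sketch illustrates the idea rather than the paper's implementation.

```python
# Sketch of transport Happy Eyeballs with a cache of previous outcomes: race
# the candidate transports once per host, then reuse the winning transport.

import concurrent.futures

transport_cache = {}   # hostname -> name of the transport that worked last time

def happy_eyeballs(host: str, attempts: dict):
    """attempts maps a transport name (e.g. 'tcp', 'sctp') to a connect function."""
    cached = transport_cache.get(host)
    if cached in attempts:
        return cached, attempts[cached]()           # cached winner: no race needed
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(attempts))
    futures = {pool.submit(fn): name for name, fn in attempts.items()}
    try:
        for fut in concurrent.futures.as_completed(futures):
            try:
                conn = fut.result()
            except OSError:
                continue                            # this transport attempt failed
            winner = futures[fut]
            transport_cache[host] = winner          # remember the winner for next time
            return winner, conn
        raise OSError("no transport succeeded")
    finally:
        # Abandon the losing attempt (cancel_futures requires Python 3.9+).
        pool.shutdown(wait=False, cancel_futures=True)
```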