90 results for caching
Abstract:
Information-centric networking (ICN) is a new communication paradigm that has been proposed to cope with drawbacks of host-based communication protocols, namely scalability and security. In this thesis, we base our work on Named Data Networking (NDN), which is a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks. In a first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement to initiate information-centric communication is the knowledge of existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios for different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbor node that provides the content. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information of disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible, since broadcast transmissions result in more duplicate Data transmissions, lower data rates, and higher energy consumption on mobile nodes that are not interested in overheard Data, compared to unicast communication. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source. The second approach, called RC-NDN, targets the efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions. If requesters and content sources are not in one-hop distance to each other, requests need to be forwarded via multi-hop routing. Therefore, in a second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, the performance may degrade in mobile networks if preferred forwarders regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU for multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path even in case of multiple concurrent requesters.
To perform quick retransmissions in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication. Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths. Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, retrieve the content and return it to the requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations. To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content if free space is required. Since our persistent caching concept can complement regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance) while real-time traffic can still be maintained and served from the content store.
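The persistent caching concept described above can be pictured with a small sketch (a hedged illustration, not the thesis implementation): popular content is kept in a repository and the least-requested entries are deleted when free space is needed. The class name and eviction policy below are illustrative assumptions.

```python
# Illustrative sketch of a popularity-based persistent cache: popular content
# objects are kept in a repository and the least-popular entry is evicted when
# free space is needed. Names and policy details are assumptions, not the
# thesis implementation.
from dataclasses import dataclass, field

@dataclass
class PersistentCache:
    capacity: int                                 # maximum number of stored content objects
    store: dict = field(default_factory=dict)     # content name -> content bytes
    hits: dict = field(default_factory=dict)      # content name -> request count (popularity)

    def insert(self, name: str, content: bytes) -> None:
        """Store content, evicting the least-popular entry if the repository is full."""
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.hits.get(n, 0))
            del self.store[victim]
            self.hits.pop(victim, None)
        self.store[name] = content
        self.hits.setdefault(name, 0)

    def lookup(self, name: str):
        """Return cached content (if any) and update its popularity counter."""
        if name in self.store:
            self.hits[name] += 1
            return self.store[name]
        return None

cache = PersistentCache(capacity=2)
cache.insert("/videos/a", b"...")
cache.insert("/videos/b", b"...")
cache.lookup("/videos/a")            # /videos/a becomes more popular
cache.insert("/videos/c", b"...")    # evicts /videos/b (least popular)
```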
Abstract:
Resource pulses are common in various ecosystems and often have large impacts on ecosystem functioning. Many animals hoard food during resource pulses, yet how this behaviour affects pulse diffusion through trophic levels is poorly known because of a lack of individual-based studies. Our objective was to examine how the hoarding behaviour of arctic foxes (Alopex lagopus) preying on a seasonal pulsed resource (goose eggs) was affected by annual and seasonal changes in resource availability. We monitored foraging behaviour of foxes in a greater snow goose (Chen caerulescens atlanticus) colony during 8 nesting seasons that covered 2 lemming cycles. The number of goose eggs taken and cached per hour by foxes declined 6-fold from laying to hatching, while the proportion of eggs cached remained constant. In contrast, the proportion of eggs cached by foxes fluctuated in response to the annual lemming cycle independently of the seasonal pulse of goose eggs. Foxes cached the majority of eggs taken (> 90%) when lemming abundance was high or moderate but only 40% during the low phase of the cycle. This likely occurred because foxes consumed a greater proportion of goose eggs to fulfill their energy requirement at low lemming abundance. Our study clearly illustrates a behavioural mechanism that extends the energetic benefits of a resource pulse. The hoarding behaviour of the main predator enhances the input of allochthonous nutrients brought by migrating birds from the south into the arctic terrestrial ecosystem. This could increase average predator density and promote indirect interactions among prey.
Abstract:
1. Successful seed dispersal by animals is assumed to occur when undamaged seeds arrive at a favourable microsite. Most seed removal and dispersal studies consider only two possible seed fates: predation or escape intact. Whether partial consumption of seeds has ecological implications for natural regeneration is unclear. We studied partial consumption of seeds in a rodent-dispersed oak species. 2. Fifteen percent of dispersed acorns were found partially eaten in a field experiment. Most damage affected only the basal portion of the seeds, resulting in no embryo damage. Partially eaten acorns had no differences in dispersal distance compared to intact acorns but were recovered at farther distances than completely consumed acorns. 3. Partially eaten acorns were found under shrub cover, unlike intact acorns, which were mostly dispersed to open microhabitats. 4. Partially eaten acorns were not found buried proportionally more often than intact acorns, leading to desiccation and exposure to biotic agents (predators, bacteria and fungi). However, partial consumption caused more rapid germination, which enables the acorns to tolerate the negative effects of exposure. 5. Re-caching and shrub cover as the destination microhabitat promote partial seed consumption. Larger acorns escaped predation more often and had higher uneaten cotyledon mass. Satiation at the seed level is the most plausible explanation for partial consumption. 6. Partial consumption caused no differences in root biomass when acorns experienced only small cotyledon loss. However, root biomass was lower when acorns experienced heavy loss of tissue but, surprisingly, they produced longer roots, which allow the seeds to gain access to deeper resources sooner. 7. Synthesis. Partial consumption of acorns is an important event in the oak regeneration process, both quantitatively and qualitatively. Most acorns were damaged non-lethally, without decreasing either dispersal distances or the probability of successful establishment. Faster germination and production of longer roots allow partially eaten seeds to better tolerate the exposure disadvantages caused by the removal of the pericarp and the non-buried deposition. Consequently, partially consumed seeds can contribute significantly to natural regeneration and must be considered in future seed dispersal studies.
Abstract:
Motivated by the increasing demand for and the challenges of video streaming, in this thesis we investigate methods by which the quality of the video can be improved. We utilise overlay networks, created by deploying relay nodes, to produce path diversity, and show through analytical and simulation models in which environments path diversity can improve the packet loss probability. We take the simulation and analytical models further by implementing a real overlay network on top of PlanetLab, and show that when the network conditions remain constant the video quality received by the client can be improved. In addition, we show that in the environments where path diversity improves the video quality, forward error correction can be used to further enhance the quality. We then investigate the effect of the IEEE 802.11e Wireless LAN standard with quality of service enabled on the video quality received by a wireless client. We find that assigning all the video to a single class outperforms a cross-class assignment scheme proposed by other researchers. The issue of virtual contention at the access point is also examined. We increase the intelligence of our relay nodes and enable them to cache video, in order to maximise the usefulness of these caches. For this purpose, we introduce a measure, called the PSNR profit, and present an optimal caching method for achieving the maximum PSNR profit at the relay nodes, where partitioned video contents are stored and provide an enhanced quality for the client. We also show that with the optimised cache the degradation in the video quality received by the client is more graceful than with the non-optimised system when the network experiences packet loss or is congested.
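The optimal caching step can be pictured as a selection problem: choose which partitioned video segments to store at a relay node so that the total PSNR profit is maximised under a cache-size constraint. The sketch below frames this as a 0/1 knapsack; the formulation, segment names, and numbers are illustrative assumptions, not the thesis' exact optimisation.

```python
# Hedged sketch: choosing which video segments to store at a relay node so that
# the total "PSNR profit" is maximised under a cache-size constraint, expressed
# as a 0/1 knapsack. Values and the formulation are illustrative assumptions.
def select_segments(segments, capacity):
    """segments: list of (name, size, psnr_profit); capacity: relay cache size.
    Returns (total_profit, chosen_names) via a 0/1 knapsack over used size."""
    best = {0: (0.0, frozenset())}                       # used size -> (profit, chosen set)
    for name, size, profit in segments:
        for used, (p, chosen) in list(best.items()):     # snapshot: each segment used at most once
            new_used = used + size
            if new_used <= capacity and p + profit > best.get(new_used, (-1.0, None))[0]:
                best[new_used] = (p + profit, chosen | {name})
    return max(best.values(), key=lambda t: t[0])

# Example: three segments (name, size in MB, PSNR profit) and an 8 MB cache.
print(select_segments([("s1", 4, 38.2), ("s2", 3, 35.0), ("s3", 5, 40.1)], capacity=8))
```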
Abstract:
AOSD'03 Practitioner Report. Performance analysis is motivated as an ideal domain for benefiting from the application of Aspect Oriented (AO) technology. The experience of a ten-week project to apply AO to the performance analysis domain is described. We show how all phases of a performance analyst's activities – initial profiling, problem identification, problem analysis and solution exploration – were candidates for AO technology assistance, some being addressed with more success than others. A Profiling Workbench is described that leverages the capabilities of AspectJ and delivers unique capabilities into the hands of developers exploring caching opportunities.
Abstract:
A 3D geographic information system (GIS) is data and computation intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth. Data reduction and performance optimization techniques are of critical importance in quality of service (QoS) management for online 3D GIS. In this research, QoS management issues regarding distributed 3D GIS presentation were studied to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation. To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level of detail (LOD) control and mesh simplification algorithms were proposed to effectively reduce the terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm was designed as a hybrid algorithm that combines edge straightening and quad-tree compression to reduce the mesh complexity by removing geometrically redundant vertices. The main advantage of this mesh simplification algorithm is that the grid mesh can be processed directly in parallel without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, predictive data retrieval and caching, were also proposed. A prototype of the proposed 3D TerraFly implemented in this research demonstrates the effectiveness of our proposed QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
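As a rough illustration of the adaptive LOD control described above, the sketch below assigns each terrain sub-region one of up to three detail levels based on viewing distance. The thresholds and tile names are illustrative assumptions, not values from 3D TerraFly.

```python
# Minimal sketch of distance-based level-of-detail (LOD) selection for terrain
# sub-regions, in the spirit of the adaptive up-to-three-level decomposition
# described above. Thresholds and the example tiles are illustrative assumptions.
def select_lod(view_distance: float, near: float = 1000.0, far: float = 5000.0) -> int:
    """Return 0 (full detail), 1 (medium) or 2 (coarse) for a terrain sub-region."""
    if view_distance < near:
        return 0
    if view_distance < far:
        return 1
    return 2

# Example: assign an LOD to each visible tile given its distance to the camera.
tiles = {"tile_a": 350.0, "tile_b": 2400.0, "tile_c": 9100.0}
print({name: select_lod(d) for name, d in tiles.items()})
```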
Abstract:
With the exponentially increasing demands and uses of GIS data visualization systems, such as urban planning, environment and climate change monitoring, weather simulation, hydrographic gauging and so forth, research on, applications of, and technology for geospatial vector and raster data visualization have become prevalent. However, we observe that current web GIS techniques are only suitable for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging and unimplemented areas: how to provide a large-scale dynamic vector and raster data visualization service with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make this large-scale dynamic visualization service as rapid as a static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed and implemented. The solution includes: quadtree-based indexing and parallel map tiling, the Legend String, vector data visualization with dynamic layer overlaying, vector data time series visualization, an algorithm for vector data rendering, an algorithm for raster data re-projection, an algorithm for the elimination of superfluous levels of detail, an algorithm for vector data gridding and re-grouping, and server-side vector and raster data caching on the cluster.
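A generic sketch of quadtree-based map tiling, converting a longitude/latitude to a tile and a quadtree key at a given zoom level, follows. This is a common Web-Mercator-style scheme offered only as an illustration; it is not necessarily the dissertation's exact indexing.

```python
# Illustrative sketch of quadtree-based map-tile indexing: convert a
# longitude/latitude to tile coordinates and a quadtree key at a given zoom
# level (Web-Mercator-style tiling). A generic scheme used for illustration.
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int):
    """Return (x, y) tile coordinates at the given zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_to_quadkey(x: int, y: int, zoom: int) -> str:
    """Encode tile coordinates as a quadtree key (one digit per zoom level)."""
    key = []
    for z in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (z - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key.append(str(digit))
    return "".join(key)

x, y = lonlat_to_tile(-80.19, 25.76, zoom=10)   # a point in the Miami area
print(x, y, tile_to_quadkey(x, y, 10))
```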
Abstract:
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. In order to maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system with the use of memory caches by sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where a memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one. The most important decision about managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds to even thousands of processes. And second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy. We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention to be used by administrators for provisioning, or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted because of contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
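The duplication problem created by consolidation can be pictured with a minimal content-deduplicated cache sketch: blocks are stored once, keyed by a content hash, and per-workload entries point to the shared copy. This is a hedged illustration of the idea, not the dissertation's system.

```python
# Hedged sketch of a content-deduplicated buffer cache: blocks are stored once,
# keyed by a content hash, and per-workload (workload, block_id) entries map to
# the shared block. Illustrates the duplication problem discussed above only.
import hashlib

class DedupCache:
    def __init__(self):
        self.blocks = {}    # sha256 digest -> block data (stored once)
        self.index = {}     # (workload, block_id) -> digest

    def put(self, workload: str, block_id: int, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)       # duplicate content stored only once
        self.index[(workload, block_id)] = digest

    def get(self, workload: str, block_id: int):
        digest = self.index.get((workload, block_id))
        return self.blocks.get(digest) if digest else None

cache = DedupCache()
cache.put("vm1", 0, b"same OS block")
cache.put("vm2", 7, b"same OS block")   # consolidated workloads share one stored copy
print(len(cache.blocks))                # -> 1
```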
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is also currently one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary. This results in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and re-charge less frequently. Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase the energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
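The idea of using a flash cache to increase disk idleness can be sketched as follows: writes are absorbed by flash and only flushed to the spinning disk in batches when the flash fills, letting the disk stay spun down longer. The interface and thresholds are illustrative assumptions, not the thesis design.

```python
# Minimal sketch of extending disk idle periods with a flash cache: writes are
# absorbed by flash and only flushed to the spinning disk when the flash fills,
# so the disk can stay spun down longer. Thresholds and the interface are
# illustrative assumptions.
class EnergyAwareCache:
    def __init__(self, flash_capacity_blocks: int):
        self.flash = {}                       # block id -> dirty data buffered on flash
        self.capacity = flash_capacity_blocks
        self.disk_spinning = False

    def write(self, block_id: int, data: bytes) -> None:
        self.flash[block_id] = data           # absorb the write on flash
        if len(self.flash) >= self.capacity:  # flash full: batch-flush to disk
            self._flush()

    def _flush(self) -> None:
        self.disk_spinning = True             # spin up once for the whole batch
        # ... write all buffered blocks to the spinning disk here ...
        self.flash.clear()
        self.disk_spinning = False            # disk can spin down again

cache = EnergyAwareCache(flash_capacity_blocks=3)
for i in range(5):
    cache.write(i, b"data")
print(len(cache.flash), cache.disk_spinning)  # 2 blocks still buffered, disk idle
```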
Abstract:
Unequal improvements in processor and I/O speeds cause many applications, such as databases and operating systems, to become increasingly I/O bound. Many schemes, such as disk caching and disk mirroring, have been proposed to address the problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure-mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors was proposed to address the write problem and interleaved declustering to address the load balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes. In addition, we also study traditional mirroring to provide a common basis for comparison.
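A minimal sketch of plain disk mirroring, illustrating the write penalty and failure-mode load imbalance discussed above, is given below. Reads are balanced by queue length in normal mode; this is a generic illustration, not distorted mirrors or interleaved declustering.

```python
# Hedged sketch of basic disk mirroring: every write goes to both disks (the
# write penalty), reads go to the mirror with the shorter queue, and on a disk
# failure all requests fall back to the survivor. A generic illustration only.
class MirroredPair:
    def __init__(self):
        self.queues = [[], []]          # pending requests per physical disk
        self.failed = [False, False]

    def write(self, block_id: int) -> None:
        for d in (0, 1):                # both copies must be updated
            if not self.failed[d]:
                self.queues[d].append(("w", block_id))

    def read(self, block_id: int) -> int:
        live = [d for d in (0, 1) if not self.failed[d]]
        d = min(live, key=lambda i: len(self.queues[i]))   # shorter queue wins
        self.queues[d].append(("r", block_id))
        return d

pair = MirroredPair()
pair.write(1)
print(pair.read(1), pair.read(2))       # reads spread across both disks
pair.failed[0] = True                   # failure mode: the survivor serves everything
print(pair.read(3))
```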
Abstract:
Bioturbation in marine sediments has basically two aspects of interest for palaeo-environmental studies. First, the traces left by the burrowing organisms reflect the prevailing environmental conditions at the seafloor and thus can be used to reconstruct the ecologic and palaeoceanographic situation. Traces have the advantage over other proxies of practically always being preserved in situ. Secondly, for high-resolution stratigraphy, bioturbation is a nuisance due to the stirring and mixing processes that destroy the stratigraphic record. In order to evaluate the applicability of biogenic traces as palaeoenvironmental indicators, a number of gravity cores from the Portuguese continental slope, covering the period from the last glacial to the present, were investigated through X-ray radiographs. In addition, physical and chemical parameters were determined to define the environmental niche in each core interval. A number of traces could be recognized, the most important being: Thalassinoides, Planolites, Zoophycos, Chondrites, Scolicia, Palaeophycus, Phycosiphon and the generally pyritized traces Trichichnus and Mycellia. The shifts between the different ichnofabrics agree strikingly well with the variations in ocean circulation caused by the changing climate. On the upper and middle slope, variations in current intensity and oxygenation of the Mediterranean Outflow Water were responsible for shifts in the ichnofabric. Larger traces such as Planolites and Thalassinoides dominated in coarse, well-oxygenated intervals, while small traces such as Chondrites and Trichichnus dominated in fine-grained, poorly oxygenated intervals. In contrast, on the lower slope where calm, steady sedimentation conditions prevail, changes in sedimentation rate and nutrient flux have controlled variations in the distribution of larger traces such as Planolites, Thalassinoides, and Palaeophycus. Additionally, distinct layers of abundant Chondrites correspond to Heinrich events 1, 2, and 4, and are interpreted as a response to incursions of nutrient-rich, oxygen-depleted Antarctic waters during phases of reduced thermohaline circulation. The results clearly show that not one single factor but a combination of several factors is necessary to explain the changes in ichnofabric. Furthermore, large variations in the extent and type of bioturbation and tiering between different settings clearly show that a more detailed knowledge of the factors governing bioturbation is necessary if we are to fully comprehend how proxy records are disturbed. A first attempt to automate a part of the recognition and quantification of the ichnofabric was performed using the DIAna image analysis program on digitized X-ray radiographs. The results show that enhanced abundance of pyritized microburrows appears to be coupled to organic-rich sediments deposited under dysoxic conditions. Coarse-grained sediments inhibit the formation of pyritized burrows. However, the smallest changes in program settings controlling the grey-scale threshold and the sensitivity resulted in large shifts in the number of detected burrows. Therefore, this method can only be considered to be semi-quantitative. Through AMS-14C dating of sample pairs from the Zoophycos spreiten and the surrounding host sediment, age reversals of up to 3,320 years could be demonstrated for the first time. The spreiten material is always several thousand years younger than the surrounding host sediment.
Together with detailed X-ray radiograph studies, this shows that the trace maker collects the material on the seafloor and then transports it downwards, in some cases by more than one meter, into the underlying sediment, where it is deposited in distinct structures termed spreiten. This clearly shows that age reversals of several thousand years can be expected whenever Zoophycos is unknowingly sampled. These results also render the ethological models hitherto proposed for Zoophycos largely implausible. Therefore, a combination of detritus feeding, short-term caching, and hibernation, possibly combined with gardening, is suggested here as an explanation for this complicated burrow.
Abstract:
Content Centric Network (CCN) is a proposed future Internet architecture that is based on the concept of content names instead of the host names used in the traditional Internet architecture. The CCN architecture might change the existing Internet architecture or replace it completely. In this paper, we present modifications to the existing Domain Name System (DNS) based on the CCN architecture requirements, without changing the existing routing architecture. Hence, the proposed solution achieves the benefits of both CCN and the existing network infrastructure (i.e., content-based routing independent of host location, caching, and content delivery protocols).
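A rough sketch of the idea, resolving a content name to candidate locations through a DNS-like lookup so content can be fetched from any replica independently of a specific host, is shown below. The record layout and longest-prefix matching are assumptions for illustration, not the paper's exact DNS extension.

```python
# Illustrative sketch only: a name-based lookup table mapping content-name
# prefixes to candidate replica hosts, queried with longest-prefix matching.
# Record layout and matching rule are assumptions, not the proposed DNS changes.
CONTENT_RECORDS = {
    "/videos/lecture1": ["cache1.example.net", "origin.example.com"],
    "/videos":          ["cdn.example.net"],
}

def resolve_content(name: str):
    """Longest-prefix match of a content name against name-based records."""
    parts = name.rstrip("/").split("/")
    while parts:
        prefix = "/".join(parts) or "/"
        if prefix in CONTENT_RECORDS:
            return CONTENT_RECORDS[prefix]
        parts.pop()
    return []

print(resolve_content("/videos/lecture1"))          # exact match
print(resolve_content("/videos/lecture2/part1"))    # falls back to the /videos prefix
```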
Abstract:
One of the current challenges in model-driven engineering is enabling effective collaborative modelling. Two common approaches are either storing the models in a central repository, or keeping them under a traditional file-based version control system and building a centralized index for model-wide queries. Either way, special attention must be paid to the nature of these repositories and indexes as networked services: they should remain responsive even with an increasing number of concurrent clients. This paper presents an empirical study on the impact of certain key decisions on the scalability of concurrent model queries, using an Eclipse Connected Data Objects model repository and a Hawk model index. The study evaluates the impact of the network protocol, the API design, and the internal caching mechanisms, and analyzes the reasons for their varying performance.
Abstract:
Concerns have been raised in the past several years that introducing new transport protocols on the Internet has become increasingly difficult, not least because there is no agreed-upon way for a source end host to find out if a transport protocol is supported all the way to a destination peer. A solution to a similar problem—finding out support for IPv6—has been proposed and is currently being deployed: the Happy Eyeballs (HE) mechanism. HE has also been proposed as an efficient way for an application to select an appropriate transport protocol. Still, there are few, if any, performance evaluations of transport HE. This paper demonstrates that transport HE could indeed be a feasible solution to the transport support problem. The paper evaluates HE between TCP and SCTP using TLS encrypted and unencrypted traffic, and shows that although there is indeed a cost in terms of CPU load to introduce HE, the cost is relatively small, especially in comparison with the cost of using TLS encryption. Moreover, our results suggest that HE has a marginal impact on memory usage. Finally, by introducing caching of previous connection attempts, the additional cost of transport HE could be significantly reduced.
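A hedged sketch of transport Happy Eyeballs with caching: race an SCTP and a TCP connection attempt, use whichever completes first, and cache the winner per destination so later connections skip the race. The connect functions below are simulated stand-ins (no real SCTP is attempted), and the cache keying is an assumption rather than the paper's implementation.

```python
# Hedged sketch of transport-level Happy Eyeballs with caching of previous
# connection attempts. The connect functions are simulated stand-ins (Python has
# no built-in SCTP support); the per-host cache is an illustrative assumption.
import concurrent.futures
import time

def connect_tcp(host):   # stand-in for a real TCP connection attempt
    time.sleep(0.05)
    return "tcp"

def connect_sctp(host):  # stand-in for a real SCTP connection attempt
    time.sleep(0.02)
    return "sctp"

_he_cache = {}           # host -> transport that worked last time

def happy_eyeballs_connect(host: str) -> str:
    if host in _he_cache:                          # cached result: no race needed
        return _he_cache[host]
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(connect_sctp, host), pool.submit(connect_tcp, host)]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        winner = next(iter(done)).result()         # first transport to complete
    _he_cache[host] = winner
    return winner

print(happy_eyeballs_connect("example.org"))  # races both transports
print(happy_eyeballs_connect("example.org"))  # served from the cache
```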
Abstract:
Many real-world decision-making problems are defined based on forecast parameters: for example, one may plan an urban route by relying on traffic predictions. In these cases, the conventional approach consists in training a predictor and then solving an optimization problem. This may be problematic, since mistakes made by the predictor may trick the optimizer into taking dramatically wrong decisions. Recently, the field of Decision-Focused Learning has overcome this limitation by merging the two stages at training time, so that predictions are rewarded and penalized based on their outcome in the optimization problem. There are, however, still significant challenges to a widespread adoption of the method, mostly related to limitations in terms of generality and scalability. One possible solution for dealing with the second problem is to introduce a caching-based approach to speed up the training process. This project investigates these techniques in order to reduce the number of solver calls even further. For each considered method, we designed a particular smart sampling approach based on its characteristics. In the case of the SPO method, we discovered that it is only necessary to initialize the cache with a few solutions: those needed to filter the elements that we still need to properly learn. For the Blackbox method, we designed a smart sampling approach based on inferred solutions.
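The caching idea can be sketched as follows: keep a pool of previously computed solutions and, during training, return the best cached solution for the current predicted costs most of the time, calling the true solver only occasionally to grow the cache. The toy solver and the sampling rate below are illustrative assumptions, not the SPO or Blackbox implementations.

```python
# Minimal sketch of solution caching in decision-focused learning: instead of
# calling the solver for every predicted cost vector, reuse the best cached
# solution most of the time and call the solver only occasionally. The toy
# solver and the sampling rate are illustrative assumptions.
import random

def solve(costs):
    """Toy combinatorial 'solver': pick the single cheapest item (one-hot vector)."""
    best = min(range(len(costs)), key=lambda i: costs[i])
    return tuple(1 if i == best else 0 for i in range(len(costs)))

solution_cache = set()

def cached_solve(costs, solver_call_prob=0.2):
    """With small probability call the solver (and grow the cache); otherwise
    return the cached solution with the lowest cost under the current costs."""
    if not solution_cache or random.random() < solver_call_prob:
        sol = solve(costs)
        solution_cache.add(sol)
        return sol
    return min(solution_cache,
               key=lambda s: sum(c * x for c, x in zip(costs, s)))

random.seed(0)
for _ in range(5):                      # a training loop would produce new predicted costs
    costs = [random.uniform(0, 1) for _ in range(4)]
    print(cached_solve(costs))
```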