402 results for speculative prefetching


Relevance:

100.00%

Publisher:

Abstract:

Speculative prefetching has been proposed to improve the response time of network access. Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modeling. We analyze the performance of a prefetcher that has uncertain knowledge about future accesses. Our performance metric is the improvement in access time, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). We develop a prefetch algorithm to maximize the improvement in access time. The algorithm is based on finding the best solution to a stretch knapsack problem, using theoretically proven apparatus to reduce the search space. The integration of speculative prefetching with caching is also investigated.
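The abstract states the metric only in words. As a rough illustration (the notation and the simplification are mine, not the paper's), the expected saving from prefetching a single item during an idle period can be written as:

```latex
% Expected access-time improvement from prefetching item i, where p_i is the
% probability that i is accessed next, a is the idle time available before
% the next demand request, and f_i is the time required to fetch i. Only the
% part of the fetch that overlaps the idle period is saved.
\[
  \mathbb{E}[\Delta T_i] \;=\; p_i \cdot \min(a,\, f_i)
\]
```

Summing this quantity over a candidate set, subject to the shared idle time, gives the knapsack-style maximization the paper solves.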

Relevance:

100.00%

Publisher:

Abstract:

Mobile users connected to wireless networks expect performance comparable to that of wired networks for interactive multimedia applications. Satisfying Quality of Service (QoS) requirements for such applications in wireless networks is a challenging problem due to the low bandwidth, high error rates and frequent disconnections of wireless channels. In addition, wireless networks suffer from varying bandwidth. In this paper we investigate object prefetching during periods of connectedness and bandwidth availability to enhance user-perceived connectedness. Access modelling for the purpose of predicting future accesses in the context of speculative prefetching has received much attention in the literature. This paper presents an access model that is suitable for multimedia access in wireless networks; it recognizes that a web page is typically a compound of several files rather than just a single file. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item whose access probability exceeds a manually tuned threshold. This paper takes a different approach: it models the performance of the prefetcher, taking into account access predictions and resource parameters, and develops a prefetch policy based on a theoretical analysis of the model. Since the analysis considers the cache as one of the resource parameters, the resulting policy integrates prefetch and cache replacement decisions. The paper also investigates the effect of prefetching on network load. In order to make effective use of available resources and maximize the access improvement, it is beneficial to prefetch all items with access probabilities exceeding a certain threshold.
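A minimal sketch of such an integrated policy, assuming unit-size items and a simple probability-ordered eviction rule (the names, the threshold value and the rule itself are illustrative; the paper derives its policy analytically). A candidate is prefetched if there is free cache space, or if it is more likely to be accessed than the coldest cached entry, which is then evicted:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    prob: float   # predicted probability of being accessed next

def prefetch_plan(candidates, cache, capacity, threshold=0.25):
    cache = sorted(cache, key=lambda i: i.prob)   # coldest entries first
    plan = []
    for item in sorted(candidates, key=lambda i: i.prob, reverse=True):
        if item.prob < threshold:
            break                      # below threshold: not worth the bandwidth
        if len(cache) + len(plan) < capacity:
            plan.append(item)          # free space: prefetch outright
        elif cache and cache[0].prob < item.prob:
            cache.pop(0)               # evict the colder cached entry
            plan.append(item)
    return plan

cache = [Item("old.html", 0.05), Item("warm.html", 0.6)]
plan = prefetch_plan([Item("next.html", 0.5), Item("rare.html", 0.1)],
                     cache, capacity=3)
print([i.name for i in plan])   # ['next.html']
```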

Relevance:

100.00%

Publisher:

Abstract:

To improve the accuracy of access prediction, a prefetcher for web browsing should recognize the fact that a web page is a compound: a user request for a single web page may require the retrieval of several multimedia items. Our prediction algorithm builds an access graph that captures the dynamics of web navigation rather than merely attaching probabilities to the hypertext structure. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item whose access probability exceeds a manually tuned threshold. This paper takes a different approach: it models the performance of the prefetcher and develops a prefetch policy based on a theoretical analysis of the model. In the analysis, we derive a formula for the expected improvement in access time when prefetch is performed in anticipation of a compound request. We then develop an algorithm that integrates prefetch and cache replacement decisions so as to maximize this improvement. We present experimental results to demonstrate the effectiveness of compound-based prefetching in low-bandwidth networks.
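As a sketch of what such an access graph might look like (the data structures and method names here are assumed for illustration, not taken from the paper), transition counts between compound pages can be accumulated from the observed navigation sequence and normalized into probabilities:

```python
from collections import defaultdict

class AccessGraph:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # page -> page -> count
        self.parts = {}                                      # page -> embedded files

    def observe(self, prev_page, next_page, embedded_files):
        """Record one navigation step and the files the target page compounds."""
        self.counts[prev_page][next_page] += 1
        self.parts[next_page] = embedded_files

    def predictions(self, current_page):
        """Probability of each successor page, plus the files it implies."""
        outs = self.counts[current_page]
        total = sum(outs.values()) or 1
        return {page: (n / total, self.parts.get(page, []))
                for page, n in outs.items()}

g = AccessGraph()
g.observe("index.html", "news.html", ["news.html", "banner.gif", "logo.png"])
g.observe("index.html", "about.html", ["about.html", "team.jpg"])
g.observe("index.html", "news.html", ["news.html", "banner.gif", "logo.png"])
print(g.predictions("index.html"))   # news.html ~0.67, about.html ~0.33
```

Predicting a page thus implies prefetching the whole bundle of files it compounds, which is what distinguishes this model from per-file prediction.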

Relevance:

100.00%

Publisher:

Abstract:

Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). The performance maximization problem is expressed as a stretch knapsack problem. We develop an algorithm to maximize the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
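The abstract does not spell out the optimization, but its flavour can be conveyed by a toy version (my simplification; the paper's stretch knapsack formulation and search-space pruning are more refined): choose the subset of items to prefetch within the available idle time so that the expected time saved is maximised.

```python
from itertools import combinations

def best_prefetch(items, budget):
    """items: list of (prob, fetch_time) pairs; budget: idle time available.
    Brute force over subsets stands in for the paper's pruned search."""
    best, best_gain = (), 0.0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(t for _, t in subset) <= budget:
                gain = sum(p * t for p, t in subset)   # expected time saved
                if gain > best_gain:
                    best, best_gain = subset, gain
    return best, best_gain

print(best_prefetch([(0.5, 3.0), (0.3, 2.0), (0.2, 4.0)], budget=5.0))
# -> (((0.5, 3.0), (0.3, 2.0)), 2.1)
```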

Relevance:

100.00%

Publisher:

Abstract:

Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper, on the other hand, investigates the performance of speculative prefetching. When prefetching is performed speculatively, there is bound to be an increase in the network load. Furthermore, the prefetched items must compete for space with existing cache occupants. These two factors, increased load and the eviction of potentially useful cache entries, are considered in the analysis. We obtain the following conclusion: to maximise the improvement in access time, prefetch all items, and only those items, whose access probabilities exceed a certain threshold.
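The stated conclusion reduces to a one-line policy. In this toy rendering the page names, probability estimates and threshold value are assumed for illustration; in the paper the threshold follows from the analysis of network load and cache effects:

```python
# Prefetch exactly those items whose access probability exceeds the threshold.
candidates = {"a.html": 0.42, "b.html": 0.15, "c.html": 0.33}
THRESHOLD = 0.30   # illustrative; derived analytically in the paper
to_prefetch = [page for page, prob in candidates.items() if prob > THRESHOLD]
print(sorted(to_prefetch))   # ['a.html', 'c.html']
```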

Relevance:

60.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must work at the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
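As a loose sketch of the kind of registry described in level (1) above (the interface, names and parameters are assumed for illustration, not the project's actual API), resources publish their availability and schedulers query for matches under constraints:

```python
class ResourceRegistry:
    """Toy registry: resources advertise capacity and latency; a scheduler
    queries for resources that satisfy a task's constraints."""

    def __init__(self):
        self._resources = {}

    def publish(self, name, capacity, latency_ms):
        self._resources[name] = {"capacity": capacity, "latency_ms": latency_ms}

    def find(self, min_capacity, max_latency_ms):
        return [n for n, r in self._resources.items()
                if r["capacity"] >= min_capacity
                and r["latency_ms"] <= max_latency_ms]

reg = ResourceRegistry()
reg.publish("cluster-a", capacity=64, latency_ms=20)
reg.publish("edge-b", capacity=8, latency_ms=5)
print(reg.find(min_capacity=16, max_latency_ms=50))   # ['cluster-a']
```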

Relevance:

60.00%

Publisher:

Abstract:

We investigate speculative prefetching under a model in which prefetching is neither aborted nor preempted by demand fetch but instead gets equal priority in network bandwidth utilisation. We argue that the non-abortive assumption is appropriate for wireless networks, where bandwidth is low and latency is high, and that the non-preemptive assumption is appropriate for the Internet, where prioritization is not always possible. This paper assumes the existence of an access model that provides some knowledge about future accesses and investigates analytically the performance of a prefetcher that utilises this knowledge. In mobile computing, because resources are severely constrained, performance prediction is as important as access prediction. For uniform retrieval time, we derive a theoretical limit on the improvement in access time due to prefetching. This leads to the formulation of an optimal algorithm for prefetching one access ahead. For non-uniform retrieval time, two types of prefetching of multiple documents, namely mainline and branch prefetch, are evaluated against prefetching a single document. In mainline prefetch, the most probable sequence of future accesses is prefetched. In branch prefetch, a set of different alternatives for future accesses is prefetched. Under some conditions, mainline prefetch may give a slight improvement in user-perceived access time over single prefetch at nominal extra retrieval cost, where retrieval cost is defined as the expected network time wasted on non-useful prefetches. Branch prefetch performs better than mainline prefetch but incurs more retrieval cost.
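The difference between the two strategies is easy to state in code. In this sketch the tree format and the probabilities are invented for illustration: mainline prefetch follows the single most probable path several steps ahead, while branch prefetch takes the top alternatives for the very next access.

```python
tree = {                       # node -> list of (child, probability)
    "cur": [("a", 0.6), ("b", 0.4)],
    "a":   [("a1", 0.7), ("a2", 0.3)],
    "b":   [("b1", 1.0)],
}

def mainline(node, depth):
    """Most probable sequence of future accesses, `depth` steps ahead."""
    path = []
    for _ in range(depth):
        children = tree.get(node)
        if not children:
            break
        node, _ = max(children, key=lambda c: c[1])
        path.append(node)
    return path

def branch(node, k):
    """Top-k alternatives for the very next access."""
    children = sorted(tree.get(node, []), key=lambda c: c[1], reverse=True)
    return [child for child, _ in children[:k]]

print(mainline("cur", 2))   # ['a', 'a1']
print(branch("cur", 2))     # ['a', 'b']
```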

Relevance:

20.00%

Publisher:

Abstract:

The Tide Lords series of fantasy novels set out to examine the issue of immortality. Its purpose was to look at the desirability of immortality, specifically why people actively seek it. It was also meant to examine the practicality of immortality: having got there, what does one do to pass the time with eternity to fill? I also wished to examine the notion of true immortality, that is, immortals who could not be killed. What I did not anticipate when embarking upon this series, and what did not become apparent until after the series had been sold to two major publishing houses in Australia and the US, was the strength of the immortality tropes. The series was intended to fly in the face of these tropes, but, confronted with the reality of such a work, the Australian publishers baulked at the ideas presented, requesting that the series be re-written with the tropes taken into consideration. They wanted immortals who could die, mortals who wanted to be immortal, and a hero with a sense of humour. This exegesis explores where these tropes originated. It also discusses how I negotiated a way around the tropes and was eventually able to please the publishers by appearing to adhere to them while still staying true to the story I wanted to tell. As such, this discussion is, in part, an analysis of how an author negotiates the tensions of writing within a genre while trying to innovate within it.

Relevance:

20.00%

Publisher:

Abstract:

The Dark Ages are generally held to be a time of technological and intellectual stagnation in western development. But that is not necessarily the case; indeed, from a certain perspective, nothing could be further from the truth. In this paper we draw historical comparisons, focusing especially on the thirteenth and fourteenth centuries, between the technological and intellectual ruptures in Europe during the Dark Ages and those of our current period. Our analysis is framed in part by Harold Innis's notion of "knowledge monopolies". We give an overview of how these were affected by the new media, new power struggles, and new intellectual debates that emerged in thirteenth- and fourteenth-century Europe. The historical salience of our focus may seem elusive. Our world has changed so much, and history seems to be an increasingly far-from-favoured method for understanding our own period and its future potentials. Yet our seemingly distant historical focus provides some surprising insights into the social dynamics at work today: the fracturing of established knowledge and power bases; the democratisation of certain "sacred" forms of communication and knowledge and, conversely, the "sacrosanct" appropriation of certain vernacular forms; challenges and innovations in social and scientific method and thought; the emergence of social world-shattering media practices; struggles over control of vast networks of media and knowledge monopolies; and the enclosure of public discursive and social spaces for singular, manipulative purposes. The period between the eleventh and fourteenth centuries in Europe prefigured what we now call the Enlightenment, perhaps more so than any other period before or after; it shaped what the Enlightenment was to become. We claim no knowledge of the future here. But in the "post-everything" society, where history is as much up for sale as it is for argument, we argue that our historical perspective provides a useful analogy for grasping the wider trends in the political economy of media, and for recognising clear and actual threats to the future of the public sphere in supposedly democratic societies.

Relevance:

20.00%

Publisher:

Abstract:

This practice-led doctorate involved the development of a collection, a bricolage, of interwoven fragments of literary texts and visual imagery exploring questions of speculative fiction, urban space and embodiment. As a supplement to the creative work, I also developed an exegesis, using theoretical and contextual analysis combined with critical reflections on my creative process and outputs. An emphasis on issues of creative practice and a sustained investigation into an aesthetics of fragmentation and assemblage is organised around the concept and methodology of bricolage, the everyday art of 'making do'. The exegesis also addresses my interest in the city and urban forms of subjectivity and embodiment through the use of a range of theorists, including Michel de Certeau and Elizabeth Grosz.

Relevance:

20.00%

Publisher:

Abstract:

Urban planning policies in Australia presuppose apartments as the new dominant housing type, but much of what the market has delivered is criticised as over-development, and as generic, poorly designed, environmentally unsustainable and unaffordable. Policy responses to this problem typically focus on planning regulation and construction costs as the primary issues to be addressed in order to increase the supply of quality, affordable apartment housing. In contrast, this paper uses Ball's (1983) 'structures of provision' approach to outline the key processes informing apartment development and identifies a substantial gap in critical understanding of how apartments are developed in Australia. This reveals economic problems not typically considered by policymakers. Using mainstream economic analysis to review the market itself, the authors found that high search costs, demand risk, problems with exchange, and lack of competition present key barriers to achieving greater affordability and limit the extent to which 'speculative' developers can respond to the preferences of would-be owner-occupiers of apartments. The existing development model, which relies on capturing uplift in site value, suits investors seeking rental yields in the first instance and capital gains in the second, and actively encourages housing price inflation. This is exacerbated by the lack of density restrictions, such as those that have existed in inner Melbourne for many years, which permits greater yields on redevelopment sites. The price of land in the vicinity of such redevelopment sites is pushed up as landholders' expectations of future yield are raised. All too frequently, existing redevelopment sites go back onto the market as vendors seek to capture the uplift in site value and exit the project in a risk-free manner...

Relevance:

20.00%

Publisher:

Abstract:

Loads that miss in the L1 or L2 caches and wait for their data at the head of the ROB cause significant slowdown in the form of commit stalls. We identify that most of these commit stalls are caused by a small set of loads, referred to as LIMCOS (Loads Incurring Majority of COmmit Stalls). We propose simple history-based classifiers that track the commit stalls suffered by loads to help identify this small set. We study an application of these classifiers to prefetching: the classifiers are used to train the prefetcher to focus on the misses suffered by LIMCOS. This approach, referred to as focused prefetching, results in a 9.8% gain in IPC over a naive GHB-based delta-correlation prefetcher, along with a 20.3% reduction in memory traffic, for a set of 17 memory-intensive SPEC2000 benchmarks. Another important impact of focused prefetching is a 61% improvement in the accuracy of prefetches. We demonstrate that the proposed classification criterion performs better than existing criteria such as criticality and delinquent loads. We also show that the criterion of focusing on commit stalls is robust across cache levels and can be applied to any prefetcher without modifications to the prefetcher.
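A minimal sketch of the idea, assuming a saturating-counter table indexed by load PC (the counter width and threshold are invented here; the paper's classifiers are hardware structures with their own parameters). A prefetcher would then be trained only on misses from loads the table classifies as LIMCOS:

```python
class StallClassifier:
    """Saturating counters that learn which loads stall the ROB head."""

    def __init__(self, max_count=15, threshold=8):
        self.counters = {}
        self.max_count, self.threshold = max_count, threshold

    def record_commit(self, pc, stalled):
        c = self.counters.get(pc, 0)
        if stalled:
            c = min(c + 1, self.max_count)   # load stalled commit: count up
        else:
            c = max(c - 1, 0)                # committed cleanly: decay
        self.counters[pc] = c

    def is_limcos(self, pc):
        return self.counters.get(pc, 0) >= self.threshold

clf = StallClassifier()
for _ in range(10):
    clf.record_commit(pc=0x400A10, stalled=True)
print(clf.is_limcos(0x400A10))   # True -> train prefetcher on its misses
```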

Relevance:

20.00%

Publisher:

Abstract:

The unending quest for performance improvement, coupled with advancements in integrated circuit technology, has led to the development of a new architectural paradigm. The speculative multithreaded architecture (SpMT) philosophy relies on aggressive speculative execution for improved performance. However, aggressive speculative execution is a mixed blessing: it improves performance when successful, but adversely affects energy consumption (and performance) through useless computation in the event of mis-speculation. Dynamic instruction criticality information can be usefully applied to control and guide such aggressive speculative execution. In this paper, we present a model of micro-execution for the SpMT architecture that we have developed to determine dynamic instruction criticality. We have also developed two novel techniques that utilize the criticality information, namely delaying non-critical loads and criticality-based thread prediction, to reduce useless computation and energy consumption. Experimental results showing the break-up of critical instructions and the effectiveness of the proposed techniques in reducing energy consumption are presented in the context of a multiscalar processor that implements the SpMT architecture. Our experiments show 17.7% and 11.6% reductions in dynamic energy for the criticality-based thread prediction and the criticality-based delayed load scheme, respectively, while the improvements in dynamic energy-delay product are 13.9% and 5.5%, respectively.
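One of the two techniques, delaying non-critical loads, can be illustrated as an issue-queue filter. This is a sketch only: the delay value, the queue layout and the criticality flag are assumed for illustration, not taken from the paper. Loads flagged critical issue as soon as they are ready; others wait a few extra cycles so that mis-speculated work is less likely to consume energy:

```python
DELAY_CYCLES = 4   # assumed parameter, not from the paper

def select_for_issue(queue, now):
    """queue: list of dicts with 'pc', 'ready_at', 'critical'."""
    issued = []
    for load in queue:
        gate = load["ready_at"]
        if not load["critical"]:
            gate += DELAY_CYCLES        # non-critical loads wait a little
        if now >= gate:
            issued.append(load["pc"])
    return issued

queue = [
    {"pc": 0x10, "ready_at": 100, "critical": True},
    {"pc": 0x14, "ready_at": 100, "critical": False},
]
print([hex(pc) for pc in select_for_issue(queue, now=101)])   # ['0x10']
```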