365 results for Speculative prefetching


Relevance:

60.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing: the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. All of these, however, can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models, organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, and server workload. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use those patterns in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, and probabilistic models, and its models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
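To make the middleware idea concrete, the sketch below shows a minimal first-order access-pattern model of the kind that could drive speculative prefetching from server request logs; the class and method names (AccessPatternPrefetcher, record, predict) are assumptions for illustration, not the project's actual design.

```python
from collections import defaultdict, Counter

class AccessPatternPrefetcher:
    """Hypothetical middleware component: learns document-to-document
    transitions from the server's request log and proposes the most
    likely successors as speculative prefetch candidates."""

    def __init__(self, fanout=2):
        self.fanout = fanout                      # candidates returned per request
        self.transitions = defaultdict(Counter)   # doc -> Counter of successor docs
        self.last_seen = {}                       # client -> last requested doc

    def record(self, client, doc):
        """Update the first-order Markov model from one observed request."""
        prev = self.last_seen.get(client)
        if prev is not None:
            self.transitions[prev][doc] += 1
        self.last_seen[client] = doc

    def predict(self, doc):
        """Most frequent successors of `doc`, to be prefetched into a cache."""
        return [d for d, _ in self.transitions[doc].most_common(self.fanout)]

# Example: after serving /index.html, the server would call
# record(client, "/index.html") and warm its cache with predict("/index.html").
```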

Relevance:

20.00%

Publisher:

Abstract:

The Tide Lords series of fantasy novels set out to examine the issue of immortality. Its purpose was to look at the desirability of immortality, specifically why people actively seek it. It was meant to examine the practicality of immortality: having got there, what does one do to pass the time with eternity to fill? I also wished to examine the notion of true immortality, that is, immortals who could not be killed. What I did not anticipate when embarking upon this series, and what did not become apparent until after the series had been sold to two major publishing houses in Australia and the US, was the strength of the immortality tropes. This series was intended to fly in the face of these tropes, but confronted with the reality of such a work, the Australian publishers baulked at the ideas presented, requesting that the series be re-written with the tropes taken into consideration. They wanted immortals who could die, and mortals who wanted to be immortal. And a hero with a sense of humour. This exegesis aims to explore where these tropes originated. It also discusses the ways I negotiated around the tropes and was eventually able to please the publishers by appearing to adhere to the tropes while still staying true to the story I wanted to tell. As such, this discussion is, in part, an analysis of how an author negotiates the tensions around writing within a genre while trying to innovate within it.

Relevance:

20.00%

Publisher:

Abstract:

The Dark Ages are generally held to be a time of technological and intellectual stagnation in western development. But that is not necessarily the case. Indeed, from a certain perspective, nothing could be further from the truth. In this paper we draw historical comparisons, focusing especially on the thirteenth and fourteenth centuries, between the technological and intellectual ruptures in Europe during the Dark Ages, and those of our current period. Our analysis is framed in part by Harold Innis's notion of "knowledge monopolies". We give an overview of how these were affected by new media, new power struggles, and new intellectual debates that emerged in thirteenth- and fourteenth-century Europe. The historical salience of our focus may seem elusive. Our world has changed so much, and history seems to be an increasingly far-from-favoured method for understanding our own period and its future potentials. Yet our seemingly distant historical focus provides some surprising insights into the social dynamics that are at work today: the fracturing of established knowledge and power bases; the democratisation of certain "sacred" forms of communication and knowledge, and, conversely, the "sacrosanct" appropriation of certain vernacular forms; challenges and innovations in social and scientific method and thought; the emergence of social world-shattering media practices; struggles over control of vast networks of media and knowledge monopolies; and the enclosure of public discursive and social spaces for singular, manipulative purposes. The period between the eleventh and fourteenth centuries in Europe prefigured what we now call the Enlightenment, perhaps more so than any other period before or after; it shaped what the Enlightenment was to become. We claim no knowledge of the future here. But in the "post-everything" society, where history is as much up for sale as it is for argument, we argue that our historical perspective provides a useful analogy for grasping the wider trends in the political economy of media, and for recognising clear and actual threats to the future of the public sphere in supposedly democratic societies.

Relevance:

20.00%

Publisher:

Abstract:

This practice-led doctorate involved the development of a collection – a bricolage – of interwoven fragments of literary texts and visual imagery exploring questions of speculative fiction, urban space and embodiment. As a supplement to the creative work, I also developed an exegesis, combining theoretical and contextual analysis with critical reflections on my creative process and outputs. An emphasis on issues of creative practice, and a sustained investigation into an aesthetics of fragmentation and assemblage, is organised around the concept and methodology of bricolage, the everyday art of 'making do'. The exegesis also addresses my interest in the city and urban forms of subjectivity and embodiment through the use of a range of theorists, including Michel de Certeau and Elizabeth Grosz.

Relevance:

20.00%

Publisher:

Abstract:

Urban planning policies in Australia presuppose apartments as the new dominant housing type, but much of what the market has delivered is criticised as over-development, and as generic, poorly designed, environmentally unsustainable and unaffordable. Policy responses to this problem typically focus on planning regulation and construction costs as the primary issues to be addressed in order to increase the supply of quality, affordable apartment housing. In contrast, this paper uses Ball's (1983) 'structures of provision' approach to outline the key processes informing apartment development, and identifies a substantial gap in critical understanding of how apartments are developed in Australia. This reveals economic problems not typically considered by policymakers. Using mainstream economic analysis to review the market itself, the authors found that high search costs, demand risk, problems with exchange, and lack of competition present key barriers to achieving greater affordability, and limit the extent to which 'speculative' developers can respond to the preferences of would-be owner-occupiers of apartments. The existing development model, which relies on capturing uplift in site value, suits investors seeking rental yields in the first instance and capital gains in the second, and actively encourages housing price inflation. This is exacerbated by a lack of density restrictions, as has long been the case in inner Melbourne, which permits greater yields on redevelopment sites. The price of land in the vicinity of such redevelopment sites is pushed up as landholders' expectation of future yield is raised. All too frequently, existing redevelopment sites go back onto the market as vendors seek to capture the uplift in site value and exit the project in a risk-free manner...

Relevance:

20.00%

Publisher:

Abstract:

Loads that miss in the L1 or L2 caches and wait for their data at the head of the reorder buffer (ROB) cause significant slowdown in the form of commit stalls. We identify that most of these commit stalls are caused by a small set of loads, referred to as LIMCOS (Loads Incurring Majority of COmmit Stalls). We propose simple history-based classifiers that track the commit stalls suffered by loads to identify this small set. We study an application of these classifiers to prefetching: the classifiers are used to train the prefetcher to focus on the misses suffered by LIMCOS. This, referred to as focused prefetching, results in a 9.8% gain in IPC over a naive GHB-based delta-correlation prefetcher, along with a 20.3% reduction in memory traffic, for a set of 17 memory-intensive SPEC2000 benchmarks. Another important impact of focused prefetching is a 61% improvement in the accuracy of prefetches. We demonstrate that the proposed classification criterion performs better than existing criteria such as criticality and delinquent loads. We also show that the criterion of focusing on commit stalls is robust across cache levels and can be applied to any prefetcher without modifications to the prefetcher itself.
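A minimal sketch of how such a history-based classifier could gate prefetcher training (focused prefetching) follows; the threshold value and the `prefetcher.train` hook are illustrative stand-ins, not the paper's parameters.

```python
class CommitStallClassifier:
    """Tracks commit-stall cycles per load PC and flags the small set of
    loads responsible for most stalls (the LIMCOS set)."""

    def __init__(self, stall_threshold=64):        # illustrative threshold
        self.stall_threshold = stall_threshold
        self.stall_cycles = {}                     # load PC -> accumulated cycles

    def record_commit_stall(self, pc, cycles):
        """Called when a load blocks commit at the head of the ROB."""
        self.stall_cycles[pc] = self.stall_cycles.get(pc, 0) + cycles

    def is_limcos(self, pc):
        return self.stall_cycles.get(pc, 0) >= self.stall_threshold

def on_cache_miss(classifier, prefetcher, pc, miss_addr):
    """Focused prefetching: only misses from LIMCOS loads train the
    prefetcher (train() stands in for, e.g., a GHB delta-correlation update)."""
    if classifier.is_limcos(pc):
        prefetcher.train(pc, miss_addr)
```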

Relevance:

20.00%

Publisher:

Abstract:

The unending quest for performance improvement, coupled with advancements in integrated circuit technology, has led to the development of new architectural paradigms. The speculative multithreaded architecture (SpMT) philosophy relies on aggressive speculative execution for improved performance. However, aggressive speculative execution is a mixed blessing: it improves performance when successful, but adversely affects energy consumption (and performance) through useless computation in the event of mis-speculation. Dynamic instruction criticality information can be usefully applied to control and guide such aggressive speculative execution. In this paper, we present a model of micro-execution for SpMT architectures that we have developed to determine dynamic instruction criticality. We have also developed two novel techniques that utilize the criticality information, namely delaying non-critical loads and criticality-based thread prediction, to reduce useless computation and energy consumption. Experimental results showing the break-up of critical instructions and the effectiveness of the proposed techniques in reducing energy consumption are presented in the context of a multiscalar processor that implements an SpMT architecture. Our experiments show 17.7% and 11.6% reductions in dynamic energy for criticality-based thread prediction and the criticality-based delayed load scheme, respectively, while the improvements in dynamic energy-delay product are 13.9% and 5.5%, respectively.
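The sketch below illustrates one of the two techniques, delaying non-critical loads, under the assumption that the criticality model has already marked critical instruction PCs; the set name and delay value are hypothetical. Criticality-based thread prediction would gate speculative thread spawns in an analogous way.

```python
CRITICAL_PCS = set()   # assumed filled by the micro-execution criticality model
DELAY_CYCLES = 4       # illustrative hold-back applied to non-critical loads

def load_issue_cycle(pc, now):
    """Critical loads issue immediately; non-critical loads are held back so
    they do not compete with critical work for cache ports and bandwidth,
    reducing the useless computation triggered on mis-speculated paths."""
    return now if pc in CRITICAL_PCS else now + DELAY_CYCLES
```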

Relevance:

20.00%

Publisher:

Abstract:

Davies, Jeffrey, 'Gates, beads and Danubians. The defences and garrison of the auxiliary fort at Castell Collen: a speculative relationship', Bayerische Vorgeschichtsblätter 71 (2006), pp. 3-13.

Relevance:

20.00%

Publisher:

Abstract:

Speculative Concurrency Control (SCC) [Best92a] is a new concurrency control approach especially suited for real-time database applications. It relies on the use of redundancy to ensure that serializable schedules are discovered and adopted as early as possible, thus increasing the likelihood of the timely commitment of transactions with strict timing constraints. In [Best92b], SCC-nS, a generic algorithm that characterizes a family of SCC-based algorithms, was described, and its correctness was established by showing that it admits only serializable histories. In this paper, we evaluate the performance of the Two-Shadow SCC algorithm (SCC-2S), a member of the SCC-nS family, which is notable for its minimal use of redundancy. In particular, we show that SCC-2S (as a representative of SCC-based algorithms) provides significant performance gains over the widely used Optimistic Concurrency Control with Broadcast Commit (OCC-BC) under a variety of operating conditions and workloads.
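As an illustration of the two-shadow discipline, here is a hedged sketch; the class and method names are invented for exposition, and the state handling is highly simplified.

```python
class SCC2STransaction:
    """Sketch: an optimistic shadow runs ahead to validation, and at the
    first detected conflict a single speculative shadow is forked from the
    pre-conflict state, bounding the redundancy kept on standby."""

    def __init__(self, tid):
        self.tid = tid
        self.optimistic_state = {}      # runs ahead, validated at commit time
        self.speculative_shadow = None  # at most one standby (the "2S" bound)

    def on_conflict_detected(self, pre_conflict_state):
        """Fork the standby shadow only once, at the first conflict."""
        if self.speculative_shadow is None:
            self.speculative_shadow = dict(pre_conflict_state)

    def on_validation_failure(self):
        """Resume from the shadow instead of restarting from scratch."""
        if self.speculative_shadow is not None:
            self.optimistic_state = self.speculative_shadow
            self.speculative_shadow = None
            return True      # salvaged all pre-conflict work
        return False         # no shadow: conventional abort/restart
```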

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an algorithm which extends the relatively new notion of speculative concurrency control by delaying the commitment of transactions, thus allowing other conflicting transactions to continue execution and commit rather than restart. This algorithm propagates uncommitted data to other outstanding transactions thus allowing more speculative schedules to be considered. The algorithm is shown always to find a serializable schedule, and to avoid cascading aborts. Like speculative concurrency control, it considers strictly more schedules than traditional concurrency control algorithms. Further work is needed to determine which of these speculative methods performs better on actual transaction loads.
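A hedged sketch of the delayed-commitment idea, with invented helper names (`read_set`, `write_set`, `propagate`), might look like this:

```python
def try_commit(txn, active_txns, propagate):
    """If committing `txn` now would force conflicting active transactions
    to restart, defer commitment and propagate the uncommitted writes so
    those transactions can keep executing against them."""
    conflicting = [t for t in active_txns if t.read_set & txn.write_set]
    if not conflicting:
        txn.commit()
        return "committed"
    for t in conflicting:
        propagate(txn.write_set, t)   # speculative reads of uncommitted data
    return "deferred"                 # retried once the readers finish
```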

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a new class of Concurrency Control Algorithms that is especially suited for real-time database applications. Our approach relies on the use of (potentially) redundant computations to ensure that serializable schedules are found and executed as early as possible, thus increasing the chances of a timely commitment of transactions with strict timing constraints. Due to its nature, we term our concurrency control algorithms Speculative. The aforementioned description encompasses many algorithms that we call collectively Speculative Concurrency Control (SCC) algorithms. SCC algorithms combine the advantages of both Pessimistic and Optimistic Concurrency Control (PCC and OCC) algorithms, while avoiding their disadvantages. On the one hand, SCC resembles PCC in that conflicts are detected as early as possible, thus making alternative schedules available in a timely fashion in case they are needed. On the other hand, SCC resembles OCC in that it allows conflicting transactions to proceed concurrently, thus avoiding unnecessary delays that may jeopardize their timely commitment.
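The following sketch illustrates the PCC/OCC blend in miniature, with invented names (`holders`, `fork_alternative`); real SCC algorithms manage shadows and serializability tests far more carefully.

```python
def on_data_access(txn, item, holders, fork_alternative):
    """PCC-like: the conflict is detected eagerly, at access time, so an
    alternative schedule can be staged immediately. OCC-like: neither
    transaction blocks; both proceed concurrently, with the staged
    alternative held in reserve in case the speculation fails."""
    for other in holders.get(item, []):
        if other is not txn:
            fork_alternative(txn, item)      # alternative schedule, staged early
    holders.setdefault(item, []).append(txn) # proceed without blocking
```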

Relevance:

20.00%

Publisher:

Abstract:

We leverage the buffering capabilities of end-systems to achieve scalable, asynchronous delivery of streams in a peer-to-peer environment. Unlike existing cache-and-relay schemes, we propose a distributed prefetching protocol in which peers prefetch and store portions of the streaming media ahead of their playout time, not only turning themselves into possible sources for other peers, but also enabling their prefetched data to cover the departure of their source-peer. This stands in sharp contrast to existing cache-and-relay schemes, where the departure of the source-peer forces its peer children to go to the original server, disrupting their service and increasing server and network load. Through mathematical analysis and simulations, we show the effectiveness of maintaining such asynchronous multicasts from several source-peers to other children peers, and the efficacy of prefetching in the face of peer departures. We confirm the scalability of our dPAM protocol, which is shown to significantly reduce server load.
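A minimal sketch of the prefetch-ahead buffering that lets a peer mask a source-peer departure; the segment granularity and names are assumptions for illustration, not the dPAM protocol's actual data structures.

```python
class PeerBuffer:
    """A peer keeps a window of future segments so that, if its source-peer
    departs, playout continues from prefetched data while a replacement
    source is located (instead of falling back to the origin server)."""

    def __init__(self, prefetch_ahead):
        self.prefetch_ahead = prefetch_ahead   # window size, in segments
        self.segments = {}                     # segment index -> data
        self.playout = 0                       # next segment to play

    def next_fetch(self):
        """First missing segment inside the prefetch window, if any."""
        for i in range(self.playout, self.playout + self.prefetch_ahead):
            if i not in self.segments:
                return i
        return None

    def can_mask_departure(self, search_segments):
        """True if buffered data covers the time needed to find a new
        source-peer (search time expressed in playout segments)."""
        buffered = sum(1 for i in self.segments if i >= self.playout)
        return buffered >= search_segments
```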

Relevance:

20.00%

Publisher:

Abstract:

A problem with Speculative Concurrency Control algorithms, and with other common concurrency control schemes using forward validation, is that committing a transaction as soon as it finishes validating may result in a loss of value to the system. Haritsa showed that by making a lower-priority transaction wait after it is validated, the number of transactions meeting their deadlines is increased, which may result in higher value added to the system. SCC-based protocols can benefit from the introduction of such delays by giving optimistic shadows with high value-added to the system more time to execute and commit, instead of being aborted in favor of other validating transactions whose value-added to the system is lower. In this paper we present and evaluate an extension to SCC algorithms that allows for commit deferments.
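A hedged sketch of such a commit-deferment rule, with invented attributes (`value`, `deadline`, `conflicts_with`), could look like this:

```python
def commit_or_defer(validated, running_shadows, now):
    """A validated transaction defers its commit if a conflicting shadow
    with higher value-added could still finish in time, rather than
    aborting that shadow immediately."""
    rivals = [s for s in running_shadows
              if s.conflicts_with(validated) and s.value > validated.value]
    if not rivals:
        return ("commit", now)
    # Defer no longer than the earliest rival deadline, and never past our own.
    defer_until = min(min(s.deadline for s in rivals), validated.deadline)
    return ("defer", defer_until)
```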