884 results for Popularity.


Relevance:

10.00%

Publisher:

Abstract:

While ATM bandwidth-reservation techniques are able to offer the guarantees necessary for the delivery of real-time streams in many applications (e.g. live audio and video), they suffer from many disadvantages that make them unattractive (or impractical) for many others. These limitations, coupled with the flexibility and popularity of TCP/IP as a best-effort transport protocol, have prompted the network research community to propose and implement a number of techniques that adapt TCP/IP to the Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services in ATM network environments. This allows these environments to smoothly integrate (and make use of) currently available TCP-based applications and services without much (if any) modification. However, recent studies have shown that TCP/IP, when implemented over ATM networks, is susceptible to serious performance limitations. In a recently completed study, we unveiled a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. In this paper, we demonstrate the real-time features of TCP Boston that allow communication bandwidth to be traded off for timeliness. We start with an overview of the protocol. Next, we analytically characterize the dynamic redundancy control features of TCP Boston. Finally, we present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput and percent of missed deadlines) and real-time application-centric metrics (e.g., response time and jitter).


One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator) generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges to satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.
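The core of such a workload generator is drawing file references so that relative file popularity matches the empirical Zipf-like distributions. A minimal sketch in Python (not Surge's actual implementation; the function names and the `alpha` skew parameter are illustrative):

```python
import random

def zipf_popularity(n_files, alpha=1.0):
    """Assign each of n_files an access probability p(rank) proportional to 1 / rank^alpha."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def generate_references(n_files, n_requests, alpha=1.0, seed=42):
    """Draw a reference stream whose relative file popularity follows Zipf's law."""
    random.seed(seed)
    probs = zipf_popularity(n_files, alpha)
    files = list(range(n_files))  # file 0 is the most popular
    return random.choices(files, weights=probs, k=n_requests)

stream = generate_references(n_files=100, n_requests=10_000)
```

A full generator like Surge layers the remaining constraints (file sizes, embedded references, temporal locality, user think times) on top of this popularity skeleton, which is what makes satisfying all of them simultaneously hard.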


This paper presents a tool called Gismo (Generator of Internet Streaming Media Objects and workloads). Gismo enables the specification of a number of streaming media access characteristics, including object popularity, temporal correlation of requests, seasonal access patterns, user session durations, user interactivity times, and variable bit-rate (VBR) self-similarity and marginal distributions. The embodiment of these characteristics in Gismo enables the generation of realistic and scalable request streams for use in the benchmarking and comparative evaluation of Internet streaming media delivery techniques. To demonstrate the usefulness of Gismo, we present a case study that shows the importance of various workload characteristics in determining the effectiveness of proxy caching and server patching techniques in reducing bandwidth requirements.


Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture (and associated cache management algorithms) that turns edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. Also, our experiments show that partial caching is particularly effective when bandwidth variability is not very high.
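A hypothetical simplification of the stream-aware partial-caching idea above: when the path to the origin cannot sustain an object's bit rate, the proxy caches just enough of the object's prefix that the remainder can be fetched in parallel during playback without stalling. The function name and the linear bandwidth model are assumptions for illustration, not the paper's algorithm:

```python
def prefix_to_cache(object_bytes, bitrate_bps, bandwidth_bps):
    """Bytes of a stream's prefix a proxy should cache so that the rest can be
    fetched from the origin at the available bandwidth while playback proceeds.

    If bandwidth >= bitrate, the origin alone can feed the player: cache nothing.
    Otherwise the origin covers a bandwidth/bitrate fraction of playback, and the
    cache must supply the remaining fraction as a prefix.
    """
    if bandwidth_bps >= bitrate_bps:
        return 0
    fraction = 1.0 - bandwidth_bps / bitrate_bps
    return int(object_bytes * fraction)
```

For example, a 1 Mb/s stream over a 500 kb/s path needs half the object cached; the real algorithms additionally weigh object popularity when deciding which prefixes are worth the cache space.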


We present what we believe to be the first thorough characterization of live streaming media content delivered over the Internet. Our characterization of over five million requests spanning a 28-day period is done at three increasingly granular levels, corresponding to clients, sessions, and transfers. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different for live versus stored objects. Access to stored objects is user driven, whereas access to live objects is object driven. This reversal of active/passive roles of users and objects leads to interesting dualities. For instance, our analysis underscores a Zipf-like profile for user interest in a given object, which is to be contrasted with the classic Zipf-like popularity of objects for a given user. Also, our analysis reveals that transfer lengths are highly variable and that this variability is due to the stickiness of clients to a particular live object, as opposed to structural (size) properties of objects. Second, based on observations we make, we conjecture that the particular characteristics of live media access workloads are likely to be highly dependent on the nature of the live content being accessed. In our study, this dependence is clear from the strong temporal correlations we observed in the traces, which we attribute to the synchronizing impact of live content on access characteristics. Based on our analyses, we present a model for live media workload generation that incorporates many of our findings, and which we implement in GISMO [19].


There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web. Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality. We use this framework to analyze a diverse set of Web reference traces. 
We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of the temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
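Of the two metrics above, the popularity component is the easier to sketch: it is the entropy of the empirical reference distribution, low when a few objects dominate the stream and high when references are spread evenly. A minimal Python sketch (the normalization to [0, 1] is an assumption of this illustration, not necessarily the paper's exact definition):

```python
import math
from collections import Counter

def normalized_entropy(stream):
    """Entropy of the empirical popularity distribution of a reference stream,
    normalized by log(number of distinct objects): 1.0 means uniform popularity,
    0.0 means every reference hits a single object."""
    counts = Counter(stream)
    n = len(stream)
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    distinct = len(counts)
    return h / math.log(distinct) if distinct > 1 else 0.0

# A skewed stream (one hot object) versus a uniform one over the same objects.
skewed = ["a"] * 90 + ["b"] * 5 + ["c"] * 5
uniform = ["a", "b", "c"] * 30
```

The correlation component, measured in the paper by the interreference coefficient of variation, would be computed separately over per-object gaps between successive references.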


Understanding the nature of the workloads and system demands created by users of the World Wide Web is crucial to properly designing and provisioning Web services. Previous measurements of Web client workloads have been shown to exhibit a number of characteristic features; however, it is not clear how those features may be changing with time. In this study we compare two measurements of Web client workloads separated in time by three years, both captured from the same computing facility at Boston University. The older dataset, obtained in 1995, is well-known in the research literature and has been the basis for a wide variety of studies. The newer dataset was captured in 1998 and is comparable in size to the older dataset. The new dataset has the drawback that the collection of users measured may no longer be representative of general Web users; however, using it has the advantage that many comparisons can be drawn more clearly than would be possible using a new, different source of measurement. Our results fall into two categories. First we compare the statistical and distributional properties of Web requests across the two datasets. This serves to reinforce and deepen our understanding of the characteristic statistical properties of Web client requests. We find that the kinds of distributions that best describe document sizes have not changed between 1995 and 1998, although specific values of the distributional parameters are different. Second, we explore the question of how the observed differences in the properties of Web client requests, particularly the popularity and temporal locality properties, affect the potential for Web file caching in the network. We find that for the computing facility represented by our traces between 1995 and 1998, (1) the benefits of using size-based caching policies have diminished; and (2) the potential for caching requested files in the network has declined.


Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time, and is optimal with respect to the total bandwidth usage over the merging interval. We present constraints on content insertion which make the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results which quantify the excellent gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.


Temporal locality of reference in Web request streams emerges from two distinct phenomena: the popularity of Web objects and the {\em temporal correlation} of requests. Capturing these two elements of temporal locality is important because it enables cache replacement policies to adjust how they capitalize on temporal locality based on the relative prevalence of these phenomena. In this paper, we show that temporal locality metrics proposed in the literature are unable to delineate between these two sources of temporal locality. In particular, we show that the commonly-used distribution of reference interarrival times is predominantly determined by the power law governing the popularity of documents in a request stream. To capture (and more importantly quantify) both sources of temporal locality in a request stream, we propose a new and robust metric that enables accurate delineation between locality due to popularity and that due to temporal correlation. Using this metric, we characterize the locality of reference in a number of representative proxy cache traces. Our findings show that there are measurable differences between the degrees (and sources) of temporal locality across these traces, and that these differences are effectively captured using our proposed metric. We illustrate the significance of our findings by summarizing the performance of a novel Web cache replacement policy---called GreedyDual*---which exploits both long-term popularity and short-term temporal correlation in an adaptive fashion. Our trace-driven simulation experiments (which are detailed in an accompanying Technical Report) show the superior performance of GreedyDual* when compared to other Web cache replacement policies.


The relative importance of long-term popularity and short-term temporal correlation of references for Web cache replacement policies has not been studied thoroughly. This is partially due to the lack of an accurate characterization of temporal locality that enables the identification of the relative strengths of these two sources of temporal locality in a reference stream. In [21], we have proposed such a metric and have shown that Web reference streams differ significantly in the prevalence of these two sources of temporal locality. These findings underscore the importance of a Web caching strategy that can adapt in a dynamic fashion to the prevalence of these two sources of temporal locality. In this paper, we propose a novel cache replacement algorithm, GreedyDual*, which is a generalization of GreedyDual-Size. GreedyDual* uses the metrics proposed in [21] to adjust the relative worth of long-term popularity versus short-term temporal correlation of references. Our trace-driven simulation experiments show the superior performance of GreedyDual* when compared to other Web cache replacement policies proposed in the literature.
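For context, a minimal sketch of the baseline GreedyDual-Size policy that GreedyDual* generalizes: each object carries a priority L + cost/size, the lowest-priority object is evicted, and the global inflation value L rises to the evicted priority so that recently touched objects outrank stale ones. The adaptive popularity-versus-correlation weighting that distinguishes GreedyDual* itself is not shown here:

```python
class GreedyDualSizeCache:
    """Toy GreedyDual-Size cache over byte-sized objects (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity   # total bytes the cache may hold
        self.used = 0
        self.L = 0.0               # inflation value; rises on every eviction
        self.entries = {}          # key -> (priority, size)

    def access(self, key, size, cost=1.0):
        """Return True on a hit, False on a miss (inserting the object)."""
        if key in self.entries:
            _, size = self.entries[key]
            self.entries[key] = (self.L + cost / size, size)  # refresh priority
            return True
        # Evict lowest-priority entries until the new object fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda k: self.entries[k][0])
            self.L = self.entries[victim][0]
            self.used -= self.entries[victim][1]
            del self.entries[victim]
        if size <= self.capacity:
            self.entries[key] = (self.L + cost / size, size)
            self.used += size
        return False
```

With cost fixed at 1 the policy favors small objects (higher cost/size), which is one reason size-aware replacement behaved differently across the 1995 and 1998 traces discussed above.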


The Li-ion battery has for several years been at the forefront of powering an ever-increasing number of modern consumer electronic devices such as laptops, tablet PCs, cell phones, portable music players, etc., while in more recent times it has also been sought to power a range of emerging electric and hybrid-electric vehicle classes. Given their extreme popularity, a number of features which define the performance of the Li-ion battery have become a target of improvement and have garnered tremendous research effort over the past two decades. Features such as battery capacity, voltage, lifetime, rate performance, together with important implications such as safety, environmental benignity and cost have all attracted attention. Although properties such as cell voltage and theoretical capacity are bound by the selection of electrode materials which constitute its interior, other performance markers of the Li-ion battery such as actual capacity, lifetime and rate performance may be improved by tailoring such materials with characteristics favourable to Li+ intercalation. One such tailoring route involves shrinking of the constituent electrode materials to that of the nanoscale, where the ultra-small diameters may bestow favourable Li+ intercalation properties while providing a necessary mechanical robustness during routine electrochemical operation. The work detailed in this thesis describes a range of synthetic routes taken in nanostructuring a selection of choice Li-ion positive electrode candidates, together with a review of their respective Li-ion performances. Chapter one of this thesis serves to highlight a number of key advancements which have been made and detailed in the literature over recent years pertaining to the use of nanostructured materials in Li-ion technology.
Chapter two provides an overview of the experimental conditions and techniques employed in the synthesis and electrochemical characterisation of the as-prepared electrode materials constituting this doctoral thesis. Chapter three details the synthesis of small-diameter V2O5 and V2O5/TiO2 nanocomposite structures prepared by a novel carbon nanocage templating method using liquid precursors. Chapter four details a hydrothermal synthesis and characterisation of nanostructured β-LiVOPO4 powders together with an overview of their Li+ insertion properties while chapter five focuses on supercritical fluid synthesis as one technique in the tailoring of FeF2 and CoF2 powders having potentially appealing Li-ion 'conversion' properties. Finally, chapter six summarises the overall conclusions drawn from the results presented in this thesis, coupled with an indication of potential future work which may be explored upon the materials described in this work.


Ribosome profiling (ribo-seq) is a recently developed technique that provides genomewide information on protein synthesis (GWIPS) in vivo. The high resolution of ribo-seq is one of the exciting properties of this technique. In Chapter 2, I present a computational method that utilises the sub-codon precision and triplet periodicity of ribosome profiling data to detect transitions in the translated reading frame. Application of this method to ribosome profiling data generated for human HeLa cells allowed us to detect several human genes where the same genomic segment is translated in more than one reading frame. Since the initial publication of the ribosome profiling technique in 2009, there has been a proliferation of studies that have used the technique to explore various questions with respect to translation. A review of the many uses and adaptations of the technique is provided in Chapter 1. Indeed, owing to the increasing popularity of the technique and the growing number of published ribosome profiling datasets, we have developed GWIPS-viz (http://gwips.ucc.ie), a ribo-seq dedicated genome browser. Details on the development of the browser and its usage are provided in Chapter 3. One of the surprising findings of ribosome profiling of initiating ribosomes carried out in 3 independent studies, was the widespread use of non-AUG codons as translation initiation start sites in mammals. Although initiation at non-AUG codons in mammals has been documented for some time, the extent of non-AUG initiation reported by these ribo-seq studies was unexpected. In Chapter 4, I present an approach for estimating the strength of initiating codons based on the leaky scanning model of translation initiation. Application of this approach to ribo-seq data illustrates that initiation at non-AUG codons is inefficient compared to initiation at AUG codons. 
In addition, our approach provides a probability of initiation score for each start site that allows its strength of initiation to be evaluated.
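The frame-transition method described above rests on triplet periodicity: the 5' ends of ribosome footprints fall predominantly into one position modulo 3, so a change in the dominant phase along a transcript is consistent with translation entering a second reading frame. A toy Python sketch of that idea, ignoring the offset calibration and statistical testing a real ribo-seq pipeline would require (function names are illustrative):

```python
from collections import Counter

def dominant_frame(read_positions):
    """Given 5'-end positions of ribosome footprints relative to the annotated
    start codon, return the reading frame (0, 1, or 2) most reads support,
    exploiting the sub-codon triplet periodicity of ribosome profiling data."""
    phases = Counter(pos % 3 for pos in read_positions)
    return max(phases, key=phases.get)

def detect_frame_transition(read_positions, boundary):
    """Compare the dominant frame before and after a candidate position; a
    difference suggests a frameshift or dual-frame translation at that point."""
    before = [p for p in read_positions if p < boundary]
    after = [p for p in read_positions if p >= boundary]
    return dominant_frame(before), dominant_frame(after)
```

For instance, footprints at positions 0, 3, 6, 9, 12 followed by 16, 19, 22 would report a shift from frame 0 to frame 1 across a boundary at 15.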


The Li-ion battery has for a number of years been a key enabler of an ever-increasing number of modern consumer devices, while in recent years it has also been sought to power a range of emerging electric and hybrid electric vehicles. Due to their importance and popularity, a number of characteristics of Li-ion batteries have been subjected to intense work aimed at radical improvement. Although electrode material selection intrinsically defines characteristics like maximum capacity or voltage, engineering of the electrode structure may yield significant improvements to the lifetime performance of the battery, which would not be available if the material was used in its bulk form. The body of work presented in this thesis describes the relationship between the structure of electrochemically active materials and the course of the electrochemical processes occurring within the electrode. Chapter one describes the motivation behind the research presented herein. Chapter two serves to highlight a number of key advancements which have been made and detailed in the literature over recent years, pertaining to the use of nanostructured materials in Li-ion technology. Chapter three details methods and techniques applied in developing the body of work presented in this thesis. Chapter four details structural, molecular and electrochemical characteristics of tin oxide nanoparticle based electrodes, with particular emphasis on the relationship between the size distribution and the electrode performance. Chapter five presents findings of a structural, electrochemical and optical study of indium oxide nanoparticles grown on silicon by molecular beam epitaxy. In chapter six, tin oxide inverted opal electrodes are investigated, examining the electrochemical performance of the electrodes under varying rates of change of potential.
Chapter 7 presents the overall conclusions drawn from the results presented in this thesis, coupled with an indication of potential future work which may be explored further.


Recent popularity of the IEEE 802.11b Wireless Local Area Networks (WLANs) in a host of current-day applications has instigated a suite of research challenges. The 802.11b WLANs are highly reliable and widespread. In this work, we study the temporal characteristics of RSSI in a real working environment by conducting a controlled set of experiments. Our results indicate that significant variability in the RSSI can occur over time. Some of this variability in the RSSI may be due to systematic causes while the other component can be expressed as stochastic noise. We present an analysis of both these aspects of RSSI. We treat the moving average of the RSSI as the systematic causes and the noise as the stochastic causes. We give a reasonable estimate for the moving average to compute the noise accurately. We attribute changes in the environment such as the movement of people, and the noise associated with the NIC circuitry and the network access point, as causes for this variability. We find that the results of our analysis are of primary importance to active research areas such as location determination of users in a WLAN. The techniques used in some of the RF-based WLAN location determination systems exploit the characteristics of the RSSI presented in this work to infer the location of a wireless client in a WLAN. Thus our results form the building blocks for other uses of the exact characteristics of the RSSI.
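The decomposition described above, with the moving average as the systematic component and the residual as stochastic noise, can be sketched in a few lines of Python (the trailing-window estimate and the window size are illustrative choices, not the paper's exact estimator):

```python
def decompose_rssi(samples, window=5):
    """Split an RSSI time series (dBm) into a slowly varying systematic
    component (a trailing moving average) and a residual treated as noise."""
    trend = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)      # trailing window, truncated at the start
        chunk = samples[lo:i + 1]
        trend.append(sum(chunk) / len(chunk))
    noise = [s - t for s, t in zip(samples, trend)]
    return trend, noise
```

RF-based location systems would then match the systematic component against a signal-strength map, while the noise statistics bound how precisely a client's position can be inferred.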


The concept of focal therapy is rapidly evolving and gaining popularity from both physician and patient perspectives. We review the rationale, candidate selection, and results of the first clinical studies of focal cryoablation for selected patients with low volume and low- to low-moderate-risk features of prostate cancer as an alternative to whole-gland treatment. In spite of improved understanding of the tumor biology of early stage disease, we currently have limited tools to select appropriate patients with low- to low-moderate risk unifocal or unilateral prostate cancer who may be amenable to focal therapy. From a technical standpoint, a number of ablative treatment options for focal therapy are available, with cryoablation having the most clinical experience. Recently, several reports have been published from single and multi-institutional studies that discuss focal therapy as a reasonable balance between cancer control and quality-of-life outcomes. Retrospective pathologic data from large prostatectomy series, however, do not clearly reveal valid and reproducible criteria to select appropriate candidates for focal cryoablation because of the complexity of tumorigenesis in early stage disease. At this time, a more feasible option remains hemiablation of the prostate with reasonable certainty about the absence of clinically significant cancer lesion(s) on the contralateral side of the prostate based on three-dimensional transperineal prostate biopsy mapping studies. Minimally invasive, parenchyma-preserving cryoablation can be considered as a potentially feasible option in the treatment armamentarium of early stage, localized prostate cancer in appropriately selected candidates. There is a need to further test this technique in randomized, multicenter clinical trials.