1000 results for "especialized media"
Abstract:
This paper presents a tool called Gismo (Generator of Internet Streaming Media Objects and workloads). Gismo enables the specification of a number of streaming media access characteristics, including object popularity, temporal correlation of requests, seasonal access patterns, user session durations, user interactivity times, and variable bit-rate (VBR) self-similarity and marginal distributions. The embodiment of these characteristics in Gismo enables the generation of realistic and scalable request streams for use in the benchmarking and comparative evaluation of Internet streaming media delivery techniques. To demonstrate the usefulness of Gismo, we present a case study that shows the importance of various workload characteristics in determining the effectiveness of proxy caching and server patching techniques in reducing bandwidth requirements.
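The popularity characteristic mentioned above is typically Zipf-like. As a minimal sketch (not Gismo's actual implementation; the function names and the exponent value are illustrative assumptions), a synthetic request stream with rank-based popularity can be generated like this:

```python
import random

def zipf_weights(n_objects, alpha=0.8):
    # Zipf-like popularity: the rank-i object gets weight 1 / i^alpha
    return [1.0 / (i ** alpha) for i in range(1, n_objects + 1)]

def generate_requests(n_requests, n_objects, alpha=0.8, seed=42):
    # Draw object IDs with probability proportional to their Zipf weight
    rng = random.Random(seed)
    weights = zipf_weights(n_objects, alpha)
    return rng.choices(range(n_objects), weights=weights, k=n_requests)

reqs = generate_requests(10000, 100)
```

A real workload generator would layer temporal correlation and session behavior on top of this popularity draw; the sketch shows only the marginal popularity step.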
Abstract:
Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture (and associated cache management algorithms) that turn edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. Also, our experiments show that partial caching is particularly effective when bandwidth variability is not very high.
Abstract:
We present what we believe to be the first thorough characterization of live streaming media content delivered over the Internet. Our characterization of over five million requests spanning a 28-day period is done at three increasingly granular levels, corresponding to clients, sessions, and transfers. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different for live versus stored objects. Access to stored objects is user driven, whereas access to live objects is object driven. This reversal of active/passive roles of users and objects leads to interesting dualities. For instance, our analysis underscores a Zipf-like profile for user interest in a given object, which is to be contrasted with the classic Zipf-like popularity of objects for a given user. Also, our analysis reveals that transfer lengths are highly variable and that this variability is due to the stickiness of clients to a particular live object, as opposed to structural (size) properties of objects. Second, based on observations we make, we conjecture that the particular characteristics of live media access workloads are likely to be highly dependent on the nature of the live content being accessed. In our study, this dependence is clear from the strong temporal correlations we observed in the traces, which we attribute to the synchronizing impact of live content on access characteristics. Based on our analyses, we present a model for live media workload generation that incorporates many of our findings, and which we implement in GISMO [19].
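The highly variable, stickiness-driven transfer lengths described above are often modeled with a heavy-tailed distribution. As a hedged sketch (a Pareto draw is a common stand-in for such variability, not necessarily the distribution fitted in the paper; the parameter values are illustrative assumptions):

```python
import random

def sticky_transfer_lengths(n_clients, alpha=1.2, min_seconds=30.0, seed=7):
    """Heavy-tailed (Pareto) session lengths: a hypothetical stand-in for
    client 'stickiness' to a live object."""
    rng = random.Random(seed)
    # Pareto draw: min * (1/U)^(1/alpha), with U uniform in (0, 1]
    return [min_seconds * (1.0 / (1.0 - rng.random())) ** (1.0 / alpha)
            for _ in range(n_clients)]

lengths = sticky_transfer_lengths(1000)
```

A small tail index (here alpha = 1.2) yields the "highly variable" lengths the abstract reports: most sessions are short, but a few clients stay attached to the live object for a very long time.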
Abstract:
We consider the problem of delivering popular streaming media to a large number of asynchronous clients. We propose and evaluate a cache-and-relay end-system multicast approach, whereby a client joining a multicast session caches the stream, and if needed, relays that stream to neighboring clients which may join the multicast session at some later time. This cache-and-relay approach is fully distributed, scalable, and efficient in terms of network link cost. In this paper we analytically derive bounds on the network link cost of our cache-and-relay approach, and we evaluate its performance under assumptions of limited client bandwidth and limited client cache capacity. When client bandwidth is limited, we show that although finding an optimal solution is NP-hard, a simple greedy algorithm performs surprisingly well in that it incurs network link costs that are very close to a theoretical lower bound. When client cache capacity is limited, we show that our cache-and-relay approach can still significantly reduce network link cost. We have evaluated our cache-and-relay approach using simulations over large, synthetic random networks, power-law degree networks, and small-world networks, as well as over large real router-level Internet maps.
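A minimal sketch of the cache-and-relay idea, under simplifying assumptions not in the abstract (unbounded client bandwidth, one relay hop, and the observation that the immediately preceding client has the smallest lag): a joining client relays from its predecessor when the arrival gap fits in that predecessor's cache window, and otherwise opens a fresh stream from the origin server. This is an illustration of the mechanism, not the paper's greedy algorithm.

```python
def greedy_cache_and_relay(arrival_times, cache_seconds):
    """Assign each arriving client a relay parent (index of an earlier
    client) or None, meaning a new stream from the origin server."""
    times = sorted(arrival_times)
    if not times:
        return [], 0
    parents = [None]  # the first client always streams from the server
    for i in range(1, len(times)):
        gap = times[i] - times[i - 1]
        # Relay from the previous client iff its cache still holds the
        # stream prefix this client needs.
        parents.append(i - 1 if gap <= cache_seconds else None)
    server_streams = sum(1 for p in parents if p is None)
    return parents, server_streams

parents, streams = greedy_cache_and_relay([0, 5, 8, 40, 43], 10)
```

Here clients arriving at 5 and 8 relay from their predecessors, while the client at 40 is too far behind and must start a second server stream, illustrating how network link cost tracks the number of origin streams.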
Abstract:
Overlay networks have become popular in recent times for content distribution and end-system multicasting of media streams. In the latter case, the motivation is based on the lack of widespread deployment of IP multicast and the ability to perform end-host processing. However, constructing routes between various end-hosts, so that data can be streamed from content publishers to many thousands of subscribers, each having their own QoS constraints, is still a challenging problem. First, any routes between end-hosts using trees built on top of overlay networks can increase stress on the underlying physical network, due to multiple instances of the same data traversing a given physical link. Second, because overlay routes between end-hosts may traverse physical network links more than once, they increase the end-to-end latency compared to IP-level routing. Third, algorithms for constructing efficient, large-scale trees that reduce link stress and latency are typically more complex. This paper therefore compares various methods to construct multicast trees between end-systems that vary in terms of implementation costs and their ability to support per-subscriber QoS constraints. We describe several algorithms that make trade-offs between algorithmic complexity, physical link stress and latency. While no algorithm is best in all three respects, we show how it is possible to efficiently build trees for several thousand subscribers with latencies within a factor of two of the optimal, and link stresses comparable to, or better than, existing technologies.
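The link-stress metric above counts how many overlay hops traverse each physical link. A small sketch, assuming a toy star topology and a caller-supplied unicast routing function (both illustrative, not from the paper):

```python
from collections import Counter

def link_stress(overlay_edges, physical_path):
    """Count, per physical link, how many overlay edges traverse it.
    `physical_path(u, v)` must return the physical links of the unicast
    route between end-hosts u and v."""
    counts = Counter()
    for u, v in overlay_edges:
        counts.update(physical_path(u, v))
    return counts

# Toy topology: hosts A, B, C all attach to a single router R.
routes = {
    ("A", "B"): [("A", "R"), ("R", "B")],
    ("A", "C"): [("A", "R"), ("R", "C")],
}
stress = link_stress([("A", "B"), ("A", "C")], lambda u, v: routes[(u, v)])
```

With overlay tree edges A-B and A-C, the access link A-R carries the same data twice (stress 2), which is exactly the duplication the abstract identifies as the first problem with end-system multicast.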
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smart phones, tablets, laptops) we see an ever-increasing demand for high-definition videos online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide us with graceful changes in video quality, all while respecting our viewing satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media, while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how to best utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. 
We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as providing new applications for existing techniques such as Interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
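The interleaving idea referenced above can be sketched in a few lines (an illustrative toy, not the thesis's packetisation scheme): by striping the frames of a GOP round-robin across packets, a single lost packet costs scattered frames rather than a contiguous run, and every packet carries an equal share of the GOP's data.

```python
def interleave_gop(frames, n_packets):
    """Round-robin interleaving: frame i of the GOP goes to packet
    i % n_packets, so one lost packet drops every n_packets-th frame
    instead of a burst of consecutive frames."""
    packets = [[] for _ in range(n_packets)]
    for i, frame in enumerate(frames):
        packets[i % n_packets].append(frame)
    return packets

packets = interleave_gop(list(range(8)), 2)
```

Losing packet 0 here removes frames 0, 2, 4 and 6; the surviving odd frames are evenly spaced, which is far easier to conceal than losing frames 0-3 in a block.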
Abstract:
This dissertation examines the role of communications technology in social change. It examines secondary data on contemporary China, arguing that many interpretations of events in China are unsuitable at best and at worst conceptually damage our understanding of social change in China. This is especially the case in media studies under the ‘democratic framework’. It proposes that there is an alternative framework for studying the media and social change. This alternative conceptual framework is termed a zone of interpretative development, offering a means by which to discuss events that take place in a mediated environment. Taking a theoretical foundation in the philosophy of Mikhail Bakhtin, this dissertation develops a platform with which to understand communication technology from an anthropological perspective. Three media events from contemporary China are examined. The first examines the Democracy Wall event and the implications of using a public sphere framework. The second case examines the phenomenon of the Grass Mud Horse, a symbol that has gained popular purchase as a humorous expression of political dissatisfaction, and develops the problems seen in the first case but with some solutions. Using a modification of Lev Vygotskiĭ’s zone of proximal development, this symbol is understood as an expression of the collective recognition of a shared experience. The third example, from the popular TV talent show contests in China, introduces further expressions of collective experience. With the evidence from these media events in contemporary China, this dissertation proposes that we can understand certain modes of communication as occurring in a zone of interpretative development. This proposed anthropological feature of social change via communication and technology can fruitfully describe meaning-formation in society via the expression and recognition of shared experiences.
Abstract:
Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that strikes a compromise between bandwidth efficiency and error resiliency without sacrificing user-perceived quality. Additionally, we propose a scheme that combines network coding and SDC to further improve error resiliency. SDC yields upwards of 25% bandwidth savings over MDC. Additionally, our scheme features higher quality for longer durations even at high packet loss rates.
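To illustrate why combining descriptions with network coding improves error resiliency, here is a deliberately toy XOR sketch (not the paper's actual SDC or coding scheme): transmitting two descriptions plus their XOR lets a receiver recover both descriptions from any two of the three packets.

```python
def xor_bytes(a, b):
    # Bytewise XOR of two equal-length payloads
    return bytes(x ^ y for x, y in zip(a, b))

def encode_descriptions(d1, d2):
    """Send both descriptions plus an XOR parity packet."""
    return [d1, d2, xor_bytes(d1, d2)]

def recover(received):
    """received maps packet index (0, 1, 2) to payload; any two of the
    three packets suffice to reconstruct (d1, d2)."""
    if 0 in received and 1 in received:
        return received[0], received[1]
    if 0 in received:  # lost d2: d2 = d1 XOR parity
        return received[0], xor_bytes(received[0], received[2])
    return xor_bytes(received[1], received[2]), received[1]
```

The cost is 50% redundancy for tolerance of any single packet loss; the paper's scheme pursues the same loss-tolerance goal with better bandwidth efficiency than replicating descriptions outright.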
Abstract:
Galvanic replacement is a versatile synthetic strategy for the synthesis of alloy and hollow nanostructures. The structural evolution of single crystalline and multiply twinned nanoparticles <20 nm in diameter and capped with oleylamine is systematically studied. Changes in chemical composition are dependent on the size and crystallinity of the parent nanoparticle. The effects of reaction temperature and rate of precursor addition are also investigated. Galvanic replacement of single crystal spherical and truncated cubic nanoparticles follows the same mechanism to form hollow octahedral nanoparticles, a mechanism which is not observed for galvanic replacement of Ag templates in aqueous systems. Multiply twinned nanoparticles can form nanorings or solid alloys by manipulating the reaction conditions. Oleylamine-capped Ag nanoparticles are highly adaptable templates to synthesize a range of hollow and alloy nanostructures with tuneable localised surface plasmon resonance.
Abstract:
The transport of uncoated silver nanoparticles (AgNPs) in a porous medium composed of silica glass beads modified with a partial coverage of iron oxide (hematite) was studied and compared to that in a porous medium composed of unmodified glass beads (GB). At a pH lower than the point of zero charge (PZC) of hematite, the affinity of AgNPs for a hematite-coated glass bead (FeO-GB) surface was significantly higher than that for an uncoated surface. There was a linear correlation between the average nanoparticle affinity for media composed of mixtures of FeO-GB and GB collectors and the relative composition of those media, as quantified by the attachment efficiency over a range of mixing mass ratios of the two collector types, so that the average AgNP affinity for these media is readily predicted from the mass- (or surface-) weighted average of the affinities for each surface type. X-ray photoelectron spectroscopy (XPS) was used to quantify the composition of the collector surface as a basis for predicting the affinity between the nanoparticles and a heterogeneous collector surface. A correlation was also observed between the local abundances of AgNPs and FeO on the collector surface.
Abstract:
We have recently developed a spectral re-shaping technique to simultaneously measure nonlinear refractive index and nonlinear absorption. In this technique, the information about the nonlinearities is encoded in the frequency domain, rather than in the spatial domain as in the conventional Z-scan method. Here we show that frequency encoding is much more robust with respect to scattering. We compare spectral re-shaping and Z-scan measurements in a highly scattering environment and show that reliable spectral re-shaping measurements can be performed even in a regime that precludes standard Z-scans.
Abstract:
We introduce a class of optical media based on adiabatically modulated, dielectric-only, and potentially extremely low-loss, photonic crystals (PC). The media we describe represent a generalization of the eikonal limit of transformation optics (TO). The basis of the concept is the possibility to fit some equal frequency surfaces of certain PCs with elliptic surfaces, allowing them to mimic the dispersion relation of light in anisotropic effective media. PC cloaks and other TO devices operating at visible wavelengths can be constructed from optically transparent substances such as glasses, whose attenuation coefficient can be as small as 10 dB/km, suggesting the TO design methodology can be applied to the development of optical devices not limited by the losses inherent to metal-based, passive metamaterials.
Abstract:
We introduce an approach to the design of three-dimensional transformation optical (TO) media based on a generalized quasiconformal mapping approach. The generalized quasiconformal TO (QCTO) approach enables the design of media that can, in principle, be broadband and low loss, while controlling the propagation of waves with arbitrary angles of incidence and polarization. We illustrate the method in the design of a three-dimensional carpet ground plane cloak and of a flattened Luneburg lens. Ray-trace studies provide a confirmation of the performance of the QCTO media, while also revealing the limited performance of index-only versions of these devices.
Abstract:
We introduce a new concept for the manipulation of fluid flow around three-dimensional bodies. Inspired by transformation optics, the concept is based on a mathematical idea of coordinate transformations and physically implemented with anisotropic porous media permeable to the flow of fluids. In two situations, namely an impermeable object placed either in a free-flowing fluid or in a fluid-filled porous medium, we show that the object can be coated with an inhomogeneous, anisotropic permeable medium, such as to preserve the flow that would have existed in the absence of the object. The proposed fluid flow cloak eliminates downstream wake and compensates viscous drag, hinting at the possibility of novel propulsion techniques.