975 results for Internet delivery
Abstract:
We present what we believe to be the first thorough characterization of live streaming media content delivered over the Internet. Our characterization of over five million requests spanning a 28-day period is done at three increasingly granular levels, corresponding to clients, sessions, and transfers. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different for live versus stored objects. Access to stored objects is user driven, whereas access to live objects is object driven. This reversal of active/passive roles of users and objects leads to interesting dualities. For instance, our analysis underscores a Zipf-like profile for user interest in a given object, which is to be contrasted to the classic Zipf-like popularity of objects for a given user. Also, our analysis reveals that transfer lengths are highly variable and that this variability is due to the stickiness of clients to a particular live object, as opposed to structural (size) properties of objects. Second, based on observations we make, we conjecture that the particular characteristics of live media access workloads are likely to be highly dependent on the nature of the live content being accessed. In our study, this dependence is clear from the strong temporal correlations we observed in the traces, which we attribute to the synchronizing impact of live content on access characteristics. Based on our analyses, we present a model for live media workload generation that incorporates many of our findings, and which we implement in GISMO [19].
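To make the Zipf-like duality above concrete, the sketch below samples which client issues each request to a single live object, with per-client interest following a Zipf-like law. It is an illustrative workload-generation fragment under assumed parameters (number of clients, exponent alpha), not GISMO's actual implementation; the function names are invented for this example.

```python
import random
from collections import Counter

def zipf_weights(n, alpha=1.0):
    """Zipf-like weights over ranks 1..n: weight(rank) proportional to 1 / rank**alpha."""
    raw = [1.0 / (rank ** alpha) for rank in range(1, n + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def sample_requests(num_requests, num_clients, alpha=1.0, rng=random):
    """Pick which client issues each request to a single live object.

    Per-client interest follows a Zipf-like law (the duality noted in the
    abstract: for a given live object, user interest -- not object
    popularity -- is Zipf-like). Parameters are illustrative assumptions.
    """
    weights = zipf_weights(num_clients, alpha)
    return rng.choices(range(num_clients), weights=weights, k=num_requests)

if __name__ == "__main__":
    requests = sample_requests(num_requests=10_000, num_clients=500)
    # A handful of highly interested clients should dominate the request stream.
    print(Counter(requests).most_common(5))
```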
Abstract:
One relatively unexplored question about the Internet's physical structure concerns the geographical location of its components: routers, links and autonomous systems (ASes). We study this question using two large inventories of Internet routers and links, collected by different methods and about two years apart. We first map each router to its geographical location using two different state-of-the-art tools. We then study the relationship between router location and population density; between geographic distance and link density; and between the size and geographic extent of ASes. Our findings are consistent across the two datasets and both mapping methods. First, as expected, router density per person varies widely over different economic regions; however, in economically homogeneous regions, router density shows a strong superlinear relationship to population density. Second, the probability that two routers are directly connected is strongly dependent on distance; our data is consistent with a model in which a majority (up to 75-95%) of link formation is based on geographical distance (as in the Waxman topology generation method). Finally, we find that ASes show high variability in geographic size, which is correlated with other measures of AS size (degree and number of interfaces). Among small to medium ASes, ASes show wide variability in their geographic dispersal; however, all ASes exceeding a certain threshold in size are maximally dispersed geographically. These findings have many implications for the next generation of topology generators, which we envisage as producing router-level graphs annotated with attributes such as link latencies, AS identifiers and geographical locations.
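The distance dependence reported above is modeled "as in the Waxman topology generation method"; the sketch below generates links with the standard Waxman probability P(u, v) = beta * exp(-d(u, v) / (alpha * L)), where L is the maximum pairwise distance. The alpha and beta defaults are illustrative assumptions, not values fitted to the paper's router data.

```python
import math
import random

def waxman_graph(coords, alpha=0.4, beta=0.1, rng=random):
    """Create links with the Waxman probability
    P(u, v) = beta * exp(-d(u, v) / (alpha * L)),
    where L is the maximum pairwise distance among the node coordinates.
    alpha and beta here are illustrative defaults, not fitted values.
    """
    n = len(coords)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    L = max(dist(coords[i], coords[j]) for i in range(n) for j in range(i + 1, n))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < beta * math.exp(-dist(coords[i], coords[j]) / (alpha * L)):
                edges.append((i, j))
    return edges

if __name__ == "__main__":
    points = [(random.random(), random.random()) for _ in range(50)]
    print(len(waxman_graph(points)), "distance-dependent links")
```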
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult, as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
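As a rough illustration of the two-tiered idea (per-stream reservations plus one shared slack that statistically multiplexes peaks), the sketch below reserves each stream's base rate individually and sizes a single shared reservation from a high quantile of the pooled excess demand. The quantile-based sizing and its parameters are assumptions made for this example; the paper's scheme derives its guarantees analytically and uses three per-stream parameters.

```python
import numpy as np

def two_tier_reservation(traces, base_quantile=0.5, slack_quantile=0.999):
    """Illustrative two-tier bandwidth allocation for VBR streams.

    traces: 2-D array where traces[s, t] is the bandwidth demand of stream s
    in time slot t. Each stream gets a per-stream reservation at its
    base_quantile rate; demand above that is pooled and covered by one
    shared reservation sized at the slack_quantile of the aggregate excess,
    so peak rates are statistically multiplexed. The quantile choices are
    assumptions, not the paper's analytically derived bounds.
    """
    traces = np.asarray(traces, dtype=float)
    per_stream = np.quantile(traces, base_quantile, axis=1)       # one rate per stream
    excess = np.clip(traces - per_stream[:, None], 0.0, None)     # demand above reservation
    shared = np.quantile(excess.sum(axis=0), slack_quantile)      # single shared slack
    return per_stream, shared

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.gamma(shape=2.0, scale=1.5, size=(20, 10_000))     # synthetic VBR demands
    per_stream, shared = two_tier_reservation(demo)
    print("sum of per-stream reservations:", round(per_stream.sum(), 1))
    print("shared slack:", round(shared, 1))
    print("peak of aggregate demand:", round(demo.sum(axis=0).max(), 1))
```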
Abstract:
This position paper outlines a new network architecture, i.e., a style of construction that identifies the objects and how they relate. We do not specify particular protocol implementations or specific interfaces and policies. After all, it should be possible to change protocols in an architecture without changing the architecture. Rather we outline the repeating patterns and structures, and how the proposed model would cope with the challenges faced by today's Internet (and that of the future). Our new architecture is based on the following principle: Application processes communicate via a distributed inter-process communication (IPC) facility. The application processes that make up this facility provide a protocol that implements an IPC mechanism, and a protocol for managing distributed IPC (routing, security and other management tasks). Existing implementation strategies, algorithms, and protocols can be cast and used within our proposed new structure.
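A toy sketch of the stated principle follows: an IPC facility offers flow allocation and data transfer to the processes above it, and is itself realized over a lower IPC facility, so the same structure repeats at every layer. The class and method names here are invented for illustration; they are not the paper's (or RINA's) actual interfaces.

```python
class IPCFacility:
    """Toy model of the recursive-IPC principle: a layer offers flow
    allocation and data transfer to the processes above it, and is itself
    implemented over a lower IPC facility. Names are illustrative only.
    """

    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower          # the IPC facility this layer is built on
        self.flows = {}             # flow_id -> destination application name

    def allocate_flow(self, dest_app):
        flow_id = len(self.flows)
        self.flows[flow_id] = dest_app
        return flow_id

    def send(self, flow_id, payload):
        dest = self.flows[flow_id]
        if self.lower is None:
            print(f"[{self.name}] deliver to {dest}: {payload!r}")
        else:
            # Recursion: this layer's transfer is just data sent over a flow
            # allocated from the facility below it.
            lower_flow = self.lower.allocate_flow(f"{self.name}@{dest}")
            self.lower.send(lower_flow, payload)

# Two stacked layers: an application-level facility built over a lower one.
backbone = IPCFacility("lower-layer")
app_layer = IPCFacility("upper-layer", lower=backbone)
flow = app_layer.allocate_flow("server-process")
app_layer.send(flow, b"hello")
```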
Abstract:
Overlay networks add and enhance functionality for end-users without requiring modifications to the Internet's core mechanisms, and have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols, utilizing knowledge they have about the network when selecting neighbors so as to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning, respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework, the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distances (a greedy sketch of this best-response computation follows this abstract) and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST's neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications, where each of the n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to its performance on top of existing overlays.
In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol that migrates servers based on end-user demand and only local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
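The abstract formalizes a user's best-response wiring as a k-median problem on asymmetric distances. The sketch below (referenced above) shows a simple greedy approximation of that best response: a node picks k neighbors so as to minimize the weighted cost of reaching every destination through its best chosen link. The greedy loop, cost model and all names are illustrative assumptions, not the thesis' exact solution method.

```python
def best_response_wiring(i, candidates, dests, k, link_cost, overlay_dist, weight=None):
    """Greedy sketch of a selfish node's best-response wiring.

    Node i picks k neighbors from `candidates` to minimize the weighted sum,
    over destinations in `dests`, of the cost of reaching each destination
    through its best chosen neighbor:
        cost(i -> j) = min over chosen v of link_cost(i, v) + overlay_dist(v, j)
    This is the k-median flavor described in the abstract; the greedy loop is
    a standard approximation, not the thesis' exact solution method.
    """
    weight = weight or {j: 1.0 for j in dests}

    def total_cost(chosen):
        if not chosen:
            return float("inf")
        return sum(weight[j] * min(link_cost(i, v) + overlay_dist(v, j) for v in chosen)
                   for j in dests)

    chosen = set()
    while len(chosen) < k:
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: total_cost(chosen | {c}))
        chosen.add(best)
    return chosen, total_cost(chosen)

if __name__ == "__main__":
    nodes = ["a", "b", "c", "d"]
    link = lambda u, v: 1.0                       # toy: unit direct-link latency
    dist = lambda v, j: 0.0 if v == j else 2.0    # toy: current overlay distances
    print(best_response_wiring("me", nodes, nodes, k=2, link_cost=link, overlay_dist=dist))
```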
Abstract:
The TCP/IP architecture was originally designed without taking security measures into consideration. Over the years, it has been subjected to many attacks, which has led to many patches to counter them. Our investigations into the fundamental principles of networking have shown that carefully following an abstract model of Interprocess Communication (IPC) addresses many problems [1]. Guided by this IPC principle, we designed a clean-slate Recursive INternet Architecture (RINA) [2]. In this paper, we show how, without the aid of cryptographic techniques, the bare-bones architecture of RINA can resist most of the security attacks faced by TCP/IP. We also show how hard it is for an intruder to compromise RINA. Then, we show how RINA inherently supports security policies in a more manageable, on-demand basis, in contrast to the rigid, piecemeal approach of TCP/IP.
Abstract:
Recent empirical studies have shown that Internet topologies exhibit power laws of the form y = x^α for the following relationships: (P1) outdegree of a node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies. These factors are (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely diameter, average path length and clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parameterized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model; (T3) Transit-Stub topologies generated using the GT-ITM tool; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for representativeness. We also find that factors F1 and F2 are the key contributors in our study to the resemblance of our generated topologies to the Internet.
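Factors F1 (preferential connectivity) and F2 (incremental growth) can be combined in a few lines; the sketch below grows a topology that way and estimates the rank-outdegree exponent of power law P1 from a log-log fit. It is a minimal Barabasi-Albert-style illustration in the spirit of BRITE, not BRITE's actual code, and the parameter values are assumptions.

```python
import math
import random
from collections import defaultdict

def grow_topology(n, m=2, rng=random):
    """Incremental growth (F2) with preferential connectivity (F1): each new
    node attaches m links to existing nodes with probability roughly
    proportional to their current degree (plus one, so isolated seed nodes
    can still be chosen). Illustrative only, not BRITE's implementation.
    """
    degree = defaultdict(int)
    for u in range(m, n):
        pool = [v for v in range(u) for _ in range(degree[v] + 1)]
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(pool))
        for v in chosen:
            degree[u] += 1
            degree[v] += 1
    return degree

def rank_degree_slope(degree):
    """Least-squares slope of log(degree) versus log(rank), i.e. power law P1."""
    degs = sorted(degree.values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(degs) + 1)]
    ys = [math.log(d) for d in degs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

if __name__ == "__main__":
    deg = grow_topology(1000)
    print("rank-outdegree exponent estimate:", round(rank_degree_slope(deg), 2))
```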
Abstract:
In this position paper, we review basic control strategies that machines acting as "traffic controllers" could deploy in order to improve the management of Internet services. Such traffic controllers are likely to spur the widespread emergence of advanced applications, which have (so far) been hindered by the inability of the networking infrastructure to deliver on the promise of Quality-of-Service (QoS).
Abstract:
The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random-Early-Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
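A toy sketch of the class-based isolation described above: the access router marks a flow's packets as short until the flow's cumulative bytes cross a threshold, after which they are marked long, and the core router keeps only one queue (i.e., per-class state) for each class. The threshold value and class names are illustrative assumptions, not parameters from the paper.

```python
from collections import defaultdict, deque

SHORT, LONG = "short", "long"

class AccessClassifier:
    """Access-router marking: a flow is treated as short until its cumulative
    bytes cross `threshold`, after which its packets are marked long.
    The threshold value is an illustrative assumption.
    """
    def __init__(self, threshold=50_000):
        self.threshold = threshold
        self.bytes_seen = defaultdict(int)

    def mark(self, flow_id, pkt_len):
        self.bytes_seen[flow_id] += pkt_len
        return SHORT if self.bytes_seen[flow_id] <= self.threshold else LONG

class CoreRouter:
    """Core router keeps only per-class state: one FIFO queue per class."""
    def __init__(self):
        self.queues = {SHORT: deque(), LONG: deque()}

    def enqueue(self, pkt, cls):
        self.queues[cls].append(pkt)

# Toy usage: one short transfer and one long transfer share an access router.
classifier, core = AccessClassifier(), CoreRouter()
for seq in range(100):
    core.enqueue(("web", seq), classifier.mark("web-flow", pkt_len=400))     # stays short
    core.enqueue(("bulk", seq), classifier.mark("bulk-flow", pkt_len=1500))  # becomes long
print({cls: len(q) for cls, q in core.queues.items()})
```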
Abstract:
We propose a new technique for efficiently delivering popular content from information repositories with bounded file caches. Our strategy relies on the use of fast erasure codes (a.k.a. forward error correcting codes) to generate encodings of popular files, of which only a small sliding window is cached at any time instant, even to satisfy an unbounded number of asynchronous requests for the file. Our approach capitalizes on concurrency to maximize sharing of state across different request threads while minimizing cache memory utilization. Additional reduction in resource requirements arises from providing for a lightweight version of the network stack. In this paper, we describe the design and implementation of our Cyclone server as a Linux kernel subsystem.
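A toy sketch of the sliding-window idea follows: only a small window of encoded blocks is cached, every concurrent request is served from that same shared window, and the window advances independently of how many requests are in flight. The "encoding" shown is a placeholder tag rather than a real erasure code, and nothing here reflects Cyclone's actual kernel implementation.

```python
import itertools
import zlib

class SlidingWindowServer:
    """Toy model of serving a popular file from a small sliding window of
    encoded blocks. Every active request, whenever it arrives, is handed the
    blocks currently in the window; with a real erasure (FEC) code, a client
    that collects enough distinct blocks can reconstruct the file. The
    'encoding' here is a checksum-tagged copy, not an actual FEC code.
    """
    def __init__(self, file_blocks, window_size=4):
        self.file_blocks = file_blocks
        self.block_ids = itertools.count()
        self.window = [self._encode() for _ in range(window_size)]

    def _encode(self):
        i = next(self.block_ids)
        src = self.file_blocks[i % len(self.file_blocks)]
        return (i, zlib.crc32(src), src)          # stand-in for an encoded symbol

    def tick(self):
        """Advance the window: drop the oldest encoded block, add a fresh one."""
        self.window.pop(0)
        self.window.append(self._encode())

    def serve(self):
        """Every concurrent request shares the same cached window."""
        return list(self.window)

if __name__ == "__main__":
    server = SlidingWindowServer([b"block-%d" % i for i in range(10)], window_size=4)
    client_a = server.serve()   # two asynchronous requests share identical state,
    client_b = server.serve()   # so cache use is independent of the request count
    server.tick()
    print([i for i, _, _ in client_a], [i for i, _, _ in server.serve()])
```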
Abstract:
The measurement of users’ attitudes towards, and confidence with, using the Internet is an important yet poorly researched topic. Previous research has encountered issues that serve to obfuscate rather than clarify. Such issues include a lack of distinction between the terms ‘attitude’ and ‘self-efficacy’, the absence of a theoretical framework to measure each concept, and failure to follow well-established techniques for the measurement of both attitude and self-efficacy. Thus, the primary aim of this research was to develop two statistically reliable scales which independently measure attitudes towards the Internet and Internet self-efficacy. This research addressed the outlined issues by applying appropriate theoretical frameworks to each of the constructs under investigation. First, the well-known three-component (affect, behaviour, cognition) model of attitudes was applied to previous Internet attitude statements. The scale was distributed to four large samples of participants. Exploratory factor analyses revealed four underlying factors in the scale: Internet Affect, Internet Exhilaration, Social Benefit of the Internet and Internet Detriment. The final scale contains 21 items, demonstrates excellent reliability and achieved excellent model fit in the confirmatory factor analysis. Second, Bandura’s (1997) model of self-efficacy was followed to develop a reliable measure of Internet self-efficacy. Data collected as part of this research suggest that there are ten main activities which individuals can carry out on the Internet. Preliminary analyses suggested that self-efficacy is confounded with previous experience; thus, individuals were invited to indicate how frequently they performed the listed Internet tasks in addition to rating their feelings of self-efficacy for each task. The scale was distributed to a sample of 841 participants. Results from the analyses suggest that the more frequently an individual performs an activity on the Internet, the higher their self-efficacy score for that activity. This suggests that frequency of use ought to be taken into account in individuals’ self-efficacy scores to obtain a ‘true’ self-efficacy score for each individual. Thus, a formula was devised to incorporate participants’ previous experience of Internet tasks into their Internet self-efficacy scores. This formula was then used to obtain an overall Internet self-efficacy score for each participant. Following the development of both scales, gender and age differences were explored in Internet attitudes and Internet self-efficacy scores. The analyses indicated that there were no gender differences in Internet attitude or Internet self-efficacy scores. However, age group differences were identified for both attitudes and self-efficacy. Individuals aged 25-34 years achieved the highest scores on both the Internet attitude and Internet self-efficacy measures. Internet attitude and self-efficacy scores tended to decrease with age, with older participants achieving lower scores on both measures than younger participants. It was also found that the more exposure individuals had to the Internet, the higher their Internet attitude and Internet self-efficacy scores. Examination of the relationship between attitude and self-efficacy found a significant positive relationship between the two measures, suggesting that the two constructs are related. Implications of these findings and directions for future research are outlined in detail in the Discussion section of this thesis.
Abstract:
The objective of this thesis was to improve the dissolution rate of the poorly water-soluble drug fenofibrate by processing it with a high surface area carrier, mesoporous silica. The properties of the resulting drug-silica composite were studied in terms of drug distribution within the silica matrix, solid state and release properties. Prior to commencing any experimental work, the properties of unprocessed mesoporous silica and fenofibrate were characterised (chapter 3); this allowed comparison with the processed samples studied in later chapters. Fenofibrate was a highly stable, crystalline drug that did not adsorb moisture, even under long-term accelerated storage conditions. It maintained its crystallinity even after SC-CO2 processing. Its dissolution rate was limited and dependent on the characteristics of the particular in vitro media studied. Mesoporous silica had a large surface area and mesopore volume and readily picked up moisture when stored under long-term accelerated storage conditions (75% RH, 40 °C). It maintained its mesopore character after SC-CO2 processing. A variety of methods were employed to process fenofibrate with mesoporous silica, including physical mixing, the melt method, solvent impregnation and novel methods such as liquid and supercritical carbon dioxide (SC-CO2) (chapter 4). It was found to be important to break down the fenofibrate particulate structure to a molecular state to enable drug molecules to enter the silica mesopores. While all processing methods led to some improvement in fenofibrate release properties, the impregnation, liquid and SC-CO2 methods produced the most rapid release rates. SC-CO2 processing was further studied with a view to optimising the processing parameters to achieve the highest drug-loading efficiency possible (chapter 5). In this thesis, it was found that SC-CO2 processing pressure had a bearing on drug-loading efficiency. Neither pressure, duration nor depressurisation rate affected drug solid state or release properties. The amount of drug that could be successfully loaded onto the mesoporous silica was also investigated at different ratios of drug mass to silica surface area under constant SC-CO2 conditions; as the drug-silica ratio increased, the drug-loading efficiency decreased, while there was no effect on drug solid state or release properties. The influence of the number of drug-loading steps was investigated (chapter 6) with a view to increasing the drug-loading efficiency. This multiple-step approach did not yield an increase in drug-loading efficiency compared to the single-step approach. It was also an objective in this chapter to understand how much drug could be loaded into the silica mesopores; a method based on the known volume of the mesopores and the true density of the drug was investigated. However, this approach led to serious repercussions in terms of the subsequent solid state nature of the drug and its release performance; there was significant drug crystallinity and reduced release extent. The impact of in vitro release media on fenofibrate release was also studied (chapter 6). Here it was seen that media containing HCl led to reduced drug release over time compared to equivalent media not containing HCl. The key findings of this thesis are discussed in chapter 7 and include: 1. Drug-silica processing method strongly influenced drug distribution within the silica matrix, drug solid state and release. 2. The silica surface area and mesopore volume also influenced how much drug could be loaded.
It was shown that SC-CO2 processing variables such as processing pressure (13.79-41.37 MPa), duration (4-24 h) and depressurisation rate (rapid or controlled) did not influence the drug distribution within the SBA-15 matrix, the drug's solid state form or its release. Possible avenues of research to be considered going forward include the development and application of high-resolution imaging techniques to visualise drug molecules within the silica mesopores. Also, the issues surrounding SBA-15 usage in a pharmaceutical manufacturing environment should be addressed.
Abstract:
Depression is among the leading causes of disability worldwide. Currently available antidepressant drugs have unsatisfactory efficacy, with up to 60% of depressed patients failing to respond adequately to treatment. Emerging evidence has highlighted a potential role for the efflux transporter P-glycoprotein (P-gp), expressed at the blood-brain barrier (BBB), in the aetiology of treatment-resistant depression. In this thesis, the potential of P-gp inhibition as a strategy to enhance the brain distribution and pharmacodynamic effects of antidepressant drugs was investigated. Pharmacokinetic studies demonstrated that administration of the P-gp inhibitors verapamil or cyclosporin A (CsA) enhanced the BBB transport of the antidepressants imipramine and escitalopram in vivo. Furthermore, both imipramine and escitalopram were identified as transported substrates of human P-gp in vitro. Contrastingly, human P-gp exerted no effect on the transport of four other antidepressants (amitriptyline, duloxetine, fluoxetine and mirtazapine) in vitro. Pharmacodynamic studies revealed that pre-treatment with verapamil augmented the behavioural effects of escitalopram in the tail suspension test (TST) of antidepressant-like activity in mice. Moreover, pre-treatment with CsA exacerbated the behavioural manifestation of an escitalopram-induced mouse model of serotonin syndrome, a serious adverse reaction associated with serotonergic drugs. This finding highlights the potential for unwanted side-effects which may occur due to increasing brain levels of antidepressants by P-gp inhibition, although further studies are needed to fully elucidate the mechanism(s) at play. Taken together, the research outlined in this thesis indicates that P-gp may restrict brain concentrations of escitalopram and imipramine in patients. Moreover, we show that increasing the brain distribution of an antidepressant by P-gp inhibition can result in an augmentation of antidepressant-like activity in vivo. These findings raise the possibility that P-gp inhibition may represent a potentially beneficial strategy to augment antidepressant treatment in clinical practice. Further studies are now warranted to evaluate the safety and efficacy of this approach.
Abstract:
Huntington’s Disease (HD) is a rare autosomal dominant neurodegenerative disease caused by the expression of a mutant Huntingtin (muHTT) protein. Therefore, preventing the expression of muHTT by harnessing the specificity of the RNA interference (RNAi) pathway is a key research avenue for developing novel therapies for HD. However, the biggest caveat in the RNAi approach is the delivery of short interfering RNAs (siRNAs) to neurons, which are notoriously difficult to transfect. Indeed, despite the great advances in the field of nanotechnology, there remains a great need to develop more effective and less toxic carriers for siRNA delivery to the Central Nervous System (CNS). Thus, the aim of this thesis was to investigate the utility of modified amphiphilic β-cyclodextrins (CDs), oligosaccharide-based molecules, as non-viral vectors for siRNA delivery for HD. Modified CDs were able to bind and complex siRNAs, forming nanoparticles capable of delivering siRNAs to ST14A-HTT120Q cells and to human HD fibroblasts, and reducing the expression of the HTT gene in these in vitro models of HD. Moreover, direct administration of CD.siRNA nanoparticles into the R6/2 mouse brain resulted in significant HTT gene expression knockdown and selective alleviation of rotarod motor deficits in this mouse model of HD. In contrast to widely used transfection reagents, CD.siRNA nanoparticles induced only limited cytotoxic and neuroinflammatory responses in multiple brain-derived cell lines, and also in vivo after single direct injections into the mouse brain. In addition, we have described a PEGylation-based formulation approach to further stabilise CD.siRNA nanoparticles and progress towards a systemic delivery nanosystem. The resulting PEGylated CD.siRNA nanoparticles showed increased stability in physiological salt conditions and, to some extent, reduced protein-induced aggregation. Taken together, the work outlined in this thesis identifies modified CDs as effective, safe and versatile siRNA delivery systems that hold great potential for the treatment of CNS disorders such as HD.
Abstract:
Drug delivery systems influence the various processes of drug release, absorption, distribution and elimination. Conventional delivery methods administer drugs through the mouth, the skin, transmucosal areas, inhalation or injection. However, one of the current challenges is the lack of effective and targeted oral drug administration. The development of sophisticated strategies, such as micro- and nanotechnology, that can integrate the design and synthesis of drug delivery systems in a one-step, scalable process is fundamental to overcoming the limitations of conventional processing techniques. Thus, the objective of this thesis is to evaluate novel microencapsulation technologies for the production of size-specific and target-specific drug-loaded particles. The first part of this thesis describes the utility of PDMS and silicon microfluidic flow focusing devices (MFFDs) to produce PLGA-based microparticles. The formation of uniform droplets was dependent on the surface of the PDMS remaining hydrophilic. However, the durability of PDMS was limited to no more than 1 hour before wetting of the microchannel walls with dichloromethane and subsequent swelling occurred. Critically, silicon MFFDs showed very good solvent compatibility and were sufficiently robust to withstand elevated fluid flow rates. Silicon MFFDs allowed experiments to run over days with continuous use and re-use of the device, and yielded a narrower microparticle size distribution relative to conventional production techniques. The second part of this thesis demonstrates an alternative microencapsulation technology, SmPill® minispheres, to target CsA delivery to the colon. Characterisation of CsA release in vitro and in vivo was performed. By modulating the ethylcellulose:pectin coating thickness, release of CsA in vivo was more effectively controlled compared to current commercial CsA formulations and demonstrated a linear in vitro-in vivo relationship. Coated minispheres were shown to limit CsA release in the upper small intestine and to enhance localised CsA delivery to the colon.