966 results for BANDWIDTH HUNGRY APPLICATIONS
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links ever closer to their maximum capacity. The research community has clearly identified the approaching nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats in Chapter 3 of this thesis was undertaken. The work investigates proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode fibre and hollow-core photonic band-gap fibre. Chapter 4 examines ways to experimentally quantify the nonlinearities in few-mode fibre and assesses the potential benefits and limitations of such fibres. It presents detailed experiments measuring the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, compares the results to numerical models, and includes capacity-limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines the development and characterisation of several of these. The world's first wavelength-division-multiplexed (WDM) subsystem implemented directly at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001. [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
Abstract:
The evolution of cellular systems towards the third generation (3G), or IMT-2000, appears to be converging on W-CDMA as the standard access method, as ETSI decisions have shown. However, questions remain about the capacity improvements and overall suitability of this access method. One concern for developers and researchers planning the third generation is the extended use of the Internet and increasingly bandwidth-hungry applications. This work shows the performance of a W-CDMA system simulated on a PC using coverage maps generated with DC-Cell, a GIS-based planning tool developed by the Technical University of Valencia, Spain. The maps are exported to MATLAB and used in the model. The simulated system consists of several microcells in a downtown area. We analyse the interference from users in the same cell and in adjacent cells and its effect on the system, assuming perfect control in each cell. The traffic generated by the simulator comprises voice and data. This model allows us to work with more accurate coverage and is a good approach to analysing the multiple-access interference (MAI) problem in microcellular systems with irregular coverage. Finally, we compare the results obtained with the performance of a similar system using TDMA.
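The capacity question raised above is often framed with the classic single-service CDMA uplink "pole capacity" expression. The sketch below is purely illustrative and not taken from the study; the W-CDMA-like parameter values (chip rate, voice bit rate, Eb/N0, activity and other-cell interference factors) are assumptions.

```python
# Illustrative single-service CDMA uplink pole-capacity estimate.
# All parameter values are assumptions for this sketch.

def cdma_pole_capacity(chip_rate_hz, bit_rate_bps, ebno_db,
                       voice_activity=0.5, other_cell_factor=0.55):
    """Approximate users per cell: PG / (Eb/N0 * activity * (1 + f))."""
    processing_gain = chip_rate_hz / bit_rate_bps
    ebno_linear = 10 ** (ebno_db / 10)
    return processing_gain / (ebno_linear * voice_activity * (1 + other_cell_factor))

# Assumed W-CDMA-like numbers: 3.84 Mcps chip rate, 12.2 kbps voice, 5 dB Eb/N0.
users = cdma_pole_capacity(3.84e6, 12.2e3, 5.0)
print(f"approx. pole capacity: {users:.0f} users/cell")
```

A map-driven simulation such as the one described refines this kind of estimate by accounting for irregular coverage and cell-to-cell interference rather than a single averaged other-cell factor.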
Abstract:
The widespread availability of, and demand for, multimedia-capable devices and multimedia content have fueled the need for high-speed wireless connectivity beyond the capabilities of existing commercial standards. While fiber-optic links can provide multigigabit-per-second data rates, their cost and deployment are often prohibitive in many applications. Wireless links, by contrast, can provide a cost-effective fiber alternative to interconnect outlying areas beyond the reach of the fiber rollout. With this in mind, the ever-increasing demand for multi-gigabit wireless applications, fiber-segment replacement, mobile backhauling and aggregation, and last-mile coverage has posed enormous challenges for next-generation wireless technologies. In particular, the unbalanced temporal and geographical variations of spectrum usage, along with the rapid proliferation of bandwidth-hungry mobile applications such as streaming of high-definition television (HDTV) and ultra-high-definition video (UHDV), have inspired millimeter-wave (mmWave) communications as a promising technology to alleviate the pressure on scarce spectrum resources for fifth-generation (5G) mobile broadband.
Abstract:
In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates a predicted-error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. In the quantization phase, we use a modified SPIHT algorithm to reduce memory requirements; the memory constraint plays a vital role in wireless and bandwidth-limited applications. A single reusable list is used instead of the three continuously growing linked lists of standard SPIHT, and the method is error resilient. Performance is measured in terms of PSNR and memory requirements. The algorithm shows good compression performance and significant savings in memory. (C) 2006 Elsevier B.V. All rights reserved.
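The four-neighbour prediction step can be sketched as follows. This is a minimal open-loop version assuming a simple mean-of-neighbours predictor; the paper's hierarchical-tree predictor is more elaborate, and the function name is illustrative.

```python
import numpy as np

def prediction_error(image):
    """Predict each interior pixel as the integer mean of its four
    neighbours (N, S, W, E) and return the prediction-error image.
    Border pixels get a zero prediction, so their error is the raw value."""
    img = image.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) // 4
    return img - pred

# On a smooth image the residual is near zero, which is what makes the
# error image cheaper to code with a wavelet/sub-band coder.
flat = np.full((6, 6), 50, dtype=np.uint8)
err = prediction_error(flat)
```

For a perfectly flat region the interior residual is exactly zero; real images produce a small, strongly decorrelated residual with much lower entropy than the raw pixels.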
Abstract:
The recent remarkable growth in bandwidth of both wired optical and wireless access networks supports a burst of new high-bandwidth Internet applications such as peer-to-peer file sharing, cloud storage, online gaming, and video streaming. Within this scenario, the convergence of fixed and wireless access networks offers significant opportunities for network operators to satisfy user demands while simultaneously reducing the cost of implementing and running separate wireless and wired networks. The integration of wired and wireless networks can be accomplished in several scenarios and at several levels. In this thesis we focus on converged radio-over-fiber architectures, particularly on two application scenarios: converged optical 60 GHz wireless networks, and wireless overlay backhauling over bidirectional colorless wavelength division multiplexing passive optical networks (WDM-PONs). In the first application scenario, optical 60 GHz signal generation using external modulation of an optical carrier by means of lithium niobate (LiNbO3) Mach-Zehnder modulators (MZM) is considered. The performance of different optical modulation techniques robust against fiber dispersion is assessed and dispersion mitigation strategies are identified. The study is extended to 60 GHz carriers digitally modulated with data and to systems employing subcarrier multiplexed (SCM) mm-wave channels. In the second application scenario, the performance of WDM-PONs employing reflective semiconductor optical amplifiers (RSOAs), transmitting an overlay orthogonal frequency-division multiplexing (OFDM) wireless signal, is assessed analytically and experimentally, with the relevant system impairments being identified. It is demonstrated that the intermodulation due to the beating of the baseband signal and the wireless signal at the receiver can seriously impair the wireless channel.
Performance degradation of the wireless channel caused by the RSOA gain modulation owing to the downstream baseband data is also assessed, and system design guidelines are provided.
Abstract:
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long-latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long-latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. It proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread, either statically or dynamically, and is used by the thread scheduler to decide which threads to load in the contexts and which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects and show how thread prioritization can both maintain high processor utilization and limit increases in critical-path runtime caused by multithreading. The model also shows that, to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests.
We show how simple hardware can prioritize the running of threads in the multiple contexts and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. It can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical-path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
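The context-loading and switch-on-stall decisions described above can be sketched in software as follows. The class, its fields, and the static integer priorities are illustrative assumptions for the sketch, not the thesis's hardware design.

```python
import heapq

class ContextScheduler:
    """Toy priority-directed scheduler for a multiple-context processor:
    load the highest-priority ready threads into hardware context slots,
    and on a long-latency stall switch to the highest-priority other context."""

    def __init__(self, num_contexts):
        self.num_contexts = num_contexts
        self.ready = []     # max-heap via negated priority: (-priority, thread_id)
        self.loaded = {}    # context slot -> (priority, thread_id)

    def submit(self, thread_id, priority):
        heapq.heappush(self.ready, (-priority, thread_id))

    def load_contexts(self):
        """Fill free context slots with the highest-priority ready threads."""
        for slot in range(self.num_contexts):
            if slot not in self.loaded and self.ready:
                neg_prio, tid = heapq.heappop(self.ready)
                self.loaded[slot] = (-neg_prio, tid)

    def switch_on_stall(self, stalled_slot):
        """Pick the highest-priority loaded context other than the stalled one."""
        candidates = [(prio, slot) for slot, (prio, tid) in self.loaded.items()
                      if slot != stalled_slot]
        if not candidates:
            return stalled_slot
        return max(candidates)[1]

sched = ContextScheduler(num_contexts=2)
sched.submit("critical", priority=10)
sched.submit("background", priority=1)
sched.load_contexts()
```

The thesis's point that prioritization must extend to memory requests would correspond to tagging each outstanding request with its thread's priority as well, not just the choice of running context.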
Abstract:
The bandwidth requirements of the Internet are increasing every day, and newer, ever more bandwidth-thirsty applications are emerging on the horizon. Wavelength division multiplexing (WDM) is the next step towards leveraging the capabilities of the optical fiber, especially for wide-area backbone networks. The ability to switch a signal at intermediate nodes in a WDM network based on its wavelength is known as wavelength routing. One of the greatest advantages of wavelength-routed WDM is the ability to create a virtual topology different from the physical topology of the underlying network. This virtual topology can be reconfigured when necessary to improve performance. We discuss previous work on virtual topology design and also discuss and propose different reconfiguration algorithms applicable under different scenarios.
Abstract:
This paper is concerned with long-term (20+ years) forecasting of broadband traffic in next-generation networks. Such a long-term approach requires going beyond extrapolations of past traffic data, in the face of high uncertainty about future developments and the fact that, in 20 years, current network technologies and architectures will be obsolete. Thus, "order of magnitude" upper bounds on upstream and downstream traffic are deemed good enough for such long-term forecasting. These bounds can be obtained by evaluating the limits of human sight and assuming that these limits will be reached by future services or, alternatively, by considering the content transferred by bandwidth-demanding applications such as those using embedded interactive 3D video streaming. The traffic upper bounds are a good indication of peak values and, consequently, of future network capacity demands. Furthermore, the main drivers of traffic growth, including multimedia as well as non-multimedia applications, are identified. New disruptive applications and services are explored that can make good use of the large bandwidth provided by next-generation networks. The results can be used to identify monetization opportunities for future services and to map potential revenues for network operators. © 2014 The Author(s).
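As a rough illustration of the "limits of human sight" style of bound, the following back-of-envelope calculation uses entirely assumed figures (photoreceptor count, perceived frame rate, bit depth, compression ratio); none of them are the paper's own parameters, and only the order of magnitude is meaningful.

```python
# Back-of-envelope "order of magnitude" bound on per-user downstream
# traffic, derived from assumed limits of human vision.
photoreceptors  = 1.2e8   # assumed rods+cones per eye
frames_per_s    = 60      # assumed perceivable temporal resolution
bits_per_sample = 10      # assumed intensity depth per photoreceptor

raw_bps = 2 * photoreceptors * frames_per_s * bits_per_sample  # both eyes
compression = 100         # assumed achievable video compression ratio
bound_bps = raw_bps / compression

print(f"rough per-user bound: {bound_bps / 1e9:.1f} Gbit/s")
```

Multiplying a per-user bound of this kind by concurrency assumptions is what turns it into the "order of magnitude" network-capacity ceilings the paper discusses.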
Abstract:
Continuous progress in optical communication technology, and the corresponding increase in data rates in core fiber communication systems, is driven by the ever-growing capacity demand of constantly emerging bandwidth-hungry services such as cloud computing and ultra-high-definition video streaming. This demand is pushing the required capacity of optical communication lines close to the theoretical limit of a standard single-mode fiber, which is imposed by Kerr nonlinearity [1-4]. In recent years, there have been extensive efforts to mitigate the detrimental impact of fiber nonlinearity on signal transmission through various compensation techniques. However, many challenges remain in applying these methods, because the majority of technologies used in inherently nonlinear fiber communication systems were originally developed for linear communication channels. The application of "linear techniques" in fiber communication systems is therefore inevitably limited by the nonlinear properties of the fiber medium. The quest for the optimal design of nonlinear transmission channels, the development of nonlinear communication techniques, and the use of nonlinearity in a "constructive" way have occupied researchers for quite a long time.
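Kerr-limited propagation of this kind is commonly modelled with the scalar nonlinear Schrödinger equation, solved numerically by the split-step Fourier method. Below is a minimal first-order sketch with illustrative fibre parameters (standard-SMF-like dispersion and nonlinearity, loss ignored); it is not the analysis of any particular paper above.

```python
import numpy as np

# Scalar NLSE (loss ignored):  dA/dz = -i*(beta2/2)*d2A/dt2 + i*gamma*|A|^2*A
# First-order split-step: dispersion in the frequency domain, Kerr phase in time.

def split_step(A, dt, dz, steps, beta2=-21.7e-27, gamma=1.3e-3):
    """Propagate complex field A over steps*dz metres.
    beta2 in s^2/m (~ -21.7 ps^2/km), gamma in 1/(W*m)."""
    omega = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)
    lin = np.exp(0.5j * beta2 * omega**2 * dz)        # dispersion operator
    for _ in range(steps):
        A = np.fft.ifft(np.fft.fft(A) * lin)          # linear half
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # Kerr phase rotation
    return A

# Gaussian pulse on a 1 ps grid, propagated over 10 km in 100 m steps.
t = np.arange(-512, 512) * 1e-12
A0 = np.exp(-t**2 / (2 * (10e-12)**2)).astype(complex)
A1 = split_step(A0, 1e-12, 100.0, 100)
```

Both sub-steps are unitary, so pulse energy is conserved to numerical precision; what changes is the pulse shape and, crucially, the intensity-dependent phase that limits capacity in real systems.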
Abstract:
This paper looks at the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions that would usually result from lost packets, greatly reducing the burden on TCP of dealing with losses. There is, however, a side-effect to using FEC as a countermeasure to packet loss: an additional requirement for bandwidth. For applications such as real-time video conferencing, delay must be kept to a minimum and retransmissions are certainly not desirable; a balance must therefore be struck between additional bandwidth and delay due to retransmissions. Our results show that, when packet loss occurs, the throughput of data can be significantly improved using a combination of FEC and TCP, compared to relying solely on TCP retransmissions. Furthermore, a case study applies the result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
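Packet-level FEC of this kind can be illustrated with the simplest possible code: a single XOR parity packet per block of equal-length data packets (the paper's actual code is not specified here). Any one lost packet in the block can then be rebuilt from the survivors without a retransmission.

```python
from functools import reduce

def xor_packets(packets):
    """Byte-wise XOR of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def make_parity(data_packets):
    """Parity packet = XOR of all data packets in the block."""
    return xor_packets(data_packets)

def recover(block, parity, lost_index):
    """Rebuild the one missing packet: XOR of the survivors and the parity."""
    survivors = [p for i, p in enumerate(block) if i != lost_index]
    return xor_packets(survivors + [parity])

data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(data)
rebuilt = recover(data, parity, lost_index=2)  # simulate losing packet 2
```

The bandwidth trade-off in the paper is visible directly here: one extra parity packet per block of four is a fixed 25% overhead, paid whether or not a loss actually occurs.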
Abstract:
Bandwidth allocation for multimedia applications in cases of network congestion and failure poses technical challenges due to the bursty and delay-sensitive nature of these applications. The growth of multimedia services on the Internet and the development of agent technology have led us to investigate new techniques for resolving bandwidth issues in multimedia communications. Agent technology is emerging as a flexible and promising solution for network resource management and QoS (Quality of Service) control in a distributed environment. In this paper, we propose an adaptive bandwidth allocation scheme for multimedia applications that deploys static and mobile agents. It is a run-time allocation scheme that functions at the network nodes. The technique adaptively finds an alternate patch-up route for every congested/failed link and reallocates the bandwidth of the affected multimedia applications. The designed method has been tested (analytically and in simulation) with various network sizes and conditions, and the results are presented to assess the performance and effectiveness of the approach. This work also demonstrates some of the benefits of agent-based schemes in providing flexibility, adaptability, software reusability, and maintainability. (C) 2004 Elsevier Inc. All rights reserved.
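The patch-up idea can be sketched on a simple graph model, assuming residual link bandwidths are known at each node: when a link fails, search for an alternate path between its endpoints whose links all have enough spare capacity for the displaced demand. The agent machinery of the paper is abstracted away, and all names below are illustrative.

```python
from collections import deque

def patchup_route(links, u, v, demand):
    """BFS for a u->v path using only links with residual capacity >= demand.
    links: {(a, b): residual_bandwidth}. Returns node list or None."""
    graph = {}
    for (a, b), cap in links.items():
        if cap >= demand:                 # keep only links that can carry it
            graph.setdefault(a, []).append(b)
            graph.setdefault(b, []).append(a)
    parent = {u: None}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:                     # reconstruct path back to u
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Residual bandwidth (Mbit/s) per link; suppose link (A, B) has just failed.
residual = {("A", "C"): 8, ("C", "B"): 6, ("A", "D"): 2, ("D", "B"): 9}
route = patchup_route(residual, "A", "B", demand=5)  # -> ['A', 'C', 'B']
```

A run-time scheme would then decrement the residual capacities along the chosen path and repeat per affected flow; mobile agents in the paper distribute exactly this kind of search and reservation across the nodes.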
Abstract:
The integration of different wireless networks, such as GSM and WiFi, into a two-tier hybrid wireless network is increasingly popular and economical. Efficient bandwidth management, call admission control strategies and mobility management are important issues in supporting multiple types of services with different bandwidth requirements in hybrid networks. In particular, bandwidth is a critical commodity because of the type of transactions supported by these hybrid networks, which may have varying bandwidth and time requirements. In this paper, we consider such a problem in a hybrid wireless network installed in a superstore environment and design a bandwidth management algorithm based on priority-level classification of the incoming transactions. Our scheme uses a downlink transaction scheduling algorithm, which decides how to schedule outgoing transactions based on their priority level while making efficient use of the available bandwidth. The transaction scheduling algorithm is used to maximize the number of transaction executions. The proposed scheme is simulated in a superstore environment with multiple rooms. The performance results show that the proposed scheme can considerably improve bandwidth utilization by reducing transaction blocking and accommodating more essential transactions at the peak time of the business.
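A priority-driven downlink scheduler of the kind described can be sketched as a greedy admission loop under a bandwidth budget; the transaction classes, names, and numbers below are illustrative, not the paper's.

```python
import heapq

def schedule(transactions, bandwidth):
    """Greedily admit the highest-priority transactions that fit the budget.

    transactions: list of (priority, bw_needed, name); higher priority served first.
    Returns (admitted_names, blocked_names)."""
    heap = [(-prio, bw, name) for prio, bw, name in transactions]
    heapq.heapify(heap)                      # max-heap via negated priority
    admitted, blocked = [], []
    while heap:
        _, bw, name = heapq.heappop(heap)
        if bw <= bandwidth:
            bandwidth -= bw                  # reserve bandwidth for this transaction
            admitted.append(name)
        else:
            blocked.append(name)             # would overcommit the downlink
    return admitted, blocked

txs = [(3, 4, "payment"), (1, 6, "ad-push"), (2, 3, "inventory")]
result = schedule(txs, bandwidth=8)  # -> (['payment', 'inventory'], ['ad-push'])
```

Blocked transactions would, in a running system, be re-queued for the next scheduling round rather than dropped, which is how low-priority traffic is deferred at peak time instead of starving essential transactions.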
A dynamic bandwidth allocation scheme for interactive multimedia applications over cellular networks
Abstract:
Cellular networks have played a key role in providing high bandwidth to users by employing traditional methods such as guaranteed QoS based on application category at the radio-access-stratum level for various QoS classes. Moreover, newer multimode phones (e.g., phones that support LTE (Long Term Evolution), UMTS, GSM, and WiFi all at once) can use multiple access methods simultaneously and can perform seamless handover among the supported technologies to remain connected. With various types of applications (including interactive ones) running on these devices, each with different QoS requirements, this work discusses how QoS (measured in terms of user-level response time, delay, jitter and transmission rate) can be achieved for interactive applications using dynamic bandwidth allocation schemes over cellular networks. We propose a dynamic bandwidth allocation scheme for interactive multimedia applications with and without background load in cellular networks. The system has been simulated for many application types running in parallel, and it has been observed that if interactive applications are to be provided with decent response times, the admission-control policy must be periodically overhauled, taking into account the history and criticality of applications. The results demonstrate that interactive applications can be provided with good service if the policy database at admission control is reviewed dynamically.