957 results for bandwidth AMSC: 11T71,94A15,14G50
Abstract:
Recent years have witnessed rapid growth in the demand for streaming video over the Internet and mobile networks, exposing challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet losses. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but at an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, Adaptive Layer Distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models were able to provide consistently high-quality viewing, with lower transmission cost relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.
Abstract:
Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless networks. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens the available bandwidth and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD) as a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all datagrams, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience to the highest-quality layers. Our experimental results show that ALD improves the perceived quality and also reduces the bandwidth demand by up to 36% in comparison with the well-known Multiple Description Coding (MDC) technique.
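To make the layer-distribution principle concrete, the following minimal Python sketch stripes a critical base layer across every packet so that a single loss removes only a fraction of it. The packet count, layer sizes, and round-robin scheme are illustrative assumptions, not the authors' packetiser.

```python
# Minimal sketch of layer distribution (ALD-style), NOT the authors' packetiser.
# Assumption: a frame is coded as a critical base layer plus enhancement data;
# conventional packing puts the base layer in the first packet(s), while
# distribution stripes it across all packets so one loss costs only 1/n of it.

def distribute(base: bytes, enh: bytes, n_packets: int):
    """Stripe base-layer bytes round-robin over all packets, then fill with enhancement data."""
    packets = [bytearray() for _ in range(n_packets)]
    for i, b in enumerate(base):           # critical data: one byte per packet in turn
        packets[i % n_packets].append(b)
    for i, b in enumerate(enh):            # enhancement data: same round-robin fill
        packets[i % n_packets].append(b)
    return [bytes(p) for p in packets]

base = bytes(range(16))                    # toy 16-byte base layer
enh = bytes(64)                            # toy enhancement payload
pkts = distribute(base, enh, n_packets=4)
# Losing any one packet now removes only ~1/4 of the base layer,
# instead of all of it when the base layer travels in a single packet.
lost = 1
survivors = [p for i, p in enumerate(pkts) if i != lost]
print(f"{len(survivors)} of {len(pkts)} packets survive; base-layer loss is 1/{len(pkts)}")
```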
Abstract:
We propose and experimentally validate a first-principles-based model for the nonlinear piezoelectric response of an electroelastic energy harvester. The analysis herein highlights the importance of modeling inherent piezoelectric nonlinearities that are not limited to higher-order elastic effects but also include nonlinear coupling to a power-harvesting circuit. Furthermore, a nonlinear damping mechanism is shown to accurately restrict the amplitude and bandwidth of the frequency response. The linear piezoelectric modeling framework widely accepted for theoretical investigations is demonstrated to be a weak presumption for near-resonant excitation amplitudes as low as 0.5 g in a prefabricated bimorph whose oscillation amplitudes remain geometrically linear for the full range of experimental tests performed (never exceeding 0.25% of the cantilever overhang length). Nonlinear coefficients are identified via a nonlinear least-squares optimization algorithm that utilizes an approximate analytic solution obtained by the method of harmonic balance. For lead zirconate titanate (PZT-5H), we obtained a fourth-order elastic tensor component of c₁₁₁₁ᵖ = −3.6673 × 10¹⁷ N/m² and a fourth-order electroelastic tensor value of e₃₁₁₁ = 1.7212 × 10⁸ m/V. © 2010 American Institute of Physics.
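The identification step can be illustrated with a generic Duffing-type oscillator: harmonic balance yields an algebraic amplitude equation, and a nonlinear least-squares fit recovers the nonlinear coefficient from frequency-response data. This is a hedged sketch under assumed parameter values, not the paper's electroelastic bimorph model.

```python
# Toy sketch of the identification idea: fit a nonlinear stiffness coefficient by
# matching a harmonic-balance amplitude prediction to (here, synthetic) frequency-
# response data. Generic Duffing-type oscillator, NOT the paper's electroelastic
# model; w0, zeta, alpha, F are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq, least_squares

def hb_amplitude(w, w0, zeta, alpha, F):
    """Steady-state amplitude from first-harmonic balance:
    ((w0^2 - w^2) a + (3/4) alpha a^3)^2 + (2 zeta w0 w a)^2 = F^2."""
    g = lambda a: ((w0**2 - w**2) * a + 0.75 * alpha * a**3) ** 2 \
                  + (2 * zeta * w0 * w * a) ** 2 - F**2
    return brentq(g, 0.0, 1e3)            # g(0) = -F^2 < 0, so a bracketing root exists

w0, zeta, F = 2 * np.pi * 50.0, 0.02, 20.0
alpha_true = -5.0e7                        # softening nonlinearity (assumed)
freqs = np.linspace(0.9 * w0, 1.1 * w0, 41)
data = np.array([hb_amplitude(w, w0, zeta, alpha_true, F) for w in freqs])
data *= 1 + 1e-3 * np.random.default_rng(0).standard_normal(data.size)  # 0.1% noise

resid = lambda p: np.array([hb_amplitude(w, w0, zeta, p[0], F) for w in freqs]) - data
fit = least_squares(resid, x0=[-1.0e7])
print("identified alpha:", fit.x[0])       # approximately recovers alpha_true
```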
Abstract:
We explore the possibilities of obtaining compression in video through modified sampling strategies using multichannel imaging systems. The redundancies in video streams are exploited through compressive sampling schemes to achieve low-power, low-complexity video sensors. The sampling strategies, as well as the associated reconstruction algorithms, are discussed. These compressive sampling schemes could be implemented in the focal-plane readout hardware, resulting in a drastic reduction in data bandwidth and computational complexity.
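As a hedged illustration of the reconstruction side, the sketch below recovers a signal that is sparse in the DCT basis from random sub-samples using iterative shrinkage-thresholding (ISTA); the basis, sizes, and sampling pattern are assumptions, not the paper's focal-plane scheme.

```python
# Generic compressive-sampling sketch (NOT the paper's focal-plane scheme):
# recover a signal sparse in the DCT basis from random sub-samples via ISTA.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n, m = 256, 96                             # signal length, number of measurements
x_coef = np.zeros(n)
x_coef[[3, 17, 45]] = [1.0, -0.7, 0.5]     # sparse DCT coefficients (assumed)
x = idct(x_coef, norm="ortho")             # the underlying signal
rows = rng.choice(n, size=m, replace=False)
y = x[rows]                                # compressive measurements: random samples

# ISTA on  min ||y - S idct(c)||^2 + lam ||c||_1,  S = row-subsampling operator
c = np.zeros(n)
lam, step = 0.01, 1.0                      # step <= 1 is safe: ||S idct|| <= 1
for _ in range(300):
    r = np.zeros(n)
    r[rows] = y - idct(c, norm="ortho")[rows]                    # residual lifted to R^n
    c = c + step * dct(r, norm="ortho")                          # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)     # soft threshold
print("relative recovery error:",
      np.linalg.norm(idct(c, norm="ortho") - x) / np.linalg.norm(x))
```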
Abstract:
We apply the transformation optical technique to modify or improve conventional refractive and gradient-index optical imaging devices. In particular, when it is known that a detector will terminate the paths of rays over some surface, more freedom is available in the transformation approach, since the wave behavior over a large portion of the domain becomes unimportant. For the analyzed configurations, quasi-conformal and conformal coordinate transformations can be used, leading to simplified constitutive parameter distributions that, in some cases, can be realized with an isotropic index; index-only media can be low-loss and have broad bandwidth. We apply a coordinate transformation to flatten a Maxwell fish-eye lens, forming a near-perfect relay lens, and also to flatten the focal surface associated with a conventional refractive lens, such that the system exhibits an ultra-wide field of view with reduced aberration.
Abstract:
For Part I, see ibid., vol. 3, p. 195 (1987). The authors have shown that the resolution of a confocal scanning microscope can be improved by recording the full image at each scanning point and then inverting the data. These analyses were restricted to the case of coherent illumination. Here they investigate, along similar lines, the incoherent case, which applies to fluorescence microscopy. They investigate the one-dimensional and two-dimensional square-pupil problems and prove, by means of numerical computations of the singular-value spectrum and of the impulse response function, that for a signal-to-noise ratio of, say, 10%, it is possible to obtain an improvement of approximately 60% in resolution with respect to the conventional incoherent-light confocal microscope. This represents a working bandwidth of 3.5 times the Rayleigh limit.
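The inversion idea can be sketched with a truncated-SVD deconvolution, discarding singular components below the noise floor (about 10%, matching the abstract's figure); the blur kernel and sizes are assumptions, not the paper's confocal operator.

```python
# Sketch of the inversion idea: deconvolve a blurred 1-D "image" by truncated SVD,
# keeping only singular values above the noise floor. The Gaussian kernel and
# sizes are illustrative assumptions, not the paper's confocal operator.
import numpy as np

n = 128
x = np.linspace(-1, 1, n)
psf = np.exp(-(x / 0.08) ** 2)                              # assumed impulse response
A = np.array([np.roll(psf, k - n // 2) for k in range(n)])  # circulant blur operator

obj = (np.abs(x) < 0.05).astype(float) + (np.abs(x - 0.3) < 0.03)  # two bright bars
rng = np.random.default_rng(0)
data = A @ obj + 0.1 * np.max(A @ obj) * rng.standard_normal(n)    # ~10% noise

U, s, Vt = np.linalg.svd(A)
keep = s > 0.1 * s[0]                                # truncate at the noise level
recon = Vt[keep].T @ ((U[:, keep].T @ data) / s[keep])
print(f"kept {keep.sum()} of {n} singular components")
```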
Abstract:
We study the generation of supercontinua in air-silica microstructured fibers by both nanosecond and femtosecond pulse excitation. In the nanosecond experiments, a 300-nm broadband visible continuum was generated in a 1.8-m length of fiber pumped at 532 nm by 0.8-ns pulses from a frequency-doubled passively Q-switched Nd:YAG microchip laser. At this wavelength, the dominant mode excited under the conditions of continuum generation is the LP11 mode, and, with nanosecond pumping, self-phase modulation is negligible and the continuum generation is dominated by the interplay of Raman and parametric effects. The spectral extent of the continuum is well explained by calculations of the parametric gain curves for four-wave mixing about the zero-dispersion wavelength of the LP11 mode. In the femtosecond experiments, an 800-nm broadband visible and near-infrared continuum was generated in a 1-m length of fiber pumped at 780 nm by 100-fs pulses from a Kerr-lens mode-locked Ti:sapphire laser. At this wavelength, excitation and continuum generation occur in the LP01 mode, and the spectral width of the observed continuum is shown to be consistent with the phase-matching bandwidth for parametric processes calculated for this fiber mode. In addition, numerical simulations based on an extended nonlinear Schrödinger equation were used to model supercontinuum generation in the femtosecond regime, with the simulation results reproducing the major features of the experimentally observed spectrum. © 2002 Optical Society of America.
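The femtosecond-regime modelling can be sketched with a split-step Fourier integration of a basic nonlinear Schrödinger equation; the paper's simulations use an extended NLSE with additional (e.g. Raman and higher-order dispersion) terms, and all parameter values below are assumptions.

```python
# Minimal split-step Fourier sketch of the *basic* NLSE,
#   dA/dz = -i (beta2/2) d^2A/dT^2 + i gamma |A|^2 A,
# not the paper's extended model. All parameter values are assumptions.
import numpy as np

nt, t_span = 2**12, 10e-12                      # grid points, 10 ps time window
t = np.linspace(-t_span / 2, t_span / 2, nt, endpoint=False)
dw = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

beta2 = -1.0e-26                                # s^2/m, anomalous dispersion (assumed)
gamma = 0.1                                     # 1/(W m) (assumed)
A = np.sqrt(1e3) / np.cosh(t / 50e-15)          # 1 kW peak, 50 fs sech pulse (assumed)

dz, nz = 1e-3, 1000                             # 1 mm steps over 1 m of fiber
half_disp = np.exp(0.5j * beta2 * dw**2 * (dz / 2))
for _ in range(nz):
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # half step of dispersion
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)  # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # second half of dispersion

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A))) ** 2
print("spectral width (bins above -30 dB):",
      (spectrum > spectrum.max() * 1e-3).sum())       # crude broadening measure
```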
Abstract:
There has been a recent revival of interest in the register insertion (RI) protocol because of its high throughput and low delay characteristics. Several variants of the protocol have been investigated with a view to integrating voice and data applications on a single local area network (LAN). In this paper the performance of an RI ring with a variable size buffer is studied by modelling and simulation. The chief advantage of the proposed scheme is that an efficient but simple bandwidth allocation scheme is easily incorporated. Approximate formulas are derived for queue lengths, queueing times, and total end-to-end transfer delays. The results are compared with previous analyses and with simulation estimates. The effectiveness of the proposed protocol in ensuring fairness of access under conditions of heavy and unequal loading is investigated.
Abstract:
The performance of the register insertion protocol for mixed voice-data traffic is investigated by simulation. The simulation model incorporates a common insertion buffer for station and ring packets. Bandwidth allocation is achieved by imposing a queue limit at each node. A simple priority scheme is introduced by allowing the queue limit to vary from node to node. This enables voice traffic to be given priority over data. The effect on performance of various operational and design parameters such as ratio of voice to data traffic, queue limit and voice packet size is investigated. Comparisons are made where possible with related work on other protocols proposed for voice-data integration. The main conclusions are: (a) there is a general degradation of performance as the ratio of voice traffic to data traffic increases, (b) substantial improvement in performance can be achieved by restricting the queue length at data nodes and (c) for a given ring utilisation, smaller voice packets result in lower delays for both voice and data traffic.
Abstract:
Traffic policing and bandwidth management strategies at the User Network Interface (UNI) of an ATM network are investigated by simulation. The network is assumed to transport real-time (RT) traffic, such as voice and video, as well as non-real-time (non-RT) data traffic. The proposed policing function, called the super leaky bucket (S-LB), is based on the leaky bucket (LB) but handles the three types of traffic differently according to their quality-of-service (QoS) requirements. Separate queues are maintained for RT and non-RT traffic. They are normally served alternately, but if the number of RT cells exceeds a threshold, the RT queue is given non-pre-emptive priority. Further growth of the RT queue causes low-priority cells to be discarded. Non-RT cells are buffered, and their sources are throttled back during periods of congestion. The simulations clearly demonstrate the advantages of the proposed strategy in providing improved levels of service (delay, jitter and loss) for all types of traffic.
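The described discipline can be sketched directly in code: two queues served alternately, a threshold that grants the RT queue non-pre-emptive priority, a higher threshold that triggers discarding, and a throttle signal to non-RT sources. The threshold values and the exact drop rule are assumptions read off the abstract, not the paper's parameters.

```python
# Sketch of the super leaky bucket (S-LB) service discipline as described above;
# thresholds, capacities, and the throttle signal are illustrative assumptions.
from collections import deque

class SuperLeakyBucket:
    def __init__(self, rt_priority_threshold=8, rt_discard_threshold=16):
        self.rt, self.non_rt = deque(), deque()
        self.t_prio = rt_priority_threshold   # RT backlog that triggers priority
        self.t_drop = rt_discard_threshold    # RT backlog that triggers discarding
        self.turn_rt = True                   # alternation state
        self.throttle = False                 # back-pressure signal to non-RT sources

    def arrive(self, cell, real_time: bool) -> bool:
        if real_time:
            if len(self.rt) >= self.t_drop:   # severe congestion: discard the cell
                return False                  # (treated here as the low-priority drop)
            self.rt.append(cell)
        else:
            self.non_rt.append(cell)          # non-RT cells are buffered, not dropped
        self.throttle = len(self.rt) > self.t_prio  # throttle sources while congested
        return True

    def serve(self):
        """Emit one cell per slot: alternate normally, RT first when over threshold."""
        rt_first = len(self.rt) > self.t_prio or (self.turn_rt and self.rt)
        self.turn_rt = not self.turn_rt
        if rt_first and self.rt:
            return self.rt.popleft()
        if self.non_rt:
            return self.non_rt.popleft()
        return self.rt.popleft() if self.rt else None

slb = SuperLeakyBucket()
for i in range(20):
    slb.arrive(f"rt{i}", real_time=True)      # cells beyond t_drop are discarded
for i in range(5):
    slb.arrive(f"d{i}", real_time=False)
print([slb.serve() for _ in range(10)], "throttle:", slb.throttle)
```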
Abstract:
Light has the greatest information-carrying potential of all the perceivable interconnect media; consequently, optical fiber interconnects rapidly replaced copper in telecommunications networks, providing bandwidth capacity far in excess of their predecessors. As a result, the modern telecommunications infrastructure has evolved into a global mesh of optical networks, with VCSELs (Vertical Cavity Surface Emitting Lasers) dominating the short-link markets, predominantly due to their low cost. This cost benefit of VCSELs has allowed optical interconnects to again replace bandwidth-limited copper as bottlenecks appear on VSR (Very Short Reach) interconnects between co-located equipment inside the CO (Central Office). Spurred by the successful deployment in the VSR domain, and in response to both intra-board backplane applications and inter-board requirements to extend the bandwidth between ICs (Integrated Circuits), current research is migrating optical links toward board-level USR (Ultra Short Reach) interconnects. While reconfigurable Free Space Optical Interconnects (FSOIs) are an option, they are complicated by precise line-of-sight alignment conditions; hence, benefits exist in developing guided-wave technologies, which have been classified into three generations. First- and second-generation technologies are based upon optical fibers and are both capable of providing a suitable platform for intra-board applications. However, to allow component assembly, an integral requirement for inter-board applications, third-generation Opto-Electrical Circuit Boards (OECBs) containing embedded waveguides are desirable. Currently, the greatest challenge preventing the deployment of OECBs is achieving the out-of-plane coupling to SMT devices. With the most suitable low-cost platform being to integrate the optics into the OECB manufacturing process, several research avenues are being explored, although none to date has demonstrated sufficient coupling performance. Once in place, OECB assemblies will generate new reliability issues, such as assembly configurations, manufacturing tolerances, and hermeticity requirements, that will also require development before total off-chip photonic interconnection can truly be achieved.
Abstract:
Optimisation in wireless sensor networks is necessary due to the resource constraints of individual devices, the bandwidth limits of the communication channel, the relatively high probability of sensor failure, and the requirement constraints of the deployed applications in potentially highly volatile environments. This paper presents BioANS, a protocol designed to optimise a wireless sensor network for resource efficiency as well as to meet a requirement common to a whole class of WSN applications, namely that the sensor nodes are dynamically selected on some qualitative basis, for example the quality with which they can provide the required context information. The design of BioANS has been inspired by the communication mechanisms that have evolved in natural systems. The protocol tolerates randomness in its environment, including random message loss, and incorporates a non-deterministic 'delayed-bids' mechanism. A simulation model is used to explore the protocol's performance in a wide range of WSN configurations. Characteristics evaluated include tolerance to sensor node density and message loss, communication efficiency, and negotiation latency.
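One possible reading of a delayed-bids mechanism is sketched below: each node schedules its bid after a delay inversely related to its quality and suppresses it if enough earlier bids are overheard. The delay function, suppression rule, and loss model are assumptions for illustration, not the BioANS specification.

```python
# Illustrative sketch of a 'delayed-bids' style selection round (details assumed,
# not taken from BioANS): higher-quality nodes bid sooner; later nodes that
# overhear enough bids stay silent, saving energy under random message loss.
import random

def delayed_bid_round(qualities, needed=1, max_delay=1.0, loss_rate=0.1):
    """Return the node ids whose bids the requester actually receives."""
    # Higher quality -> shorter delay (small jitter breaks ties).
    sched = sorted(
        (max_delay * (1.0 - q) + random.uniform(0, 0.05), node)
        for node, q in enumerate(qualities)
    )
    heard = []
    for delay, node in sched:
        if len(heard) >= needed:          # enough bids overheard: suppress the rest
            break
        if random.random() > loss_rate:   # tolerate random message loss
            heard.append(node)
    return heard

random.seed(42)
qualities = [0.9, 0.4, 0.75, 0.2, 0.6]    # e.g. context-information quality per node
print("winning bids:", delayed_bid_round(qualities, needed=2))
```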
Abstract:
Orthogonal frequency division multiplexing (OFDM) systems are more sensitive to carrier frequency offset (CFO) than conventional single-carrier systems. CFO destroys the orthogonality among subcarriers, resulting in inter-carrier interference (ICI) and degrading system performance. To mitigate the effect of the CFO, it has to be estimated and compensated before demodulation. The CFO can be divided into an integer part and a fractional part. In this paper, we investigate a maximum-likelihood estimator (MLE) for estimating the integer part of the CFO in OFDM systems, which requires only one OFDM block as pilot symbols. To reduce the computational complexity of the MLE and improve bandwidth efficiency, a suboptimum estimator (Sub MLE) is studied. Based on the hypothesis testing method, a threshold Sub MLE (T-Sub MLE) is proposed to further reduce the computational complexity. A performance analysis of the proposed T-Sub MLE is obtained, and the analytical results match the simulation results well. Numerical results show that the proposed estimators are effective and reliable in both additive white Gaussian noise (AWGN) and frequency-selective fading channels in OFDM systems.
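A simplified version of the integer-CFO search can be sketched as follows: an integer offset of k subcarrier spacings circularly shifts the received frequency-domain pilot, so correlating against shifted copies of the known pilot and taking the peak recovers k. The block size, pilot, and noise level are assumptions, and the paper's MLE and its reduced-complexity variants are not reproduced here.

```python
# Sketch of integer CFO estimation from a single pilot OFDM block: an integer
# offset of k subcarriers circularly shifts the frequency-domain pilot, so a
# correlation peak over candidate shifts recovers k (an ML-style search,
# simplified from the paper's estimator; sizes and SNR are assumptions).
import numpy as np

rng = np.random.default_rng(7)
N = 64                                             # subcarriers (assumed)
pilot = rng.choice([1 + 0j, -1 + 0j], size=N)      # known BPSK pilot block
k_true = 5                                         # integer CFO in subcarrier spacings

tx = np.fft.ifft(pilot)                            # time-domain OFDM symbol
n = np.arange(N)
rx = tx * np.exp(2j * np.pi * k_true * n / N)      # integer CFO rotation in time
rx += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # AWGN

R = np.fft.fft(rx)                                 # received frequency-domain block
ks = list(range(-N // 2, N // 2))
metric = [abs(np.vdot(np.roll(pilot, k), R)) for k in ks]  # correlation per shift
k_hat = ks[int(np.argmax(metric))]
print("estimated integer CFO:", k_hat)             # should equal k_true = 5
```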
Abstract:
Orthogonal frequency division multiplexing (OFDM) is becoming a fundamental technology in future-generation wireless communications. Call admission control is an effective mechanism to guarantee resilient, efficient, quality-of-service (QoS) services in wireless mobile networks. In this paper, we present several call admission control algorithms for OFDM-based wireless multiservice networks. Call connection requests are differentiated into narrow-band calls and wide-band calls. For either class of calls, the traffic process is characterized as a batch arrival process, since each call may request multiple subcarriers to satisfy its QoS requirement. The batch size is a random variable following a probability mass function (PMF) with a realistic maximum value. In addition, the service times for wide-band and narrow-band calls differ. We then perform a teletraffic queueing analysis for OFDM-based wireless multiservice networks and develop formulae for the key performance metrics: call blocking probability and bandwidth utilization. Numerical investigations demonstrate the interaction between key parameters and performance metrics, and the performance tradeoff among the different call admission control algorithms is discussed. Moreover, the analytical model has been validated by simulation. The methodology, as well as the results, provides an efficient tool for planning next-generation OFDM-based broadband wireless access systems.
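A first-order approximation to the blocking analysis can be sketched with the classical Kaufman-Roberts recursion for a link shared by narrow-band and wide-band calls; it assumes Poisson, non-batch arrivals with fixed per-call subcarrier demands, so it deliberately omits the batch-arrival feature analysed in the paper.

```python
# Sketch of a multirate blocking computation (Kaufman-Roberts recursion) for a
# link shared by narrow-band and wide-band calls. Simplified model: Poisson
# (non-batch) arrivals, fixed per-call subcarrier demands. Values are assumed.
import numpy as np

C = 48                                   # subcarriers on the link (assumed)
demands = [1, 4]                         # subcarriers per narrow-/wide-band call
loads = [20.0, 4.0]                      # offered load (erlangs) per class (assumed)

q = np.zeros(C + 1)
q[0] = 1.0
for j in range(1, C + 1):                # Kaufman-Roberts: j q(j) = sum_k a_k b_k q(j - b_k)
    q[j] = sum(a * b * q[j - b] for a, b in zip(loads, demands) if j >= b) / j
q /= q.sum()                             # normalise the occupancy distribution

for b, name in zip(demands, ["narrow-band", "wide-band"]):
    blocking = q[C - b + 1:].sum()       # blocked when fewer than b subcarriers are free
    print(f"{name} blocking probability: {blocking:.4f}")
utilisation = sum(j * q[j] for j in range(C + 1)) / C
print(f"mean bandwidth utilisation: {utilisation:.3f}")
```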
Abstract:
Thermocouples are one of the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique to address the loss of high-frequency fluctuations based on measurements from two thermocouples. In particular, a difference equation (DE) approach is proposed and compared with existing methods, both in simulation and on experimental test rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better performance in terms of time constant estimation without any a priori assumption on the time constant ratios of the thermocouples.
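The two-thermocouple idea can be sketched as follows: modelling each probe as a first-order lag m_i + tau_i dm_i/dt = Tg and eliminating Tg yields m2 - m1 = tau1 dm1/dt - tau2 dm2/dt, from which both time constants follow by least squares. Ordinary least squares on finite differences is used below for brevity, where the paper uses a difference-equation form with generalised total least squares; all signal parameters are assumptions.

```python
# Sketch of two-thermocouple compensation: each probe is a first-order lag,
#   m_i + tau_i * dm_i/dt = Tg,  i = 1, 2,
# so eliminating Tg gives  m2 - m1 = tau1*dm1/dt - tau2*dm2/dt.
# Ordinary LS on finite differences stands in for the paper's GTLS/DE approach;
# all signal parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 1e-3, 4000
t = np.arange(n) * dt
Tg = 500 + 40 * np.sin(2 * np.pi * 5 * t) + 15 * np.sin(2 * np.pi * 17 * t)

def lag(u, tau):                          # discretised first-order sensor response
    y = np.empty_like(u)
    y[0] = u[0]
    a = dt / (tau + dt)
    for k in range(1, len(u)):
        y[k] = y[k - 1] + a * (u[k] - y[k - 1])
    return y

tau1, tau2 = 0.020, 0.060                 # true time constants (assumed, unequal)
m1 = lag(Tg, tau1) + 0.05 * rng.standard_normal(n)
m2 = lag(Tg, tau2) + 0.05 * rng.standard_normal(n)

d1, d2 = np.gradient(m1, dt), np.gradient(m2, dt)
A = np.column_stack([d1, -d2])            # (m2 - m1) = tau1*d1 - tau2*d2
tau_hat = np.linalg.lstsq(A, m2 - m1, rcond=None)[0]
Tg_hat = m1 + tau_hat[0] * d1             # reconstructed gas temperature
print("estimated time constants:", tau_hat)
print("reconstruction RMS error:", np.sqrt(np.mean((Tg_hat - Tg) ** 2)))
```

Noise on the finite-difference derivatives biases ordinary least squares here, which is precisely the errors-in-variables problem that motivates the generalised total least squares identification in the article.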