8 results for streaming SIMD extensions

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 20.00%

Publisher:

Abstract:

Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high-quality video. Additionally, mobile devices can support and demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and device heterogeneity, and can provide graceful changes in video quality while maintaining viewer satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media, while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
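
As a minimal illustration of the "equality of data in every packet per GOP" idea described above (a sketch, not the thesis' actual packetisation scheme), the code below slices every frame of a GOP and places one slice of each frame in each packet, so a single packet loss removes a small, even portion of every frame rather than a whole frame. The function name and byte slicing are purely illustrative.

```python
# Illustrative sketch (not the thesis implementation) of equal-data-per-packet
# packetisation of one GOP: each frame is cut into n_packets slices and slice i
# of every frame goes into packet i, so one packet loss degrades all frames a
# little instead of dropping any frame entirely.

def packetise_gop(frames: list[bytes], n_packets: int) -> list[list[bytes]]:
    """Return n_packets packets; packet i holds slice i of every frame."""
    packets = [[] for _ in range(n_packets)]
    for frame in frames:
        step = -(-len(frame) // n_packets)          # ceiling division: slice size
        for i in range(n_packets):
            packets[i].append(frame[i * step:(i + 1) * step])
    return packets

# Toy GOP of 4 frames, spread over 3 packets of near-equal size.
gop = [bytes([f]) * 30 for f in range(4)]
pkts = packetise_gop(gop, n_packets=3)
print([sum(len(s) for s in p) for p in pkts])       # -> [40, 40, 40]
```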

Relevance: 20.00%

Publisher:

Abstract:

Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that balances the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-perceived quality. Additionally, we propose a scheme that combines network coding with SDC to further improve error resiliency. SDC yields upwards of 25% bandwidth savings over MDC. Additionally, our scheme sustains higher quality for longer durations, even at high packet loss rates.
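
The abstract does not specify how network coding is combined with SDC; as a generic, hedged illustration of the kind of redundancy network coding can add to description-based streams, the sketch below XORs two equal-length description packets into one repair packet, so either description can be rebuilt after a single loss.

```python
# Minimal sketch of XOR-based coding over two description packets, shown only
# to illustrate the style of redundancy the paper combines with SDC; the actual
# SDC + network coding scheme is defined in the paper itself.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

desc1 = b"description-1 payload!"
desc2 = b"description-2 payload?"
repair = xor_bytes(desc1, desc2)          # coded packet sent alongside the two

# If either description is lost, rebuild it from the survivor plus the repair.
recovered_desc1 = xor_bytes(repair, desc2)
assert recovered_desc1 == desc1
```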

Relevance: 10.00%

Publisher:

Abstract:

In professional sports there are, in general, three steps required to improve performance: task definition, training and performance assessment. This process is repeated iteratively, and feedback generated from quantitative performance measurement is in turn used for task redefinition. Task definition can be achieved in a number of ways, including via video streaming or, as is more common, by listening to coaching staff. However, non-subjective performance evaluation is difficult due to the complexity of the movements involved. When considering the subset of sports where precision, accuracy and repeatability are a necessity, this problem becomes inherently more difficult to solve. Until recently, sports such as martial arts, fencing and darts, where the smallest deviation from a prescribed movement goal can result in large outcome error, were deemed too difficult to characterise fully. Advances in technology, as illustrated by this study, now make this type of physiometry possible.

Relevance: 10.00%

Publisher:

Abstract:

This work considers the static calculation of a program’s average-case time. The number of systems that currently tackle this research problem is quite small, due to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and is individually discussed in this work, only one of them forms the basis of this research. That particular system is known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labelling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in this language. Furthermore, the theory that underpins MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Also, some of the MOQA applications and extensions suggested in other works are logically examined here. For example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses that take place during the course of this research reveal some of MOQA’s strengths and weaknesses. This thesis aims to be pragmatic when evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA when compared to similar systems. Succinctly, this work’s significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA’s accomplishments and a serious deliberation of the opportunities available to MOQA in the future.
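
As a toy illustration of the quantity such an analysis targets, the sketch below computes the exact average number of comparisons made by insertion sort under a uniform labelling distribution by enumerating all permutations of n distinct labels. MOQA derives such averages statically and compositionally by controlling data structure and labelling distribution; this brute-force enumeration only shows what "average-case time" means here, not MOQA's method.

```python
# Exact average-case comparison count of insertion sort over all n! equally
# likely labellings. Brute force, purely for illustration of the quantity that
# a static average-case analysis (such as MOQA's) aims to derive without
# enumeration.

from itertools import permutations
from math import factorial

def insertion_sort_comparisons(seq):
    seq, comps = list(seq), 0
    for i in range(1, len(seq)):
        j = i
        while j > 0:
            comps += 1                                  # one label comparison
            if seq[j - 1] > seq[j]:
                seq[j - 1], seq[j] = seq[j], seq[j - 1]
                j -= 1
            else:
                break
    return comps

n = 5
total = sum(insertion_sort_comparisons(p) for p in permutations(range(n)))
print(total / factorial(n))                             # exact average over all inputs
```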

Relevance: 10.00%

Publisher:

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnects. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today’s technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting lasers, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today’s 100Gb/s transceiver. Pushing the bit rate on such links beyond today’s commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above and realise 400Gb/s to 1Tb/s transceivers.
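
As a small illustration of the multilevel signalling targeted above, the sketch below shows a standard Gray-coded PAM-4 mapping: two bits per symbol onto four amplitude levels, so the same symbol rate carries twice the bit rate of binary signalling. The normalised levels {-3, -1, +1, +3} are illustrative and unrelated to the thesis' circuit designs.

```python
# Gray-coded PAM-4 mapping: adjacent amplitude levels differ by one bit, which
# limits bit errors when noise pushes a symbol into a neighbouring level.
# Levels are idealised; driver/receiver behaviour is not modelled here.

GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))   # -> [-3, -1, 1, 3]
```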

Relevance: 10.00%

Publisher:

Abstract:

The demand for optical bandwidth continues to increase year on year and is being driven primarily by entertainment services and video streaming to the home. Current photonic systems are coping with this demand by increasing data rates through faster modulation techniques, spectrally efficient transmission systems and by increasing the number of modulated optical channels per fibre strand. Such photonic systems are large and power hungry due to the high number of discrete components required in their operation. Photonic integration offers excellent potential for combining otherwise discrete system components on a single device to provide robust, power-efficient and cost-effective solutions. In particular, the design of optical modulators has been an area of immense interest in recent times. Not only has research been aimed at developing modulators with faster data rates, but there has also been a push towards making modulators as compact as possible. Mach-Zehnder modulators (MZMs) have proven to be highly successful in many optical communication applications. However, due to the relatively weak electro-optic effect on which they are based, they remain large, with typical device lengths of 4 to 7 mm, while requiring a travelling-wave structure for high-speed operation. Nested MZMs have been extensively used in the generation of advanced modulation formats, where multi-symbol transmission can be used to increase data rates at a given modulation frequency. Such nested structures have high losses and require both complex fabrication and packaging. In recent times, it has been shown that electro-absorption modulators (EAMs) can be used in a specific arrangement to generate Quadrature Phase Shift Keying (QPSK) modulation. EAM-based QPSK modulators have increased potential for integration and can be made significantly more compact than MZM-based modulators. Such modulator designs suffer from losses in excess of 40 dB, however, which limits their use in practical applications. The work in this thesis has focused on how these losses can be reduced by using photonic integration. In particular, the integration of multiple lasers with the modulator structure was considered as an excellent means of reducing fibre coupling losses while maximising the optical power on chip. A significant difficulty when using multiple integrated lasers in such an arrangement is ensuring coherence between the integrated lasers. The work investigated in this thesis demonstrates for the first time how optical injection locking between discrete lasers on a single photonic integrated circuit (PIC) can be used in the generation of coherent optical signals. This was done by first considering the monolithic integration of lasers and optical couplers to form an on-chip optical power splitter, before then examining the behaviour of a mutually coupled system of integrated lasers. By operating the system in a highly asymmetric coupling regime, a stable phase-locking region was found between the integrated lasers. It was then shown that in this stable phase-locked region the optical outputs of each laser were coherent with each other and phase locked to a common master laser.
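
The abstract does not detail the specific EAM arrangement used for QPSK; the idealised sketch below (an assumption for illustration, not the thesis' design) shows one way amplitude-only gating can yield phase modulation: four parallel arms carry fixed phase shifts of 0, 90, 180 and 270 degrees, and switching exactly one gate on per symbol selects one of four equal-amplitude phase states.

```python
# Idealised field model: amplitude-only gates (EAM-like) in four parallel arms,
# each with a fixed optical phase shift. Gating one arm per symbol produces a
# QPSK constellation even though no element modulates phase directly. This is
# an illustrative assumption, not the arrangement used in the thesis.

import cmath
import math

ARM_PHASES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]   # fixed per-arm phase shifts

def qpsk_symbol(dibit):
    arm = dibit[0] * 2 + dibit[1]             # which arm's gate is switched on
    return cmath.exp(1j * ARM_PHASES[arm])    # combined output field: one phase state

for dibit in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = qpsk_symbol(dibit)
    print(dibit, round(s.real, 3), round(s.imag, 3))
```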

Relevance: 10.00%

Publisher:

Abstract:

Recent years have witnessed a rapid growth in the demand for streaming video over the Internet and mobile networks, exposing challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet losses. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but at an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, adaptive layer distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models were able to provide consistently high-quality viewing, with lower transmission cost relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.
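
A minimal sketch of the layer-distribution principle described above, assuming illustrative layer names and packet counts (this is not ALD's actual packet format): each scalable layer is sliced and every packet carries a slice of every layer, so the loss of one packet removes only a thin portion of each layer, base layer included.

```python
# Layer-distribution sketch: spread each scalable layer across all packets so
# that no single packet carries a whole (possibly critical) layer. Names and
# sizes are illustrative only.

def distribute_layers(layers: dict[str, bytes], n_packets: int):
    packets = [dict() for _ in range(n_packets)]
    for name, data in layers.items():
        step = -(-len(data) // n_packets)               # ceiling division: slice size
        for i in range(n_packets):
            packets[i][name] = data[i * step:(i + 1) * step]
    return packets

layers = {"base": b"B" * 60, "enh1": b"1" * 40, "enh2": b"2" * 20}
pkts = distribute_layers(layers, n_packets=4)
# Losing any single packet still leaves 3/4 of every layer, base included.
print([{k: len(v) for k, v in p.items()} for p in pkts])
```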

Relevance: 10.00%

Publisher:

Abstract:

Bandwidth constraints and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens the available bandwidth and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD) as a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data is spread amongst all datagrams, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience to the highest quality layers. Our experimental results show that ALD improves the perceived quality and also reduces the bandwidth demand by up to 36% in comparison to the well-known Multiple Description Coding (MDC) technique.
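
As a hedged, toy comparison of the two packetisation philosophies discussed above (not the paper's evaluation or quality model), the simulation below contrasts whole-layer-per-datagram delivery, where the base layer is all-or-nothing, with layer distribution, where the base layer degrades gradually with the fraction of datagrams lost.

```python
# Toy loss simulation: under independent datagram loss, a dedicated base-layer
# datagram is either fully received or fully lost, whereas a distributed base
# layer loses only the slices carried by the missing datagrams. Parameters and
# the quality notion here are illustrative assumptions, not the paper's.

import random

random.seed(7)
P_LOSS, N_DATAGRAMS, TRIALS = 0.2, 4, 20_000

layered_base_ok = 0.0        # whole-layer-per-datagram: base is all-or-nothing
distributed_base_frac = 0.0  # slice-per-datagram: base degrades gradually

for _ in range(TRIALS):
    survived = [random.random() > P_LOSS for _ in range(N_DATAGRAMS)]
    layered_base_ok += survived[0]                      # datagram 0 holds the base layer
    distributed_base_frac += sum(survived) / N_DATAGRAMS

print("layered: base layer fully received in %.0f%% of GOPs" % (100 * layered_base_ok / TRIALS))
print("distributed: average %.0f%% of the base layer received" % (100 * distributed_base_frac / TRIALS))
```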