876 results for Error resilience


Relevance: 100.00%

Abstract:

An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, one can in general handle a larger number of corrupted bits under linear decoding. We exhibit, to our knowledge for the first time, a finite-length code whose dual contains 4-designs and which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound relating the number of queries r to the fraction of errors that can be tolerated by a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
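To make the local-correction setting concrete, here is a minimal Python sketch of r-query local correction for r = 2, using the binary Hadamard code rather than the 4-design-based code discussed above; the query pattern (two random positions whose indices XOR to the target coordinate) and the 0.5/r tolerance are standard, and the code length, message and trial count are illustrative choices.

```python
import random

def hadamard_encode(msg_bits):
    """Encode k message bits as a 2^k-bit Hadamard codeword: coordinate a
    stores the GF(2) inner product <msg, a>."""
    k = len(msg_bits)
    return [
        sum(m & ((a >> i) & 1) for i, m in enumerate(msg_bits)) & 1
        for a in range(1 << k)
    ]

def locally_correct(word, a, trials=25):
    """2-query local correction of coordinate a: query two random positions
    b and a^b, XOR the answers, and take a majority vote over trials."""
    n = len(word)
    votes = 0
    for _ in range(trials):
        b = random.randrange(n)
        votes += word[b] ^ word[a ^ b]
    return int(votes > trials // 2)

if __name__ == "__main__":
    random.seed(1)
    codeword = hadamard_encode([1, 0, 1, 1])            # n = 16
    corrupted = codeword[:]
    for pos in random.sample(range(len(codeword)), 2):  # fewer than n/4 corruptions
        corrupted[pos] ^= 1
    recovered = [locally_correct(corrupted, a) for a in range(len(codeword))]
    print("all coordinates recovered:", recovered == codeword)
```

With fewer than a 1/4 = 0.5/r fraction of corrupted positions, each query pair avoids the corruptions with probability above one half, so the majority vote recovers every coordinate with high probability.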

Relevance: 100.00%

Abstract:

In this paper, we investigate the impact of circuit misbehavior due to parametric variations and voltage scaling on the performance of wireless communication systems. Our study reveals the inherent error resilience of such systems and argues that sufficiently reliable operation can be maintained even in the presence of unreliable circuits and manufacturing defects. We further show how selective application of more robust circuit design techniques is sufficient to deal with high defect rates at low overhead and improve energy efficiency with negligible system performance degradation.
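As a rough illustration of the kind of inherent resilience being described (not the circuit-level experiments of the paper), the following Monte Carlo sketch models an unreliable circuit as an extra random bit flip after the hard decision of an uncoded BPSK link over AWGN; the fault rates and SNR are assumed values.

```python
import math
import random

def ber_bpsk(num_bits, snr_db, fault_rate=0.0):
    """Bit error rate of uncoded BPSK over AWGN where each hard decision is
    additionally flipped with probability fault_rate, modelling a
    voltage-scaled or defective circuit."""
    snr = 10 ** (snr_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * snr))
    errors = 0
    for _ in range(num_bits):
        bit = random.randint(0, 1)
        rx = (1.0 if bit else -1.0) + random.gauss(0.0, sigma)
        decision = 1 if rx > 0 else 0
        if random.random() < fault_rate:      # circuit-induced upset
            decision ^= 1
        errors += decision != bit
    return errors / num_bits

if __name__ == "__main__":
    random.seed(0)
    for fr in (0.0, 1e-3, 1e-2):
        print(f"fault rate {fr:g}: BER ~ {ber_bpsk(200_000, 6.0, fr):.4f}")
```

At moderate SNR the channel noise already dominates, so circuit fault rates well below the channel error rate barely move the overall BER, which is the intuition behind applying hardened circuit techniques only selectively.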

Relevance: 60.00%

Abstract:

Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high loss rates, thus presenting a challenge for the efficient delivery of high-quality video. Additionally, mobile devices can support and demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and device heterogeneity, and can provide graceful changes in video quality while respecting viewer satisfaction. In this context, the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
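The idea of giving every packet an equal share of each sub-stream's data within a GOP can be sketched as follows; the round-robin interleaving below is an illustrative reading of the packetisation goal described above, and the layer sizes are made up.

```python
def packetise_gop(layers, num_packets):
    """Spread each scalable layer's bytes round-robin over all packets so
    every packet carries an equal share of every layer, instead of one
    layer per packet."""
    packets = [[] for _ in range(num_packets)]
    for layer_id, payload in enumerate(layers):
        for offset, byte in enumerate(payload):
            packets[offset % num_packets].append((layer_id, offset, byte))
    return packets

if __name__ == "__main__":
    # toy GOP: a base layer and two enhancement layers
    gop_layers = [b"BASEBASEBASE", b"ENH1ENH1", b"ENH2"]
    packets = packetise_gop(gop_layers, 4)
    # losing packet 2 removes roughly 1/4 of every layer, not an entire layer
    for layer_id, layer in enumerate(gop_layers):
        lost = sum(1 for (lid, _, _) in packets[2] if lid == layer_id)
        print(f"layer {layer_id}: {lost}/{len(layer)} bytes lost with packet 2")
```

Losing any single packet then costs the same small slice of every sub-stream, which is what keeps the play-out quality changing smoothly rather than dropping a whole layer at once.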

Relevance: 60.00%

Abstract:

Recent years have witnessed a rapid growth in the demand for streaming video over the Internet and mobile networks, exposing challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet losses. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but at an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, Adaptive Layer Distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models were able to provide consistent, high-quality viewing at a lower transmission cost relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.
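A toy loss model helps show why spreading the critical data over all packets lessens the quality impact; the layer weights, packet counts and the assumption that a partially received layer degrades proportionally are illustrative, not ALD's actual quality model.

```python
import random

def expected_quality(loss_rate, distributed, num_packets=3, trials=20_000):
    """Toy comparison of layer distribution vs. one-layer-per-packet.
    Layer quality weights: base 0.6, enhancements 0.25 and 0.15. With
    distribution every packet carries an equal slice of each layer, so a
    lost packet costs a proportional share of the total quality; without
    it, losing the base-layer packet makes the whole GOP undecodable."""
    weights = [0.6, 0.25, 0.15]
    total = 0.0
    for _ in range(trials):
        lost = [random.random() < loss_rate for _ in range(num_packets)]
        if distributed:
            total += sum(weights) * (1.0 - sum(lost) / num_packets)
        elif not lost[0]:                     # base layer survived
            total += weights[0]
            total += 0.0 if lost[1] else weights[1]
            total += 0.0 if lost[2] else weights[2]
    return total / trials

if __name__ == "__main__":
    random.seed(0)
    for p in (0.05, 0.15):
        print(f"loss {p:.0%}: distributed {expected_quality(p, True):.3f}  "
              f"per-layer packets {expected_quality(p, False):.3f}")
```

Even in this crude model the distributed variant degrades more gracefully as the loss rate grows, which is the effect the subjective tests above assess for real video.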

Relevance: 60.00%

Abstract:

Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens the available bandwidth and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD) as a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all datagrams, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience to the highest quality layers. Our experimental results show that ALD improves the perceived quality and also reduces the bandwidth demand by up to 36% in comparison to the well-known Multiple Description Coding (MDC) technique.
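For intuition about where a bandwidth saving over MDC can come from, here is a back-of-the-envelope sketch in which each MDC description must repeat the critical base-layer data to remain independently decodable, while an ALD-style stream sends each layer once plus a tunable redundancy fraction; the layer sizes, description count and redundancy parameter are assumptions, so the printed percentage is only illustrative and does not reproduce the 36% figure above.

```python
def mdc_bytes(base, enh, descriptions):
    """Toy MDC cost: every description repeats the base-layer data and
    carries an equal slice of the enhancement data."""
    return descriptions * base + enh

def ald_bytes(base, enh, redundancy):
    """Toy ALD-style cost: each layer is sent once, plus a parameterised
    redundancy fraction applied to the critical base layer."""
    return int(base * (1.0 + redundancy)) + enh

if __name__ == "__main__":
    base, enh = 40_000, 60_000        # bytes per GOP (illustrative)
    mdc = mdc_bytes(base, enh, descriptions=2)
    ald = ald_bytes(base, enh, redundancy=0.1)
    print(f"MDC: {mdc} B, ALD-style: {ald} B, saving: {1 - ald / mdc:.0%}")
```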

Relevance: 60.00%

Abstract:

Privacy region protection in video surveillance systems is an active research topic at present. In previous research, a binary mask mechanism has been developed to indicate the privacy region; however, this incurs a significant bitrate overhead. In this paper, an adaptive binary mask is proposed to represent the privacy region. In a practical privacy region protection application, in which the privacy region typically occupies less than half of the overall frame and is rectangular or approximately rectangular, the proposed adaptive binary mask can effectively reduce the bitrate overhead. The proposed method can also be easily applied to the FMO mechanism of H.264/AVC, providing both error resilience and a lower bitrate overhead.
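The bitrate argument can be made concrete with a small bit-counting sketch: compare a per-macroblock binary mask over the whole frame with one plausible adaptive variant that signals only the bounding rectangle of the privacy region, plus an in-rectangle bitmap when the region is not exactly rectangular. The frame size, macroblock grid and coordinate coding below are assumptions, not the paper's exact mask format.

```python
import math

def full_mask_bits(mb_cols, mb_rows):
    """Baseline mask: one bit per macroblock for the whole frame."""
    return mb_cols * mb_rows

def adaptive_mask_bits(mb_cols, mb_rows, rect_w, rect_h, exactly_rect=True):
    """Illustrative adaptive mask: send the rectangle corners as macroblock
    coordinates, adding an in-rectangle bitmap only for non-rectangular
    privacy regions."""
    coord_bits = 2 * (math.ceil(math.log2(mb_cols)) + math.ceil(math.log2(mb_rows)))
    return coord_bits + (0 if exactly_rect else rect_w * rect_h)

if __name__ == "__main__":
    cols, rows = 120, 68              # a 1920x1088 frame in 16x16 macroblocks
    print("full-frame mask   :", full_mask_bits(cols, rows), "bits")
    print("rectangular region:", adaptive_mask_bits(cols, rows, 20, 15), "bits")
    print("irregular region  :", adaptive_mask_bits(cols, rows, 20, 15, False), "bits")
```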

Relevance: 60.00%

Abstract:

This paper presents a new Flexible Macroblock Ordering (FMO) type for the H.264 Advanced Video Coding (AVC) standard, which can more efficiently flag the position and shape of regions of interest (ROIs) in each frame. In H.264/AVC, 7 FMO types have been defined, all of which are designed for error resilience. Most previous work related to ROI processing has adopted Type-2 (foreground & background) or Type-6 (explicit) to flag the position and shape of the ROI. However, only rectangular shapes are allowed in Type-2, and for non-rectangular shapes the non-ROI macroblocks may be wrongly flagged as being within the ROI, which could seriously affect subsequent processing of the ROI. In Type-6, each macroblock in a frame uses fixed-length bits to indicate its slice group. In general, each ROI is assigned to one slice group identity. Although this FMO type can more accurately flag the position and shape of the ROI, it incurs a significant bitrate overhead. The proposed new FMO type uses the smallest rectangle that covers the ROI to indicate its position, and a spiral binary mask is employed within the rectangle to indicate the shape of the ROI. This technique can accurately flag the ROI and provide significant savings in the bitrate overhead. Compared with Type-6, an 80% to 90% reduction in the bitrate overhead can be obtained while achieving the same accuracy.
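The following sketch illustrates the signalling idea as described above: find the smallest rectangle covering the ROI macroblocks and emit a binary mask over that rectangle in a spiral scan order. The coordinate bit budget, the 3-bit-per-macroblock cost assumed for Type-6 and the toy ROI are assumptions made for illustration, so the printed reduction will not match the paper's 80% to 90% figures exactly.

```python
import math

def spiral_order(w, h):
    """Yield the (x, y) cells of a w x h rectangle in a clockwise inward spiral."""
    left, right, top, bottom = 0, w - 1, 0, h - 1
    while left <= right and top <= bottom:
        for x in range(left, right + 1):
            yield x, top
        for y in range(top + 1, bottom + 1):
            yield right, y
        if top < bottom:
            for x in range(right - 1, left - 1, -1):
                yield x, bottom
        if left < right:
            for y in range(bottom - 1, top, -1):
                yield left, y
        left, right, top, bottom = left + 1, right - 1, top + 1, bottom - 1

def encode_roi(roi_mbs, frame_w, frame_h):
    """Encode an ROI as its bounding rectangle plus a spiral-scanned binary
    mask inside that rectangle; return the rectangle, mask and bit count."""
    xs = [x for x, _ in roi_mbs]
    ys = [y for _, y in roi_mbs]
    x0, y0, x1, y1 = min(xs), min(ys), max(xs), max(ys)
    roi = set(roi_mbs)
    mask = [int((x0 + dx, y0 + dy) in roi)
            for dx, dy in spiral_order(x1 - x0 + 1, y1 - y0 + 1)]
    coord_bits = 2 * (math.ceil(math.log2(frame_w)) + math.ceil(math.log2(frame_h)))
    return (x0, y0, x1, y1), mask, coord_bits + len(mask)

if __name__ == "__main__":
    # L-shaped ROI in a 40 x 30 macroblock frame
    roi = [(x, y) for x in range(5, 15) for y in range(5, 8)] + \
          [(x, y) for x in range(5, 8) for y in range(8, 15)]
    rect, mask, new_bits = encode_roi(roi, 40, 30)
    type6_bits = 40 * 30 * 3              # 3 fixed-length bits per macroblock
    print(f"Type-6: {type6_bits} bits, proposed: {new_bits} bits "
          f"({1 - new_bits / type6_bits:.0%} smaller)")
```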

Relevance: 60.00%

Abstract:

Current variation-aware design methodologies, tuned for worst-case scenarios, are becoming increasingly pessimistic from the perspective of power and performance. A good example of such pessimism is setting the refresh rate of DRAMs according to the worst-case access statistics, thereby resulting in very frequent refresh cycles, which are responsible for the majority of the standby power consumption of these memories. However, such a high refresh rate may not be required, either because of the extremely low probability that such a worst case actually occurs, or because of the inherent error-resilient nature of many applications, which can tolerate a certain number of potential failures. In this paper, we exploit and quantify the possibilities that exist in dynamic memory design by shifting to the so-called approximate computing paradigm in order to save power and enhance yield at no cost. The statistical characteristics of the retention time in dynamic memories were revealed by studying a fabricated 2 kb CMOS-compatible embedded DRAM (eDRAM) memory array based on gain cells. Measurements show that up to 73% of the retention power can be saved by altering the refresh time and setting it such that a small number of failures is allowed. We show that these savings can be further increased by utilizing known circuit techniques, such as body biasing, which can help not only in extending but also in favorably shaping the retention time distribution. Our approach is one of the first attempts to assess the data integrity and energy tradeoffs achievable in eDRAMs for use in error-resilient applications, and it can prove helpful in the anticipated shift to approximate computing.
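The trade-off being measured can be sketched numerically: if the per-cell retention time follows a spread-out distribution, refresh power scales roughly with the inverse of the refresh period, while the fraction of cells refreshed too late grows slowly as the period is extended. The lognormal distribution and its parameters below are assumptions for illustration only, not the measured statistics of the 2 kb array.

```python
import math

def failing_fraction(t_refresh_ms, median_ms=100.0, sigma=0.5):
    """Fraction of cells whose retention time falls below the refresh
    period, assuming a lognormal retention-time distribution."""
    z = (math.log(t_refresh_ms) - math.log(median_ms)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

if __name__ == "__main__":
    worst_case_ms = 5.0                      # refresh set by the weakest cell
    for t in (worst_case_ms, 20.0, 40.0):
        rel_power = worst_case_ms / t        # refresh power ~ 1 / refresh period
        print(f"refresh every {t:5.1f} ms: relative refresh power {rel_power:.2f}, "
              f"failing cells {failing_fraction(t):.1e}")
```

Under these assumed numbers, stretching the refresh period from 5 ms to 20 ms would cut refresh power by about 75% while only a small fraction of cells retain data for less than the refresh period, mirroring the kind of trade-off reported above.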

Relevance: 60.00%

Abstract:

In this paper, we investigate the impact of faulty memory bit-cells on the performance of LDPC and Turbo channel decoders based on realistic memory failure models. Our study examines the inherent error resilience of such codes to potential memory faults affecting the decoding process. We develop two mitigation mechanisms that reduce the impact of memory faults rather than correcting every single error. We show how protection of only a few bit-cells is sufficient to deal with high defect rates. In addition, we show how the use of repair iterations specifically helps to mitigate the impact of faults that occur inside the decoder itself.
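The flavour of the first mitigation idea, protecting only the most critical bit-cells, can be illustrated with a toy soft-decision decoder standing in for the channel decoder: stored LLRs in sign-magnitude format suffer random single-bit faults, and only the most significant bit-cells are hardened. The repetition code, the 6-bit LLR format and the fault model are assumptions for illustration; they are not the LDPC/Turbo setup or the exact mechanisms of the paper.

```python
import math
import random

BITS = 6                                   # 1 sign bit + 5 magnitude bits
STEP = 0.25                                # LLR quantisation step

def store_llr(llr, fault_rate, protected_msbs):
    """Quantise an LLR to sign-magnitude, then flip one random stored bit
    with probability fault_rate; the 'protected_msbs' most significant
    bit-cells (sign first) are hardened and ignore the fault."""
    sign = llr < 0
    mag = min(int(abs(llr) / STEP), (1 << (BITS - 1)) - 1)
    if random.random() < fault_rate:
        b = random.randrange(BITS)
        if b < BITS - protected_msbs:      # fault hits an unprotected cell
            if b == BITS - 1:
                sign = not sign
            else:
                mag ^= 1 << b
    return (-1.0 if sign else 1.0) * mag * STEP

def ber(snr_db=0.0, n_rep=3, fault_rate=0.0, protected_msbs=0, trials=50_000):
    """BER of a soft-combined repetition code whose received LLRs pass
    through the faulty memory model before being summed."""
    sigma = math.sqrt(1.0 / (2.0 * 10 ** (snr_db / 10.0)))
    errors = 0
    for _ in range(trials):
        bit = random.randint(0, 1)
        total = 0.0
        for _ in range(n_rep):
            rx = (1.0 if bit else -1.0) + random.gauss(0.0, sigma)
            total += store_llr(2.0 * rx / sigma ** 2, fault_rate, protected_msbs)
        errors += (1 if total > 0 else 0) != bit
    return errors / trials

if __name__ == "__main__":
    random.seed(0)
    print("fault-free            :", ber())
    print("5% faulty LLR words   :", ber(fault_rate=0.05))
    print("two MSB cells hardened:", ber(fault_rate=0.05, protected_msbs=2))
```

Hardening only the two most significant bit-cells of each word leaves faults confined to the low-order magnitude bits, whose perturbation the soft combining largely absorbs, which is the spirit of protecting only a few bit-cells.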

Relevance: 60.00%

Abstract:

JPEG2000 is an emerging image coding standard. In this paper we analyze the performance of the error resilience tools in JPEG2000 and present an analytical model to estimate the quality of JPEG2000-encoded images transmitted over wireless channels. The effectiveness of the analytical model is validated by simulation results. Furthermore, the analytical model is utilized by the base station to design efficient unequal error protection schemes for JPEG2000 transmission. In the design, a utility function is defined to make a tradeoff between the image quality and the cost of transmitting the image over the wireless channel. © 2002 IEEE.
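To illustrate what such a utility-driven unequal error protection design might look like, the sketch below searches over per-layer FEC choices for a small progressive stream and maximises an assumed utility of the form expected quality minus a cost weight times the transmitted bytes; the layer sizes, quality contributions, FEC options and residual loss probabilities are illustrative assumptions, not the paper's model.

```python
from itertools import product

LAYER_GAIN = [20.0, 6.0, 3.0]        # quality (dB) added by layers 0..2
LAYER_BYTES = [8_000, 12_000, 20_000]
FEC_OPTIONS = [                      # (code rate, residual loss probability)
    (1.0, 0.20),
    (0.75, 0.05),
    (0.5, 0.01),
]

def utility(assignment, cost_weight=10.0):
    """Expected decodable quality minus cost_weight times the normalised
    byte cost; a layer is only useful if it and all lower layers arrive."""
    quality, cost, p_lower_ok = 0.0, 0.0, 1.0
    for gain, nbytes, (rate, p_loss) in zip(LAYER_GAIN, LAYER_BYTES, assignment):
        p_lower_ok *= 1.0 - p_loss
        quality += gain * p_lower_ok
        cost += nbytes / rate
    return quality - cost_weight * cost / sum(LAYER_BYTES)

if __name__ == "__main__":
    best = max(product(FEC_OPTIONS, repeat=len(LAYER_GAIN)), key=utility)
    print("per-layer FEC rates:", [rate for rate, _ in best])
    print("utility:", round(utility(best), 2))
```

With these assumed numbers the search settles on stronger protection for the lower, more important quality layers and none for the top layer, which is the qualitative behaviour an unequal error protection scheme is meant to produce.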