990 results for Forward error correcting code


Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 94A29, 94B70

Relevance:

100.00%

Publisher:

Abstract:

We present experimental results for wavelength-division multiplexed (WDM) transmission performance using unbalanced proportions of 1s and 0s in pseudo-random bit sequence (PRBS) data. This investigation simulates the effect of local (in time) data unbalancing, which occurs in some coding systems, such as forward error correction, when extra bits are added to the WDM data stream. We show that such local unbalancing, which in practice gives a time-dependent error rate, can be employed to improve legacy long-haul WDM system performance if the system is allowed to operate in the nonlinear power region. We use a recirculating loop to simulate a long-haul fibre system.
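As an illustrative aside, the sketch below generates a bit stream with a controllable proportion of 1s, of the kind studied here. It is a minimal stand-in, not the authors' experimental setup: weighted random bits replace a hardware PRBS generator, and the length and mark ratio are arbitrary choices.

```python
import random

def unbalanced_bits(length: int, mark_ratio: float, seed: int = 1) -> list[int]:
    """Generate a random bit sequence in which 1s appear with probability
    `mark_ratio` (0.5 gives a balanced sequence, as in standard PRBS data)."""
    rng = random.Random(seed)
    return [1 if rng.random() < mark_ratio else 0 for _ in range(length)]

# Example: a locally unbalanced stream with 60% ones, as might occur when
# extra FEC bits are added to a block of WDM data (illustrative values).
bits = unbalanced_bits(length=2**15, mark_ratio=0.6)
print(sum(bits) / len(bits))  # close to 0.6
```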

Relevance:

100.00%

Publisher:

Abstract:

Electro-optical transceivers can be implemented employing all-analog signal processing in order to achieve low values of power consumption and latency. This paper shows that the spectral efficiency of such solutions can be increased by combining orthogonal multicarrier techniques and off-the-shelf microwave components. A real-time 108-Gbit/s experiment was performed emulating a wavelength division multiplexing (WDM) system composed of five optical channels. The optical carriers were provided by an externally injected gain-switched optical frequency comb. Each optical channel transmitted a 21.6-Gbit/s orthogonal subcarrier multiplexing (SCM) signal that was modulated and demodulated in the electrical domain without the requirement for digital signal processing. The net data rate remained higher than 100 Gbit/s after taking into account forward error correction overheads. The use of orthogonally overlapping subchannels achieves an unprecedented spectral efficiency in all-analog real-time broadband WDM/SCM links.
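A quick back-of-the-envelope check of the quoted net rate, assuming a typical 7% hard-decision FEC overhead (the overhead value is an assumption; the abstract does not state it):

```python
channels = 5
line_rate_gbps = 21.6        # per optical channel, from the abstract
fec_overhead = 0.07          # assumed 7% FEC overhead

gross = channels * line_rate_gbps      # 108.0 Gbit/s, matching the experiment
net = gross / (1 + fec_overhead)       # ~100.9 Gbit/s, still above 100 Gbit/s
print(f"gross {gross:.1f} Gbit/s, net {net:.1f} Gbit/s")
```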

Relevance:

100.00%

Publisher:

Abstract:

One of the aims of Hungary's new electoral law is to apportion voters to voting districts more fairly than before. This is ensured by rules similar to, though somewhat more permissive than, the recommendations in the Code of Good Practice in Electoral Matters issued by the Venice Commission. These rules fix the number of voting districts, and require that districts not split smaller towns and villages and not cross county borders. We show that an apportionment respecting these rules is mathematically impossible. We propose a principled method for solving the problem optimally, study the properties of this method, and then, using an efficient algorithm we formulate, determine the best apportionment of districts among the counties based on data from the 2010 parliamentary elections. Finally, we examine the expected effects of demographic change and make several recommendations for keeping the apportionment within the constraints over the long term: increase the number of voting districts to about 130, allow the number of districts to change at each revision, and organize the districts by regions rather than counties.
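A minimal sketch of the feasibility question at the heart of the article: can a fixed number of districts be split among counties so that every county's average district population stays within a tolerance of the national average? All numbers below (three toy counties, 10 districts, a ±15% tolerance) are hypothetical, not the article's data or its algorithm.

```python
def apportion_feasible(populations: list[int], total: int, tol: float) -> bool:
    """Check whether `total` districts can be split among counties so that
    each county's average district population lies within avg * (1 +/- tol)."""
    avg = sum(populations) / total
    lo, hi = avg * (1 - tol), avg * (1 + tol)
    # Per-county sets of admissible district counts.
    options = [{k for k in range(1, total + 1) if lo <= p / k <= hi}
               for p in populations]
    # Subset-sum style pass over counties: which grand totals are reachable?
    reachable = {0}
    for opts in options:
        reachable = {s + k for s in reachable for k in opts if s + k <= total}
    return total in reachable

# Toy example (hypothetical populations): feasible here, but small changes to
# the county populations quickly make every allocation violate the tolerance.
print(apportion_feasible([500_000, 300_000, 95_000], total=10, tol=0.15))
```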

Relevance:

100.00%

Publisher:

Abstract:

Modeling studies predict that changes in radiocarbon (14C) reservoir ages of surface waters during the last deglacial episode will reflect changes in both atmospheric 14C concentration and ocean circulation, including the Atlantic Meridional Overturning Circulation. Tests of these models require accurate 14C reservoir ages in well-dated late Quaternary time series. Here we test two models using plateau-tuned 14C time series in multiple well-placed sediment core age-depth sequences throughout the lower latitudes of the Atlantic Ocean. 14C age plateau tuning in glacial and deglacial sequences provides accurate calendar year ages that differ by as much as 500-2500 years from those based on assumed global reservoir ages of around 400 years. This study demonstrates increases in local Atlantic surface reservoir ages of up to 1000 years during the Last Glacial Maximum, ages that reflect stronger trade winds off Benguela and summer winds off southern Brazil. By contrast, surface water reservoir ages remained close to zero in the Cariaco Basin in the southern Caribbean due to lagoon-style isolation and persistently strong atmospheric CO2 exchange. Later, during the early deglacial (16 ka), reservoir ages decreased to a minimum of 170-420 14C years throughout the South Atlantic, likely in response to the rapid rise in atmospheric pCO2 and Antarctic temperatures occurring then. Changes in the magnitude and geographic distribution of 14C reservoir ages of peak glacial and deglacial surface waters deviate from the results of Franke et al. (2008) but are generally consistent with those of the more advanced ocean circulation model of Butzin et al. (2012).

Relevance:

100.00%

Publisher:

Abstract:

Piston cores, gravity cores, and multicores, as well as hydrographic data, were collected along the Pacific margin of Baja California to reconstruct past variations in the intensity of the oxygen-minimum zone (OMZ). Gravity cores collected from within the OMZ north of 24°N did not contain laminated surface sediments even though bottom water oxygen (BWO) concentrations were close to 5 µmol/kg. However, many of the cores collected south of 24°N did contain millimeter- to centimeter-scale, brown to black laminations in Holocene and older sediments but not in sediments deposited during the Last Glacial Maximum. In addition to the dark laminations, Holocene sediments in Soledad Basin, silled at 290 m, also contain white coccolith laminae that probably represent individual blooms. Two open margin cores from 430 and 700 m depth that were selected for detailed radiocarbon dating show distinct transitions from bioturbated glacial sediment to laminated Holocene sediment occurring at 12.9 and 11.5 ka, respectively. The transition is delayed and more gradual (11.3-10.0 ka) in another dated core from Soledad Basin. The observations indicate that bottom-water oxygen concentrations dropped below a threshold for the preservation of laminations at different times, or that a synchronous hydrographic change left an asynchronous sedimentary imprint due to local factors. With the caveat that laminated sections should therefore not be correlated without independent age control, the pattern of older sequences of laminations along the North American western margin reported by this and previous studies suggests that multiple patterns of regional productivity and ventilation prevailed over the past 60 kyr.

Relevance:

100.00%

Publisher:

Abstract:

The presence of high phase noise, in addition to additive white Gaussian noise, in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3(Formula presented.) the target and achieve the target with around 0.2 dB of extra signal-to-noise ratio.
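A minimal sketch of the block-interleaver idea described here: bits are written into a block row by row and read out column by column, so a burst of channel errors is spread across many codewords. The dimensions are arbitrary, and the BCH code itself is not implemented; this illustrates the general technique rather than the paper's optimized interleavers.

```python
def interleave(bits: list[int], rows: int, cols: int) -> list[int]:
    """Write `bits` into a rows x cols block row-wise, read out column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits: list[int], rows: int, cols: int) -> list[int]:
    """Inverse of `interleave`: write column-wise, read back row-wise."""
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    for i, b in enumerate(bits):
        c, r = divmod(i, rows)
        out[r * cols + c] = b
    return out

data = list(range(12))                 # stand-in for 3 codewords of 4 bits
tx = interleave(data, rows=3, cols=4)
assert deinterleave(tx, rows=3, cols=4) == data
# A burst of 3 consecutive channel errors in `tx` lands in 3 different rows
# (codewords) after deinterleaving, so each codeword sees only one error.
```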

Relevance:

100.00%

Publisher:

Abstract:

This paper looks at the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions that would usually result from a lost packet, greatly reducing the need for TCP to deal with losses. There is, however, a side effect to using FEC as a countermeasure to packet loss: an additional requirement for bandwidth. For applications such as real-time video conferencing, delay must be kept to a minimum, and retransmissions are certainly not desirable. A balance must therefore be struck between additional bandwidth and delay due to retransmissions. Our results show that, when packet loss occurs, the throughput of data can be significantly improved by combining FEC and TCP, compared to relying solely on TCP for retransmissions. Furthermore, a case study applies this result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
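To make the bandwidth-versus-retransmission trade-off concrete, here is a hedged sketch of the simplest packet-level FEC: one XOR parity packet per block of k data packets, which recovers any single loss in the block at the cost of 1/k extra bandwidth. This illustrates the general technique, not the specific scheme evaluated in the paper.

```python
from functools import reduce

def xor_parity(packets):
    """XOR of equal-length packets; serves as a single-loss parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    """Rebuild a block in which at most one packet is missing (None)."""
    missing = [i for i, p in enumerate(received) if p is None]
    if missing:
        assert len(missing) == 1, "single-parity FEC recovers only one loss"
        present = [p for p in received if p is not None]
        received[missing[0]] = xor_parity(present + [parity])
    return received

block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # k = 4 -> 25% bandwidth overhead
parity = xor_parity(block)
lossy = [block[0], None, block[2], block[3]]    # packet 1 lost in transit
assert recover(lossy, parity) == block          # repaired without retransmission
```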

Relevance:

100.00%

Publisher:

Abstract:

In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. This scheme combines erasure coding, H.264/AVC error resilience techniques, and importance measures in video coding. The unequal importance of the video packets is identified at the group-of-pictures (GOP) and H.264/AVC data partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to network conditions such as the available bandwidth, packet loss rate, and average packet burst loss length. A near-optimal algorithm is developed to handle the FEC assignment optimization. Simulation results show that our scheme effectively utilizes network resources such as bandwidth while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
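A hedged sketch of the unequal-protection idea (not the paper's near-optimal algorithm): distribute a fixed parity budget across packet classes in proportion to an importance weight, so that the most important data receives the most FEC. The class names, weights, and budget below are illustrative assumptions.

```python
def assign_parity(importance: dict[str, float], parity_budget: int) -> dict[str, int]:
    """Split `parity_budget` FEC parity packets across classes, proportional to
    importance, with largest-remainder rounding so the total is exact."""
    total_w = sum(importance.values())
    shares = {k: parity_budget * w / total_w for k, w in importance.items()}
    alloc = {k: int(s) for k, s in shares.items()}
    leftover = parity_budget - sum(alloc.values())
    for k in sorted(shares, key=lambda k: shares[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# Illustrative importance weights for packet classes within a GOP:
weights = {"I-frame": 5.0, "P-frame": 2.0, "B-frame": 1.0}
print(assign_parity(weights, parity_budget=16))
# {'I-frame': 10, 'P-frame': 4, 'B-frame': 2}
```

An adaptive scheme would recompute the weights or the budget as the measured loss rate and burst length change; this static version only shows the allocation step.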

Relevance:

100.00%

Publisher:

Abstract:

Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies: forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ t_e/n, and by k(1-p)/(n-t_e) when p > t_e/n, where t_e is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np-t_e)/(n-t_e) for t_e/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved, and the residual loss rate is lower bounded by (p+r-1)/r for 1-r < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding can be used to maximize the performance of practical systems. © 2010 IEEE.
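The bounds quoted above are easy to evaluate numerically. The sketch below implements them directly, for an illustrative (n, k) = (255, 223) code with t_e = 32; the code parameters are an example, not taken from the paper.

```python
def throughput_cap(n: int, k: int, t_e: int, p: float) -> float:
    """Throughput cap of an (n, k) forward erasure code with capability t_e."""
    if p <= t_e / n:
        return k / n
    return k * (1 - p) / (n - t_e)

def residual_loss_bound(n: int, t_e: int, p: float) -> float:
    """Lower bound on the residual loss rate, valid for t_e/n < p <= 1."""
    return (n * p - t_e) / (n - t_e)

n, k, t_e = 255, 223, 32
for p in (0.05, 0.125, 0.2, 0.3):
    line = f"p={p:.3f}  throughput cap={throughput_cap(n, k, t_e, p):.3f}"
    if p > t_e / n:
        line += f"  residual loss >= {residual_loss_bound(n, t_e, p):.3f}"
    print(line)
```

Because this example code has n - t_e = k (the maximum-distance-separable case), the cap for p > t_e/n reduces to 1 - p, the erasure-channel Shannon capacity noted in the abstract.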

Relevance:

100.00%

Publisher:

Abstract:

We develop a framework for estimating the quality of transmission (QoT) of a new lightpath before it is established, as well as for calculating the expected degradation it will cause to existing lightpaths. The framework correlates the QoT metrics of established lightpaths, which are readily available from coherent optical receivers that can be extended to serve as optical performance monitors. Past similar studies used only space (routing) information and thus neglected spectrum, and they focused on old-generation noncoherent networks. The proposed framework accounts for correlation in both the space and spectrum domains and can be applied to both fixed-grid wavelength division multiplexing (WDM) and elastic optical networks. It is based on a graph transformation that exposes and models the interference between spectrum-neighboring channels. Our results indicate that our QoT estimates are very close to the actual performance data, that is, to having perfect knowledge of the physical layer. The proposed estimation framework is shown to provide up to 4 × 10^-2 lower pre-forward-error-correction bit error ratio (BER) compared to the worst-case interference scenario, which overestimates the BER. The higher accuracy can be harvested when lightpaths are provisioned with low margins; our results showed up to a 47% reduction in required regenerators, a substantial saving in equipment cost.
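A minimal sketch of the kind of graph the transformation exposes: each lightpath is a node, and an edge is added when two lightpaths share a fibre link and occupy neighboring spectrum slots, which is exactly the interference the QoT estimator must account for. The lightpath data and guard-band size are hypothetical, and this is only the graph-building step, not the estimation itself.

```python
from itertools import combinations

# Each lightpath: (id, set of fibre links, (first_slot, last_slot)) -- toy data.
lightpaths = [
    ("LP1", {"A-B", "B-C"}, (10, 13)),
    ("LP2", {"B-C", "C-D"}, (14, 17)),   # shares B-C, spectrum-adjacent to LP1
    ("LP3", {"A-B"},        (30, 33)),   # shares A-B but is spectrally far away
]

def spectrum_adjacent(s1: tuple[int, int], s2: tuple[int, int], guard: int = 1) -> bool:
    """True if the two slot ranges lie within `guard` slots of each other."""
    return not (s1[1] + guard < s2[0] or s2[1] + guard < s1[0])

edges = [(a[0], b[0]) for a, b in combinations(lightpaths, 2)
         if a[1] & b[1] and spectrum_adjacent(a[2], b[2])]
print(edges)  # [('LP1', 'LP2')]: only co-routed, spectrum-neighboring pairs interfere
```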

Relevance:

100.00%

Publisher:

Abstract:

We quantify the error statistics and patterning effects in a 5 × 40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even for large BER (BER > 10^-1). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
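A hedged sketch of one simple constrained code in this spirit (a generic run-length limiter, not the authors' adaptive scheme): a complementary bit is stuffed after every run of `max_run` identical bits, so the transmitted stream never contains the long runs that typically trigger patterning-induced errors.

```python
def stuff(bits: list[int], max_run: int = 3) -> list[int]:
    """Insert a complementary bit after each run of `max_run` identical bits."""
    out, run, last = [], 0, None
    for b in bits:
        run, last = (run + 1, last) if b == last else (1, b)
        out.append(b)
        if run == max_run:
            out.append(1 - b)          # break the run
            run, last = 1, 1 - b
    return out

def destuff(bits: list[int], max_run: int = 3) -> list[int]:
    """Inverse of `stuff`: drop the bit following each maximal run."""
    out, run, last, skip = [], 0, None, False
    for b in bits:
        run, last = (run + 1, last) if b == last else (1, b)
        if skip:
            skip = False               # this is the stuffed bit: drop it
            continue
        out.append(b)
        if run == max_run:
            skip = True
    return out

data = [1, 1, 1, 1, 1, 0, 0, 0, 0]
coded = stuff(data)                    # [1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
assert destuff(coded) == data          # lossless round trip, runs capped at 3
```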

Relevance:

100.00%

Publisher:

Abstract:

Lead isotopic compositions and Pb and Ba concentrations have been measured in ice cores from Law Dome, East Antarctica, covering the past 6500 years. 'Natural' background concentrations of Pb (about 0.4 pg/g) and Ba (about 1.3 pg/g) are observed until 1884 AD, after which increased Pb concentrations and lowered 206Pb/207Pb ratios indicate the influence of anthropogenic Pb. The isotopic composition of 'natural' Pb varies within the range 206Pb/207Pb = 1.20-1.25 and 208Pb/207Pb = 2.46-2.50, with an average rock and soil dust Pb contribution of 8-12%. A major pollution event is observed at Law Dome between 1884 and 1908 AD, elevating the Pb concentration four-fold and changing 206Pb/207Pb ratios in the ice to about 1.12. Based on Pb isotopic systematics and Pb emission statistics, this is attributed to Pb mined at Broken Hill and smelted at Broken Hill and Port Pirie, Australia. Anthropogenic Pb inputs are at their greatest from 1900 to 1910 and from about 1960 to about 1980. During the 20th century, Ba concentrations are consistently higher than 'natural' levels and are attributed to increased dust production, suggesting the influence of climate change and/or changes in land coverage by vegetation.

Relevance:

100.00%

Publisher:

Abstract:

Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold P_a exists for any quantum gate that is to be used in such a computation if the computation is to continue for an unlimited number of steps. Specifically, the error probability P_e for such a gate must fall below the accuracy threshold: P_e < P_a. Estimates of P_a vary widely, though P_a ∼ 10^-4 has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, for both ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall 1 to 4 orders of magnitude below the target threshold of 10^-4. After applying neighboring optimal control theory to improve the performance of the quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows. Step 1 provides a one-shot procedure, using neighboring optimal control theory, to prepare a physical two-qubit state that is a high-fidelity approximation to the Bell state |β01⟩ = (|01⟩ + |10⟩)/√2. I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ε ∼ 10^-6 (10^-5) using one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each pair prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
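As a small numerical illustration of the state referenced here, the sketch below builds |β01⟩ = (|01⟩ + |10⟩)/√2 and evaluates the fidelity of a noisy approximation against it. The noise model and its strength are arbitrary choices for illustration, not the thesis' control model.

```python
import numpy as np

# Two-qubit computational basis order: |00>, |01>, |10>, |11>.
ket01 = np.array([0, 1, 0, 0], dtype=complex)
ket10 = np.array([0, 0, 1, 0], dtype=complex)
bell_01 = (ket01 + ket10) / np.sqrt(2)

# A crude "imperfectly prepared" state: add small random noise, renormalize.
rng = np.random.default_rng(0)
noisy = bell_01 + 1e-3 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
noisy /= np.linalg.norm(noisy)

fidelity = abs(np.vdot(bell_01, noisy)) ** 2
print(f"fidelity = {fidelity:.8f}, error probability ~ {1 - fidelity:.2e}")
# The error probability 1 - F lands near the 10^-6 scale quoted above.
```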