929 results for Quantum Error-correction
Abstract:
The Laurentide Ice Sheet (LIS) was a large, dynamic ice sheet in the early Holocene. The glacial events through Hudson Strait leading to its eventual demise are recorded in the well-dated Labrador shelf core, MD99-2236 from the Cartwright Saddle. We develop a detailed history of the timing of ice-sheet discharge events from the Hudson Strait outlet of the LIS during the Holocene using high-resolution detrital carbonate, ice rafted detritus (IRD), δ18O, and sediment color data. Eight detrital carbonate peaks (DCPs) associated with IRD peaks and light oxygen isotope events punctuate the MD99-2236 record between 11.5 and 8.0 ka. We use the stratigraphy of the DCPs developed from MD99-2236 to select the appropriate ΔR to calibrate the ages of recorded glacial events in Hudson Bay and Hudson Strait such that they match the DCPs in MD99-2236. We associate the eight DCPs with H0, the Gold Cove advance, the Noble Inlet advance, the initial retreat of the Hudson Strait ice stream (HSIS) from Hudson Strait, the opening of the Tyrrell Sea, and the drainage of glacial lakes Agassiz and Ojibway. The opening of Foxe Channel and retreat of glacial ice from Foxe Basin are represented by a shoulder in the carbonate data. A ΔR of 350 years applied to the radiocarbon ages constraining glacial events H0 through the opening of the Tyrrell Sea provided the best match with the MD99-2236 DCPs; ΔR values and ages from the literature are used for the younger events. A very close age match was achieved between the 8.2 ka cold event in the Greenland ice cores, DCP7 (8.15 ka BP), and the drainage of glacial lakes Agassiz and Ojibway.
Our stratigraphic comparison between the DCPs in MD99-2236 and the calibrated ages of Hudson Strait/Bay deglacial events shows that the retreat of the HSIS, the opening of the Tyrrell Sea, and the catastrophic drainage of glacial lakes Agassiz and Ojibway at 8.2 ka are separate events that have been combined in previous estimates of the timing of the 8.2 ka event from marine records. SW Iceland shelf core MD99-2256 documents freshwater entrainment into the subpolar gyre from the Hudson Strait outlet via the Labrador, North Atlantic, and Irminger currents. The timing of freshwater release from the LIS Hudson Strait outlet in MD99-2236 matches evidence for freshwater forcing and LIS icebergs carrying foreign minerals to the SW Iceland shelf between 11.5 and 8.2 ka. The congruency of these records supports the conclusion of the entrainment of freshwater from the retreat of the LIS through Hudson Strait into the subpolar gyre and provides specific time periods when pulses of LIS freshwater were present to influence climate.
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3(Formula presented.) target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
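The analytic code selection the abstract describes can be sketched with a standard bounded-distance-decoding approximation. The snippet below is a minimal illustration under an i.i.d. binary-channel assumption (it deliberately ignores the phase-noise correlations and the interleaving that the paper actually addresses), and the BCH(1023, 883), t = 14 parameters are one entry from standard BCH tables, not necessarily a code the authors select:

```python
from math import comb

def post_fec_ber_estimate(n: int, t: int, p: float) -> float:
    """Approximate post-FEC BER for a t-error-correcting code of block
    length n over an i.i.d. binary channel with pre-FEC BER p.
    A decoded block fails when more than t of its n bits are in error;
    a failing bounded-distance decoder is assumed to leave the i channel
    errors and possibly add up to t more (a common rough approximation)."""
    ber = 0.0
    for i in range(t + 1, n + 1):
        p_block = comb(n, i) * p**i * (1 - p)**(n - i)
        ber += p_block * min(i + t, n) / n
    return ber

# Example: a BCH(1023, 883) code corrects t = 14 errors per block.
print(post_fec_ber_estimate(1023, 14, 2e-3))
```

Sweeping the pre-FEC BER p over a set of candidate codes gives the kind of analytic pre-FEC-to-post-FEC mapping from which a code meeting a target post-FEC BER can be picked without long decoding simulations.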
Abstract:
This paper will look at the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions which would usually result from a lost packet. The requirement for TCP to deal with any losses is then greatly reduced. There are however side-effects to using FEC as a countermeasure to packet loss: an additional requirement for bandwidth. When applications such as real-time video conferencing are needed, delay must be kept to a minimum, and retransmissions are certainly not desirable. A balance, therefore, between additional bandwidth and delay due to retransmissions must be struck. Our results show that the throughput of data can be significantly improved when packet loss occurs using a combination of FEC and TCP, compared to relying solely on TCP for retransmissions. Furthermore, a case study applies the result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
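As a concrete illustration of how packet-level FEC can avoid a TCP retransmission, here is the simplest possible erasure scheme: one XOR parity packet per group of equal-length data packets, able to rebuild any single lost packet. This is a toy sketch with hypothetical helper names, not the paper's scheme (deployed systems typically use stronger codes such as Reed-Solomon):

```python
def xor_parity(packets):
    """Build one parity packet as the byte-wise XOR of a group of
    equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Recover at most one missing packet (marked None) by XORing the
    parity packet with every packet that did arrive."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("XOR parity can repair only a single loss")
    if missing:
        repaired = bytearray(parity)
        for p in received:
            if p is not None:
                for i, b in enumerate(p):
                    repaired[i] ^= b
        received[missing[0]] = bytes(repaired)
    return received

group = [b"pkt0", b"pkt1", b"pkt2"]
parity = xor_parity(group)
# Drop one packet in transit and rebuild it without a retransmission.
print(recover([b"pkt0", None, b"pkt2"], parity))
```

The bandwidth/delay trade-off the abstract mentions is visible here: one extra packet per group buys recovery from one loss with zero added round trips.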
Abstract:
In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. This scheme combines erasure coding, H.264/AVC error resilience techniques and importance measures in video coding. The unequal importance of the video packets is identified in the group of pictures (GOP) and the H.264/AVC data partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to the network conditions, such as the available network bandwidth, packet loss rate and average packet burst loss length. A near-optimal algorithm is developed to deal with the FEC assignment for optimization. The simulation results show that our scheme can effectively utilize network resources such as bandwidth, while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
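The idea of spreading FEC parity unequally across packet classes can be sketched with a greedy allocator that hands the next parity packet to the layer where it buys the largest drop in importance-weighted loss probability. This is an illustrative stand-in under an independent-loss assumption, not the paper's near-optimal algorithm, and it ignores the burst-loss modelling the scheme accounts for; the layer weights and sizes below are made up:

```python
from math import comb

def loss_prob(k: int, m: int, p: float) -> float:
    """Probability that a (k+m, k) erasure code fails, i.e. more than m
    of the k+m packets are lost on a channel with independent loss rate p."""
    n = k + m
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1, n + 1))

def assign_parity(importance, k, budget, p):
    """Greedy ULP sketch: give out parity packets one at a time to the
    layer whose importance-weighted failure probability drops the most."""
    parity = [0] * len(importance)
    for _ in range(budget):
        gains = [w * (loss_prob(k, m, p) - loss_prob(k, m + 1, p))
                 for w, m in zip(importance, parity)]
        parity[gains.index(max(gains))] += 1
    return parity

# Three layers (e.g. I-, P-, B-frame data), 10 source packets each,
# 6 parity packets to share, 5% packet loss rate.
print(assign_parity([1.0, 0.5, 0.2], 10, 6, 0.05))
```

As expected for ULP, the most important layer ends up with at least as much parity as the least important one, which is what produces the graceful degradation described above.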
Abstract:
We develop a framework for estimating the quality of transmission (QoT) of a new lightpath before it is established, as well as for calculating the expected degradation it will cause to existing lightpaths. The framework correlates the QoT metrics of established lightpaths, which are readily available from coherent optical receivers that can be extended to serve as optical performance monitors. Past similar studies used only space (routing) information and thus neglected spectrum, while they focused on old-generation noncoherent networks. The proposed framework accounts for correlation in both the space and spectrum domains and can be applied to both fixed-grid wavelength division multiplexing (WDM) and elastic optical networks. It is based on a graph transformation that exposes and models the interference between spectrum-neighboring channels. Our results indicate that our QoT estimates are very close to the actual performance data, that is, to having perfect knowledge of the physical layer. The proposed estimation framework is shown to provide up to 4 × 10⁻² lower pre-forward error correction bit error ratio (BER) compared to the worst-case interference scenario, which overestimates the BER. The higher accuracy can be harvested when lightpaths are provisioned with low margins; our results showed up to 47% reduction in required regenerators, a substantial savings in equipment cost.
Abstract:
We quantify the error statistics and patterning effects in a 5 × 40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even for large BER (BER > 10⁻¹). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
Abstract:
The study examines the short-run and long-run causality running from real economic growth to real foreign direct investment (RFDI) inflows. Other variables, such as education (a combination of primary, secondary and tertiary enrolment as a proxy for education), real development finance, and unskilled labour, are included in the study. Time series data covering the period 1983–2013 are examined. First, I applied the Augmented Dickey-Fuller (ADF) technique to test for unit roots in the variables. Findings show all variables are integrated of order one [I(1)]. Thereafter, the Johansen Cointegration Test (JCT) was conducted to establish the relationship among the variables. Both the trace and maximum eigenvalue statistics at the 5% level of significance indicate three cointegrating equations. A vector error correction model (VECM) was applied to capture short- and long-run causality running from education, economic growth, real development finance, and unskilled labour to real foreign direct investment inflows in the Republic of Rwanda. Findings show no short-run causality running from education, real development finance, real GDP, and unskilled labour to real FDI inflows; however, long-run causality exists. This can be interpreted to mean that, in the short run, education, development finance, and economic growth do not influence inflows of foreign direct investment in Rwanda, but they do in the long run. From a policy perspective, the Republic of Rwanda should focus more on the long-term goal of investing in education to improve human capital, undertake policy reforms that promote economic growth, and promote good governance to attract development finance, especially from Nordic countries (particularly Norway and Denmark).
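The error-correction logic behind a VECM finding of this kind (no short-run causality, yet adjustment toward a long-run equilibrium) can be illustrated on toy data. The sketch below simulates a cointegrated pair and estimates the error-correction coefficient by simple OLS; the series are synthetic stand-ins, not the Rwandan macro data, and a real VECM would include short-run lag terms as well:

```python
import random

random.seed(7)

# Simulate a random-walk driver x_t and a series y_t tied to it, so the
# pair is cointegrated: y_t = x_t + stationary noise.
n = 400
x = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))
y = [xi + random.gauss(0, 0.5) for xi in x]

# Error-correction regression: dy_t = alpha * (y_{t-1} - x_{t-1}) + e_t.
# A significantly negative alpha is the hallmark of long-run
# (error-correction) causality even when short-run causality is absent.
ect = [y[t - 1] - x[t - 1] for t in range(1, n)]  # lagged equilibrium error
dy = [y[t] - y[t - 1] for t in range(1, n)]
alpha = sum(e * d for e, d in zip(ect, dy)) / sum(e * e for e in ect)
print(f"error-correction coefficient alpha = {alpha:.3f}")
```

A negative alpha means deviations from the long-run relation are pulled back toward equilibrium, which is exactly the long-run causality the VECM detects here.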
Abstract:
In this paper, we show how the polarisation state of a linearly polarised antenna can be recovered through the use of a three-term error correction model. The approach adopted is shown to be robust in situations where some multipath exists and where the sampling channels are imperfect with regard to both their amplitude and phase tracking. In particular, it has been shown that the error of the measured polarisation tilt angle can be reduced from 33% to 3% or below by applying the proposed calibration method. It is described how one can use a rotating dipole antenna as both the calibration standard and the polarisation encoder, thus simplifying the physical arrangement of the transmitter. Experimental results are provided in order to show the utility of the approach, which could have a variety of applications including bandwidth-conservative polarisation sub-modulation in advanced wireless communications systems.
Abstract:
Data from the World Federation of Exchanges show that Brazil's Sao Paulo stock exchange is one of the largest worldwide in terms of market value. Thus, the objective of this study is to obtain univariate and bivariate forecasting models based on intraday data from the futures and spot markets of the BOVESPA index. The interest is to verify whether arbitrage opportunities exist in the Brazilian financial market. To this end, three econometric forecasting models were built: ARFIMA, vector autoregressive (VAR), and vector error correction (VEC). Furthermore, the study presents the results of a Granger causality test for the aforementioned series. This type of study shows that it is important to identify arbitrage opportunities in financial markets and, in particular, in the application of these models to data of this nature. In terms of the forecasts made with these models, the VEC model showed better results. The causality test shows that the futures BOVESPA index Granger-causes the spot BOVESPA index. This result may indicate arbitrage opportunities in Brazil.
Abstract:
This thesis examines the effect of price risk on the decisions of Quebec farmers and processors. It is divided into three chapters. The first chapter reviews the literature. The second chapter examines the effect of price risk on the production of three products, namely grain corn, pork and lamb, in the province of Quebec. The last chapter focuses on analyzing the change in Quebec pork processors' preferences regarding market choice. The first chapter aims to show the importance of the effect of price risk on the quantity produced by farmers, as highlighted in the literature. Indeed, the literature reveals the importance of export price risk in international trade. The second chapter is devoted to the study of risk factors (price expectations and price volatility) in the supply function. A generalized autoregressive conditional heteroskedasticity (GARCH) model is used to model these risk factors. The model parameters are estimated by Full Information Maximum Likelihood (FIML). The empirical results show the negative effect of price volatility on production, whereas price predictability has a positive effect on the quantity produced. As expected, we find that the application of the farm income stabilization insurance program (ASRA) in Quebec induces a greater sensitivity of supply to the effective price (the price including ASRA compensation) than to the market price. Moreover, supply is less sensitive to input prices than to the output price. A decrease in producers' risk aversion is another consequence of the application of this program.
Furthermore, the estimation of the marginal risk premium reveals that corn producers are the least risk-averse (compared with pork or lamb producers). The third chapter analyzes the change in Quebec pork processors' preferences regarding market choice. We assume that the processor can supply products to two markets: foreign and local. The theoretical model explains relative supply as a function of both relative price expectations and relative price volatility. The model reveals that the sensitivity of relative supply to relative price volatility depends on two factors: on the one hand, the share of exports in total production and, on the other, the elasticity of substitution between the two markets. An error correction model is used to estimate the model parameters. The results show a positive and significant effect of relative price expectations on relative supply in the short run. The results also show that an increase in price volatility on the foreign market relative to the local market leads to a decrease in relative supply on the foreign market in the long run. Moreover, according to the results, the foreign and local markets are more substitutable in the long run than in the short run.
Abstract:
This research aims to investigate the hedge effectiveness and Optimal Hedge Ratio for the futures markets of cattle, coffee, ethanol, corn and soybean. This paper estimates the Optimal Hedge Ratio and hedge effectiveness through multivariate GARCH models with error correction, attending to the possible phenomenon of an Optimal Hedge Ratio differential between the crop and intercrop periods. The Optimal Hedge Ratio is expected to be larger in the intercrop period due to the uncertainty related to a possible supply shock (LAZZARINI, 2010). Among the futures contracts studied in this research, the coffee, ethanol and soybean contracts had not yet been investigated for this phenomenon. Furthermore, the corn and ethanol contracts had not been the subject of research dealing with dynamic hedging strategies. This paper distinguishes itself by including the GARCH model with error correction, which had never been considered in investigations of a possible Optimal Hedge Ratio differential between the crop and intercrop periods. Commodity quotations on BM&FBOVESPA were used as futures prices and the CEPEA index as the spot market, at daily frequency, in the period from May 2010 to June 2013 for cattle, coffee, ethanol and corn, and to August 2012 for soybean. Similar results were obtained for all the commodities: there is a long-term relationship between the spot and futures markets, bicausality between the spot and futures markets for cattle, coffee, ethanol and corn, and unicausality from the futures price of soybean to its spot price. The Optimal Hedge Ratio was estimated using three different strategies: linear regression by OLS, a diagonal BEKK-GARCH model, and a diagonal BEKK-GARCH model with an intercrop dummy. The OLS regression model pointed to hedge inefficiency, given that the Optimal Hedge Ratio it produced was too low. The second model represents the dynamic hedging strategy, which captured time variation in the Optimal Hedge Ratio.
The last hedging strategy did not detect an Optimal Hedge Ratio differential between the crop and intercrop periods; therefore, contrary to expectations, the investor does not need to increase his/her investment in the futures market during the intercrop period.
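The quantity being estimated throughout is the minimum-variance hedge ratio, h* = Cov(Δspot, Δfutures) / Var(Δfutures). A static version can be sketched on toy data as follows; the BEKK-GARCH strategies in the text produce a time-varying h*_t from conditional second moments instead, and the returns below are synthetic, not CEPEA/BM&FBOVESPA series:

```python
import random

random.seed(1)

# Toy daily price changes for a spot series and its futures contract,
# generated with a common factor so they co-move (illustrative only).
n = 250
common = [random.gauss(0, 1) for _ in range(n)]
d_fut = [c + random.gauss(0, 0.3) for c in common]
d_spot = [0.9 * c + random.gauss(0, 0.4) for c in common]

def mean(v):
    return sum(v) / len(v)

def optimal_hedge_ratio(ds, df):
    """Static minimum-variance hedge ratio:
    h* = Cov(d_spot, d_fut) / Var(d_fut),
    i.e. the OLS slope of spot changes on futures changes."""
    mds, mdf = mean(ds), mean(df)
    cov = sum((a - mds) * (b - mdf) for a, b in zip(ds, df)) / (len(ds) - 1)
    var = sum((b - mdf) ** 2 for b in df) / (len(df) - 1)
    return cov / var

h = optimal_hedge_ratio(d_spot, d_fut)
print(f"estimated static hedge ratio: {h:.3f}")
```

An h* well below 1, as the OLS strategy found in the study, means a fully matched futures position would over-hedge the spot exposure.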
Abstract:
This paper analyzes the dynamics of the American Depositary Receipt (ADR) of a Colombian bank (Bancolombia) in relation to its pricing factors (the underlying (preferred) share price, the exchange rate and the US market index). The aim is to test whether there is a long-term relation among these variables that would imply predictability. One cointegrating relation is found, allowing the use of a vector error correction model to examine the transmission of shocks to the underlying prices, the exchange rate, and the US market index. The main finding of this paper is that, in the short run, the underlying share price seems to adjust after changes in the ADR price, pointing to the fact that the NYSE (the trading market for the ADR) leads the Colombian market. However, in the long run, both the underlying share price and the ADR price adjust to changes in one another.
Abstract:
Since Long's Interaction Hypothesis (Long, 1983), multiple studies have suggested the need for oral interaction for successful second language learning. Within this perspective, a great deal of research has been carried out to investigate the role of corrective feedback in the process of acquiring a second language, but there are still various open debates about this issue. This comparative study seeks to contribute to the existing literature on corrective feedback in oral interaction by exploring teachers' corrective techniques and students' responses to these corrections. Two learning contexts were observed and compared: a traditional English as a foreign language (EFL) classroom and a Content and Language Integrated Learning (CLIL) classroom. The main aim was to see whether our data conform to the Counterbalance Hypothesis proposed by Lyster and Mori (2006). Although results did not show significant differences between the two contexts, a qualitative analysis of the data shed some light on the differences between these two language teaching settings. The findings point to the need for further research on error correction in EFL and CLIL contexts in order to overcome the limitations of the present study.