991 results for nonlinear correlation


Relevance:

30.00%

Publisher:

Abstract:

Background: Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods: We studied 77 unselected patients referred for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD. For the analysis of HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [number of adjacent RR intervals differing by more than 50 ms] and RMSSD [root mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤ 0.003 Hz), very low frequency (VLF, 0.003–0.04 Hz), low frequency (LF, 0.04–0.15 Hz) and high frequency (HF, 0.15–0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). As nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (−ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and the correlation dimension. The cutoff point of each variable for predictive tests was obtained from the receiver operating characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, with areas under the curve ≥ 0.650 considered relevant. Results: CAD patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and −ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ −0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion: Given their overall impairment, we suggest the use of HRV analysis in both the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris. © 2012 Pivatelli et al.; licensee BioMed Central Ltd.
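The linear time-domain indices and the trapezoidal ROC area used in this abstract can be sketched as follows. This is a minimal illustration, assuming NumPy; the function names are ours, not the authors':

```python
import numpy as np

def time_domain_indices(rr_ms):
    """Linear time-domain HRV indices from a sequence of
    normal-to-normal (NN) intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)                       # successive NN differences
    sdnn = rr.std(ddof=1)                     # overall variability
    rmssd = np.sqrt(np.mean(diffs ** 2))      # root mean square of successive differences
    nn50 = int(np.sum(np.abs(diffs) > 50.0))  # adjacent intervals differing by > 50 ms
    return sdnn, rmssd, nn50

def roc_auc_trapezoid(fpr, tpr):
    """Area under a ROC curve by the trapezoidal rule, given the
    operating points sorted by increasing false-positive rate."""
    return float(np.trapz(tpr, fpr))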

Relevance:

30.00%

Publisher:

Abstract:

The Poincaré plot for heart rate variability analysis is a geometrical, nonlinear technique that can be used to assess the dynamics of heart rate variability by representing each pair of successive R-R intervals in a simplified phase space that describes the system's evolution. The aim of the present study was to verify whether SD1, SD2 and the SD1/SD2 ratio correlate with nonlinear heart rate variability indices in both diseased and healthy conditions. 114 patients with coronary artery disease and 65 healthy subjects underwent 30-minute heart rate recording in the supine position, and the following indices were analyzed: SD1, SD2, SD1/SD2, sample entropy, Lyapunov exponent, Hurst exponent, correlation dimension, detrended fluctuation analysis, SDNN, RMSSD, LF, HF and the LF/HF ratio. Correlations between SD1, SD2 and SD1/SD2 and the other variables were tested with the Spearman rank correlation test and a regression analysis. We verified a high correlation between the SD1/SD2 index and HE and DFA (α1) in both groups, suggesting that this ratio can be used as a surrogate variable. © 2013 Elsevier B.V.
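The SD1/SD2 descriptors follow directly from the cloud of successive R-R pairs: SD1 is the dispersion perpendicular to the identity line (short-term variability), SD2 the dispersion along it (long-term variability). A minimal sketch, assuming NumPy; not the authors' code:

```python
import numpy as np

def poincare_sd(rr_ms):
    """SD1/SD2 descriptors of the Poincaré plot (RR_n vs RR_{n+1})."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((x - y) / np.sqrt(2), ddof=1)  # width across the identity line
    sd2 = np.std((x + y) / np.sqrt(2), ddof=1)  # length along the identity line
    return sd1, sd2, sd1 / sd2
```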

Relevance:

30.00%

Publisher:

Abstract:

Objective: The aim of the present study was to evaluate the effect of pursed-lip breathing (PLB) on cardiac autonomic modulation in individuals with chronic obstructive pulmonary disease (COPD) at rest. Methods: Thirty-two individuals were allocated to one of two groups: COPD (n = 17; 67.29 ± 6.87 years of age) and control (n = 15; 63.2 ± 7.96 years of age). The groups were submitted to a two-stage experimental protocol. The first stage consisted of the characterization of the sample and spirometry. The second stage comprised the analysis of cardiac autonomic modulation through the recording of R-R intervals. This analysis was performed using both nonlinear and linear heart rate variability (HRV) indices. In the statistical analysis, the level of significance was set at 5% (p = 0.05). Results: PLB promoted significant increases in the SD1, SD2, RMSSD and LF (ms²) indices as well as an increase in α1 and a reduction in α2 in the COPD group. A greater dispersion of points on the Poincaré plots was also observed. The magnitude of the changes produced by PLB differed between groups. Conclusion: PLB led to a loss of the fractal correlation properties of heart rate in the direction of linearity in patients with COPD, as well as an increase in vagal activity and an impact on the spectral analysis. The difference in the magnitude of the changes produced by PLB between groups may be related to the presence of the disease and alterations in the respiration rate.
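The α1 and α2 indices reported above come from detrended fluctuation analysis (DFA), evaluated over short (typically 4–16 beats) and long (16–64 beats) window ranges respectively. A generic DFA sketch, our illustration assuming NumPy, not the study's implementation:

```python
import numpy as np

def dfa_alpha(rr, scales):
    """Detrended fluctuation analysis scaling exponent: the slope of
    log F(n) versus log n, where F(n) is the root-mean-square residual
    after removing a linear trend from each window of n beats."""
    rr = np.asarray(rr, dtype=float)
    y = np.cumsum(rr - rr.mean())             # integrated (profile) series
    flucts = []
    for n in scales:
        f2 = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

For uncorrelated beat-to-beat fluctuations the exponent is close to 0.5; values near 1.0 indicate the fractal (1/f-like) correlations whose loss the abstract describes.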

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

An overview is given of the limitations of Luttinger liquid theory in describing the real time equilibrium dynamics of critical one-dimensional systems with nonlinear dispersion relation. After exposing the singularities of perturbation theory in band curvature effects that break the Lorentz invariance of the Tomonaga-Luttinger model, the origin of high frequency oscillations in the long time behaviour of correlation functions is discussed. The notion that correlations decay exponentially at finite temperature is challenged by the effects of diffusion in the density-density correlation due to umklapp scattering in lattice models.

Relevance:

30.00%

Publisher:

Abstract:

We analyzed the effectiveness of linear short- and long-term variability time-domain parameters, an index of sympatho-vagal balance (SDNN/RMSSD) and entropy in differentiating fetal heart rate patterns (fHRPs) on fetal heart rate (fHR) series of 5, 3 and 2 min duration reconstructed from 46 fetal magnetocardiograms. Gestational age (GA) varied from 21 to 38 weeks. fHRPs were classified based on the fHR standard deviation. In sleep states, we observed that vagal influence increased with GA, and entropy significantly increased (SDNN/RMSSD decreased) with GA, demonstrating that a prevalence of vagal activity with autonomic nervous system maturation may be associated with increased sleep state complexity. In active wakefulness, we observed a significant negative (positive) correlation of short-term (long-term) variability parameters with SDNN/RMSSD. ANOVA statistics demonstrated that long-term irregularity and the standard deviation of normal-to-normal beat intervals (SDNN) best differentiated among fHRPs. Our results confirm that short- and long-term variability parameters are useful to differentiate between quiet and active states, and that entropy improves the characterization of sleep states. All measures differentiated fHRPs more effectively on very short HR series, as a result of the high temporal resolution of fMCG and of the intrinsic timescales of the events that originate the different fHRPs.
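The abstract does not say which entropy estimator was used; sample entropy is one common choice for short heart rate series, and a minimal sketch (assuming NumPy; parameters m and r are the usual template length and tolerance) looks like this:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r): -log of the conditional probability
    that templates matching for m points (Chebyshev distance <= r,
    self-matches excluded) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                    # tolerance as a fraction of SD
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ - templ[i]), axis=1)
            total += int(np.sum(d <= r)) - 1  # exclude the self-match
        return total
    b = matches(m)
    a = matches(m + 1)
    return -np.log(a / b)
```

A strictly periodic series yields a value near zero (almost every m-point match extends to m + 1 points), while irregular series score higher.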


Relevance:

30.00%

Publisher:

Abstract:

We deal with homogeneous isotropic turbulence and use the two-point velocity correlation tensor field (parametrized by the time variable t) of the velocity fluctuations to equip an affine space K3 of the correlation vectors with a family of metrics. It was shown in Grebenev and Oberlack (J Nonlinear Math Phys 18:109–120, 2011) that a special form of this tensor field generates the so-called semi-reducible pseudo-Riemannian metrics ds2(t) in K3. This construction presents the template for embedding the couple (K3, ds2(t)) into the Euclidean space R3 with the standard metric. This allows us to introduce the length function between fluid particles; the accompanying problem, which is of basic interest for this paper, is to find out which transformations leave the statistics of length invariant. We also classify, at least locally, the geometry of the particle configuration for a positive Gaussian curvature of this configuration and comment on the case of a negative Gaussian curvature.

Relevance:

30.00%

Publisher:

Abstract:

Currently, a variety of linear and nonlinear measures is in use to investigate spatiotemporal interrelation patterns of multivariate time series. Whereas the former are by definition insensitive to nonlinear effects, the latter detect both nonlinear and linear interrelation. In the present contribution we employ a uniform surrogate-based approach, which is capable of disentangling interrelations that significantly exceed random effects from interrelations that significantly exceed linear correlation. The bivariate version of the proposed framework is explored using a simple model allowing for separate tuning of the coupling and of the nonlinearity of the interrelation. To demonstrate the applicability of the approach to multivariate real-world time series, we investigate resting-state functional magnetic resonance imaging (rsfMRI) data of two healthy subjects as well as intracranial electroencephalograms (iEEG) of two epilepsy patients with focal-onset seizures. The main findings are that, for our rsfMRI data, interrelations can be described by linear cross-correlation. Rejection of the null hypothesis of linear iEEG interrelation occurs predominantly for epileptogenic tissue as well as during epileptic seizures.
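Surrogate tests of this kind are often built on phase randomization, which preserves a series' power spectrum (and hence its linear autocorrelation) while destroying any other structure. The following generic sketch, our illustration rather than the authors' exact pipeline, tests whether an interrelation statistic exceeds what that linear-stochastic null produces:

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate with the same power spectrum as x but random phases."""
    n = len(x)
    f = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(f))
    phases[0] = 0.0                 # keep the mean (DC bin) real
    if n % 2 == 0:
        phases[-1] = 0.0            # Nyquist bin must also stay real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n)

def surrogate_pvalue(stat, x, y, n_surr=99, seed=0):
    """One-sided rank test: does stat(x, y) exceed the values obtained
    when x is replaced by phase-randomized surrogates?"""
    rng = np.random.default_rng(seed)
    s0 = stat(x, y)
    null = [stat(phase_randomize(x, rng), y) for _ in range(n_surr)]
    return (1 + sum(s >= s0 for s in null)) / (n_surr + 1)
```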

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, the aquatic eddy correlation (EC) technique has proven to be a powerful approach for non-invasive measurements of oxygen fluxes across the sediment-water interface. Fundamental to the EC approach is the correlation of turbulent velocity and oxygen concentration fluctuations measured at high frequency in the same sampling volume. Oxygen concentrations are commonly measured with fast-responding electrochemical microsensors. However, due to their own oxygen consumption, electrochemical microsensors are sensitive to changes in the diffusive boundary layer surrounding the probe and thus to changes in the ambient flow velocity. This so-called stirring sensitivity of microsensors constitutes an inherent correlation of flow velocity and oxygen sensing and thus an artificial flux which can confound the benthic flux determination. To assess the artificial flux we measured the correlation between the turbulent flow velocity and the signal of oxygen microsensors in a sealed annular flume without any oxygen sinks or sources. Experiments revealed significant correlations, even for sensors designed to have low stirring sensitivities of ~0.7%. The artificial fluxes depended on the ambient flow conditions and, counterintuitively, increased at higher velocities because of the nonlinear contribution of turbulent velocity fluctuations. The measured artificial fluxes ranged from 2 to 70 mmol m**-2 d**-1 for weak and very strong turbulent flow, respectively. Further, the stirring sensitivity depended on the sensor orientation towards the flow. Optical microsensors (optodes), which should not exhibit a stirring sensitivity, were tested in parallel and did not show any significant correlation between O2 signals and turbulent flow. In conclusion, EC data obtained with electrochemical sensors can be affected by artificial flux, and we recommend using optical microsensors in future EC studies.
Flume experiments were conducted in February 2013 at the Institute for Environmental Sciences, University of Koblenz-Landau, Landau. Experiments were performed in a closed, oval-shaped acrylic glass flume with a cross-sectional width of 4 cm, a height of 10 cm and a total length of 54 cm. The fluid flow was induced by a propeller driven by a motor, and mean flow velocities of up to 20 cm s**-1 were generated by applying voltages between 0 V and 4 V DC. The flume was completely sealed with an acrylic glass cover. Oxygen sensors were inserted through rubber-seal fittings that allowed positioning the sensors at inclinations to the main flow direction of ~60°, ~95° and ~135°. A Clark-type electrochemical O2 microsensor with a low stirring sensitivity (0.7%) was tested, and a fast-responding needle-type O2 optode (PyroScience GmbH, Germany) was used as reference, as optodes should not be stirring sensitive. Instantaneous three-dimensional flow velocities were measured at 7.4 Hz using stereoscopic particle image velocimetry (PIV), and the velocity at the sensor tip was extracted. The correlation of the fluctuating O2 sensor signals and the fluctuating velocities was quantified with a cross-correlation analysis; a significant cross-correlation is equivalent to a significant artificial flux. For a total of 18 experiments the flow velocity was adjusted between 1.7 and 19.2 cm s**-1, and three orientations of the electrochemical sensor were tested, with inclination angles of ~60°, ~95° and ~135° with respect to the main flow direction. In experiments 16-18, wavelike flow was induced, whereas in all other experiments the motor was driven by constant voltages. In 7 experiments, O2 was additionally measured by optodes.
Although performed simultaneously with the electrochemical sensor, optode measurements are listed as separate experiments (denoted by the attached 'op' in the filename), because the velocity time series was extracted at the optode tip, located at a different position in the flume.
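At its core, the EC flux is the covariance of the fluctuating parts of the synchronously sampled vertical velocity and oxygen concentration, F = ⟨w′c′⟩. A minimal sketch (assuming NumPy; real EC processing adds detrending, time-lag correction and coordinate rotation, which are omitted here):

```python
import numpy as np

def ec_flux(w, c):
    """Eddy-correlation flux estimate from synchronous samples of
    vertical velocity w and concentration c: the covariance of their
    fluctuations about the record means."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    wp = w - w.mean()               # velocity fluctuations w'
    cp = c - c.mean()               # concentration fluctuations c'
    return np.mean(wp * cp)         # F = <w'c'>
```

An artificial stirring-sensitivity flux arises precisely when the sensor signal c carries a spurious component correlated with w, which this covariance cannot distinguish from a real flux.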

Relevance:

30.00%

Publisher:

Abstract:

On the basis of the BBGKY hierarchy of equations, an expression is derived for the response of a fully ionized plasma to a strong, high-frequency electric field in the limit of infinite ion mass. It is found that even in this limit the ion-ion correlation function is substantially affected by the field. The corrections to earlier nonlinear results for the current density appear to be quite essential. The validity of the model introduced by Dawson and Oberman to study the response to a vanishingly small field is confirmed for larger values of the field when the correct expression for the ion-ion correlations is introduced; the model by itself does not yield such an expression. The results are of interest for the heating of the plasma and for the propagation of a strong electromagnetic wave through the plasma. The theory seems to be valid for any field intensity for which the plasma is stable.

Relevance:

30.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them. 
Techniques based on extensions of intervals have allowed accurate models of signal and quantization noise propagation to be obtained in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators shows only a 0.04% deviation with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise sources for each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. 
This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn imply a considerably smaller number of samples per simulation, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, the way new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
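The Monte-Carlo side of such word-length studies reduces to a simple loop: quantize the inputs of a system to a candidate fractional word-length, run the system, and average the output error power. A toy sketch of that idea, assuming NumPy and a round-to-nearest quantizer without overflow handling; it is our illustration, not HOPLITE code:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits
    (round-to-nearest; saturation/overflow handling omitted)."""
    scale = 2.0 ** frac_bits
    return np.round(np.asarray(x, dtype=float) * scale) / scale

def mc_roundoff_noise(system, frac_bits, n_trials=200, seed=0):
    """Monte-Carlo estimate of the output round-off noise power when
    the inputs of `system` (any callable on a vector) are quantized."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        x = rng.uniform(-1.0, 1.0, 64)
        e = system(quantize(x, frac_bits)) - system(x)
        errs.append(np.mean(e ** 2))
    return float(np.mean(errs))
```

For a pass-through system this reproduces the classical uniform-quantization noise power Δ²/12 with Δ = 2^(-frac_bits), which is a useful sanity check before profiling real datapaths.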

Relevance:

30.00%

Publisher:

Abstract:

The onset of measles vaccination in England and Wales in 1968 coincided with a marked drop in the temporal correlation of epidemic patterns between major cities. We analyze a variety of hypotheses for the mechanisms driving this change. Straightforward stochastic models suggest that the interaction between a lowered susceptible population (and hence increased demographic noise) and nonlinear dynamics is sufficient to cause the observed drop in correlation. The decorrelation of epidemics could potentially lessen the chance of global extinction and so inhibit attempts at measles eradication.
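The mechanism invoked above, that a smaller susceptible pool means proportionally larger demographic noise, can be illustrated with a chain-binomial SIR step in which event counts are binomial draws. This is a toy sketch, not the authors' model; real measles models add seasonal forcing, births and age structure:

```python
import numpy as np

def sir_chain_binomial(s0, i0, n, beta, gamma, steps, rng):
    """Chain-binomial SIR with demographic noise: per step, new
    infections and recoveries are binomial draws, so relative
    fluctuations grow as the susceptible count s shrinks."""
    s, i = s0, i0
    series = []
    for _ in range(steps):
        p_inf = 1.0 - np.exp(-beta * i / n)          # per-susceptible infection prob.
        new_inf = rng.binomial(s, p_inf)             # stochastic transmissions
        new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))  # stochastic recoveries
        s -= new_inf
        i += new_inf - new_rec
        series.append(i)
    return np.array(series)
```

Running two such cities with weak coupling and comparing their infection series before and after reducing s0 (mimicking vaccination) is the kind of experiment that probes the decorrelation effect described in the abstract.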

Relevance:

30.00%

Publisher:

Abstract:

Objective: This study aimed to explore methods of assessing interactions between neuronal sources using MEG beamformers. However, beamformer methodology is based on the assumption of no linear long-term source interdependencies [VanVeen BD, vanDrongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 1997;44:867-80; Robinson SE, Vrba J. Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Recent advances in Biomagnetism. Sendai: Tohoku University Press; 1999. p. 302-5]. Although such long-term correlations are not efficient and should not be anticipated in a healthy brain [Friston KJ. The labile brain. I. Neuronal transients and nonlinear coupling. Philos Trans R Soc Lond B Biol Sci 2000;355:215-36], transient correlations seem to underlie functional cortical coordination [Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron 1999;49-65; Rodriguez E, George N, Lachaux J, Martinerie J, Renault B, Varela F. Perception's shadow: long-distance synchronization of human brain activity. Nature 1999;397:430-3; Bressler SL, Kelso J. Cortical coordination dynamics and cognition. Trends Cogn Sci 2001;5:26-36]. Methods: Two periodic sources were simulated and the effects of transient source correlation on the spatial and temporal performance of the MEG beamformer were examined. Subsequently, the interdependencies of the reconstructed sources were investigated using coherence and phase synchronization analysis based on Mutual Information. Finally, two interacting nonlinear systems served as neuronal sources and their phase interdependencies were studied under realistic measurement conditions. Results: Both the spatial and the temporal beamformer source reconstructions were accurate as long as the transient source correlation did not exceed 30-40 percent of the duration of beamformer analysis. 
In addition, the interdependencies of periodic sources were preserved by the beamformer and phase synchronization of interacting nonlinear sources could be detected. Conclusions: MEG beamformer methods in conjunction with analysis of source interdependencies could provide accurate spatial and temporal descriptions of interactions between linear and nonlinear neuronal sources. Significance: The proposed methods can be used for the study of interactions between neuronal sources. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
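The linearly constrained minimum-variance beamformer referenced above computes, for a source with lead-field (forward) vector a and data covariance R, the weights w = R⁻¹a / (aᵀR⁻¹a): output power is minimized subject to unit gain at the source location. A minimal real-valued sketch (our illustration, assuming NumPy):

```python
import numpy as np

def lcmv_weights(R, a):
    """LCMV beamformer weights for one source:
    minimize w^T R w subject to the unit-gain constraint w^T a = 1."""
    Ri_a = np.linalg.solve(R, a)    # R^{-1} a without forming the inverse
    return Ri_a / (a @ Ri_a)        # normalize to satisfy w^T a = 1
```

The assumption the abstract discusses enters through R: long-term correlated sources bias the covariance, which is why transient correlations up to a fraction of the analysis window are tolerable while sustained ones are not.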

Relevance:

30.00%

Publisher:

Abstract:

We investigate a digital back-propagation simplification method to enable computationally efficient digital nonlinearity compensation for a coherently-detected 112 Gb/s polarization-multiplexed quadrature phase-shift keying transmission over a 1,600 km link (20 × 80 km) with no inline compensation. Through numerical simulation, we report up to an 80% reduction in the number of back-propagation steps required to perform nonlinear compensation, in comparison to the standard back-propagation algorithm. The method takes into account the correlation between adjacent symbols at a given instant using a weighted-average approach, and optimizes the position of the nonlinear compensator stage to enable practical digital back-propagation.
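Digital back-propagation inverts the fiber channel by running a split-step Fourier solver with negated dispersion and nonlinearity; the number of steps is the cost knob that the abstract's weighted-average simplification reduces. A scalar, lossless sketch of the principle, our illustration only — the actual system is polarization-multiplexed and includes fiber loss:

```python
import numpy as np

def ssfm(a, h, n_steps, beta2, gamma, dt):
    """Symmetric split-step Fourier propagation of the scalar,
    lossless nonlinear Schroedinger equation over n_steps of size h."""
    w = 2.0 * np.pi * np.fft.fftfreq(len(a), dt)        # angular frequencies
    half_lin = np.exp(1j * (beta2 / 2.0) * w ** 2 * h / 2.0)  # half dispersion step
    for _ in range(n_steps):
        a = np.fft.ifft(np.fft.fft(a) * half_lin)
        a = a * np.exp(1j * gamma * np.abs(a) ** 2 * h) # full nonlinear step
        a = np.fft.ifft(np.fft.fft(a) * half_lin)
    return a

def back_propagate(a, h, n_steps, beta2, gamma, dt):
    """Digital back-propagation: the same solver with negated
    fiber parameters undoes the forward propagation."""
    return ssfm(a, h, n_steps, -beta2, -gamma, dt)
```

With matching step counts the backward pass inverts the forward pass essentially exactly; the engineering problem is keeping that inversion accurate while using far fewer (and hence coarser) back-propagation steps than the fiber itself warrants.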