97 results for Cross-amplification
Abstract:
Detailed investigations are undertaken, for the first time, of the transmission performance of recently proposed Adaptively Modulated Optical OFDM (AMOOFDM) modems using Subcarrier Modulation (AMOOFDM-SCM) in single-channel, SMF-based IMDD links without optical amplification or chromatic dispersion compensation. The cross-talk induced by beating among subcarriers of various types is a crucial factor limiting the maximum achievable AMOOFDM-SCM performance. By applying single-sideband modulation and/or spectral gapping to AMOOFDM-SCM, three AMOOFDM-SCM designs of varying complexity are proposed, which achieve >60 Gb/s signal transmission over 20 km, 40 km and 60 km. Such performances are >1.5 times higher than those supported by conventional AMOOFDM modems.
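As a rough illustration of the adaptive modulation idea behind AMOOFDM/AMOOFDM-SCM, the Python sketch below loads each subcarrier with the highest-order format its estimated SNR can support. The SNR values and format thresholds are hypothetical placeholders, not figures from the paper, and the sketch omits the SCM, single-sideband and spectral-gapping aspects entirely.

# Minimal sketch of per-subcarrier adaptive bit-loading, the core idea behind
# AMOOFDM/AMOOFDM-SCM: each OFDM subcarrier carries the highest modulation
# format its estimated SNR can support. SNR values and thresholds are
# illustrative placeholders, not figures from the paper.
import numpy as np

# Hypothetical SNR estimates (dB) for 8 subcarriers of one channel.
snr_db = np.array([28.0, 25.5, 22.0, 19.0, 15.5, 12.0, 8.5, 5.0])

# Illustrative SNR thresholds (dB) for each supported format.
formats = [
    ("256-QAM", 8, 26.0),
    ("64-QAM", 6, 20.0),
    ("16-QAM", 4, 14.0),
    ("QPSK", 2, 8.0),
    ("BPSK", 1, 5.0),
]

def load_subcarrier(snr):
    """Return (format name, bits/symbol) for the highest format the SNR supports."""
    for name, bits, threshold in formats:
        if snr >= threshold:
            return name, bits
    return "dropped", 0  # subcarrier too noisy to carry data

loaded = [load_subcarrier(s) for s in snr_db]
total_bits = sum(b for _, b in loaded)
for idx, (name, bits) in enumerate(loaded):
    print(f"subcarrier {idx}: {name} ({bits} bit/symbol)")
print(f"bits per OFDM symbol: {total_bits}")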
Abstract:
The transmission performance of multi-channel adaptively modulated optical OFDM (AMOOFDM) signals is numerically investigated, for the first time, in optical amplification- and chromatic dispersion compensation-free, intensity-modulation and direct-detection systems incorporating directly modulated DFB lasers (DMLs). It is shown that adaptive modulation not only significantly reduces the nonlinear WDM impairments induced by cross-phase modulation and four-wave mixing, but also effectively compensates for the DML-induced frequency chirp. In comparison with identical modulation, adaptive modulation improves the maximum achievable signal transmission capacity of a central channel by factors of 1.3 and 3.6 for 40 km and 80 km SMFs, respectively, with the corresponding dynamic input optical power ranges extended by approximately 5 dB. In addition, adaptive modulation enables cross-channel complementary modulation format mapping, leading to an improved transmission capacity of the entire WDM system. Copyright © 2010 The authors.
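The capacity gain of adaptive over identical modulation can be illustrated with a toy comparison: under adaptive modulation every channel loads every subcarrier independently, whereas identical modulation fixes one format for all subcarriers of all channels. The SNR map below is a hypothetical placeholder; the 1.3x and 3.6x factors reported above come from full system simulations, not from this sketch.

# Minimal sketch contrasting identical modulation with per-channel adaptive
# modulation across a small WDM grid. SNR values are illustrative placeholders.
import numpy as np

# Hypothetical SNR map (dB): rows = WDM channels, columns = OFDM subcarriers.
snr_db = np.array([
    [27.0, 23.0, 18.0, 12.0, 7.0],   # central channel
    [25.0, 21.0, 16.0, 10.0, 6.0],   # neighbour with stronger XPM/FWM penalty
    [26.0, 22.0, 17.0, 11.0, 6.5],
])

# (bits per symbol, minimum SNR in dB) for each illustrative format tier.
thresholds = [(6, 20.0), (4, 14.0), (2, 8.0), (1, 5.0), (0, -np.inf)]

def bits_for(snr):
    return next(b for b, t in thresholds if snr >= t)

# Adaptive modulation: each channel loads each subcarrier independently.
adaptive_bits = np.vectorize(bits_for)(snr_db).sum()

# Identical modulation: one format for every subcarrier of every channel,
# fixed by the worst-case SNR so that all subcarriers remain decodable.
identical_bits = bits_for(snr_db.min()) * snr_db.size

print(f"adaptive loading : {adaptive_bits} bits per OFDM symbol across the WDM grid")
print(f"identical loading: {identical_bits} bits per OFDM symbol across the WDM grid")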
Abstract:
Hidden Markov model (HMM)-based speech synthesis systems possess several advantages over concatenative synthesis systems. One such advantage is the relative ease with which HMM-based systems are adapted to speakers not present in the training dataset. Speaker adaptation methods used in the field of HMM-based automatic speech recognition (ASR) are adopted for this task. In the case of unsupervised speaker adaptation, previous work has used a supplementary set of acoustic models to estimate the transcription of the adaptation data. This paper first presents an approach to the unsupervised speaker adaptation task for HMM-based speech synthesis models which avoids the need for such supplementary acoustic models. This is achieved by defining a mapping between HMM-based synthesis models and ASR-style models, via a two-pass decision tree construction process. Second, it is shown that this mapping also enables unsupervised adaptation of HMM-based speech synthesis models without the need to perform linguistic analysis of the estimated transcription of the adaptation data. Third, this paper demonstrates how this technique lends itself to the task of unsupervised cross-lingual adaptation of HMM-based speech synthesis models, and explains the advantages of such an approach. Finally, listener evaluations reveal that the proposed unsupervised adaptation methods deliver performance approaching that of supervised adaptation.
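The paper's contribution is the decision-tree mapping that removes the need for supplementary ASR models and for linguistic analysis of the estimated transcription; the sketch below only illustrates the downstream adaptation step in a highly simplified form, as a single global MLLR-style mean transform estimated by least squares from synthetic data. All quantities are placeholders, and covariance weighting, regression classes and decision-tree construction are not modelled.

# Simplified stand-in for transform-based speaker adaptation: estimate one
# global affine transform that maps "average voice" state means towards the
# adaptation frames, then apply it to the model means. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
dim, n_frames = 4, 500

# Hypothetical state means, one per aligned adaptation frame (in practice
# these come from the states the decoder assigned to each frame).
state_means = rng.normal(size=(n_frames, dim))

# Synthetic target-speaker frames: an unknown affine warp of the means plus noise.
true_A = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))
true_b = rng.normal(size=dim)
frames = state_means @ true_A.T + true_b + 0.05 * rng.normal(size=(n_frames, dim))

# Least-squares estimate of the transform W = [b; A] from extended means [1, mu].
ext = np.hstack([np.ones((n_frames, 1)), state_means])      # (n_frames, dim+1)
W, *_ = np.linalg.lstsq(ext, frames, rcond=None)            # (dim+1, dim)
b_hat, A_hat = W[0], W[1:].T

# Adapted means: apply the estimated transform to every model mean.
adapted_means = state_means @ A_hat.T + b_hat
print("mean-squared adaptation residual:", np.mean((adapted_means - frames) ** 2))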