86 results for Frequency Modulated Signals, Parameter Estimation, Signal-to-Noise Ratio, Simulations
Abstract:
Magnetoencephalography (MEG) offers significant opportunities for the localization and characterization of focal and generalized epilepsies, but its potential has so far not been fully exploited, as the evidence for its effectiveness is still anecdotal. This is particularly true for pediatric epilepsy. MEG recordings on school-age children typically rely on MEG systems that were designed for adults, and children's smaller head size and stature can cause significant problems. Reduced signal-to-noise ratio when recording from smaller heads, increased movement, reduced sensor coverage of anterior temporal regions and incomplete insertion into the MEG helmet can all reduce the quality of data collected from children. We summarize these challenges and suggest some practical solutions.
Abstract:
We present an analytical description of a specific nonlinear fibre channel with non-compensated dispersion. Information-theoretic analysis shows that the capacity of such a nonlinear fibre channel does not decay with growing signal-to-noise ratio. © 2012 IEEE.
Abstract:
A closed-form expression for a lower bound on the per soliton capacity of the nonlinear optical fibre channel in the presence of (optical) amplifier spontaneous emission (ASE) noise is derived. This bound is based on a non-Gaussian conditional probability density function for the soliton amplitude jitter induced by the ASE noise and is proven to grow logarithmically as the signal-to-noise ratio increases.
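As a purely illustrative aside (this is the textbook Shannon form, not the soliton-amplitude bound derived in the paper), logarithmic growth of capacity with signal-to-noise ratio can be sketched as:

```python
import math

def log_capacity(snr_linear):
    """Shannon-style AWGN capacity in bits per symbol; illustrative only --
    the paper's soliton bound has its own pre-log factor and offset."""
    return math.log2(1.0 + snr_linear)

snrs_db = [0, 10, 20, 30]
capacities = [log_capacity(10 ** (db / 10.0)) for db in snrs_db]
# The sequence grows without saturating as the SNR increases.
```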
Abstract:
We propose a long-range, high-precision optical time domain reflectometry (OTDR) scheme based on an all-fiber supercontinuum source. The source simply consists of a CW pump laser with moderate power and a section of fiber whose zero-dispersion wavelength lies near the laser's central wavelength. The spectral and time-domain properties of the source are investigated, showing that its ultra-wide-band chaotic behavior makes it well suited to nonlinear-optics applications such as correlation OTDR, and mm-scale spatial resolution is demonstrated. We then analyze the key factors limiting the operational range of such an OTDR, e.g., integrated Rayleigh backscattering and fiber loss, which degrade the optical signal-to-noise ratio at the receiver side, and discuss guidelines for counteracting such signal fading. Finally, we experimentally demonstrate a correlation OTDR with 100 km sensing range and 8.2 cm spatial resolution (1.2 million resolved points), verifying the theoretical analysis.
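A correlation OTDR locates a reflection by cross-correlating the detected return with the noise-like probe. The pure-Python sketch below (all values hypothetical) models the chaotic probe as a random ±1 sequence and recovers a round-trip delay from the correlation peak:

```python
import random

random.seed(0)
# Hypothetical setup: the ultra-wide-band chaotic probe is modelled as a
# random +/-1 sequence, and the fibre return as a delayed copy of it.
probe = [random.choice((-1.0, 1.0)) for _ in range(1024)]
true_delay = 137                                  # round-trip delay in samples
echo = [0.0] * true_delay + probe[:len(probe) - true_delay]

def xcorr_peak(ref, sig):
    """Lag at which the cross-correlation of sig against ref is largest."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(ref)):
        v = sum(ref[i] * sig[i + lag] for i in range(len(ref) - lag))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

estimated_delay = xcorr_peak(probe, echo)  # the correlation peak marks the reflection
```

The achievable two-point resolution scales inversely with the probe bandwidth (Δz ≈ c/(2nB) for group index n and bandwidth B), which is why the source's ultra-wide-band chaos matters for cm-scale resolution.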
Abstract:
In this work we deal with video streams over TCP networks and propose an alternative measurement to the widely used and accepted peak signal-to-noise ratio (PSNR), owing to the limitations of this metric in the presence of temporal errors. A test-bed was created to simulate buffer under-run in scalable video streams, and the pauses produced as a result of the buffer under-run were inserted into the video before it was used as the subject of subjective testing. The pause intensity metric proposed in [1] was compared with the subjective results, and it was shown that, in spite of reductions in frame rate and resolution, a correlation with pause intensity still exists. Based on these conclusions, the metric may be employed in layer selection for scalable video streams. © 2011 IEEE.
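For reference, the conventional PSNR that the abstract argues is insufficient is computed per frame from the mean squared error. A minimal sketch (toy four-pixel "frames", 8-bit peak assumed):

```python
import math

def psnr(ref, deg, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and degraded pixels."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, deg)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref_frame = [10.0, 200.0, 30.0, 120.0]
noisy     = [12.0, 197.0, 31.0, 118.0]
q = psnr(ref_frame, noisy)
# PSNR measures only per-pixel (spatial) distortion; a playback pause delays
# frames rather than corrupting them, so PSNR cannot register it -- hence
# the case for a temporal metric such as pause intensity.
```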
Abstract:
Compensation of the detrimental impacts of nonlinearity on long-haul wavelength division multiplexed system performance is discussed, and the differences between transmitter, receiver and in-line compensation are analyzed. We demonstrate that ideal compensation of nonlinear noise could result in an increase in the signal-to-noise ratio (measured in dB) of 50%, and that reach may be more than doubled for higher-order modulation formats. The influence of parametric noise amplification is discussed in detail, showing how an increased number of optical phase conjugators may further increase the received signal-to-noise ratio. Finally, the impact of practical real-world system imperfections, such as polarization mode dispersion, is outlined.
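To unpack the "50% in dB" claim with made-up numbers: a link at 10 dB SNR lifted to 15 dB is a 50% increase on the dB scale, but only a factor of about 3.16 in linear SNR:

```python
snr_db = 10.0                          # hypothetical uncompensated SNR
compensated_db = 1.5 * snr_db          # a 50% increase measured in dB
linear_gain = 10 ** ((compensated_db - snr_db) / 10.0)
# 5 dB of extra SNR is 10**0.5, roughly a 3.16x linear improvement.
```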
Abstract:
This thesis presents a large-scale numerical investigation of heterogeneous terrestrial optical communications systems and the upgrade of fourth-generation terrestrial core-to-metro legacy interconnects to fifth-generation transmission system technologies. Retrofitting (without changing infrastructure) is considered for commercial applications. ROADMs are crucial enabling components for future core network developments; however, their re-routing ability means signals can be switched mid-link onto sub-optimally configured paths, which raises new challenges in network management. System performance is determined by a trade-off between nonlinear impairments and noise, where the nonlinear signal distortions depend critically on the deployed dispersion maps. This thesis presents a comprehensive numerical investigation into the implementation of phase-modulated signals in transparent, reconfigurable, wavelength division multiplexed, heterogeneous terrestrial fibre-optic communication networks. A key issue during system upgrades is whether differential phase encoded modulation formats are compatible with the cost-optimised dispersion schemes employed in current 10 Gb/s systems. We explore how robust transmission is to inevitable variations in the dispersion mapping and how large the margins are when suboptimal dispersion management is applied. We show that a DPSK transmission system is not drastically affected by reconfiguration from periodic dispersion management to lumped dispersion mapping. A novel DPSK dispersion map optimisation methodology, which drastically reduces the optimisation parameter space and the number of ways to deploy dispersion maps, is also presented. This alleviates strenuous computing requirements in optimisation calculations. This thesis provides a very efficient and robust way to identify high-performing lumped dispersion compensating schemes for use in heterogeneous RZ-DPSK terrestrial meshed networks with ROADMs.
A modified search algorithm which further reduces this number of configuration combinations is also presented. The results of an investigation into the feasibility of detouring signals locally in multi-path heterogeneous ring networks are also presented.
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
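Case (i), the non-parametric moments-based estimate, can be sketched as follows (the Brillouin-like centre frequency, linewidth and scan grid are made-up illustration values; noise is omitted, so the symmetric-scan estimate recovers the extremum exactly):

```python
def lorentzian(f, f0=10.85e9, gamma=30e6):
    """Lorentzian resonance curve with centre f0 and FWHM gamma (Hz)."""
    return 1.0 / (1.0 + ((f - f0) / (gamma / 2.0)) ** 2)

# Stokes intensity sampled at fixed frequencies symmetric about the extremum.
freqs = [10.85e9 + (k - 50) * 2e6 for k in range(101)]
intensities = [lorentzian(f) for f in freqs]

def moment_estimate(freqs, intensities):
    """Extremum-position estimate: intensity-weighted mean frequency."""
    return sum(f * i for f, i in zip(freqs, intensities)) / sum(intensities)

f0_hat = moment_estimate(freqs, intensities)
```

Truncating the scan window asymmetrically shifts this first-moment estimator away from the true extremum, which is one reason the statistical bias of such estimates deserves the analytical treatment the abstract describes.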
Abstract:
We have investigated information transmission in an array of threshold units that have signal-dependent noise and a common input signal. We demonstrate a phenomenon similar to stochastic resonance and suprathreshold stochastic resonance with additive noise and show that information transmission can be enhanced by a nonzero level of noise. By comparing system performance to one with additive noise we also demonstrate that the information transmission of weak signals is significantly better with signal-dependent noise. Indeed, information rates are not compromised even for arbitrarily small input signals. Furthermore, by an appropriate selection of parameters, we observe that the information can be made (almost) independent of the level of the noise, thus providing a robust method of transmitting information in the presence of noise. These results could imply that the ability of hair cells to code and transmit sensory information in biological sensory systems is not limited by the level of signal-dependent noise. © 2007 The American Physical Society.
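A minimal sketch of the setup (all parameter values invented): an array of threshold units shares a common subthreshold input, and each unit is corrupted by independent noise whose standard deviation scales with the signal. Without noise the units never fire; with signal-dependent noise the population count encodes the input:

```python
import random

random.seed(1)
N, theta = 1000, 1.0          # number of units and common threshold

def array_output(signal, noise_scale):
    """Number of units whose noisy input crosses the threshold."""
    sigma = noise_scale * signal          # signal-dependent noise level
    return sum(1 for _ in range(N)
               if signal + random.gauss(0.0, sigma) > theta)

weak, strong = 0.4, 0.8                   # both inputs are subthreshold
silent = (array_output(weak, 0.0), array_output(strong, 0.0))  # no noise
coded  = (array_output(weak, 0.5), array_output(strong, 0.5))  # with noise
# silent is (0, 0): nothing is transmitted without noise, whereas with
# noise the two inputs yield clearly different population counts.
```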
Abstract:
We compare the Q parameter obtained from scalar, semi-analytical and full vector models for realistic transmission systems. One set of systems is operated in the linear regime, while another uses solitons at high peak power. We report in detail on the different results obtained for the same system using different models. Polarisation mode dispersion is also taken into account, and a novel method to average Q parameters over several independent simulation runs is described. © 2006 Elsevier B.V. All rights reserved.
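For context, the Q parameter compared across these models is conventionally estimated from the statistics of the received "mark" and "space" samples. A toy sketch with invented Gaussian samples:

```python
import math, random

random.seed(2)
marks  = [random.gauss(1.0, 0.1) for _ in range(10000)]  # received 'ones'
spaces = [random.gauss(0.0, 0.1) for _ in range(10000)]  # received 'zeros'

def q_factor(ones, zeros):
    """Eye-diagram Q estimate: (mu1 - mu0) / (sigma1 + sigma0)."""
    def mean_std(xs):
        m = sum(xs) / len(xs)
        return m, math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    m1, s1 = mean_std(ones)
    m0, s0 = mean_std(zeros)
    return (m1 - m0) / (s1 + s0)

q = q_factor(marks, spaces)   # close to (1 - 0) / (0.1 + 0.1) = 5
```

Averaging such Q estimates across independent runs is nontrivial because Q maps nonlinearly onto error rate, which is why a dedicated averaging method is worth describing.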
Abstract:
Sensory cells usually transmit information to afferent neurons via chemical synapses, in which the level of noise depends on the applied stimulus. Taking this dependence into account, we model a sensory system as an array of leaky integrate-and-fire (LIF) neurons with a common signal. We show that information transmission is enhanced by a nonzero level of noise. Moreover, we demonstrate a phenomenon similar to suprathreshold stochastic resonance with additive noise. We remark that many properties of information transmission found for the LIF neurons were predicted by us before with simple binary units [Phys. Rev. E 75, 021121 (2007)]. This confirmation of our predictions allows us to point out the common roots of the phenomena found in simple threshold systems and in the more complex LIF neurons.
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. As a result, the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
Abstract:
The major challenge of MEG, the inverse problem, is to estimate the very weak primary neuronal currents from the measurements of extracranial magnetic fields. The non-uniqueness of this inverse solution is compounded by the fact that MEG signals contain large environmental and physiological noise that further complicates the problem. In this paper, we evaluate the effectiveness of magnetic noise cancellation by synthetic gradiometers and the beamformer analysis method of synthetic aperture magnetometry (SAM) for source localisation in the presence of large stimulus-generated noise. We demonstrate that activation of primary somatosensory cortex can be accurately identified using SAM despite the presence of significant stimulus-related magnetic interference. This interference was generated by a contact heat evoked potential stimulator (CHEPS), recently developed for thermal pain research, but which to date has not been used in a MEG environment. We also show that in a reduced shielding environment the use of higher order synthetic gradiometry is sufficient to obtain signal-to-noise ratios (SNRs) that allow for accurate localisation of cortical sensory function.
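The idea behind synthetic gradiometry can be sketched numerically (all signals below are invented): a reference sensor that sees (almost) only the environmental interference is subtracted from the primary sensor, cancelling the common-mode noise and leaving the weak cortical signal:

```python
import math, random

random.seed(3)
n = 5000
interference = [random.gauss(0.0, 5.0) for _ in range(n)]       # large ambient noise
brain = [math.sin(0.02 * k) for k in range(n)]                  # weak signal of interest
primary   = [b + i for b, i in zip(brain, interference)]
reference = [i + random.gauss(0.0, 0.1) for i in interference]  # noise-only sensor

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

synthetic = [p - r for p, r in zip(primary, reference)]         # first-order gradiometer
snr_before = rms(brain) / rms(interference)
snr_after  = rms(brain) / rms([s - b for s, b in zip(synthetic, brain)])
# Subtraction removes the shared interference, leaving only the small
# uncorrelated sensor noise and dramatically raising the SNR.
```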