58 results for Turn signals.
Abstract:
The fundamental problem faced by noninvasive neuroimaging techniques such as EEG/MEG is to elucidate functionally important aspects of the microscopic neuronal network dynamics from macroscopic aggregate measurements. Because the activities of large neuronal populations are mixed in the observed macroscopic aggregate, recovering the underlying network that generates the signal, in the absence of any additional information, represents a considerable challenge. Recent MEG studies have shown that macroscopic measurements contain sufficient information to allow the differentiation between patterns of activity, which are likely to represent different stimulus-specific collective modes in the underlying network (Hadjipapas, A., Adjamian, P., Swettenham, J.B., Holliday, I.E., Barnes, G.R., 2007. Stimuli of varying spatial scale induce gamma activity with distinct temporal characteristics in human visual cortex. NeuroImage 35, 518–530). The next question arising in this context is whether aspects of collective network activity can be recovered from a macroscopic aggregate signal. We propose that this issue is most appropriately addressed if MEG/EEG signals are viewed as macroscopic aggregates arising from networks of coupled systems, as opposed to aggregates across a mass of largely independent neural systems. We show that collective modes arising in a network of simulated coupled systems can indeed be recovered from the macroscopic aggregate. Moreover, we show that nonlinear state space methods yield a good approximation of the number of effective degrees of freedom in the network. Importantly, information about hidden variables, which do not directly contribute to the aggregate signal, can also be recovered. Finally, this theoretical framework can be applied to experimental MEG/EEG data in the future, enabling the inference of state-dependent changes in the degree of local synchrony in the underlying network.
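Purely as an illustration of the nonlinear state-space idea described above, the following sketch (assuming a synthetic aggregate signal rather than the paper's simulated coupled network) delay-embeds a scalar aggregate and reads a crude estimate of the effective degrees of freedom off the singular spectrum of the trajectory matrix; the embedding parameters and variance threshold are arbitrary choices.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar signal into a trajectory matrix."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Hypothetical aggregate: the sum of a few oscillator-like components plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 10000)
aggregate = np.sin(t) + 0.5 * np.sin(1.7 * t + 0.3) + 0.05 * rng.standard_normal(t.size)

# Delay-embed the scalar aggregate and inspect the singular spectrum of the trajectory matrix.
X = delay_embed(aggregate, dim=10, tau=5)
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
variance_fractions = s ** 2 / np.sum(s ** 2)

# Crude estimate of effective degrees of freedom: components carrying non-negligible variance.
effective_dof = int(np.sum(variance_fractions > 0.01))
print("estimated effective degrees of freedom:", effective_dof)
```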
Abstract:
This study investigates plagiarism detection, with an application in forensic contexts. Two types of data were collected for the purposes of this study. Data in the form of written texts were obtained from two Portuguese universities and from a Portuguese newspaper. These data are analysed linguistically to identify instances of verbatim, morpho-syntactical, lexical and discursive overlap. Data in the form of surveys were obtained from two higher education institutions in Portugal and another two in the United Kingdom. These data are analysed using a 2 by 2 between-groups univariate analysis of variance (ANOVA) to reveal cross-cultural divergences in the perceptions of plagiarism. The study discusses the legal and social circumstances that may contribute to adopting a punitive approach to plagiarism or, conversely, to rejecting punishment. The research adopts a critical approach to plagiarism detection. On the one hand, it describes the linguistic strategies adopted by plagiarists when borrowing from other sources, and, on the other hand, it discusses the relationship between these instances of plagiarism and the context in which they appear. A focus of this study is whether plagiarism involves an intention to deceive and, if so, whether forensic linguistic evidence can provide clues to this intentionality. It also evaluates current computational approaches to plagiarism detection and identifies strategies that these systems fail to detect. Specifically, a method is proposed for detecting translingual plagiarism. The findings indicate that, although cross-cultural aspects influence the different perceptions of plagiarism, a distinction needs to be made between intentional and unintentional plagiarism. The linguistic analysis demonstrates that linguistic elements can contribute to finding clues to the plagiarist's intentionality. Furthermore, the findings show that translingual plagiarism can be detected using the proposed method, and that plagiarism detection software can be improved using existing computer tools.
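As a hedged illustration of the statistical design mentioned above (not the study's actual data or variable names), the sketch below runs a 2 by 2 between-groups ANOVA with statsmodels on a hypothetical table of perception scores crossed by country and respondent role.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
# Hypothetical survey table: one perception score per respondent,
# crossed by two between-group factors (country and respondent role).
df = pd.DataFrame({
    "country": np.repeat(["PT", "UK"], 50),
    "role": np.tile(np.repeat(["student", "staff"], 25), 2),
    "perception": rng.normal(3.5, 0.8, 100),
})

# 2 x 2 between-groups univariate ANOVA with an interaction term.
model = smf.ols("perception ~ C(country) * C(role)", data=df).fit()
print(anova_lm(model, typ=2))
```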
Abstract:
This paper explores a new method of analysing fatigue within the muscles predominantly used during microsurgery. The electromyographic (EMG) data recorded from these muscles are analysed for any defining patterns relating to muscle fatigue. The analysis consists of dynamically embedding the EMG signal from a single muscle channel into a dynamically embedded (DE) matrix. Muscle fatigue is then quantified by an entropy measure defined on the singular values of the DE matrix. The paper compares this new method with the traditional method of using mean frequency shifts in the EMG signal's power spectral density. Linear regressions are fitted to the results from both methods, and the coefficients of variation of both their slope and point of intercept are determined. It is shown that the complexity method is slightly more robust, in that the coefficient of variation for the DE method is lower than that for the conventional mean frequency analysis.
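The sketch below is a minimal illustration of the two fatigue measures compared in the abstract: an entropy computed from the singular values of a dynamically embedded (DE) matrix, and the conventional mean frequency of the EMG power spectral density. The embedding dimension, delay, normalization and surrogate data are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import welch

def de_entropy(emg, dim=20, tau=1):
    """Entropy of the normalized singular spectrum of a dynamically embedded (DE) matrix."""
    n = len(emg) - (dim - 1) * tau
    X = np.column_stack([emg[i * tau: i * tau + n] for i in range(dim)])
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()                   # normalized singular values
    return -np.sum(p * np.log(p))     # entropy of the singular spectrum

def mean_frequency(emg, fs):
    """Conventional fatigue index: mean frequency of the EMG power spectral density."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    return np.sum(f * pxx) / np.sum(pxx)

# Example with surrogate data (a real single-channel recording would replace this).
fs = 2000.0
emg = np.random.default_rng(1).standard_normal(10 * int(fs))
print(de_entropy(emg), mean_frequency(emg, fs))
```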
Abstract:
Teacher-fronted interaction is generally seen as placing limitations on the contributions that learners can make to classroom discourse, and the conclusion is that learners are unable to experiment with, for example, turn-taking mechanisms. This article looks at teacher-fronted interaction in the language classroom from the perspective of learner talk by examining how learners might take the initiative during this apparently more rigid form of interaction. Detailed microanalysis of classroom episodes, using a conversation analysis institutional discourse approach, shows how learners orient to the institutional context to make sophisticated and effective use of turn-taking mechanisms to take the initiative and direct the interaction, even in the controlled environment of teacher-fronted talk. The article describes some of the functions of such learner initiative, examines how learners and teachers co-construct interaction, and shows how learners can create learning opportunities for themselves. It also briefly considers teacher reactions to such initiative. The article concludes that learner initiative in teacher-fronted interaction may constitute a significant opportunity for learning and that teachers should find ways of encouraging such interaction patterns.
Abstract:
We propose a novel 16-ary quadrature amplitude modulation (16-QAM) transmitter based on two cascaded IQ modulators driven by four separate binary electrical signals. The proposed 16-QAM transmitter features a scalable configuration and stable performance with simple bias control. Generation of 16-QAM signals at 40 Gbaud is experimentally demonstrated for the first time and visualized with a high-speed constellation analyzer. The proposed modulator is also compared with two other schemes. We investigate the modulator bandwidth requirements and the tolerance to accumulated chromatic dispersion through numerical simulations, and the minimum theoretical insertion attenuation is calculated analytically.
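The abstract does not spell out the cascaded-modulator transfer functions, so the sketch below only illustrates, conceptually, how four independent binary streams can map onto a square 16-QAM constellation by superimposing a full-amplitude and a half-amplitude QPSK field; it is not a model of the proposed optical transmitter.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 4096
# Four independent binary streams (one per electrical drive signal), mapped to +/-1.
b1, b2, b3, b4 = (rng.integers(0, 2, n) * 2 - 1 for _ in range(4))

# Illustrative mapping: a full-amplitude QPSK field plus a half-amplitude QPSK field
# yields a square 16-QAM constellation (conceptual view only).
qpsk_coarse = (b1 + 1j * b2) / np.sqrt(2)
qpsk_fine = (b3 + 1j * b4) / (2 * np.sqrt(2))
field = qpsk_coarse + qpsk_fine

plt.scatter(field.real, field.imag, s=4)
plt.xlabel("I")
plt.ylabel("Q")
plt.title("Ideal 16-QAM constellation (noiseless)")
plt.show()
```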
Abstract:
We report a novel real-time homodyne coherent receiver based on a DPSK optical-electrical-optical (OEO) regenerator, used to extract a carrier from carrier-less phase-modulated signals by feed-forward modulation stripping. The performance of this non-DSP-based coherent receiver was evaluated for 10.66 Gbit/s BPSK signals. Self-homodyne coherent detection and homodyne detection with an injection-locked local oscillator (LO) laser were demonstrated. The performance was evaluated by measuring the electrical signal-to-noise ratio (SNR) and recording the eye diagrams. Using injection locking for the LO improves the performance and enables homodyne detection with optical injection locking to operate with carrier-less BPSK signals without the need for polarization-multiplexed pilot tones.
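As a rough digital analogue of the feed-forward modulation-stripping idea (the paper implements it optically with an OEO regenerator, which is not modelled here), the sketch below squares a noisy baseband BPSK field to remove the binary data modulation, smooths and halves the resulting phase to recover a carrier reference, and then detects the symbols homodyne-style.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
data = rng.integers(0, 2, n) * 2 - 1                      # BPSK symbols, +/-1
phase_noise = np.cumsum(0.01 * rng.standard_normal(n))    # slow laser phase drift
rx = data * np.exp(1j * phase_noise)                       # carrier-less BPSK field (baseband model)

# Feed-forward modulation stripping: squaring removes the +/-1 data modulation,
# leaving twice the phase drift, which is smoothed and halved to recover the carrier phase.
stripped = rx ** 2
window = 64
smoothed = np.convolve(stripped, np.ones(window) / window, mode="same")
carrier_phase = np.unwrap(np.angle(smoothed)) / 2

# Homodyne-style detection against the recovered phase reference
# (a global sign ambiguity remains, so both polarities are checked).
decisions = np.sign(np.real(rx * np.exp(-1j * carrier_phase)))
print("symbol agreement:",
      max(np.mean(decisions == data), np.mean(decisions == -data)))
```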
Abstract:
We present a phase-locking scheme that enables the demonstration of a practical dual-pump degenerate phase-sensitive amplifier for 10 Gbit/s non-return-to-zero amplitude-shift-keyed signals. The scheme makes use of cascaded Mach-Zehnder modulators for creating the pump frequencies, as well as injection locking for extracting the signal carrier and synchronizing the local lasers. An in-depth optimization study has been performed, based on measured error-rate performance, and the main degradation factors have been identified.
Abstract:
Future high-capacity optical links will have to make use of frequent signal regeneration to enable long-distance transmission. In this respect, the role of all-optical signal processing becomes increasingly important because of its potential to mitigate signal impairments at low cost and power consumption. More substantial benefits are expected if regeneration is achieved simultaneously across a band of multiple signals. Until recently, this had been achieved only for on-off keying modulation formats. However, as information in future transmission links will also be encoded in the phase to enhance spectral efficiency, novel subsystem concepts will be needed for multichannel processing of such advanced signal formats. In this paper we show that phase-sensitive amplifiers can be an ideal technology platform for developing such regenerators, and we discuss our recent demonstration of the first multi-channel regenerator for phase-encoded signals.
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill-hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional that, when minimized, achieves PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and compare them on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
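One of the solver families named in the abstract, iterated running medians, is simple enough to sketch; the kernel width and test signal below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.signal import medfilt

def iterated_running_median(y, kernel=7, max_iter=100):
    """Apply a running median repeatedly until the signal stops changing (a 'root' signal)."""
    x = np.asarray(y, dtype=float)
    for _ in range(max_iter):
        x_new = medfilt(x, kernel_size=kernel)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x

# Noisy piecewise constant test signal.
rng = np.random.default_rng(3)
clean = np.repeat([0.0, 2.0, 1.0, 3.0], 200)
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = iterated_running_median(noisy)
print("RMS error:", np.sqrt(np.mean((denoised - clean) ** 2)))
```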
Abstract:
Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory, building on results from the image processing community, to show that the majority of these algorithms, and others proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising problem. We also showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods, which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and we show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering are touched on.
Abstract:
This thesis presents a large-scale numerical investigation of heterogeneous terrestrial optical communications systems and the upgrade of fourth-generation terrestrial core-to-metro legacy interconnects to fifth-generation transmission system technologies. Retrofitting (without changing infrastructure) is considered for commercial applications. ROADMs are crucial enabling components for future core network developments; however, their re-routing ability means signals can be switched mid-link onto sub-optimally configured paths, which raises new challenges in network management. System performance is determined by a trade-off between nonlinear impairments and noise, where the nonlinear signal distortions depend critically on the deployed dispersion maps. This thesis presents a comprehensive numerical investigation into the implementation of phase-modulated signals in transparent, reconfigurable, wavelength-division-multiplexed, heterogeneous terrestrial fibre-optic communication networks. A key issue during system upgrades is whether differential phase encoded modulation formats are compatible with the cost-optimised dispersion schemes employed in current 10 Gb/s systems. We explore how robust transmission is to inevitable variations in the dispersion mapping and how large the margins are when suboptimal dispersion management is applied. We show that a DPSK transmission system is not drastically affected by reconfiguration from periodic dispersion management to lumped dispersion mapping. A novel DPSK dispersion map optimisation methodology, which drastically reduces the optimisation parameter space and the number of ways to deploy dispersion maps, is also presented. This alleviates the strenuous computing requirements of optimisation calculations. The thesis thus provides a very efficient and robust way to identify high-performing lumped dispersion compensating schemes for use in heterogeneous RZ-DPSK terrestrial meshed networks with ROADMs. A modified search algorithm which further reduces this number of configuration combinations is also presented. Finally, the results of an investigation into the feasibility of detouring signals locally in multi-path heterogeneous ring networks are presented.
Abstract:
We propose a novel technique for doubling optical pulses in both the frequency and time domains, based on a combination of cross-phase modulation induced by a triangular pump pulse in a nonlinear Kerr medium and subsequent propagation in a dispersive medium.
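A toy sketch of the stated mechanism, under simplifying assumptions (normalized units, no pump walk-off or self-phase modulation, illustrative parameter values): the triangular pump imposes a piecewise-linear XPM phase, shifting the two halves of the signal pulse to two distinct frequencies, and subsequent dispersion walks them apart in time.

```python
import numpy as np

# Time and angular-frequency grids (normalized units).
n = 2 ** 12
t = np.linspace(-20, 20, n, endpoint=False)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)

# Input signal pulse and a co-propagating triangular pump intensity profile.
signal = np.exp(-t ** 2 / 2)                           # Gaussian signal pulse
pump_intensity = np.clip(1 - np.abs(t) / 4, 0, None)   # triangular pump, apex at t = 0

# XPM in the Kerr medium: phase shift proportional to the pump intensity.
# The piecewise-linear phase gives the two pulse halves equal and opposite
# frequency shifts, i.e. the pulse is doubled in the frequency domain.
phi_max = 30.0                                          # peak nonlinear phase (illustrative)
after_xpm = signal * np.exp(1j * phi_max * pump_intensity)

# Subsequent propagation in a purely dispersive medium (quadratic spectral phase):
# the two frequency-shifted halves acquire different group delays,
# so the pulse is also doubled in the time domain.
beta2_z = 1.5                                           # accumulated GVD (illustrative)
spectrum = np.fft.fft(after_xpm)
after_fiber = np.fft.ifft(spectrum * np.exp(0.5j * beta2_z * omega ** 2))
```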
Abstract:
We present a simplified model for estimating the eye-closure penalty of amplitude-noise-degraded signals. Using a typical 40-Gbit/s return-to-zero amplitude-shift-keying transmission, we demonstrate agreement between the model predictions and the results obtained from the conventional numerical estimation method over several thousand kilometers.
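As context for the quantity being estimated (not the paper's simplified model itself), the sketch below computes an eye-closure penalty from sampled mark and space levels, comparing the received inner eye opening with a back-to-back reference; the noise levels are made-up numbers.

```python
import numpy as np

def eye_closure_penalty_db(marks_rx, spaces_rx, marks_b2b, spaces_b2b):
    """Eye-closure penalty (dB): received inner eye opening relative to back-to-back,
    using the worst-case mark/space levels at the decision instant."""
    eo_rx = np.min(marks_rx) - np.max(spaces_rx)
    eo_b2b = np.min(marks_b2b) - np.max(spaces_b2b)
    return 10 * np.log10(eo_b2b / eo_rx)

# Illustrative numbers: amplitude noise narrows the eye after transmission.
rng = np.random.default_rng(4)
marks_b2b = 1.0 + 0.01 * rng.standard_normal(1000)
spaces_b2b = 0.0 + 0.01 * rng.standard_normal(1000)
marks_rx = 1.0 + 0.08 * rng.standard_normal(1000)
spaces_rx = 0.0 + 0.08 * rng.standard_normal(1000)
print("eye-closure penalty: %.2f dB"
      % eye_closure_penalty_db(marks_rx, spaces_rx, marks_b2b, spaces_b2b))
```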
Abstract:
Detection and interpretation of adverse signals during the preclinical and clinical stages of drug development inform the benefit-risk assessment that determines suitability for use in real-world situations. This review considers some recent signals associated with diabetes therapies, illustrating the difficulties in ascribing causality and evaluating absolute risk, predictability, prevention, and containment. Individual clinical trials are necessarily restricted in patient selection, number, and duration; they can introduce allocation and ascertainment bias, and they often rely on biomarkers to estimate long-term clinical outcomes. In diabetes, the risk perspective is inevitably confounded by emergent comorbid conditions and potential interactions that limit therapeutic choice, hence the need for new therapies and better use of existing therapies to address the consequences of protracted glucotoxicity. However, for some therapies, the adverse effects may take several years to emerge, and it is evident that faint initial signals under trial conditions cannot be expected to foretell all eventualities. Thus, as information and experience accumulate with time, it should be accepted that benefit-risk deliberations will be refined, and adjustments to prescribing indications may become appropriate. © 2013 by the American Diabetes Association.