22 results for Vector error correction model

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

We investigated the role of visual feedback of task performance in visuomotor adaptation. Participants produced novel two-degrees-of-freedom movements (elbow flexion-extension, forearm pronation-supination) to move a cursor towards visual targets. Following trials with no rotation, participants were exposed to a 60° visuomotor rotation, before returning to the non-rotated condition. A colour cue on each trial permitted identification of the rotated/non-rotated contexts. Participants could not see their arm but received continuous and concurrent visual feedback (CF) of a cursor representing limb position or post-trial visual feedback (PF) representing the movement trajectory. Separate groups of participants who received CF were instructed that online modifications of their movements either were, or were not, permissible as a means of improving performance. Feedforward-mediated performance improvements occurred for both CF and PF groups in the rotated environment. Furthermore, for CF participants this adaptation occurred regardless of whether feedback modifications of motor commands were permissible. Upon re-exposure to the non-rotated environment, participants in the CF, but not PF, groups exhibited post-training aftereffects, manifested as greater angular deviations from a straight initial trajectory with respect to the pre-rotation trials. Accordingly, the nature of the performance improvements depended upon the timing of the visual feedback of task performance. Continuous visual feedback of task performance during task execution appears critical in realising automatic visuomotor adaptation through a recalibration of the visuomotor mapping that transforms visual inputs into appropriate motor commands.
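
A minimal sketch of the perturbation described above: the displayed cursor position is obtained by rotating the hand position about the movement start. The 60° angle comes from the abstract; the coordinate convention and function name are illustrative assumptions only.

```python
import numpy as np

def rotate_cursor(hand_xy, angle_deg=60.0):
    """Rotate the hand position (expressed relative to the start location)
    to obtain the cursor position shown to the participant."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ np.asarray(hand_xy)

# A straight reach towards a target at (0, 10) is displayed 60 degrees away,
# so the participant must adapt the movement to bring the cursor on target.
print(rotate_cursor([0.0, 10.0]))
```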

Relevance:

100.00%

Publisher:

Abstract:

We analyze the effect of a quantum error correcting code on the entanglement of encoded logical qubits in the presence of a dephasing interaction with a correlated environment. Such a correlated reservoir introduces entanglement between the physical qubits. We show that at short times the quantum error correction interprets this entanglement as errors and suppresses it. However, at longer times, although quantum error correction is no longer able to correct errors, it enhances the rate of entanglement production due to the interaction with the environment.
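
The abstract does not specify which code is used; as a generic illustration of error correction against dephasing, the sketch below encodes a logical qubit in the standard three-qubit phase-flip code, applies a single Z (dephasing) error, and corrects it from the stabiliser syndrome. It is a textbook example, not the paper's model of a correlated environment.

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def kron(*factors):
    return reduce(np.kron, factors)

def encode(alpha, beta):
    # Logical |0> = |+++>, logical |1> = |--->: protects against a single Z error
    return alpha * kron(plus, plus, plus) + beta * kron(minus, minus, minus)

def expectation(op, state):
    return np.real(np.vdot(state, op @ state))

# Prepare an arbitrary logical state and hit one qubit with a dephasing (Z) error
psi = encode(0.6, 0.8)
error_qubit = 1                      # middle qubit, chosen for the demo
ops = [Z if q == error_qubit else I2 for q in range(3)]
corrupted = kron(*ops) @ psi

# Stabilisers of the phase-flip code: X1X2 and X2X3
s1 = expectation(kron(X, X, I2), corrupted)
s2 = expectation(kron(I2, X, X), corrupted)

# Syndrome lookup: (-1,+1) -> qubit 0, (-1,-1) -> qubit 1, (+1,-1) -> qubit 2
lookup = {(-1.0, 1.0): 0, (-1.0, -1.0): 1, (1.0, -1.0): 2}
syndrome = (float(np.sign(s1)), float(np.sign(s2)))
if syndrome in lookup:
    fix = [Z if q == lookup[syndrome] else I2 for q in range(3)]
    corrupted = kron(*fix) @ corrupted

print("fidelity after correction:", abs(np.vdot(psi, corrupted)) ** 2)  # -> 1.0
```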

Relevance:

100.00%

Publisher:

Abstract:

The authors studied pattern stability and error correction during in-phase and antiphase 4-ball fountain juggling. To obtain ball trajectories, they made and digitized high-speed film recordings of 4 highly skilled participants juggling at 3 different heights (and thus different frequencies). From those ball trajectories, the authors determined and analyzed critical events (i.e., toss, zenith, catch, and toss onset) in terms of variability of point estimates of relative phase and temporal correlations. Contrary to common findings on basic instances of rhythmic interlimb coordination, in-phase and antiphase patterns were equally variable (i.e., stable). Consistent with previous findings, however, pattern stability decreased with increasing frequency. In contrast to previous results for 3-ball cascade juggling, negative lag-one correlations for catch-catch intervals were absent, but the authors obtained evidence for error corrections between catches and toss onsets. That finding may have reflected participants' high skill level, which yielded smaller errors that allowed for corrections later in the hand cycle.
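
The two quantities at the centre of this analysis, point estimates of relative phase between event series and lag-one correlations of inter-event intervals, can be computed from event times as sketched below. The synthetic event times and parameter values are illustrative only.

```python
import numpy as np

def relative_phase(events_a, events_b):
    """Point estimate of relative phase (degrees) of each event in series B
    within the cycle defined by successive events in series A."""
    phases = []
    for t in events_b:
        idx = np.searchsorted(events_a, t) - 1      # cycle of A containing t
        if 0 <= idx < len(events_a) - 1:
            cycle = events_a[idx + 1] - events_a[idx]
            phases.append(360.0 * (t - events_a[idx]) / cycle)
    return np.array(phases)

def lag_one_correlation(event_times):
    """Lag-one autocorrelation of successive inter-event (e.g. catch-catch) intervals."""
    intervals = np.diff(event_times)
    return np.corrcoef(intervals[:-1], intervals[1:])[0, 1]

# Illustrative data: two hands catching roughly every 600 ms, offset by half a cycle (antiphase)
rng = np.random.default_rng(0)
left = np.cumsum(rng.normal(0.6, 0.02, 50))
right = left + 0.3 + rng.normal(0.0, 0.02, 50)

print("mean relative phase:", relative_phase(left, right).mean())   # ~180 deg for antiphase
print("lag-one correlation:", lag_one_correlation(left))
```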

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we show how the polarisation state of a linearly polarised antenna can be recovered through the use of a three-term error correction model. The approach adopted is shown to be robust in situations where some multipath exists and where the sampling channels are imperfect with regard to both their amplitude and phase tracking. In particular, it is shown that the error of the measured polarisation tilt angle can be reduced from 33% to 3% or below by applying the proposed calibration method. It is described how one can use a rotating dipole antenna as both the calibration standard and the polarisation encoder, thus simplifying the physical arrangement of the transmitter. Experimental results are provided in order to show the utility of the approach, which could have a variety of applications including bandwidth-conservative polarisation sub-modulation in advanced wireless communications systems.
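
The three-term error correction model referred to here has the same form as the standard one-port model Gm = e00 + e01*e10*G / (1 - e11*G); whether the paper uses exactly this formulation is an assumption. The sketch below solves the three error terms from three known calibration standards and then corrects a raw measurement.

```python
import numpy as np

def solve_error_terms(gamma_std, gamma_meas):
    """Solve the three-term (one-port) error model from three known standards.
    Linearised model: g_m = e00 + g*g_m*e11 - g*de, where de = e00*e11 - e01*e10."""
    A = np.array([[1.0, g * gm, -g] for g, gm in zip(gamma_std, gamma_meas)],
                 dtype=complex)
    e00, e11, de = np.linalg.solve(A, np.asarray(gamma_meas, dtype=complex))
    return e00, e11, de

def correct(gamma_meas, e00, e11, de):
    """Invert the error model to recover the true reflection coefficient."""
    return (gamma_meas - e00) / (gamma_meas * e11 - de)

# Illustrative error terms and ideal standards (open, short, load)
e00_true, e11_true, tracking = 0.05 + 0.02j, 0.10 - 0.03j, 0.9 + 0.1j
standards = [1.0 + 0j, -1.0 + 0j, 0.0 + 0j]
raw = [e00_true + tracking * g / (1 - e11_true * g) for g in standards]

e00, e11, de = solve_error_terms(standards, raw)
dut_true = 0.3 + 0.4j
dut_raw = e00_true + tracking * dut_true / (1 - e11_true * dut_true)
print(correct(dut_raw, e00, e11, de))   # ~ (0.3+0.4j)
```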

Relevance:

100.00%

Publisher:

Abstract:

We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability, our method can detect 95 per cent of flares with S/N less than 20, as compared to an S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example, we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
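
A much-simplified sketch of the ingredients described above: the flare template (half-Gaussian rise, exponential decay) and a plain log-likelihood ratio between "polynomial background plus known flare" and "background only" under Gaussian noise. The full Bayesian odds ratio of the paper marginalises over flare parameters; the fixed template and all numerical values below are illustrative assumptions.

```python
import numpy as np

def flare_template(t, t0, amp, tau_rise, tau_decay):
    """Flare profile: half-Gaussian rise before t0, exponential decay after."""
    rise = amp * np.exp(-0.5 * ((t - t0) / tau_rise) ** 2)
    decay = amp * np.exp(-(t - t0) / tau_decay)
    return np.where(t < t0, rise, decay)

def log_likelihood_ratio(t, flux, sigma, flare, poly_order=2):
    """Compare 'polynomial background + flare' against 'background only',
    fitting the background by least squares in both cases (Gaussian noise)."""
    def chi2(model):
        coeffs = np.polyfit(t, flux - model, poly_order)
        resid = flux - model - np.polyval(coeffs, t)
        return np.sum((resid / sigma) ** 2)
    return 0.5 * (chi2(np.zeros_like(t)) - chi2(flare))

# Synthetic light curve with an injected flare (illustrative values only)
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
sigma = 0.01
flux = 1.0 + 0.002 * t + rng.normal(0, sigma, t.size)
flare = flare_template(t, t0=4.0, amp=0.05, tau_rise=0.05, tau_decay=0.5)
flux += flare

print("log likelihood ratio:", log_likelihood_ratio(t, flux, sigma, flare))
```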

Relevance:

100.00%

Publisher:

Abstract:

Inherently error-resilient applications in areas such as signal processing, machine learning and data analytics provide opportunities for relaxing reliability requirements, and thereby reducing the overhead incurred by conventional error correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories to meet a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations with lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area, when applied to various data mining applications in a 28 nm process technology.
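
A toy version of the bit-shuffling idea, assuming the faulty cell positions of a word are known: the logical-to-physical bit mapping is chosen so that faults land on the least significant bits, bounding the output error magnitude. The mapping and stuck-at-zero fault model below are illustrative, not the paper's hardware scheme.

```python
WIDTH = 16

def store(value, faulty_cells, shuffle=None):
    """Model writing and reading a WIDTH-bit value through a memory whose
    cells in 'faulty_cells' are stuck at 0. 'shuffle' maps logical bit
    index -> physical cell index; identity mapping if omitted."""
    shuffle = shuffle or list(range(WIDTH))
    read_back = 0
    for logical in range(WIDTH):
        bit = (value >> logical) & 1
        if shuffle[logical] in faulty_cells:
            bit = 0                       # fault: stored bit lost
        read_back |= bit << logical
    return read_back

# Physical cells 14 and 15 are faulty (illustrative fault map)
faulty = {14, 15}
value = 0xBEEF

# Without shuffling, the faults corrupt the two most significant logical bits
plain = store(value, faulty)

# Shuffle so the two least significant logical bits are the ones in faulty cells
shuffled_map = [14, 15] + [i for i in range(WIDTH) if i not in faulty]
shuffled = store(value, faulty, shuffled_map)

print("error without shuffling:", abs(value - plain))     # large magnitude (32768)
print("error with shuffling   :", abs(value - shuffled))  # at most 3 (two LSBs)
```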

Relevance:

100.00%

Publisher:

Abstract:

Previous studies using low frequency (1 Hz) rTMS over the motor and premotor cortex have examined repetitive movements, but focused either on motor aspects of performance such as movement speed, or on variability of the produced intervals. A novel question is whether TMS affects the synchronization of repetitive movements with an external cue (sensorimotor synchronization). In the present study participants synchronized finger taps with the tones of an auditory metronome. The aim of the study was to examine whether motor and premotor cortical inhibition induced by rTMS affects timing aspects of synchronization performance such as the coupling between the tap and the tone and error correction after a metronome perturbation. Metronome sequences included perturbations corresponding to a change in the duration of a single interval (phase shifts) that were either small and below the threshold for conscious perception (10 ms) or large and perceivable (50 ms). Both premotor and motor cortex stimulation induced inhibition, as reflected in a lengthening of the silent period. Neither motor nor premotor cortex rTMS altered error correction after a phase shift. However, motor cortex stimulation made participants tap closer to the tone, yielding a decrease in tap-tone asynchrony. This provides the first neurophysiological demonstration of a dissociation between error correction and tap-tone asynchrony in sensorimotor synchronization. We discuss the results in terms of current theories of timing and error correction.
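
For context, tap-tone asynchrony and error correction after a phase shift are commonly described with a linear phase-correction model, in which a fraction alpha of each asynchrony is corrected on the next tap. The sketch below simulates such a model with a 50 ms metronome phase shift; it only illustrates the quantities analysed and is not a fit to the authors' data.

```python
import numpy as np

def simulate_taps(n_taps=60, period=0.5, alpha=0.3, shift_at=30, shift=0.05,
                  motor_noise=0.005, seed=0):
    """Linear phase-correction model: a fraction alpha of each tap-tone
    asynchrony is corrected on the next tap. A metronome phase shift of
    'shift' seconds is introduced at tap index 'shift_at'."""
    rng = np.random.default_rng(seed)
    tones = np.arange(n_taps) * period
    tones[shift_at:] += shift                  # perturbed metronome
    asyn = np.zeros(n_taps)
    tap = 0.0
    for n in range(n_taps):
        asyn[n] = tap - tones[n]
        # next tap: keep the period, correct a fraction of the current asynchrony
        tap = tap + period - alpha * asyn[n] + rng.normal(0, motor_noise)
    return asyn

asynchronies = simulate_taps()
print("mean asynchrony before shift:", asynchronies[:30].mean())
print("asynchrony right after shift:", asynchronies[30])   # jumps by about -0.05 s
print("asynchrony 10 taps later    :", asynchronies[40])   # relaxed back towards baseline
```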

Relevance:

100.00%

Publisher:

Abstract:

The paper reports data from an on-line peer tutoring project. In the project, 78 students aged 9–12 years from Scotland and Catalonia peer tutored each other in English and Spanish via a managed on-line environment. Significant gains in first language (Catalan pupils), modern language (Scottish pupils) and attitudes towards modern languages (both Catalan and Scottish pupils) were reported for the experimental group as compared to the control group. Results indicated that pupils tutored each other using Piagetian techniques of error correction during the project. Error correction provided by tutors to tutees focussed on morphosyntax, more specifically the correction of verbs. Peer support provided via the on-line environment was predominantly based on the tutor giving the right answer to the tutee. High rates of impact on tutee-corrected messages were observed. The implications for peer tutoring initiatives taking place via on-line environments are discussed. Implications for policy and practice are explored.

Relevance:

100.00%

Publisher:

Abstract:

The proliferation of video streaming applications and mobile devices has prompted wireless network operators to put more effort into improving quality of experience (QoE) while saving the resources needed for the high transmission rates and large sizes of video streams. To deal with this problem, we propose an energy-aware rate and description allocation optimization method for video streaming in cellular-network-assisted device-to-device (D2D) communications. In particular, we allocate the optimal bit rate to each layer of video segments and packetize the segments into multiple descriptions with embedded forward error correction (FEC) for real-time streaming without retransmission. Simultaneously, the optimal number of descriptions is allocated to each D2D helper for transmission. The two allocation processes are carried out according to the access rate of the segments, the channel state information (CSI) of the D2D requester, and the remaining energy of the helpers, so as to achieve the highest optimization performance. Simulation results demonstrate that our proposed method (named OPT) significantly enhances the performance of video streaming in terms of high QoE and energy saving.
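
The abstract does not give the allocation in closed form; purely as a hypothetical illustration of the kind of decision involved, the sketch below distributes a fixed number of descriptions across D2D helpers in proportion to a score combining each helper's channel rate and remaining energy. The helper names, weights, and scoring rule are assumptions, not the paper's OPT method.

```python
from dataclasses import dataclass

@dataclass
class Helper:
    name: str
    rate_mbps: float      # achievable D2D rate towards the requester (from CSI)
    energy_j: float       # remaining battery energy

def allocate_descriptions(helpers, n_descriptions, rate_weight=0.7, energy_weight=0.3):
    """Toy proportional allocation: helpers with better channels and more
    remaining energy carry more descriptions (largest-remainder rounding)."""
    scores = [rate_weight * h.rate_mbps + energy_weight * h.energy_j for h in helpers]
    total = sum(scores)
    shares = [n_descriptions * s / total for s in scores]
    alloc = [int(s) for s in shares]
    # hand out the remaining descriptions to the largest fractional parts
    leftovers = sorted(range(len(helpers)), key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in leftovers[: n_descriptions - sum(alloc)]:
        alloc[i] += 1
    return dict(zip([h.name for h in helpers], alloc))

helpers = [Helper("H1", rate_mbps=12.0, energy_j=40.0),
           Helper("H2", rate_mbps=6.0, energy_j=80.0),
           Helper("H3", rate_mbps=3.0, energy_j=10.0)]
print(allocate_descriptions(helpers, n_descriptions=8))
```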

Relevance:

100.00%

Publisher:

Abstract:

This study introduces an inexact, but ultra-low power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads, and should therefore be employed judiciously. Herein, the authors propose a novel scheme to design inexact computing architectures that selectively protects memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the bio-signal application characteristics. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence showcases that a significance-based memory protection approach leads to a small degradation in the output quality with respect to an exact implementation, while resulting in substantial energy gains, both in the memory and the processing subsystem.
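
A toy illustration of significance-based protection, under the assumption that faults appear as random bit flips in unprotected cells: protecting only the high-order byte of each stored word already bounds the error magnitude seen by the application, whereas fully unprotected storage lets flips corrupt the most significant bits. The fault model and parameters are assumptions for illustration.

```python
import numpy as np

def inject_faults(samples, bit_error_rate, protected_mask, rng, width=16):
    """Flip random bits in a 'width'-bit sample array, but never in the bit
    positions marked as protected (e.g. stored in ECC/robust SRAM)."""
    out = samples.copy()
    for pos in range(width):
        if protected_mask & (1 << pos):
            continue                              # significance-based protection
        flips = rng.random(samples.size) < bit_error_rate
        out ^= (flips.astype(np.int64) << pos)
    return out

rng = np.random.default_rng(2)
signal = rng.integers(0, 2 ** 16, size=10_000, dtype=np.int64)

unprotected = inject_faults(signal, 1e-3, protected_mask=0x0000, rng=rng)
top_byte_protected = inject_faults(signal, 1e-3, protected_mask=0xFF00, rng=rng)

print("RMS error, no protection     :", np.sqrt(np.mean((signal - unprotected) ** 2.0)))
print("RMS error, top byte protected:", np.sqrt(np.mean((signal - top_byte_protected) ** 2.0)))
```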

Relevance:

40.00%

Publisher:

Abstract:

This study explores using artificial neural networks to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks (ANN) mimic the structure and operation of biological neurons and have the unique ability of self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented in this study. A database incorporating 175 UWC mixtures from nine different studies was developed to train and test the ANN model. The data are arranged in a patterned format. Each pattern contains an input vector that includes quantity values of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector that includes the rheological or mechanical property to be modeled. Results show that the ANN model thus developed is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of underwater concrete mixtures used in the training process, but it can also effectively predict the aforementioned properties for new mixtures designed within the practical range of the input parameters used in the training process with an absolute error of 4.6, 10.6, 10.6, and 4.4%, respectively.
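
A minimal sketch of the input/output structure described above, eight mixture ingredients in and one rheological or mechanical property out, using scikit-learn's MLPRegressor as a stand-in for the paper's network. The architecture, hyperparameters, and the randomly generated data are placeholders, not the 175-mixture database or the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Input vector: cement, silica fume, fly ash, slag, water, coarse agg., fine agg., admixture
FEATURES = ["cement", "silica_fume", "fly_ash", "slag",
            "water", "coarse_agg", "fine_agg", "admixture"]

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(175, len(FEATURES)))               # placeholder mixture proportions
y = 20 + 40 * X[:, 0] - 10 * X[:, 4] + rng.normal(0, 2, 175)   # placeholder "strength" target

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

new_mix = rng.uniform(0, 1, size=(1, len(FEATURES)))
print("predicted property for the new mixture:", model.predict(scaler.transform(new_mix))[0])
```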

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes a new hierarchical learning structure, namely holistic triple learning (HTL), for extending the binary support vector machine (SVM) to multi-classification problems. For an N-class problem, an HTL constructs a decision tree up to a given depth. A leaf node of the decision tree is allowed to be placed with a holistic triple learning unit whose generalisation abilities are assessed and approved. Meanwhile, the remaining nodes in the decision tree each accommodate a standard binary SVM classifier. The holistic triple classifier is a regression model trained on three classes, whose training algorithm originates from a recently proposed implementation technique, namely the least-squares support vector machine (LS-SVM). A major novelty of the holistic triple classifier is the reduced number of support vectors in the solution. For the resultant HTL-SVM, an upper bound on the generalisation error can be obtained. The time complexity of training the HTL-SVM is analysed and shown to be comparable to that of training the one-versus-one (1-vs-1) SVM, particularly on small-scale datasets. Empirical studies show that the proposed HTL-SVM achieves competitive classification accuracy with a reduced number of support vectors compared to the popular 1-vs-1 alternative.
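
The least-squares SVM (LS-SVM) building block mentioned above replaces the usual SVM quadratic programme with a single linear system. The sketch below shows that formulation for a binary classifier with an RBF kernel; the HTL tree construction itself is not reproduced, and the toy data and hyperparameters are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma_k=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_train(X, y, reg=10.0, gamma_k=1.0):
    """LS-SVM classifier: training amounts to solving the linear system
    [[0, y^T], [y, Omega + I/reg]] [b; alpha] = [0; 1]."""
    n = len(y)
    omega = np.outer(y, y) * rbf_kernel(X, X, gamma_k)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = omega + np.eye(n) / reg
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # bias b, coefficients alpha

def lssvm_predict(X_train, y_train, b, alpha, X_new, gamma_k=1.0):
    return np.sign(rbf_kernel(X_new, X_train, gamma_k) @ (alpha * y_train) + b)

# Tiny two-class toy problem (illustrative data only)
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
print("training accuracy:", (lssvm_predict(X, y, b, alpha, X) == y).mean())
```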