871 results for error correction
Abstract:
An increasing number of applications, such as distributed interactive simulation, live auctions, distributed games and collaborative systems, require the network to provide a reliable multicast service. This service enables one sender to reliably transmit data to multiple receivers. Reliability is traditionally achieved by having receivers send negative acknowledgments (NACKs) to request from the sender the retransmission of lost (or missing) data packets. However, this Automatic Repeat reQuest (ARQ) approach results in the well-known NACK implosion problem at the sender. Many reliable multicast protocols have recently been proposed to reduce NACK implosion, but the message overhead due to NACK requests remains significant. Another approach, based on Forward Error Correction (FEC), requires the sender to encode additional redundant information so that a receiver can independently recover from losses. However, due to the lack of feedback from receivers, it is impossible for the sender to determine how much redundancy is needed. In this paper, we propose a new reliable multicast protocol, called ARM for Adaptive Reliable Multicast. Our protocol integrates ARQ and FEC techniques. The objectives of ARM are to (1) reduce the message overhead due to NACK requests, (2) reduce the amount of data transmission, and (3) reduce the time it takes for all receivers to receive the data intact (without loss). During data transmission, the sender periodically informs the receivers of the number of packets that are yet to be transmitted. Based on this information, each receiver predicts whether this amount is enough to recover its losses. Only if it is not enough does the receiver request the sender to encode additional redundant packets. Using ns simulations, we show the superiority of our hybrid ARQ-FEC protocol over the well-known Scalable Reliable Multicast (SRM) protocol.
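The receiver-side test that drives ARM's feedback suppression can be pictured in a few lines. The sketch below is a minimal illustration in Python; the function and variable names are hypothetical rather than taken from the ARM protocol itself.

# Minimal sketch (hypothetical names, not from the ARM specification) of the
# receiver-side decision: a NACK is sent only when the parity packets still
# in flight cannot cover this receiver's losses.

def needs_nack(lost_packets: int, pending_parity: int) -> bool:
    """With an erasure code such as Reed-Solomon, any k of a block's packets
    reconstruct the data, so parity still to be transmitted can repair up to
    `pending_parity` erasures at this receiver."""
    return lost_packets > pending_parity

# On each periodic sender announcement, the receiver re-evaluates:
lost, pending = 5, 3
if needs_nack(lost, pending):
    print(f"send NACK requesting {lost - pending} additional parity packets")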
Abstract:
This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control. These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
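For readers unfamiliar with the VITE circuit, its core dynamics are compact enough to simulate directly. The following Euler-integration sketch assumes the commonly published form of the equations (a difference vector V tracking target position T minus present position P, with outflow gated by a GO signal G); all parameter values are arbitrary illustrative choices.

# Sketch of VITE trajectory generation, assuming the standard form:
#   dV/dt = gamma * (-V + (T - P))   # difference vector
#   dP/dt = G(t) * max(V, 0)         # present position, gated by GO signal
# Parameter values below are arbitrary illustrative choices.

def vite_trajectory(T=1.0, gamma=25.0, g0=10.0, dt=0.001, steps=1000):
    V, P, traj = 0.0, 0.0, []
    for i in range(steps):
        G = g0 * (i * dt)                    # linearly ramping GO signal
        V += dt * gamma * (-V + (T - P))
        P += dt * G * max(V, 0.0)
        traj.append(P)
    return traj                              # dP/dt is bell-shaped over time

positions = vite_trajectory()
print(f"final position: {positions[-1]:.3f} (target 1.0)")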
Abstract:
In this thesis a novel transmission format, named Coherent Wavelength Division Multiplexing (CoWDM), for use in high information spectral density optical communication networks is proposed and studied. In Chapter I a historical view of fibre optic communication systems, as well as an overview of state-of-the-art technology, is presented to provide an introduction to the subject area. We see that, in general, the aim of modern optical communication system designers is to provide high-bandwidth services while reducing the overall cost per transmitted bit of information. In the remainder of the thesis a range of investigations, both theoretical and experimental in nature, are carried out using the CoWDM transmission format. These investigations are designed to consider features of CoWDM such as its dispersion tolerance, compatibility with forward error correction and suitability for use in currently installed long-haul networks, amongst others. A high bit rate optical test bed constructed at the Tyndall National Institute facilitated most of the experimental work outlined in this thesis, and a collaboration with France Telecom enabled long-haul transmission experiments using the CoWDM format to be carried out. An amount of research was also carried out on ancillary topics such as optical comb generation, forward error correction and phase stabilisation techniques. The aim of these investigations is to verify the suitability of CoWDM as a cost-effective solution for use in both current and future high bit rate optical communication networks.
Abstract:
Error correcting codes are combinatorial objects, designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis work also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, which are a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, based on which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
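Evaluation-based RS encoding, the scheme modified and extended in the thesis, treats the k message symbols as coefficients of a polynomial and transmits its evaluations at n distinct field points. The sketch below is a minimal illustration over a small prime field for readability; hardware codecs operate over binary extension fields GF(2^m), and all names are our own.

# Sketch of evaluation-based Reed-Solomon encoding over a small prime field
# GF(P), chosen for readability. An (n, k) RS code transmits the evaluations
# of the degree-(k-1) message polynomial at n distinct points; any k intact
# evaluations recover the message by interpolation.

P = 257  # field size (illustrative)

def rs_encode(message, n):
    """message: k field elements taken as polynomial coefficients;
    returns the codeword of n evaluations."""
    assert len(message) <= n <= P - 1
    def poly_eval(x):
        acc = 0
        for c in reversed(message):      # Horner's rule modulo P
            acc = (acc * x + c) % P
        return acc
    return [poly_eval(x) for x in range(1, n + 1)]

print(rs_encode([10, 20, 30], n=7))      # k=3 message -> n=7 codeword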
Abstract:
Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens the bandwidth availability and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD) as a novel scalable media delivery technique that optimises the tradeoff between the streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data is spread amongst all datagrams thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience to the highest quality layers. Our experimental results show that ALD improves the perceived quality and also reduces the bandwidth demand by up to 36% in comparison to the well-known Multiple Description Coding (MDC) technique.
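The layer-distribution principle can be pictured as a packetisation step in which slices of every layer, critical base layer included, are interleaved across all datagrams, so that a lost datagram removes a small slice of each layer rather than an entire layer. The following round-robin packetiser is an illustrative sketch under that reading, not the authors' implementation.

# Illustrative sketch of layer distribution (not the paper's implementation):
# slices of every layer are spread across all datagrams, so a lost datagram
# costs a small piece of each layer instead of a whole critical layer.

def distribute_layers(layers, num_datagrams):
    """layers: list of byte strings (base layer first);
    returns one payload (list of (layer_id, chunk)) per datagram."""
    datagrams = [[] for _ in range(num_datagrams)]
    for layer_id, layer in enumerate(layers):
        slice_len = -(-len(layer) // num_datagrams)      # ceiling division
        for d in range(num_datagrams):
            chunk = layer[d * slice_len:(d + 1) * slice_len]
            datagrams[d].append((layer_id, chunk))
    return datagrams

payloads = distribute_layers([b"base" * 8, b"enh1" * 8, b"enh2" * 8], 4)
# losing payloads[2] now removes 1/4 of each layer, not an entire layer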
Abstract:
The equilibrium structure of the hydrogen-bonded complex H2O···HF has been calculated ab initio using the CCSD(T) method with basis sets up to sextuple-ζ quality with diffuse functions, taking into account the basis set superposition error correction. The calculations carried out confirm the importance of diffuse functions and of the counterpoise correction for obtaining an accurate geometry. The most important point is that the basis set convergence is extremely slow and, for this reason, an accurate ab initio structure requires a very large basis set. Nevertheless, the ab initio structure is significantly different from the experimental r0 and rm structures. Analysis of the basis set convergence and of the approximations used for the determination of the experimental structures indicates that the ab initio structure is expected to be more reliable.
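For reference, the counterpoise correction mentioned above (the Boys-Bernardi scheme) removes the basis set superposition error by evaluating each monomer in the full dimer basis; in standard notation,

\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}(\alpha \cup \beta) - E_{A}(\alpha \cup \beta) - E_{B}(\alpha \cup \beta),

where \alpha and \beta are the basis sets of the two monomers, so each monomer energy is computed with ghost functions placed on the partner's atomic sites.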
Abstract:
Previous studies using low frequency (1 Hz) rTMS over the motor and premotor cortex have examined repetitive movements, but focused either on motor aspects of performance such as movement speed, or on variability of the produced intervals. A novel question is whether TMS affects the synchronization of repetitive movements with an external cue (sensorimotor synchronization). In the present study participants synchronized finger taps with the tones of an auditory metronome. The aim of the study was to examine whether motor and premotor cortical inhibition induced by rTMS affects timing aspects of synchronization performance such as the coupling between the tap and the tone and error correction after a metronome perturbation. Metronome sequences included perturbations corresponding to a change in the duration of a single interval (phase shifts) that were either small and below the threshold for conscious perception (10 ms) or large and perceivable (50 ms). Both premotor and motor cortex stimulation induced inhibition, as reflected in a lengthening of the silent period. Neither motor nor premotor cortex rTMS altered error correction after a phase shift. However, motor cortex stimulation made participants tap closer to the tone, yielding a decrease in tap-tone asynchrony. This provides the first neurophysiological demonstration of a dissociation between error correction and tap-tone asynchrony in sensorimotor synchronization. We discuss the results in terms of current theories of timing and error correction.
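A standard formal account of the error-correction process probed here is the linear phase-correction model of sensorimotor synchronization (in the tradition of Vorberg and Wing), in which a fraction alpha of each tap-tone asynchrony is corrected on the next tap. The sketch below simulates the response to a metronome phase shift under that model; it is not the analysis used in the study, and all parameter values are illustrative.

import random

# Sketch of the linear phase-correction model (illustrative parameters):
# a fraction alpha of each asynchrony is corrected on the next tap, so
# after a metronome phase shift the asynchrony decays geometrically.

def simulate(alpha=0.5, shift_ms=50, shift_at=20, taps=40, noise_sd=2.0):
    asyn, series = 0.0, []
    for n in range(taps):
        if n == shift_at:
            asyn -= shift_ms    # tone arrives late: the tap is suddenly early
        series.append(asyn)
        asyn = (1 - alpha) * asyn + random.gauss(0, noise_sd)
    return series

for n, a in enumerate(simulate()):
    print(n, round(a, 1))       # asynchrony relaxes back toward baseline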
Abstract:
The paper reports data from an on-line peer tutoring project. In the project, 78 9-12-year-old students from Scotland and Catalonia peer tutored each other in English and Spanish via a managed on-line environment. Significant gains in first language (Catalonian pupils), modern language (Scottish pupils) and attitudes towards modern languages (both Catalonian and Scottish pupils) were reported for the experimental group as compared to the control group. Results indicated that pupils tutored each other using Piagetian techniques of error correction during the project. Error correction provided by tutors to tutees focussed on morphosyntax, more specifically the correction of verbs. Peer support provided via the on-line environment was predominantly based on the tutor giving the right answer to the tutee. High rates of impact on tutee-corrected messages were observed. The implications for peer tutoring initiatives taking place via on-line environments are discussed. Implications for policy and practice are explored.
Abstract:
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
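The flare template described above, a half-Gaussian rise followed by an exponential decay, is straightforward to write down. The sketch below is an illustrative implementation with parameter names of our own choosing; the polynomial background of the signal model is added on top.

import math

# Illustrative flare template: half-Gaussian rise, exponential decay.
# t0 is the flare peak time, sigma the rise width, tau the decay constant.

def flare_model(t, amplitude, t0, sigma, tau):
    if t <= t0:
        return amplitude * math.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
    return amplitude * math.exp(-(t - t0) / tau)

# A polynomial background, as in the signal model, sits under the flare:
def model(t, flare_params, poly_coeffs):
    background = sum(c * t ** i for i, c in enumerate(poly_coeffs))
    return background + flare_model(t, *flare_params)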
Energy-Aware Rate and Description Allocation Optimized Video Streaming for Mobile D2D Communications
Abstract:
The proliferation of video streaming applications and mobile devices has prompted wireless network operators to put more effort into improving quality of experience (QoE) while saving the resources needed for the high transmission rates and large sizes of video streams. To deal with this problem, we propose an energy-aware rate and description allocation optimization method for video streaming in cellular-network-assisted device-to-device (D2D) communications. In particular, we allocate the optimal bit rate to each layer of video segments and packetize the segments into multiple descriptions with embedded forward error correction (FEC) for real-time streaming without retransmission. Simultaneously, the optimal number of descriptions is allocated to each D2D helper for transmission. The two allocation processes are carried out according to the access rate of segments, the channel state information (CSI) of the D2D requester, and the remaining energy of helpers, to gain the highest optimization performance. Simulation results demonstrate that our proposed method (named OPT) significantly enhances the performance of video streaming in terms of high QoE and energy saving.
Abstract:
This study introduces an inexact, but ultra-low power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operations in this near-threshold regime, they incur significant area and energy overheads, and should therefore be employed judiciously. Herein, the authors propose a novel scheme to design inexact computing architectures that selectively protects memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the bio-signal application characteristics. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence shows that a significance-based memory protection approach leads to a small degradation in the output quality with respect to an exact implementation, while resulting in substantial energy gains, both in the memory and the processing subsystem.
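The significance-based protection scheme can be caricatured as a placement problem: buffers whose corruption most degrades the end-to-end output are assigned to the reliable memory region first. The sketch below is a hypothetical illustration of that idea, with the capacity, names and significance scores invented for the example; it is not the authors' design flow.

# Hypothetical sketch of significance-based memory placement: buffers whose
# corruption most degrades output quality go into the protected (reliable)
# SRAM region; everything else tolerates near-threshold bit errors.

PROTECTED_CAPACITY = 4096  # bytes of reliable memory available (assumed)

def assign_regions(buffers):
    """buffers: list of (name, size_bytes, significance) tuples, where
    significance scores the output-quality impact of corruption."""
    protected, unprotected, used = [], [], 0
    for name, size, sig in sorted(buffers, key=lambda b: -b[2]):
        if used + size <= PROTECTED_CAPACITY:
            protected.append(name)
            used += size
        else:
            unprotected.append(name)
    return protected, unprotected

# e.g. spectral coefficients for the final output rank above scratch data
print(assign_regions([("fft_coeffs", 2048, 0.9), ("scratch", 4096, 0.1),
                      ("ecg_window", 2048, 0.7)]))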
Abstract:
This paper addresses the impact of the CO2 opportunity cost on the wholesale electricity price in the context of the Iberian electricity market (MIBEL), namely on the Portuguese system, for the period corresponding to Phase II of the European Union Emission Trading Scheme (EU ETS). In the econometric analysis a vector error correction model (VECM) is specified to estimate both long-run equilibrium relations and short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The model is estimated using daily spot market prices, and the four commodity prices are jointly modelled as endogenous variables. Moreover, a set of exogenous variables is incorporated in order to account for the electricity demand conditions (temperature) and the electricity generation mix (quantity of electricity traded according to the technology used). The outcomes for the Portuguese electricity system suggest that the dynamic pass-through of carbon prices into electricity prices is strongly significant, and a long-run elasticity (equilibrium relation) was estimated that is aligned with studies conducted for other markets.
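In standard notation, a VECM of the kind estimated here takes the form

\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \Phi D_t + \varepsilon_t, \qquad \Pi = \alpha \beta',

where y_t stacks the four endogenous (log) prices (electricity, natural gas, coal, carbon), \beta' y_{t-1} gives the long-run equilibrium (cointegration) relations, \alpha the adjustment speeds, \Gamma_i the short-run dynamics, and D_t collects the exogenous regressors (temperature and generation mix); the exact lag order and deterministic terms vary by study.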
Abstract:
The European Union Emissions Trading Scheme (EU ETS) is a cornerstone of the European Union's policy to combat climate change and its key tool for reducing industrial greenhouse gas emissions cost-effectively. The purpose of the present work is to evaluate the influence of the CO2 opportunity cost on the Spanish wholesale electricity price. Our sample includes all of Phase II of the EU ETS and the first year of Phase III implementation, from January 2008 to December 2013. A vector error correction model (VECM) is applied to estimate not only long-run equilibrium relations, but also short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The four commodity prices are modeled as jointly endogenous variables, with air temperature and renewable energy as exogenous variables. We found a long-run relationship (cointegration) between the electricity price, the carbon price, and fuel prices. By estimating the dynamic pass-through of the carbon price into the electricity price for different periods of our sample, it is possible to observe the weakening of the link between carbon and electricity prices as a result of the collapse in CO2 prices, thereby compromising the efficacy of the system in reaching the proposed environmental goals. This conclusion is in line with the need to shape new policies within the framework of the EU ETS that prevent excessively low carbon prices over extended periods of time.
Abstract:
This work was developed in the context of the MIT Portugal Program, area of Bioengineering Systems, in collaboration with the Champalimaud Research Programme, Champalimaud Center for the Unknown, Lisbon, Portugal. The project entitled Dynamics of serotonergic neurons revealed by fiber photometry was carried out at Instituto Gulbenkian de Ciência, Oeiras, Portugal and at the Champalimaud Research Programme, Champalimaud Center for the Unknown, Lisbon, Portugal
Abstract:
Among the largest resources for biological sequence data is the large number of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting these regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
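The intuition behind codon-usage-based error detection can be shown with a toy score: an inserted or deleted base shifts the reading frame, and the codon-usage log-likelihood of the downstream sequence collapses, which localizes the likely sequencing error. The sketch below uses made-up codon frequencies and is far simpler than the integrated HMM of the paper.

import math

# Toy illustration (invented frequencies, not the paper's HMM): score each
# reading frame by summed codon-usage log frequencies; a frameshift caused
# by a sequencing error sends the true frame's score toward the fallback.

CODON_LOG_FREQ = {"ATG": math.log(0.022), "GAA": math.log(0.029),
                  "AAA": math.log(0.026), "GGC": math.log(0.034)}
DEFAULT = math.log(1.0 / 64)   # uniform fallback for codons not in the table

def frame_log_likelihood(seq: str, offset: int) -> float:
    """Sum of codon-usage log frequencies for the frame starting at offset."""
    return sum(CODON_LOG_FREQ.get(seq[i:i + 3], DEFAULT)
               for i in range(offset, len(seq) - 2, 3))

seq = "ATGGAAAAAGGCATGGAA"
print([round(frame_log_likelihood(seq, f), 2) for f in (0, 1, 2)])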