972 results for ERROR PROPAGATION
Abstract:
During R/V Meteor cruise no. 30, four moorings with 17 current meters were placed on the continental slope of Sierra Leone at depths between 81 and 1058 meters. The observation period started on March 8, 1973, 16.55 hours GMT and lasted 19 days for moorings M30_068MOOR, M30_069MOOR and M30_070MOOR on the slope and 9 days for M30_067MOOR on the shelf. One current meter recorded at location M30_067MOOR for 22 days. Hydrographic data were collected at 32 stations by means of the "Kieler Multi-Meeressonde". Harmonic analysis is applied to the first 15 days of the time series to determine the M2 and S2 tides. By vertical averaging of the Fourier coefficients, the field of motion is separated into its barotropic and its baroclinic component. The expected error generated by white Gaussian noise is estimated. To estimate the influence of the particular vertical distribution of the current meters, the barotropic M2 tide is calculated by omitting and interchanging time series of different moorings. It is shown that only the data of moorings M30_069MOOR, M30_070MOOR and M30_067MOOR can be used. The results for the barotropic M2 tide agree well with previous publications of other authors. On the slope, at a depth of 1000 m, there is a free barotropic wave under the influence of the Coriolis force propagating along the slope with an amplitude of 3.4 cm s**-1. On the shelf, the maximum current is substantially greater (5.8 cm s**-1) and the direction of propagation is perpendicular to the slope. Since a separation into different baroclinic modes using vertical eigenmodes is not reasonable for the continental slope, an interpretation of the total baroclinic wave field is attempted by means of the method of characteristics. Assuming the continental slope to generate several linear waves, which superpose, baroclinic tidal ellipses are calculated. The scattering of the directions of the major axes at M30_069MOOR is in contrast to M30_070MOOR, where they are bundled within an angle of 60°.
This is presumably caused by the different character of the bottom topography in the vicinity of the two moorings. A detailed discussion of M30_069MOOR is forgone, since the accuracy of the bathymetric chart is not sufficient to prove any relation between waves and topography. The bundling of the major axes at M30_070MOOR can be explained by the alongslope changes of the slope, which cause an energy transfer from the alongslope barotropic component to the downslope baroclinic component. The maximum amplitude is found at a depth of 245 m, where it is expected from the characteristics originating at the shelf edge. Because of the dominating barotropic tide, high coherence is found between most of the current meters. To show the influence of the baroclinic tidal waves, the effect of the mean current is considered. There are two periods of nearly opposite longshore mean current. For 128 hours during each of these periods, starting on March 11, 05.00, and March 21, 08.30, the coherences and energy spectra are calculated. The changes in the slope of the characteristics are found to be in agreement with the changes of energy and coherence. Because of the short periods of nearly constant mean current, some of the calculated differences in energy and coherence are not statistically significant. For the M2 tide, the ratios of vertically integrated total baroclinic energy to vertically integrated barotropic kinetic energy are calculated. Taking into account both components (along and perpendicular to the slope), the obtained values are 0.75 and 0.98 at the slope and 0.38 at the shelf. If each component is considered separately, the ratios are 0.39 and 1.16 parallel to the slope and 5.1 and 15.85 for the component perpendicular to it. Taking the energy transfer from the alongslope component to the downslope component into account, a simple model yields an energy ratio of 2.6.
Considering the limited applicability of the theory to the real conditions, the obtained values are in agreement with the values calculated by Sandstroem.
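The harmonic analysis described in the abstract amounts to a least-squares fit of sinusoids at the known M2 and S2 frequencies to each current-meter record. A minimal sketch of that idea, on synthetic data (the frequencies are the standard tidal values; the record length, amplitudes and noise level are illustrative assumptions, not the cruise data):

```python
import numpy as np

# Standard semidiurnal tidal periods (hours).
M2_PERIOD_H = 12.4206
S2_PERIOD_H = 12.0000

def harmonic_fit(t_hours, u, periods):
    """Least-squares fit of u(t) = mean + sum_k [a_k cos(w_k t) + b_k sin(w_k t)].

    Returns (amplitude, phase) for each constituent period.
    """
    cols = [np.ones_like(t_hours)]
    for p in periods:
        w = 2 * np.pi / p
        cols.append(np.cos(w * t_hours))
        cols.append(np.sin(w * t_hours))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    out = []
    for k in range(len(periods)):
        a, b = coef[1 + 2 * k], coef[2 + 2 * k]
        out.append((np.hypot(a, b), np.arctan2(b, a)))
    return out

# Synthetic 15-day record sampled every 0.5 h: an M2 signal of 3.4 cm/s,
# a weaker S2 signal, and white Gaussian noise (as assumed in the abstract's
# error estimate).
rng = np.random.default_rng(0)
t = np.arange(0, 15 * 24, 0.5)
u = (3.4 * np.cos(2 * np.pi * t / M2_PERIOD_H - 0.7)
     + 1.1 * np.cos(2 * np.pi * t / S2_PERIOD_H + 0.3)
     + rng.normal(0, 0.5, t.size))

(m2_amp, _), (s2_amp, _) = harmonic_fit(t, u, [M2_PERIOD_H, S2_PERIOD_H])
```

A 15-day window is the natural choice here: it spans roughly one full beat cycle between M2 and S2, which is what makes the two constituents separable in the fit.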
Abstract:
Distributed target tracking in wireless sensor networks (WSNs) is an important problem in which agreement on the target state can be achieved using conventional consensus methods, which take a long time to converge. We propose distributed particle filtering based on belief propagation (DPF-BP) consensus, a fast method for target tracking. According to our simulations, DPF-BP provides better performance than DPF based on standard belief consensus (DPF-SBC) in terms of disagreement in the network. However, in terms of root-mean-square error, it can outperform DPF-SBC only for a specific number of consensus iterations.
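The abstract's DPF-BP scheme is not reproduced here, but the baseline it improves on can be sketched: standard average consensus, in which each sensor repeatedly replaces its value with a weighted average of its neighbours' values. The network (a 6-node ring) and the measurements are hypothetical; the Metropolis weights are a standard choice that guarantees convergence to the global average.

```python
import numpy as np

# Undirected ring of 6 sensors.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1

# Metropolis weights: symmetric, doubly stochastic, hence average-preserving.
deg = A.sum(axis=1)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if A[i, j]:
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()

x = np.array([1.0, 4.0, 2.0, 8.0, 5.0, 7.0])  # local measurements
target = x.mean()

# Many iterations are needed -- the slow convergence that motivates
# faster BP-based consensus in the abstract above.
for _ in range(200):
    x = W @ x

disagreement = np.max(np.abs(x - target))
```

Geometric convergence is governed by the second-largest eigenvalue modulus of W, so on sparse networks the iteration count grows quickly with network size, which is exactly the cost the paper targets.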
Abstract:
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
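The general mechanism the abstract relies on, first-order Taylor propagation of input errors through a function, can be sketched in a few lines. This is not the LAL equations themselves: the function below (distance between two estimated beacon positions) and the input covariance are illustrative assumptions.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference gradient of a scalar function f at x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros(x.size)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def inter_beacon_distance(p):
    """p = [x1, y1, x2, y2] -> Euclidean distance between two beacons."""
    return np.hypot(p[2] - p[0], p[3] - p[1])

p = np.array([0.0, 0.0, 3.0, 4.0])        # estimated beacon positions (m)
cov = np.diag([0.01, 0.01, 0.01, 0.01])   # assumed position-error covariance (m^2)

# First-order propagation: var(f) ~= J . cov . J^T
J = numerical_jacobian(inter_beacon_distance, p)
var_d = J @ cov @ J
sigma_d = np.sqrt(var_d)
```

Because the variance estimate is only as good as the linearization, a reliability measure (the paper's confidence parameter τ) is needed whenever the function is strongly curved over the scale of the input errors.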
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite-temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite-K case and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
Abstract:
The performance of Gallager's error-correcting code is investigated via methods of statistical physics. In this method, the transmitted codeword comprises products of the original message bits selected by two randomly-constructed sparse matrices; the number of non-zero row/column elements in these matrices constitutes a family of codes. We show that Shannon's channel capacity is saturated for many of the codes while slightly lower performance is obtained for others which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach which is identical to the commonly used belief-propagation-based decoding.
Abstract:
An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
Abstract:
We employ the methods presented in the previous chapter for decoding corrupted codewords, encoded using sparse parity check error correcting codes. We show the similarity between the equations derived from the TAP approach and those obtained from belief propagation, and examine their performance as practical decoding methods.
Abstract:
We analyse Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as a minimization of a proper free energy. We find a thermodynamic phase transition that coincides with information-theoretic upper bounds and explain the practical code performance in terms of the free-energy landscape.
Abstract:
Error-free propagation of a single-polarisation optical time division multiplexed 40 Gbit/s dispersion-managed pulsed data stream over standard (non-dispersion-shifted) fibre. The distance achieved is twice the previous record at this data rate.
Abstract:
Error-free propagation of a single-polarisation optical time division multiplexed 40 Gbit/s dispersion-managed pulsed data stream over 509 km has been achieved in standard (non-dispersion-shifted) fibre. Dispersion-compensating fibre was used after each amplifier to reduce the high local dispersion of the standard fibre. © IEE 1999.
Abstract:
In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with many-spin interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free energy emerge. Simulations of practical decoding scenarios using probability propagation agree with theoretical predictions of the replica-symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise the parameters of the high-performance codes proposed by Kanter and Saad.
Abstract:
There has been a resurgence of interest in the field of neural networks in recent years, provoked in part by the discovery of the properties of multi-layer networks. This interest has in turn raised questions about the possibility of making neural network behaviour more adaptive by automating some of the processes involved. Prior to these particular questions, the process of determining the parameters and network architecture required to solve a given problem had been a time-consuming activity. A number of researchers have attempted to address these issues by automating these processes, concentrating in particular on the dynamic selection of an appropriate network architecture. The work presented here specifically explores the area of automatic architecture selection; it focuses on the design and implementation of a dynamic algorithm based on the back-propagation learning algorithm. The algorithm constructs a single hidden layer as the learning process proceeds, using individual pattern error as the basis for unit insertion. This algorithm is applied to several problems of differing type and complexity and is found to produce near-minimal architectures with a high level of generalisation ability.
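The constructive idea described above can be sketched, though what follows is a hypothetical toy, not the thesis's algorithm: train a one-hidden-layer network with back-propagation, and insert a new hidden unit whenever the worst per-pattern error stays above a threshold. The problem (XOR), thresholds, learning rate and insertion schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = 1                                  # start with a single hidden unit
W1 = rng.normal(0, 0.5, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
lr, threshold = 0.5, 0.1

for epoch in range(4000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    err = out - y
    # Back-propagation of the squared-error loss.
    d2 = err * out * (1 - out)
    d1 = (d2 @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)
    # Unit insertion driven by individual pattern error: if the worst
    # pattern is still badly wrong, grow the hidden layer by one unit.
    if epoch % 500 == 499 and np.max(np.abs(err)) > threshold and h < 8:
        h += 1
        W1 = np.hstack([W1, rng.normal(0, 0.5, (2, 1))])
        b1 = np.append(b1, 0.0)
        W2 = np.vstack([W2, rng.normal(0, 0.5, (1, 1))])

final_err = np.max(np.abs(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y))
```

Since a single hidden unit cannot represent XOR, the insertion rule is guaranteed to fire at least once here; the layer then grows only while the worst-case error demands it, which is the "near-minimal architecture" behaviour the abstract describes.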
Abstract:
We report the impact of the longitudinal signal power profile on the transmission performance of a coherently detected 112 Gb/s m-ary polarization-multiplexed quadrature amplitude modulation system after compensation of deterministic nonlinear fibre impairments. Performance improvements of up to 0.6 dB (Q(eff)) are reported for a non-uniform transmission link power profile. Further investigation reveals that the evolution of the transmission performance with power profile management is fully consistent with parametric amplification of the amplified spontaneous emission by the signal through four-wave mixing. In particular, for a non-dispersion-managed system, a single-step increment of 4 dB in the amplifier gain, with respect to a uniform gain profile, at ~2/3 of the total reach considerably improves the transmission performance for all the formats studied. In contrast, a negative-step profile, emulating a failure (gain decrease or loss increase), significantly degrades the bit-error rate.
Abstract:
Recently, underwater sensor networks (UWSNs) have attracted large research interest. Medium access control (MAC) is one of the major challenges faced by UWSNs due to the large propagation delay and narrow channel bandwidth of the acoustic communications they use. The widely used slotted Aloha (S-Aloha) protocol suffers a large performance loss in UWSNs and can only achieve performance close to pure Aloha (P-Aloha). In this paper we theoretically model the performance of the S-Aloha and P-Aloha protocols and analyze the adverse impact of propagation delay. Based on observations of the performance of the S-Aloha protocol, we propose two enhanced S-Aloha protocols that minimize the adverse impact of propagation delay on S-Aloha. The first enhancement is a synchronized-arrival S-Aloha (SA-Aloha) protocol, in which frames are transmitted at carefully calculated times to align the frame arrival time with the start of a time slot; propagation delay is taken into consideration in the calculation of the transmit time. As estimation errors on the propagation delay may exist and can affect network performance, an improved SA-Aloha (denoted ISA-Aloha) is proposed, which adjusts the slot size according to the range of delay estimation errors. Simulation results show that both SA-Aloha and ISA-Aloha perform remarkably better than S-Aloha and P-Aloha for UWSNs, and that ISA-Aloha is more robust even when the propagation delay estimation error is large. © 2011 IEEE.
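The scheduling idea behind SA-Aloha is simple to sketch: a node sends early by its (estimated) propagation delay so that the frame arrives exactly at a slot boundary. The function name, slot length and distances below are illustrative assumptions, not the paper's parameters.

```python
import math

SLOT = 0.5             # slot length (s), illustrative
SOUND_SPEED = 1500.0   # nominal speed of sound in water (m/s)

def transmit_time(ready_time, distance_m, speed=SOUND_SPEED, slot=SLOT):
    """Earliest send time >= ready_time such that the frame ARRIVES at a slot start."""
    delay = distance_m / speed
    earliest_arrival = ready_time + delay
    # Round the arrival up to the next slot boundary, then send early by the delay.
    k = math.ceil(earliest_arrival / slot)
    return k * slot - delay

# A node 900 m from the receiver (delay 0.6 s) becomes ready at t = 1.0 s.
t_send = transmit_time(ready_time=1.0, distance_m=900.0)
arrival = t_send + 900.0 / SOUND_SPEED
```

ISA-Aloha's refinement then follows naturally: if the delay estimate is only known to within some error bound, widening each slot by a guard interval of twice that bound keeps misaligned arrivals inside their intended slot.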