846 results for Bit error rate
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, how to determine their thresholds is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modeling error and approximation error are analysed with simulated data to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method greatly simplifies the FF-approach without introducing significant modeling error, making fixed failure rate threshold determination feasible for real-time applications.
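A minimal sketch of the approximation step described above may help: the hard-to-compute ILS success rate is replaced by the closed-form IB success rate, which is then fed into a rational threshold function. The IB product formula over conditional standard deviations is standard; the rational-function coefficients below are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np
from scipy.linalg import ldl
from scipy.stats import norm

def ib_success_rate(Q):
    """IB success rate prod_i (2*Phi(1/(2*sigma_i|I)) - 1), with the
    conditional standard deviations read off an LDL^T factorization of
    the (ideally decorrelated) ambiguity vc-matrix Q."""
    _, d, _ = ldl(Q, lower=True)
    cond_std = np.sqrt(np.diag(d))
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0))

def threshold_function(p_s, a=(0.1, 1.0), b=(1.0, 0.5)):
    """Hypothetical rational model mu(P_s) = (a0 + a1*P_s)/(b0 + b1*P_s);
    the paper fits such coefficients for each failure rate tolerance."""
    return (a[0] + a[1] * p_s) / (b[0] + b[1] * p_s)

Q = np.diag([0.010, 0.020, 0.015])   # toy ambiguity vc-matrix (cycles^2)
p_s = ib_success_rate(Q)
print(f"IB success rate {p_s:.4f} -> difference test threshold "
      f"{threshold_function(p_s):.4f}")
```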
Abstract:
Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in positioning computation. Most existing investigations of ambiguity validation focus on the test statistic; how to determine the threshold reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis; the fixed failure rate approach has a rigorous basis in probability theory but employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate for a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function for the FF-difference test, is proposed. The threshold function method preserves the fixed failure rate characteristic and is easy to apply. Its performance is validated with simulated data; the results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
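For concreteness, the two acceptance tests compared above can be sketched as follows, with q1 and q2 the squared residual norms of the best and second-best integer candidates. The thresholds here are illustrative constants; under the FF-approach they would be derived from the fixed failure rate machinery.

```python
def ratio_test(q1, q2, c=2.0):
    """FF-ratio test: accept the fixed ambiguities if q2/q1 >= c."""
    return q2 / q1 >= c

def difference_test(q1, q2, d=5.0):
    """FF-difference test: accept if q2 - q1 >= d; the paper expresses d
    as a function of the ILS success rate for a given failure tolerance."""
    return q2 - q1 >= d

q1, q2 = 1.3, 7.1          # toy squared residual norms (best, second best)
print("ratio test accepts:", ratio_test(q1, q2))
print("difference test accepts:", difference_test(q1, q2))
```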
Abstract:
Lagrangian particle tracking provides an effective method for simulating the deposition of nano-particles as well as micro-particles, as it accounts for the particle inertia effect as well as Brownian excitation. However, use of the Lagrangian approach for simulating ultrafine particles has been limited by computational cost and numerical difficulties. The aim of this paper is to study the deposition of nano-particles in cylindrical tubes under laminar conditions using the Lagrangian particle tracking method. The commercial Fluent software is used to simulate the fluid flow in the pipes and to study the deposition and dispersion of nano-particles. Different particle diameters as well as different pipe lengths and flow rates are examined. The results show good agreement between the calculated deposition efficiency and different analytic correlations in the literature. Furthermore, for nano-particles with larger diameters, where the effect of inertia is more important, the deposition efficiency calculated by the Lagrangian method is lower than the analytic correlations based on the Eulerian method, due to statistical error or the inertia effect.
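As a rough illustration of the stochastic side of such simulations, the sketch below tracks nano-particles through laminar pipe flow in the overdamped limit: each step convects the particle with the local Poiseuille velocity and adds a Brownian displacement with Stokes-Einstein diffusivity. This is only a toy stand-in for the paper's Fluent-based simulations; particle inertia, the slip correction and the radial-drift correction of the random walk are omitted, and every parameter value is hypothetical.

```python
import numpy as np

kB, T, mu = 1.38e-23, 293.0, 1.8e-5      # Boltzmann const, temperature, air viscosity (SI)
dp, R, L, U = 50e-9, 0.5e-3, 0.5, 0.05   # particle dia., tube radius, length, mean velocity
D = kB * T / (3 * np.pi * mu * dp)       # Stokes-Einstein diffusivity (no slip correction)

rng = np.random.default_rng(0)
n, dt = 2000, 1e-3
x = np.zeros(n)                          # axial positions
r = R * np.sqrt(rng.random(n))           # radial positions, uniform over the cross-section
deposited = np.zeros(n, dtype=bool)

while np.any(act := ~deposited & (x < L)):
    m = act.sum()
    u = 2 * U * (1 - (r[act] / R) ** 2)  # parabolic (Poiseuille) axial velocity
    x[act] += u * dt + np.sqrt(2 * D * dt) * rng.standard_normal(m)
    r[act] = np.abs(r[act] + np.sqrt(2 * D * dt) * rng.standard_normal(m))
    deposited[act] = r[act] >= R         # reaching the wall counts as deposition

print(f"deposition efficiency ~ {deposited.mean():.3f}")
```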
Abstract:
Management of the commercial harvest of kangaroos relies on quotas set annually as a proportion of regular estimates of population size. Surveys to generate these estimates are expensive and, in the larger states, logistically difficult; a cheaper alternative is desirable. Rainfall is a disappointingly poor predictor of kangaroo rate of increase in many areas, but harvest statistics (sex ratio, carcass weight, skin size and animals shot per unit time) potentially offer cost-effective indirect monitoring of population abundance (and therefore trend) and status (i.e. under- or overharvest). Furthermore, because harvest data are collected continuously and throughout the harvested areas, they offer the promise of more intensive and more representative coverage of harvest areas than aerial surveys do. To be useful, harvest statistics would need to have a close and known relationship with either population size or harvest rate. We assessed this using long-term (11-22 years) data for three kangaroo species (Macropus rufus, M. giganteus and M. fuliginosus) and common wallaroos (M. robustus) across South Australia, New South Wales and Queensland. Regional variation in kangaroo body size, population composition, shooter efficiency and selectivity required separate analyses in different regions. Two approaches were taken. First, monthly harvest statistics were modelled as a function of a number of explanatory variables, including kangaroo density, harvest rate and rainfall. Second, density and harvest rate were modelled as a function of harvest statistics. Both approaches incorporated a correlated error structure. Many but not all regions had relationships with sufficient precision to be useful for indirect monitoring. However, there was no single relationship that could be applied across an entire state or across species. Combined with rainfall-driven population models and applied at a regional level, these relationships could be used to reduce the frequency of aerial surveys without compromising decisions about harvest management.
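The second modelling approach, with a correlated error structure, might be sketched as follows: density is regressed on harvest statistics while the residuals follow an AR(1) process. The variable names and synthetic data are placeholders; the paper fits region- and species-specific models to real monitoring data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120                                   # e.g. 10 years of monthly records
carcass_wt = rng.normal(22, 2, n)         # mean carcass weight (kg), synthetic
sex_ratio = rng.normal(0.6, 0.05, n)      # proportion male in harvest, synthetic
e = np.zeros(n)                           # AR(1) errors
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(0, 1)
density = 5 + 0.4 * carcass_wt - 6 * sex_ratio + e

X = sm.add_constant(np.column_stack([carcass_wt, sex_ratio]))
model = sm.GLSAR(density, X, rho=1)       # rho=1 -> AR(1) error structure
results = model.iterative_fit(maxiter=6)  # alternate OLS fit and rho update
print(results.params, "estimated rho:", model.rho)
```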
Abstract:
In this paper, we present the design and characterization of a vibratory yaw rate MEMS sensor that uses in-plane motion for both actuation and sensing. The design criteria for the rate sensor are high sensitivity and low bandwidth. The required sensitivity of the yaw rate sensor is attained by using in-plane motion, in which the dominant damping mechanism is the fluid loss due to slide-film damping, two to three orders of magnitude less than the squeeze-film damping in other rate sensors with out-of-plane motion. The low bandwidth is achieved by matching the drive and sense mode frequencies. Based on these factors, the yaw rate sensor is designed and finally realized using surface micromachining. The in-plane motion of the sensor is experimentally characterized to determine the sense and drive mode frequencies and the corresponding damping ratios. The experimental results match the numerical and analytical models well, with less than 5% error in the frequency measurements. The measured quality factor of the sensor is approximately 467, two orders of magnitude higher than that of a similar rate sensor with an out-of-plane sense direction.
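Two of the relations used in such characterizations are simple enough to state as a sketch: a lightly damped mode's damping ratio follows from its quality factor, and mode matching is quantified by the drive-sense frequency mismatch. Only the quality factor of 467 is taken from the abstract; the frequencies below are hypothetical.

```python
Q = 467.0                                  # measured quality factor from the abstract
zeta = 1.0 / (2.0 * Q)                     # damping ratio of a lightly damped mode
print(f"damping ratio ~ {zeta:.2e}")       # ~1.07e-03

f_drive, f_sense = 10_000.0, 10_050.0      # hypothetical mode frequencies (Hz)
mismatch = abs(f_sense - f_drive) / f_drive
print(f"mode mismatch {mismatch:.2%} (mode matching drives this toward zero)")
```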
Abstract:
The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate, measured in constellation symbols per channel use, and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let n_t denote the number of transmit antennas and T the block interval. For any n_t <= T, a unified construction of (n_t x T) ST codes is provided here for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and 2^K-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size S^K. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
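The transmit diversity referred to above is governed by the classical rank criterion: it equals the minimum rank of the difference of two distinct codeword matrices. The sketch below checks this for the 2 x 2 Alamouti code over BPSK, a standard full-diversity example, not the unified construction of the paper.

```python
import itertools
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti codeword matrix for two complex symbols."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

codebook = [alamouti(s1, s2) for s1 in (-1.0, 1.0) for s2 in (-1.0, 1.0)]
div = min(np.linalg.matrix_rank(A - B)
          for A, B in itertools.combinations(codebook, 2))
print("transmit diversity (min rank of differences):", div)   # -> 2
```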
Abstract:
Denoising of images in the compressed wavelet domain has potential application in transmission technologies such as mobile communication. In this paper, we present a new image denoising scheme based on restoration of the bit-planes of wavelet coefficients in the compressed domain. It exploits a fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels and the edge information associated with each band. The proposed scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising results in terms of error reduction when compared with the conventional unrestored scheme, and it can adapt to situations where the noise level in the image varies. The approach has implications for the restoration of images corrupted by noisy channels. In addition to being very flexible, the scheme retains all the features of the image, including edges, and is computationally efficient.
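Since the abstract does not spell out the restoration rule, the sketch below shows only the general mechanics of operating on bit-planes of wavelet coefficients: the detail bands are quantized, their least significant bit-planes (where fine-grained noise tends to live) are cleared, and the image is reconstructed. The use of pywt, the quantization step and the number of cleared planes are all assumptions.

```python
import numpy as np
import pywt

def zero_low_bitplanes(band, n_planes=2, step=4.0):
    """Quantize a detail band, clear its n_planes least significant
    bit-planes, and dequantize."""
    q = np.round(band / step).astype(np.int64)
    mask = ~((1 << n_planes) - 1)            # e.g. ...11111100 for n_planes=2
    return (np.sign(q) * (np.abs(q) & mask)) * step

img = np.random.rand(64, 64)                 # stand-in noisy image
coeffs = pywt.wavedec2(img, "db2", level=2)
restored = [coeffs[0]] + [tuple(zero_low_bitplanes(b) for b in lvl)
                          for lvl in coeffs[1:]]
denoised = pywt.waverec2(restored, "db2")
```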
Abstract:
Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. The technique becomes all the more attractive when we consider progressive transmission in a teleradiology system, where the transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of the bit-planes of wavelet coefficients. The proposed scheme exploits a fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels and the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The method adapts itself to the preference of the medical expert: a single parameter balances the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising results in terms of error reduction when compared with the unrestored case, and it can adapt to situations where the noise level in the image varies and to the changing requirements of medical experts. The approach has implications for the restoration of medical images in teleradiology systems and is computationally efficient.
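The single expert-facing parameter described above can be illustrated with a sketch in which one knob scales how aggressively each detail sub-band is smoothed. Soft thresholding with a per-sub-band noise estimate stands in for the paper's limited bit-plane restoration; pywt and all parameter values are assumptions.

```python
import numpy as np
import pywt

def denoise(img, lam=1.0, wavelet="db2", level=3):
    """Wavelet denoising with one expert knob: lam < 1 keeps more detail,
    lam > 1 smooths more aggressively."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]
    for bands in coeffs[1:]:
        # robust per-sub-band noise estimate (MAD), scaled by the knob
        out.append(tuple(
            pywt.threshold(b, lam * np.median(np.abs(b)) / 0.6745, "soft")
            for b in bands))
    return pywt.waverec2(out, wavelet)

noisy = np.random.rand(128, 128)
conservative = denoise(noisy, lam=0.5)   # preserve fine detail
aggressive = denoise(noisy, lam=2.0)     # stronger noise reduction
```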
Abstract:
A posteriori error estimation and adaptive refinement techniques for fracture analysis of 2-D/3-D crack problems represent the state of the art. The objective of the present paper is to propose a new a posteriori error estimator based on the strain energy release rate (SERR) or the stress intensity factor (SIF) in the crack tip region, and to use it together with the stress-based error estimator available in the literature for the region away from the crack tip. The proposed a posteriori error estimator is called the K-S error estimator. Further, an adaptive mesh refinement (h-) strategy that can be used with the K-S error estimator is proposed for fracture analysis of 2-D crack problems. The performance of the proposed error estimator and the h-adaptive refinement strategy is demonstrated using 4-noded, 8-noded and 9-noded plane stress finite elements. The proposed error estimator, together with the h-adaptive refinement strategy, will facilitate automation of the fracture analysis process and provide reliable solutions.
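The h-adaptive strategy such an estimator drives can be sketched in a few lines: elements whose error indicator exceeds a fixed fraction of the largest indicator are flagged for refinement. The indicator values below are placeholders; in the paper they would come from the K-S estimator (SERR/SIF based near the crack tip, stress based elsewhere).

```python
import numpy as np

def mark_for_refinement(eta, fraction=0.5):
    """Flag elements whose error indicator exceeds fraction * max(eta)."""
    return eta > fraction * eta.max()

eta = np.array([0.02, 0.15, 0.80, 0.33, 0.05])   # per-element indicators (toy)
print("refine elements:", np.nonzero(mark_for_refinement(eta))[0])
```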
Abstract:
The constructional details of an 18-bit binary inductive voltage divider (IVD) for a.c. bridge applications are described. A simplified construction with fewer windings, and interconnection of the windings through SPDT solid-state relays instead of DPDT relays, improves the reliability of the IVD. High accuracy for most precision measurements is achieved without D/A converters. Checks for self-consistency in voltage division show that the error is less than 2 counts in 2^18.
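As a quick check of the figure quoted above, an error of 2 counts in 2^18 corresponds to a relative voltage-division error of roughly 7.6 parts per million:

```python
counts = 2 ** 18                                  # 262144 steps for an 18-bit divider
print(f"relative error bound: {2 / counts:.2e}")  # ~7.63e-06, i.e. ~7.6 ppm
```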
Abstract:
The minimum distance of a linear block code is one of the important parameters that indicate the error performance of the code. When the code rate is less than 1/2, efficient algorithms based on the concept of information sets are available for finding the minimum distance. When the code rate is greater than 1/2, only one information set is available and efficiency suffers. In this paper, we investigate and propose a novel algorithm to find the minimum distance of linear block codes with code rate greater than 1/2. We propose to reverse the roles of the information set and the parity set to obtain, virtually, another information set and thereby improve efficiency. This method is 67.7 times faster than the minimum distance algorithm implemented in the MAGMA Computational Algebra System for an (80, 45) linear block code.
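As a baseline for the problem above, the minimum distance of a tiny code can be found by brute force, enumerating all nonzero messages and taking the minimum codeword weight; information-set methods exist precisely because this blows up for realistic code sizes. The [7,4] Hamming code below is a standard example, not taken from the paper.

```python
import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator matrix of the [7,4]
              [0, 1, 0, 0, 1, 0, 1],      # Hamming code in systematic form
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

d_min = min(int(((np.array(m) @ G) % 2).sum())
            for m in itertools.product((0, 1), repeat=G.shape[0])
            if any(m))
print("minimum distance:", d_min)          # -> 3
```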
Abstract:
For an n_t transmit, n_r receive antenna system (n_t x n_r system), a full-rate space-time block code (STBC) transmits min(n_t, n_r) complex symbols per channel use. In this paper, a scheme to obtain a full-rate STBC for 4 transmit antennas and any n_r, with reduced ML-decoding complexity, is presented. The weight matrices of the proposed STBC are obtained from the unitary matrix representations of a Clifford algebra. By puncturing the symbols of the STBC, full-rate designs can be obtained for n_r < 4. For any value of n_r, the proposed design offers the lowest ML-decoding complexity among known codes. The proposed design is comparable in error performance to the well-known Perfect code for 4 transmit antennas while offering lower ML-decoding complexity. Further, when n_r < 4, the proposed design has higher ergodic capacity than the punctured Perfect code. Simulation results corroborating these claims are presented.
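The ergodic capacity comparison mentioned above rests on the standard Monte Carlo estimate E[log2 det(I + (snr/nt) H H^H)] for an i.i.d. Rayleigh channel without transmitter CSI. The sketch below computes only the channel's own ergodic capacity; comparing specific codes would replace H with each code's equivalent channel, which is beyond this sketch.

```python
import numpy as np

def ergodic_capacity(nt, nr, snr, trials=20000, rng=np.random.default_rng(2)):
    """Monte Carlo estimate of E[log2 det(I + (snr/nt) H H^H)]."""
    cap = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        cap += np.log2(np.linalg.det(np.eye(nr) +
                                     (snr / nt) * H @ H.conj().T).real)
    return cap / trials

print(f"4x2 MIMO at 10 dB SNR: {ergodic_capacity(4, 2, 10.0):.2f} bits/use")
```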
Abstract:
Convolutional network-error correcting codes (CNECCs) are known to provide error-correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under an error model in which the edges of the network are assumed to be statistically independent binary symmetric channels, each with the same probability of error p_e (0 <= p_e < 0.5). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small p_e should be so that only single-edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes must possess different properties to achieve good performance in the low-p_e and high-p_e regimes: in the low-p_e regime, convolutional codes with good distance properties perform well, while in the high-p_e regime, convolutional codes with a good slope (the minimum normalized cycle weight) are seen to be good. We also derive a lower bound on the slope of any rate b/c convolutional code with a given degree.
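The small-p_e condition above can be made concrete by comparing the probability of exactly one edge error against that of two or more, for a network of independent BSC(p_e) edges. The edge count below is hypothetical.

```python
def edge_error_split(num_edges, pe):
    """Split the edge-error probability into 'exactly one' and 'two or more'."""
    p_one = num_edges * pe * (1 - pe) ** (num_edges - 1)
    p_multi = 1 - (1 - pe) ** num_edges - p_one
    return p_one, p_multi

for pe in (1e-4, 1e-2, 1e-1):
    p_one, p_multi = edge_error_split(10, pe)
    print(f"pe={pe:.0e}: P(exactly one)={p_one:.2e}, P(two or more)={p_multi:.2e}")
```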
Abstract:
We consider a time division duplex n_t x n_r multiple-input multiple-output (MIMO) system. Using channel state information (CSI) at the transmitter, a singular value decomposition (SVD) of the channel matrix is performed. This transforms the MIMO channel into parallel subchannels, but yields a low overall diversity order. Hence, we propose X-Codes, which achieve a higher diversity order by pairing the subchannels prior to SVD precoding. In particular, each pair of information symbols is encoded by a fixed 2 x 2 real rotation matrix. X-Codes can be decoded using n_r very low complexity two-dimensional real sphere decoders. Error probability analysis for X-Codes enables us to choose the optimal pairing and the optimal rotation angle for each pair. Finally, we show that the new scheme outperforms other low-complexity precoding schemes.
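A minimal sketch of the X-Codes idea, under the assumptions of a real Gaussian channel and BPSK symbols: SVD the channel, pair a strong subchannel with a weak one, rotate each information pair by a fixed 2 x 2 rotation, and precode with the right singular vectors. The pairing and the 30-degree angle are illustrative; the paper chooses both by minimizing error probability.

```python
import numpy as np

rng = np.random.default_rng(3)
nt = nr = 4
H = rng.standard_normal((nr, nt))           # toy real channel matrix
U, s, Vh = np.linalg.svd(H)                 # H = U @ diag(s) @ Vh

theta = np.deg2rad(30)                      # illustrative rotation angle
G = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

x = rng.choice([-1.0, 1.0], size=nt)        # information symbols (BPSK here)
pairs = [(0, 3), (1, 2)]                    # pair strongest subchannel with weakest
z = np.empty(nt)
for i, j in pairs:
    z[[i, j]] = G @ x[[i, j]]               # rotate each information pair
tx = Vh.T @ z                               # SVD precoding (real case)
rx = U.T @ (H @ tx)                         # noise-free receive: equals s * z
print(np.allclose(rx, s * z))               # -> True
```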