51 results for Classification error rate
Abstract:
A beamforming algorithm is introduced based on a general objective function that approximates the bit error rate for wireless systems with binary phase shift keying and quadrature phase shift keying modulation schemes. The proposed minimum approximate bit error rate (ABER) beamforming approach does not rely on a Gaussian assumption for the channel noise and is therefore also applicable when the channel noise is non-Gaussian. Simulation results show that the proposed minimum ABER solution improves on the standard minimum mean square error beamforming solution, achieving a lower system bit error rate.
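The minimum-ABER idea in the abstract above can be sketched numerically: replace the unavailable true BER with a kernel-smoothed estimate over a training block and descend its gradient. The sketch below assumes BPSK, a hypothetical kernel width `rho`, and simple finite-difference gradients; it illustrates the general approach, not the paper's algorithm.

```python
import numpy as np
from math import erfc, sqrt

def approx_ber(w, X, b, rho=0.3):
    """Kernel-smoothed bit error rate estimate for a BPSK beamformer.

    w   : complex weight vector, shape (M,)
    X   : complex array snapshots, shape (N, M)
    b   : transmitted bits in {-1, +1}, shape (N,)
    rho : kernel width (a hypothetical smoothing parameter)
    """
    y = np.real(X @ np.conj(w))          # real part of beamformer output
    margins = b * y                      # positive margin = correct decision
    # Q(x) = 0.5 * erfc(x / sqrt(2)); average the Gaussian-kernel tail mass
    return float(np.mean([0.5 * erfc(m / (rho * sqrt(2.0))) for m in margins]))

def minimize_aber(X, b, w0, lr=0.2, iters=50, rho=0.3):
    """Projected finite-difference gradient descent on the ABER surface
    (sketch only), keeping ||w|| = 1 to fix the scale ambiguity."""
    w = np.asarray(w0, dtype=complex)
    w = w / np.linalg.norm(w)
    eps = 1e-6
    for _ in range(iters):
        g = np.zeros_like(w)
        for k in range(len(w)):
            for d in (eps, 1j * eps):    # real and imaginary partials
                e = np.zeros(len(w), dtype=complex)
                e[k] = d
                diff = approx_ber(w + e, X, b, rho) - approx_ber(w - e, X, b, rho)
                g[k] += (diff / (2 * eps)) * (d / eps)
        w = w - lr * g
        w = w / np.linalg.norm(w)
    return w
```

Because the ABER surface flattens as the weight norm grows, the weight vector is renormalized after every step.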
Abstract:
One of the enablers for new consumer-electronics products to be accepted into the market is the availability of inexpensive, flexible, multi-standard chipsets and services. DVB-T, the principal standard for terrestrial broadcast of digital video in Europe, has been so successful that governments are reconsidering their targets for analogue television switch-off. As one further small step towards increasingly cost-effective chipsets, the OFDM deterministic equalizer has previously been presented together with its application to DVB-T. This paper discusses the test set-up of a DVB-T-compliant baseband simulation that includes the deterministic equalizer and the DVB-T standard propagation channels. This is followed by a presentation of the inner and outer bit error rate (BER) results obtained with various modulation levels, coding rates and propagation channels, in order to ascertain the actual performance of the deterministic equalizer(1).
Abstract:
The study examined: (a) the role of phonological, grammatical, and rapid automatized naming (RAN) skills in reading and spelling development; and (b) the component processes of early narrative writing skills. Fifty-seven Turkish-speaking children were followed from Grade 1 to Grade 2. RAN was the most powerful longitudinal predictor of reading speed, and its effect was evident even when previous reading skills were taken into account. Broadly, phonological and grammatical skills made reliable contributions to spelling performance, but their effects were completely mediated by previous spelling skills. Different aspects of narrative writing skill were related to different processing skills. While handwriting speed predicted writing fluency, spelling accuracy predicted spelling error rate. Vocabulary and working memory were the only reliable longitudinal predictors of the quality of composition content. The overall model, however, failed to explain any reliable variance in the structural quality of the compositions.
Abstract:
Little has so far been reported on the performance of near-far resistant CDMA detectors in the presence of synchronization errors. Starting from a general mathematical model of matched filters, this paper examines the effects of three classes of synchronization errors (time-delay errors, carrier phase errors, and carrier frequency errors) on the performance (bit error rate and near-far resistance) of an emerging type of near-far resistant coherent DS/SSMA detector, the linear decorrelating detector (LDD). For comparison, the corresponding results for the conventional detector are also presented. It is shown that the LDD maintains a considerable performance advantage over the conventional detector even when synchronization errors exist. Finally, several computer simulations are carried out to verify the theoretical conclusions.
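For readers unfamiliar with the detector being analysed, a minimal synchronous, perfectly synchronized linear decorrelating detector can be written in a few lines (a textbook sketch; the paper's contribution concerns how this degrades under time-delay, phase and frequency errors):

```python
import numpy as np

def decorrelating_detect(S, r):
    """One-shot linear decorrelating detector for synchronous CDMA,
    assuming perfect synchronization (textbook form, not the paper's
    sync-error analysis).

    S : chips-by-users signature matrix
    r : received chip vector
    Returns (LDD bit decisions, conventional matched-filter decisions).
    """
    y = S.T @ r                    # bank of matched-filter outputs
    R = S.T @ S                    # signature cross-correlation matrix
    return np.sign(np.linalg.solve(R, y)), np.sign(y)
```

In a noiseless two-user example with a 100:1 amplitude imbalance and correlated signatures, the decorrelator recovers the weak user's bit while the conventional matched-filter decision does not, which is the near-far resistance the abstract refers to.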
Abstract:
The existing dual-rate blind linear detectors, which operate in either the low-rate (LR) or the high-rate (HR) mode, are not strictly blind in the HR mode and lack theoretical analysis. This paper proposes subspace-based LR and HR blind linear detectors, i.e., blind decorrelating detectors (BDD) and blind MMSE detectors (BMMSED), for synchronous DS/CDMA systems. To detect an LR data bit in the HR mode, an effective weighting strategy is proposed. Theoretical analyses of the performance of the proposed detectors are carried out. It is proved that the bit error rate of the LR-BDD is superior to that of the HR-BDD, and that the near-far resistance of the LR blind linear detectors outperforms that of their HR counterparts. The extension to asynchronous systems is also described. Simulation results show that the adaptive dual-rate BMMSED outperforms the corresponding non-blind dual-rate decorrelators proposed by Saquib, Yates and Mandayam (see Wireless Personal Communications, vol. 9, p. 197-216, 1998).
Abstract:
In 1997, the UK implemented the world's first commercial digital terrestrial television system. Under the ETS 300 744 standard, the chosen modulation method, COFDM, is assumed to be multipath resilient. Previous work has shown that this is not necessarily the case. It has been shown that the local oscillator required for demodulation from intermediate frequency to baseband must be very accurate. This paper shows that under multipath conditions, standard methods for obtaining local oscillator phase lock may not be adequate. It then demonstrates a set of algorithms, designed for use with a simple local oscillator circuit, that correct for local oscillator phase offset and so maintain a low bit error rate in the presence of multipath.
Abstract:
Motivation: In order to enhance genome annotation, the fully automatic fold recognition method GenTHREADER has been improved and benchmarked. The previous version of GenTHREADER consisted of a simple neural network trained to combine sequence alignment score, length information and energy potentials derived from threading into a single score representing the relationship between two proteins, as designated by CATH. The improved version incorporates PSI-BLAST searches, jump-started with structural alignment profiles from FSSP, and now also makes use of PSIPRED-predicted secondary structure and bi-directional scoring to calculate the final alignment score. Pairwise potentials and solvation potentials are calculated from the given sequence alignment and are then used as inputs to a multi-layer, feed-forward neural network, along with the alignment score, alignment length and sequence length. The neural network has also been expanded to accommodate the secondary structure element alignment (SSEA) score as an extra input, and it is now trained to learn the FSSP Z-score as a measure of similarity between two proteins. Results: The improvements made to GenTHREADER increase the number of remote homologues that can be detected with a low error rate, implying higher score reliability, whilst also increasing the quality of the models produced. We find that up to five times as many true positives can be detected with a low error rate per query. The total MaxSub score is doubled at low false positive rates using the improved method.
Abstract:
Single-carrier frequency division multiple access (SC-FDMA) has emerged as a promising technique for high-data-rate uplink communications. Aimed at SC-FDMA applications, a cyclic prefixed version of the offset quadrature amplitude modulation based OFDM (OQAM-OFDM) is first proposed in this paper. We show that cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) can be realized within the framework of the standard OFDM system, and the perfect recovery condition in the ideal channel is derived. We then apply CP-OQAM-OFDM to SC-FDMA transmission in frequency-selective fading channels. A signal model and low-complexity joint widely linear minimum mean square error (WLMMSE) equalization using a priori information are developed. Compared with the existing DFTS-OFDM based SC-FDMA, the proposed SC-FDMA can significantly reduce the envelope fluctuation (EF) of the transmitted signal while maintaining bandwidth efficiency. The inherent structure of CP-OQAM-OFDM enables low-complexity joint equalization in the frequency domain to combat both multiple access interference and intersymbol interference. The joint WLMMSE equalization using a priori information guarantees optimal MMSE performance and supports a Turbo receiver for improved bit error rate (BER) performance. Simulation results confirm the effectiveness of the proposed SC-FDMA in terms of EF (including peak-to-average power ratio, instantaneous-to-average power ratio and cubic metric) and BER performance.
Abstract:
This paper presents an adaptive frame length mechanism based on a cross-layer analysis of the intrinsic relations between the MAC frame length, the bit error rate (BER) of the wireless link and the normalized goodput. The proposed mechanism selects the optimal frame length that keeps the service's normalized goodput at the required level while satisfying the lowest requirement on the BER, thus increasing transmission reliability. Numerical results are provided and show that an optimal frame length satisfying the lowest BER requirement does indeed exist. The behaviour of the BER requirement as a function of the MAC frame length is evaluated and compared for transmission scenarios with and without automatic repeat request (ARQ). Furthermore, issues related to the MAC overhead length are also discussed to illuminate the functionality and performance of the proposed mechanism.
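The existence of an optimal frame length is easy to reproduce in a setting simpler than the paper's: with independent bit errors and no ARQ, the normalized goodput has a closed-form maximizer. The sketch below works under those assumptions only and is not the proposed mechanism.

```python
import numpy as np

def goodput(L, H, ber):
    """Normalized goodput for payload L and overhead H bits: payload
    fraction times the frame success probability, assuming independent
    bit errors and no ARQ."""
    return (L / (L + H)) * (1.0 - ber) ** (L + H)

def optimal_payload(H, ber):
    """Maximizer of goodput(L): setting d(ln goodput)/dL = 0 gives
    1/L - 1/(L+H) + ln(1-ber) = 0, a quadratic in L."""
    a = -np.log(1.0 - ber)
    return (-H + np.sqrt(H * H + 4.0 * H / a)) / 2.0
```

For example, with H = 64 overhead bits and BER = 1e-4 the optimum payload comes out near 769 bits; raising the BER pushes the optimum down, which is the frame-length adaptation the abstract describes.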
Abstract:
This paper introduces a new adaptive nonlinear equalizer relying on a radial basis function (RBF) model, designed on the minimum bit error rate (MBER) criterion, in a system setting of an intersymbol interference channel plus co-channel interference. The proposed algorithm is referred to as the on-line mixture of Gaussians estimator aided MBER (OMG-MBER) equalizer. Specifically, a mixture of Gaussians based probability density function (PDF) estimator is used to model the PDF of the decision variable, for which a novel on-line PDF update algorithm is derived to track the incoming data. With the aid of this on-line, sample-by-sample updated PDF estimator, the adaptive nonlinear equalizer is capable of updating its parameters sample by sample so as to directly minimize the RBF nonlinear equalizer's achievable bit error rate (BER). The proposed OMG-MBER equalizer significantly outperforms the existing on-line nonlinear MBER equalizer, known as the least bit error rate equalizer, in terms of both convergence speed and achievable BER, as confirmed in our simulation study.
Abstract:
Whole-genome sequencing (WGS) could potentially provide a single platform for extracting all the information required to predict an organism’s phenotype. However, its ability to provide accurate predictions has not yet been demonstrated in large independent studies of specific organisms. In this study, we aimed to develop a genotypic prediction method for antimicrobial susceptibilities. The whole genomes of 501 unrelated Staphylococcus aureus isolates were sequenced, and the assembled genomes were interrogated using BLASTn for a panel of known resistance determinants (chromosomal mutations and genes carried on plasmids). Results were compared with phenotypic susceptibility testing for 12 commonly used antimicrobial agents (penicillin, methicillin, erythromycin, clindamycin, tetracycline, ciprofloxacin, vancomycin, trimethoprim, gentamicin, fusidic acid, rifampin, and mupirocin) performed by the routine clinical laboratory. We investigated discrepancies by repeat susceptibility testing and manual inspection of the sequences and used this information to optimize the resistance determinant panel and BLASTn algorithm. We then tested performance of the optimized tool in an independent validation set of 491 unrelated isolates, with phenotypic results obtained in duplicate by automated broth dilution (BD Phoenix) and disc diffusion. In the validation set, the overall sensitivity and specificity of the genomic prediction method were 0.97 (95% confidence interval [95% CI], 0.95 to 0.98) and 0.99 (95% CI, 0.99 to 1), respectively, compared to standard susceptibility testing methods. The very major error rate was 0.5%, and the major error rate was 0.7%. WGS was as sensitive and specific as routine antimicrobial susceptibility testing methods. WGS is a promising alternative to culture methods for resistance prediction in S. aureus and ultimately other major bacterial pathogens.
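The performance measures quoted above can be computed from paired genotypic and phenotypic calls. A minimal sketch follows, using one common error-rate convention (the study may normalize the very major and major error rates differently):

```python
def agreement_stats(genotype, phenotype):
    """Compare genotypic resistance predictions with phenotypic results.

    Inputs are parallel lists of 'R' (resistant) / 'S' (susceptible)
    calls; resistance is treated as the positive class. A very major
    error is a false susceptible call, a major error a false resistant
    call (denominator conventions vary; simple totals are used here).
    """
    pairs = list(zip(genotype, phenotype))
    tp = sum(1 for g, p in pairs if g == 'R' and p == 'R')
    tn = sum(1 for g, p in pairs if g == 'S' and p == 'S')
    fs = sum(1 for g, p in pairs if g == 'S' and p == 'R')  # very major
    fr = sum(1 for g, p in pairs if g == 'R' and p == 'S')  # major
    n = len(pairs)
    return {
        'sensitivity': tp / (tp + fs) if tp + fs else None,
        'specificity': tn / (tn + fr) if tn + fr else None,
        'very_major_error_rate': fs / n,
        'major_error_rate': fr / n,
    }
```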
Abstract:
Cognitive functions such as attention and memory are known to be impaired in End Stage Renal Disease (ESRD), but the sites of the neural changes underlying these impairments are uncertain. Patients and controls took part in a latent learning task, which had previously shown a dissociation between patients with Parkinson’s disease and those with medial temporal damage. ESRD patients (n=24) and age and education-matched controls (n=24) were randomly assigned to either an exposed or unexposed condition. In Phase 1 of the task, participants learned that a cue (word) on the back of a schematic head predicted that the subsequently seen face would be smiling. For the exposed (but not unexposed) condition, an additional (irrelevant) colour cue was shown during presentation. In Phase 2, a different association, between colour and facial expression, was learned. Instructions were the same for each phase: participants had to predict whether the subsequently viewed face was going to be happy or sad. No difference in error rate between the groups was found in Phase 1, suggesting that patients and controls learned at a similar rate. However, in Phase 2, a significant interaction was found between group and condition, with exposed controls performing significantly worse than unexposed (therefore demonstrating learned irrelevance). In contrast, exposed patients made a similar number of errors to unexposed in Phase 2. The pattern of results in ESRD was different from that previously found in Parkinson’s disease, suggesting a different neural origin.
Abstract:
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
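The type I error rate inflation described above is easy to demonstrate by simulation. The toy model below (not the paper's conditional-error method) selects the better-looking of two null experimental arms at an interim look, pools both stages, and applies an unadjusted one-sided z-test against control:

```python
import numpy as np

def naive_rejection_rate(n=100, sims=20000, z_crit=1.6449, seed=1):
    """Monte Carlo sketch of selection-induced inflation. All arms are
    null, so any excess over the nominal 5% level is type I error
    inflation caused by unadjusted interim selection. n is the
    per-stage, per-arm sample size (hypothetical defaults)."""
    rng = np.random.default_rng(seed)
    t1 = rng.standard_normal((sims, 2)) / np.sqrt(n)   # stage-1 arm means
    t2 = rng.standard_normal((sims, 2)) / np.sqrt(n)   # stage-2 arm means
    c1 = rng.standard_normal(sims) / np.sqrt(n)        # control, stage 1
    c2 = rng.standard_normal(sims) / np.sqrt(n)        # control, stage 2
    pick = np.argmax(t1, axis=1)                       # interim selection
    rows = np.arange(sims)
    t_pooled = (t1[rows, pick] + t2[rows, pick]) / 2.0
    c_pooled = (c1 + c2) / 2.0
    z = (t_pooled - c_pooled) * np.sqrt(n)             # naive z statistic
    return float(np.mean(z > z_crit))
```

With these defaults the empirical rejection rate lands near 8% rather than the nominal 5%, which is the inflation that the combination-test and conditional-error approaches are designed to remove.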
Abstract:
The present study investigates the growth of error in baroclinic waves. It is found that stable or neutral waves are particularly sensitive to errors in the initial condition. Short stable waves are mainly sensitive to phase errors, and the ultra-long waves to amplitude errors. Analysis simulation experiments have indicated that the amplitudes of the very long waves usually become too small in the free atmosphere, due to the sparse and very irregular distribution of upper-air observations. This also applies to the four-dimensional data assimilation experiments, since the amplitudes of the very long waves are usually underpredicted. The numerical experiments reported here show that if the very long waves carry such amplitude errors in the upper troposphere or lower stratosphere, the error is rapidly propagated (within a day or two) to the surface and the lower troposphere.
Abstract:
A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0=200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution. The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m) profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data, and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.
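Parts of the retrieval chain above lend themselves to a short sketch: the two-wavelength colour ratio itself, and the moments of the assumed gamma drop-size spectrum from which LWC follows once D0 and the dispersion μ are fixed. The values of N0 and the integration limits below are hypothetical, and the colour-ratio-to-D0 mapping (which needs scattering calculations) is not reproduced.

```python
import numpy as np

def colour_ratio_db(beta_905, beta_1500):
    """Two-wavelength colour ratio in dB from backscatter at 905 nm
    and 1.5 um (the quantity the retrieval maps to D0)."""
    return 10.0 * np.log10(beta_905 / beta_1500)

def gamma_dsd(D, D0, mu, N0=1.0):
    """Gamma drop-size spectrum n(D) = N0 D^mu exp(-(3.67+mu) D / D0),
    D in metres; the 3.67+mu factor makes D0 (approximately) the
    median volume diameter."""
    return N0 * D**mu * np.exp(-(3.67 + mu) * D / D0)

def liquid_water_content(D0, mu, N0, Dmax=2e-3, npts=40000):
    """LWC = (pi rho_w / 6) * third moment of the spectrum, kg m^-3,
    via a simple Riemann sum (rho_w = 1000 kg m^-3)."""
    D = np.linspace(1e-7, Dmax, npts)
    m3 = np.sum(D**3 * gamma_dsd(D, D0, mu, N0)) * (D[1] - D[0])
    return np.pi * 1000.0 / 6.0 * m3
```

As a sanity check on the parameterization, half of the liquid water in such a spectrum sits below D ≈ D0, and a backscatter ratio of 4 between the two wavelengths corresponds to a colour ratio of about 6 dB, matching the magnitude quoted in the abstract.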