819 results for Classification error rate


Relevance:

80.00%

Publisher:

Abstract:

The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs of around 3(Formula presented.) the target and achieve the target with around 0.2 dB of extra signal-to-noise ratio.
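As a rough illustration of the code-selection step: assuming ideal interleaving, so that phase-noise-induced error bursts are broken up and channel errors can be treated as i.i.d., the post-FEC BER of a t-error-correcting binary BCH code follows from a binomial tail. The sketch below uses illustrative parameters (standard length-1023 BCH codes, a pre-FEC BER of 5e-3 as a stand-in for the paper's operating point); the paper's model-based selection is more refined.

```python
from math import exp, lgamma, log

def log_comb(n: int, i: int) -> float:
    return lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)

def post_fec_ber(n: int, t: int, p: float) -> float:
    """Estimated output BER of a t-error-correcting BCH code of length n
    at input (pre-FEC) BER p: a block fails when more than t bits are in
    error, and a failed block keeps roughly its i channel errors."""
    total = 0.0
    for i in range(t + 1, n + 1):
        log_term = log_comb(n, i) + i * log(p) + (n - i) * log(1 - p)
        total += (i / n) * exp(log_term)
    return total

# Scan a few standard length-1023 binary BCH codes at a pre-FEC BER of 5e-3.
for n, k, t in [(1023, 943, 8), (1023, 903, 12), (1023, 863, 16)]:
    print(f"BCH({n},{k}), t={t}: post-FEC BER ~ {post_fec_ber(n, t, 5e-3):.1e}")
```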

Relevance:

80.00%

Publisher:

Abstract:

In this paper we propose the design of communication systems based on the periodic nonlinear Fourier transform (PNFT), following the introduction of the method in Part I. We show that the famous "eigenvalue communication" idea [A. Hasegawa and T. Nyu, J. Lightwave Technol. 11, 395 (1993)] can also be generalized to the PNFT: in this case, the main spectrum attributed to the PNFT signal decomposition remains constant during propagation down the optical fiber link. Therefore, the main PNFT spectrum can be encoded with data in the same way as soliton eigenvalues in the original proposal. The results are presented in terms of bit-error rate (BER) values for different modulation techniques and different constellation sizes vs. the propagation distance, showing the good potential of the technique.
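The encoding and decision layer can be sketched independently of the PNFT machinery itself. In the toy below (not the paper's link simulation), a plain QPSK alphabet stands in for the invariant main-spectrum points, an additive Gaussian perturbation stands in for accumulated transmission distortion, and the BER is counted after minimum-distance detection.

```python
import numpy as np

rng = np.random.default_rng(1)
alphabet = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # QPSK points
gray = np.array([0, 1, 3, 2])       # Gray labels: neighbours differ in one bit
n_sym, sigma = 100_000, 0.25

tx = rng.integers(4, size=n_sym)
noise = sigma * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
rx = alphabet[tx] + noise
det = np.abs(rx[:, None] - alphabet[None, :]).argmin(axis=1)  # min distance

diff = np.bitwise_xor(gray[tx], gray[det]).astype(np.uint8)
ber = np.unpackbits(diff).sum() / (2 * n_sym)
print(f"BER ~ {ber:.2e}")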

Relevance:

80.00%

Publisher:

Abstract:

We quantify the error statistics and patterning effects in a 5×40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even for large BER (BER > 10⁻¹). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
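The first step, quantifying patterning effects, amounts to estimating the BER conditioned on the preceding bit pattern. A minimal sketch with synthetic errors (in the paper, the statistics come from the WDM RZ-DBPSK link itself):

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(7)
tx = rng.integers(2, size=200_000)
# Toy channel: errors are more likely after the pattern (1, 1), standing in
# for nonlinear inter-symbol interaction in the real link.
p_err = np.where((np.roll(tx, 1) == 1) & (np.roll(tx, 2) == 1), 0.05, 0.005)
errors = rng.random(tx.size) < p_err

hits, totals = Counter(), Counter()
for k in range(2, tx.size):
    pattern = (tx[k - 2], tx[k - 1])
    totals[pattern] += 1
    hits[pattern] += int(errors[k])

for pattern in sorted(totals):
    print(pattern, f"conditional BER = {hits[pattern] / totals[pattern]:.4f}")
```

An adaptive constrained code would then map data onto sequences that avoid the patterns with the worst conditional BER.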

Relevance:

80.00%

Publisher:

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
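The pairwise geometry in question is directly computable. The sketch below (random bases for illustration) obtains principal angles with scipy.linalg.subspace_angles and evaluates the two quantities the dissertation identifies: the product of sines, governing misclassification under vanishing mismatch, and the sum of squared sines, governing it under significant mismatch.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.normal(size=(50, 5)))[0]  # orthonormal basis, class 1
B = np.linalg.qr(rng.normal(size=(50, 5)))[0]  # orthonormal basis, class 2

theta = subspace_angles(A, B)  # principal angles, in radians
print("product of sines:", np.prod(np.sin(theta)))    # vanishing-mismatch regime
print("sum of sin^2    :", np.sum(np.sin(theta)**2))  # significant-mismatch regime
```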

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two approaches to preventing overfitting are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold (sketched below). In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
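A minimal sketch of the first approach: a graph-Laplacian smoothness penalty over a k-NN neighborhood graph, sum_ij A_ij ||f(x_i) - f(x_j)||^2, which would be added to the training loss to encourage decisions that vary smoothly across the manifold (random data and features stand in for real inputs and network outputs).

```python
import numpy as np

def laplacian_penalty(F: np.ndarray, A: np.ndarray) -> float:
    """F: (n, d) per-point decisions/features; A: (n, n) symmetric adjacency.
    Returns trace(F^T L F) = 1/2 * sum_ij A_ij ||F_i - F_j||^2."""
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    return float(np.trace(F.T @ L @ F))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))              # data points (manifold samples)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
knn = np.argsort(d2, axis=1)[:, 1:6]        # 5 nearest neighbours (excl. self)
A = np.zeros((100, 100))
for i, nbrs in enumerate(knn):
    A[i, nbrs] = A[nbrs, i] = 1.0           # symmetrized k-NN graph

F = rng.normal(size=(100, 3))               # stand-in for network outputs
print("smoothness penalty:", laplacian_penalty(F, A))
```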

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods with affine subspaces, a slowly varying manifold can be tracked efficiently as well, even with corrupted and noisy data. The more local neighborhoods used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
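A single-scale sketch of the tracking-plus-detection idea (the multiscale tree and its split/merge logic are not reproduced here): an Oja-style stochastic update tracks the subspace, and the residual energy of each datum serves as the anomaly statistic, spiking at an abrupt change.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, eta = 20, 3, 0.05

def track(U, x, eta):
    """One Oja-style stochastic update of the basis U; returns the updated
    basis and the residual energy of x (the anomaly statistic)."""
    proj = U.T @ x
    resid = x - U @ proj
    U, _ = np.linalg.qr(U + eta * np.outer(resid, proj))
    return U, float(resid @ resid)

U = np.linalg.qr(rng.normal(size=(d, r)))[0]      # tracked basis
true = np.linalg.qr(rng.normal(size=(d, r)))[0]   # underlying truth
scores = []
for t in range(400):
    if t == 250:                                  # abrupt manifold change
        true = np.linalg.qr(rng.normal(size=(d, r)))[0]
    x = true @ rng.normal(size=r) + 0.05 * rng.normal(size=d)
    U, s = track(U, x, eta)
    scores.append(s)

print("mean residual 200-249:", np.mean(scores[200:250]))
print("mean residual 250-299:", np.mean(scores[250:300]))
```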

Relevance:

80.00%

Publisher:

Abstract:

The necessity of elemental analysis techniques to solve forensic problems continues to expand as the samples collected from crime scenes grow in complexity. Laser ablation ICP-MS (LA-ICP-MS) has been shown to provide a high degree of discrimination between samples that originate from different sources. In the first part of this research, two laser ablation ICP-MS systems were compared, one using a nanosecond laser and the other a femtosecond laser source, for the forensic analysis of glass. The results showed that femtosecond LA-ICP-MS did not provide significant improvements in terms of accuracy, precision and discrimination; however, femtosecond LA-ICP-MS did provide lower detection limits. In addition, it was determined that even for femtosecond LA-ICP-MS an internal standard should be utilized to obtain accurate analytical results for glass analyses. In the second part, a method using laser induced breakdown spectroscopy (LIBS) for the forensic analysis of glass was shown to provide excellent discrimination for a glass set consisting of 41 automotive fragments. The discrimination power was compared to two of the leading elemental analysis techniques, µXRF and LA-ICP-MS, and the results were similar: all methods generated >99% discrimination, and the pairs found indistinguishable were similar. An extensive data analysis approach for LIBS glass analyses was developed to minimize Type I and II errors, en route to a recommendation of 10 ratios to be used for glass comparisons. Finally, a LA-ICP-MS method for the qualitative analysis and discrimination of gel ink sources was developed and tested for a set of ink samples. In the first discrimination study, qualitative analysis was used to obtain 95.6% discrimination for a blind study consisting of 45 black gel ink samples provided by the United States Secret Service. A 0.4% false exclusion (Type I) error rate and a 3.9% false inclusion (Type II) error rate were obtained for this discrimination study. In the second discrimination study, 99% discrimination power was achieved for a black gel ink pen set consisting of 24 self-collected samples. The two pairs found to be indistinguishable came from the same source of origin (the same manufacturer and type of pen, purchased in different locations). It was also found that gel ink from the same pen, regardless of age, was indistinguishable, as were gel ink pens (four pens) originating from the same pack.
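For the comparison step, a common match criterion in forensic glass work is interval overlap at the mean ± 4 standard deviations per element ratio; the sketch below is a hedged illustration with synthetic replicates, not the exact protocol or the ten recommended ratios from this study.

```python
import numpy as np

def indistinguishable(a: np.ndarray, b: np.ndarray, k: float = 4.0) -> bool:
    """a, b: replicate measurements of one element ratio for two fragments.
    Match if the mean +/- k*SD intervals overlap."""
    lo_a, hi_a = a.mean() - k * a.std(ddof=1), a.mean() + k * a.std(ddof=1)
    lo_b, hi_b = b.mean() - k * b.std(ddof=1), b.mean() + k * b.std(ddof=1)
    return bool(hi_a >= lo_b and hi_b >= lo_a)

rng = np.random.default_rng(5)
same = indistinguishable(rng.normal(1.000, 0.01, 5), rng.normal(1.005, 0.01, 5))
diff = indistinguishable(rng.normal(1.000, 0.01, 5), rng.normal(1.200, 0.01, 5))
print("same-source pair:", same, "| different-source pair:", diff)
```

Widening the ±k·SD interval trades false exclusions (Type I) against false inclusions (Type II), which is the balance the study's data analysis approach tunes.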

Relevance:

80.00%

Publisher:

Abstract:

Two direct sampling correlator-type receivers for differential chaos shift keying (DCSK) communication systems under frequency non-selective fading channels are proposed. These receivers operate on the same hardware platform with different architectures. In the first scheme, namely the sum-delay-sum (SDS) receiver, the sum of all samples in a chip period is correlated with its delayed version. The correlation value obtained in each bit period is then compared with a fixed threshold to decide the binary value of the recovered bit at the output. In the second scheme, namely the delay-sum-sum (DSS) receiver, the correlation value of all samples with their delayed versions is calculated in each chip period. The sum of the correlation values in each bit period is then compared with the threshold to recover the data. The conventional DCSK transmitter, the frequency non-selective Rayleigh fading channel, and the two proposed receivers are mathematically modelled in the discrete-time domain. The authors evaluate the bit error rate performance of the receivers by means of both theoretical analysis and numerical simulation. The performance comparison shows that the two proposed receivers perform well under the studied channel, where performance improves as the number of paths increases and the DSS receiver outperforms the SDS one.
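The difference between the two architectures is just the order of summation and correlation, as the flat-channel toy below shows (a Chebyshev map stands in for the chaotic generator; the multipath fading setting in which the paper finds DSS superior is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(2)
beta, S, n_bits, sigma = 64, 4, 2_000, 2.0  # chips/half-bit, samples/chip, bits, noise std

def chaotic_ref(x0: float, n: int) -> np.ndarray:
    """Chebyshev map x <- 1 - 2x^2 on [-1, 1] as the chaotic reference."""
    out, x = np.empty(n), x0
    for i in range(n):
        x = 1.0 - 2.0 * x * x
        out[i] = x
    return out

err_sds = err_dss = 0
for _ in range(n_bits):
    b = rng.choice([-1.0, 1.0])
    ref = chaotic_ref(rng.uniform(-0.9, 0.9), beta)
    # Reference half, then data half = b times the reference; S samples/chip.
    tx = np.concatenate([np.repeat(ref, S), np.repeat(b * ref, S)])
    r = (tx + sigma * rng.normal(size=tx.size)).reshape(2 * beta, S)

    y = r.sum(axis=1)                      # SDS: sum samples per chip, then correlate
    z_sds = y[beta:] @ y[:beta]
    z_dss = np.sum(r[beta:] * r[:beta])    # DSS: correlate per sample, then sum

    err_sds += int(np.sign(z_sds) != b)
    err_dss += int(np.sign(z_dss) != b)

print(f"BER: SDS ~ {err_sds / n_bits:.3f}, DSS ~ {err_dss / n_bits:.3f}")
```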

Relevance:

80.00%

Publisher:

Abstract:

Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims at replicating this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest works are evaluated: 56% in sound source localisation computational cost relative to an audio-only system, 8% in speaker diarisation error rate relative to an audio-only speaker recognition unit, and 36% on the precision–recall metric relative to an audio–video dominant speaker recognition method.
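Of the reported metrics, the diarisation error rate has a compact frame-based definition; a minimal sketch (fixed frame labels, no scoring collar, speakers already mapped between reference and hypothesis):

```python
import numpy as np

def diarisation_error_rate(ref: np.ndarray, hyp: np.ndarray) -> float:
    """ref/hyp: per-frame speaker ids, with 0 meaning silence."""
    scored = ref != 0
    miss = np.sum(scored & (hyp == 0))
    false_alarm = np.sum((ref == 0) & (hyp != 0))
    confusion = np.sum(scored & (hyp != 0) & (hyp != ref))
    return float((miss + false_alarm + confusion) / np.sum(scored))

ref = np.array([0, 1, 1, 1, 2, 2, 0, 0, 2, 2])
hyp = np.array([0, 1, 1, 2, 2, 2, 2, 0, 2, 0])
print(f"DER = {diarisation_error_rate(ref, hyp):.2f}")
```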

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Although most gastrointestinal stromal tumours (GIST) carry oncogenic mutations in KIT exons 9, 11, 13 and 17, or in platelet-derived growth factor receptor alpha (PDGFRA) exons 12, 14 and 18, around 10% of GIST are free of these mutations. Genotyping and accurate detection of KIT/PDGFRA mutations in GIST are becoming increasingly useful for clinicians in the management of the disease. METHOD: To evaluate and improve laboratory practice in GIST mutation detection, we developed a mutational screening quality control program. Eleven laboratories were enrolled in this program and 50 DNA samples were analysed, each of them by four different laboratories, giving 200 mutational reports. RESULTS: In total, eight mutations were not detected by at least one laboratory, and one false positive result was reported in one sample. Thus, the mean global rate of errors with clinical implications, based on the 200 reports, was 4.5%. Concerning the detection of specific polymorphisms, the rate varied from 0 to 100%, depending on the laboratory. The way mutations were reported was very heterogeneous, and some errors were detected. CONCLUSION: This study demonstrated that such a program is necessary for laboratories to improve the quality of the analysis, because an error rate of 4.5% may have clinical consequences for the patient.

Relevance:

80.00%

Publisher:

Abstract:

We study a multiuser multicarrier downlink communication system in which the base station (BS) employs a large number of antennas. Assuming frequency-division duplex operation, we provide a beam domain channel model as the number of BS antennas grows asymptotically large. With this model, we first derive a closed-form upper bound on the achievable ergodic sum-rate, before developing necessary conditions to asymptotically maximize the upper bound with only statistical channel state information at the BS. Inspired by these conditions, we propose a beam division multiple access (BDMA) transmission scheme, in which the BS communicates with users via different beams. For BDMA transmission, we design user scheduling to select users within non-overlapping beams, work out an optimal pilot design under a minimum mean square error criterion, and provide optimal pilot sequences by utilizing Zadoff-Chu sequences. The proposed BDMA scheme significantly reduces the pilot overhead as well as the processing complexity at the transceivers. Simulations demonstrate the high spectral efficiency of BDMA transmission and the bit error rate advantages of the proposed pilot sequences.
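The pilot construction rests on two properties of Zadoff-Chu sequences that are easy to verify numerically: constant amplitude and zero cyclic autocorrelation at all nonzero lags. A sketch with an arbitrary root (the paper's assignment of roots and shifts to users is not reproduced):

```python
import numpy as np

def zadoff_chu(u: int, N: int) -> np.ndarray:
    """Root-u Zadoff-Chu sequence of odd length N with gcd(u, N) = 1."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

zc = zadoff_chu(u=7, N=61)
acorr = np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc)))  # cyclic autocorrelation
print("constant amplitude:", np.allclose(np.abs(zc), 1.0))     # True
print("peak at lag 0     :", round(abs(acorr[0]), 6))          # = N = 61
print("max off-peak      :", float(np.max(np.abs(acorr[1:])))) # ~ 0
```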

Relevance:

80.00%

Publisher:

Abstract:

In Czech schools, two methods of teaching reading are used: the analytic-synthetic (conventional) method and the genetic method (created in the 1990s). They differ in their theoretical foundations and in their methodology. The aim of this paper is to describe these theoretical approaches and present the results of a study that examined differences in the development of initial reading skills between the two methods. A total of 452 first-grade children (age 6-8) were assessed with a battery of reading tests at the beginning and at the end of the first grade and at the beginning of the second grade; 350 pupils participated at all three time points. Based on the data analysis, the developmental dynamics of reading skills under both methods and the main differences in several aspects of reading ability (e.g., reading speed, reading technique, error rate in reading) are described. The main focus is on the development of reading comprehension. Results show that pupils instructed with the genetic approach scored significantly better on the reading comprehension tests used, especially in the first grade. Statistically significant differences also occurred between classes independently of the method; therefore, other factors such as the teacher's role and class composition are discussed.

Relevance:

80.00%

Publisher:

Abstract:

Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model or, in the case of non-normally distributed trait values, the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equidistant genotype scores, the Kruskal-Wallis test merely tests global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations where only a few trait values are available in a rare genotype category (imbalance), or where the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches. We provide a publicly available R library, nparcomp, that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.
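A toy parametric analogue of the maximum-test idea (the paper's Marcus-type contrasts on relative effects, implemented in the R package nparcomp, are rank-based and more refined): score the genotype under dominant, additive and recessive codings, take the largest standardized statistic, and calibrate it by permutation.

```python
import numpy as np

rng = np.random.default_rng(11)
g = rng.choice([0, 1, 2], size=300, p=[0.49, 0.42, 0.09])  # genotypes
y = 0.4 * (g >= 1) + rng.standard_normal(300)              # dominant effect

codings = {"dominant": (g >= 1).astype(float),
           "additive": g.astype(float),
           "recessive": (g == 2).astype(float)}

def max_stat(y: np.ndarray) -> float:
    """Largest |t| statistic over the three mode-specific codings."""
    stats = []
    for x in codings.values():
        r = np.corrcoef(x, y)[0, 1]
        stats.append(abs(r) * np.sqrt(len(y) - 2) / np.sqrt(1 - r * r))
    return max(stats)

obs = max_stat(y)
null = np.array([max_stat(rng.permutation(y)) for _ in range(2000)])
print(f"max statistic = {obs:.2f}, permutation p = {(null >= obs).mean():.4f}")
```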

Relevance:

80.00%

Publisher:

Abstract:

This thesis focuses on digital equalization of nonlinear fiber impairments for coherent optical transmission systems. Building from well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by the optimization of the performance-versus-complexity tradeoff, with a view to near-future practical application in commercial real-time transceivers. The work is initially focused on the mitigation of intra-channel nonlinear impairments relying on the concept of digital backpropagation (DBP) associated with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications are identified, culminating in the development of reduced-complexity nonlinear equalization algorithms formulated both in the time and frequency domains. The implementation complexity of the proposed techniques is analytically described in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is numerically and experimentally assessed through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide an enhanced signal reach while requiring only marginal added complexity.
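The starting point, digital backpropagation, can be sketched in a few lines: split-step inversion of the scalar NLSE with the dispersion and Kerr terms conjugated. This assumes an ideal flat power profile (loss ignored), uses illustrative fibre parameters, and does not reproduce the thesis's reduced-complexity Volterra variants.

```python
import numpy as np

def dbp(rx, fs, n_spans, span_km, steps_per_span,
        beta2=-21.7e-27, gamma=1.3e-3):
    """Split-step digital backpropagation of the scalar NLSE, assuming an
    ideal flat power profile. Convention: the forward fibre applies
    exp(-j*beta2/2*w^2*dz) per step in frequency and exp(+j*gamma*|A|^2*dz)
    in time; DBP applies the conjugates."""
    dz = span_km * 1e3 / steps_per_span
    w = 2 * np.pi * np.fft.fftfreq(rx.size, d=1 / fs)
    inv_disp = np.exp(1j * (beta2 / 2) * w ** 2 * dz)
    x = rx.astype(complex)
    for _ in range(n_spans * steps_per_span):
        x = np.fft.ifft(np.fft.fft(x) * inv_disp)       # inverse dispersion
        x *= np.exp(-1j * gamma * np.abs(x) ** 2 * dz)  # inverse Kerr phase
    return x

# Round trip: "transmit" with the forward model (negated parameters), then
# backpropagate and check recovery (exact up to split-step ordering error).
rng = np.random.default_rng(0)
fs = 50e9
tx = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) * np.sqrt(5e-4 / 2)
fwd = dbp(tx, fs, 5, 80, 100, beta2=21.7e-27, gamma=-1.3e-3)
rec = dbp(fwd, fs, 5, 80, 100)
print("max |rec - tx| :", np.max(np.abs(rec - tx)))
```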

Relevance:

80.00%

Publisher:

Abstract:

We consider an LTE network where a secondary user acts as a relay, transmitting data to the primary user using a decode-and-forward mechanism, transparently to the base station (eNodeB). Clearly, the relay can decode symbols more reliably if the employed precoder matrix indicators (PMIs) are known. However, for the closed loop spatial multiplexing (CLSM) transmit mode, this information is not always embedded in the downlink signal, leading to a need for effective methods to determine the PMI. In this thesis, we consider 2×2 and 4×4 MIMO downlink channels corresponding to CLSM and formulate two techniques to estimate the PMI at the relay using a hypothesis testing framework. We evaluate their performance via simulations for various ITU channel models over a range of SNRs and for different channel quality indicators (CQIs). We compare them to the case when the true PMI is known at the relay and show that the performance of the proposed schemes is within 2 dB at 10% block error rate (BLER) in almost all scenarios. Furthermore, the techniques add minimal computational overhead to the existing receiver structure. Finally, we also identify scenarios in which using the proposed precoder detection algorithms in conjunction with the cooperative decode-and-forward relaying mechanism benefits the PUE and improves its BLER performance. We conclude from this that the proposed algorithms, as well as the cooperative relaying mechanism at the CMR, can be gainfully employed in a variety of real-life scenarios in LTE networks.
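A minimal sketch of the hypothesis-testing idea, under stated assumptions: a hypothetical 2×2 codebook (not the 3GPP tables), the channel already estimated at the relay from reference signals, and known pilot symbols. Each PMI hypothesis re-encodes the pilots, and the smallest residual wins.

```python
import numpy as np

rng = np.random.default_rng(4)
codebook = [np.eye(2) / np.sqrt(2),              # hypothetical entries,
            np.array([[1, 1], [1, -1]]) / 2,     # not the 3GPP tables
            np.array([[1, 1], [1j, -1j]]) / 2]

H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
pilots = (rng.choice([-1.0, 1.0], size=(2, 64))
          + 1j * rng.choice([-1.0, 1.0], size=(2, 64))) / np.sqrt(2)
true_pmi = 2
Y = H @ codebook[true_pmi] @ pilots
Y = Y + 0.1 * (rng.normal(size=Y.shape) + 1j * rng.normal(size=Y.shape))

# One residual per hypothesis; the smallest wins.
residuals = [np.linalg.norm(Y - H @ P @ pilots) for P in codebook]
print("detected PMI:", int(np.argmin(residuals)))
print("residuals   :", [f"{r:.2f}" for r in residuals])
```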

Relevance:

80.00%

Publisher:

Abstract:

Human immunodeficiency virus (HIV) rapidly evolves through the generation and selection of mutants that can escape drug therapy. This process is fueled, in part, by the presumably highly error-prone polymerase reverse transcriptase (RT). The fidelity of polymerases can be influenced by cation co-factors. Physiologically, magnesium (Mg2+) is used as a co-factor by RT to perform catalysis; however, alternative cations including manganese (Mn2+), cobalt (Co2+), and zinc (Zn2+) can also be used. I demonstrate here that the fidelity and inhibition of HIV RT can be influenced differently, in vitro, by divalent cations depending on their concentration. The reported mutation frequency for purified HIV RT in vitro is typically in the 10⁻⁴ range (per nucleotide addition), making the enzyme several-fold less accurate than most polymerases. Paradoxically, results examining HIV replication in cells indicate an error frequency that is ~10 times lower than the error rate obtained in the test tube. Here, I reconcile, at least in part, these discrepancies by showing that HIV RT fidelity in vitro is in the same range as the cellular results at physiological concentrations of free Mg2+ (~0.25 mM). At low Mg2+, mutation rates were 5-10 times lower than under high Mg2+ conditions (5-10 mM). Alternative divalent cations also have a concentration-dependent effect on RT fidelity. The presumed promutagenic cations Mn2+ and Co2+ decrease the fidelity of RT only at elevated concentrations, and Zn2+, when present at low concentration, increases the fidelity of HIV-1 RT ~2.5-fold compared to Mg2+. HIV-1 and HIV-2 RT inhibition by nucleoside (NRTIs) and non-nucleoside RT inhibitors (NNRTIs) in vitro is also affected by the Mg2+ concentration. NRTIs lacking a 3'-OH group inhibited both enzymes less efficiently in low Mg2+ than in high Mg2+, whereas inhibition by the "translocation-defective RT inhibitor", which retains the 3'-OH, was unaffected by the Mg2+ concentration, suggesting that NRTIs with a 3'-OH group may be more potent than other NRTIs. In contrast, NNRTIs were more effective in low than in high Mg2+ conditions. Overall, the studies presented reveal strategies for designing novel RT inhibitors and strongly emphasize the need to study HIV RT and RT inhibitors under physiologically relevant low-Mg2+ conditions.

Relevance:

80.00%

Publisher:

Abstract:

The preparation and administration of medications is one of the most common and relevant functions of nurses, demanding great responsibility. Incorrect administration of medication currently constitutes a serious problem in health services and is considered one of the main adverse events suffered by hospitalized patients. Objectives: To identify the major errors in the preparation and administration of medication by nurses in hospitals, and to determine which factors lead to errors in the preparation and administration of medication. Methods: A systematic review of the literature. The inclusion criteria were: original, complete scientific papers, published from 2011 to May 2016 in the SciELO and LILACS databases, conducted in a hospital environment, addressing errors in the preparation and administration of medication by nurses, and written in Portuguese. Applying the inclusion criteria yielded a sample of 7 articles. Results: The main errors identified in the preparation and administration of medication were wrong dose (71.4%), wrong time (71.4%), inadequate dilution (57.2%), incorrect patient selection (42.8%) and wrong administration route (42.8%). The factors most commonly reported by the nursing staff as causes of error were lack of human resources (57.2%), inappropriate locations for the preparation of medication (57.2%), noise and poor lighting at the preparation site (57.2%), untrained professionals (42.8%), fatigue and stress (42.8%) and inattention (42.8%). Conclusions: The literature shows a high error rate in the preparation and administration of medication, for various reasons, making it important that preventive measures are implemented.