835 results for Error correction codes


Relevance: 30.00%

Abstract:

Target transformation factor analysis was used to correct spectral interference in inductively coupled plasma atomic emission spectrometry (ICP-AES) for the determination of rare earth impurities in high-purity thulium oxide. The data matrix was constructed from pure-component vectors, mixture vectors, and a background vector. A method based on an error-evaluation function was proposed to optimize the peak position, so that the influence of peak-position shifts in spectral scans on the determination was eliminated or reduced. Satisfactory results were obtained using factor analysis together with the proposed peak-position optimization method.
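
A minimal NumPy sketch of the approach described above: build a data matrix from pure-component and background vectors, fit the mixture scan by least squares, and choose the peak shift that minimizes an error-evaluation function. All spectra, line widths, and the grid search below are invented for illustration and are not the authors' data or method details.

```python
# Sketch only: synthetic Gaussian "spectra"; all parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
wav = np.linspace(-2.0, 2.0, 401)                        # wavelength grid (a.u.)
gauss = lambda mu, w: np.exp(-0.5 * ((wav - mu) / w) ** 2)

# "Measured" mixture scan, recorded with a small peak-position shift.
true_shift, true_conc = 0.04, np.array([1.0, 0.6])
shifted = np.column_stack([gauss(-0.3 + true_shift, 0.15),   # analyte line
                           gauss(0.25 + true_shift, 0.15)])  # interferent line
mixture = shifted @ true_conc + 0.05 + rng.normal(0, 0.002, wav.size)

def fit(shift):
    """Least-squares fit with the pure vectors displaced by `shift`;
    the fit residual plays the role of the error-evaluation function."""
    D = np.column_stack([gauss(-0.3 + shift, 0.15),
                         gauss(0.25 + shift, 0.15),
                         np.ones_like(wav)])             # data matrix incl. background
    coeff, res, *_ = np.linalg.lstsq(D, mixture, rcond=None)
    return (res[0] if res.size else np.inf), coeff

# Optimize the peak position by minimizing the error-evaluation function.
best = min(np.linspace(-0.1, 0.1, 201), key=lambda s: fit(s)[0])
print(f"estimated shift: {best:+.3f}, concentrations: {fit(best)[1][:2]}")
```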

Relevance: 30.00%

Abstract:

The present paper reports definite evidence for the significance of wavelength-positioning accuracy in multicomponent analysis techniques for the correction of line interferences in inductively coupled plasma atomic emission spectrometry (ICP-AES). With the scanning spectrometers commercially available today, a large relative error, Δ_A, may occur in the estimated analyte concentration owing to wavelength-positioning errors, unless the data-processing procedure can eliminate the problem of optical instability. The emphasis is on the effect of the positioning error (δλ) in a model scan, which is evaluated theoretically and determined experimentally. A quantitative relation between Δ_A and δλ, the peak distance, and the effective widths of the analysis and interfering lines is established under the assumption of Gaussian line profiles. The agreement between calculated and experimental Δ_A is also illustrated. The Δ_A originating from δλ is independent of the net analyte/interferent signal ratio; this contrasts with the situation for the positioning error (dλ) in a sample scan, where Δ_A decreases as the ratio increases. Compared with dλ, the effect of δλ is generally less significant.
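
A rough numeric illustration of the model-scan positioning error, assuming Gaussian line profiles as in the paper; the line positions, widths, concentrations, and shift values below are invented and do not reproduce the paper's measurements or its analytical relation.

```python
# Sketch: relative error in the analyte estimate vs. model-scan shift δλ.
import numpy as np

wav = np.linspace(-1.0, 1.0, 801)
line = lambda mu, w: np.exp(-0.5 * ((wav - mu) / w) ** 2)

analyte_mu, interf_mu, width = 0.0, 0.08, 0.05       # overlapping line pair
c_analyte, c_interf = 1.0, 5.0
sample = c_analyte * line(analyte_mu, width) + c_interf * line(interf_mu, width)

for dlam in (0.0, 0.005, 0.01, 0.02):                # model-scan shift (δλ)
    # MCA: least-squares fit of the sample scan with a *shifted* analyte model.
    D = np.column_stack([line(analyte_mu + dlam, width),
                         line(interf_mu, width)])
    est, *_ = np.linalg.lstsq(D, sample, rcond=None)
    rel_err = (est[0] - c_analyte) / c_analyte
    print(f"δλ = {dlam:.3f} -> relative error in analyte: {100 * rel_err:+.1f}%")
```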

Relevance: 30.00%

Abstract:

The present paper deals with the evaluation of the relative error (Δ_A) in estimated analyte concentrations originating from the wavelength-positioning error in a sample scan when multicomponent analysis (MCA) techniques are used to correct line interferences in inductively coupled plasma atomic emission spectrometry. In the theoretical part, a quantitative relation of Δ_A to the extent of line overlap, the bandwidth, and the magnitude of the positioning error is developed under the assumption of Gaussian line profiles. Measurements on eleven samples covering various typical line interferences showed that the calculated Δ_A generally agrees well with the experimental one. An expression for the true detection limit associated with MCA techniques was thus formulated. With MCA techniques, the determinations of the analyte and interferent concentrations depend on each other, whereas with conventional correction techniques, such as the three-point method, the estimate of the interfering signal is independent of the analyte signal. Therefore, a given positioning error results in a larger Δ_A, and hence a higher true detection limit, for MCA techniques than for conventional correction methods, although the latter can be a reasonable approximation of the former when the peak distance, expressed in units of the effective width of the interfering line, is larger than 0.4. In the light of the effect of wavelength-positioning errors, MCA techniques have no advantage over conventional correction methods unless they bring an essential reduction of the positioning error.

Relevance: 30.00%

Abstract:

Lee, M., Barnes, D. P., & Hardy, N. (1985). Research into error recovery for sensory robots. Sensor Review, 5(4), 194-197.

Relevance: 30.00%

Abstract:

Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation, which originates from the use of finite-precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important for scientific codes built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes of interest to the Computer Physics Communications readership. In doing so we hope to stimulate greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
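
CADNA itself instruments Fortran/C/C++ sources with discrete stochastic arithmetic; the plain-Python sketch below only illustrates the kind of round-off propagation such a screening tool is meant to flag, by comparing naive float32 accumulation with compensated (Kahan) summation against a float64 reference.

```python
# Round-off error propagation in finite-precision accumulation (illustrative).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1_000_000).astype(np.float32)

def naive_sum(values):
    s = np.float32(0.0)
    for v in values:
        s += v                              # each += rounds to float32
    return s

def kahan_sum(values):
    """Compensated summation: carry the low-order bits lost at each step."""
    s = c = np.float32(0.0)
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y                     # the rounding error just incurred
        s = t
    return s

ref = x.sum(dtype=np.float64)               # float64 'ground truth'
naive, kahan = naive_sum(x), kahan_sum(x)
print(f"naive float32 sum: {naive:.2f}  (error {float(naive) - ref:+.2f})")
print(f"Kahan float32 sum: {kahan:.2f}  (error {float(kahan) - ref:+.2f})")
```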

Relevance: 30.00%

Abstract:

This work investigates the end-to-end performance of randomized distributed space-time codes with complex Gaussian distribution when employed in a wireless relay network. The relaying nodes are assumed to adopt a decode-and-forward strategy, and transmissions are affected by both small- and large-scale fading phenomena. Extremely tight analytical approximations of the end-to-end symbol error probability and of the end-to-end outage probability are derived and successfully validated through Monte Carlo simulation. For the high signal-to-noise ratio regime, a simple closed-form expression for the symbol error probability is further provided.
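
As a stripped-down illustration of the Monte Carlo validation mentioned above, the sketch estimates the end-to-end symbol error probability of a single decode-and-forward relay with BPSK over Rayleigh fading. The randomized distributed space-time code, the large-scale fading, and the paper's analytical approximations are all omitted, and the parameters are invented.

```python
# Monte-Carlo sketch: end-to-end SEP of one decode-and-forward relay hop pair.
import numpy as np

rng = np.random.default_rng(1)
n, snr_db = 1_000_000, 10.0
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, n)
sym = 1 - 2 * bits                                    # BPSK: 0 -> +1, 1 -> -1

def rayleigh_bpsk(tx):
    """One Rayleigh-faded BPSK hop with coherent (matched-filter) detection."""
    h = (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size)) / np.sqrt(2)
    w = (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size)) / np.sqrt(2 * snr)
    y = h * tx + w
    return (np.real(np.conj(h) * y) < 0).astype(int)  # detected bits

relay_bits = rayleigh_bpsk(sym)                       # source -> relay hop
relay_sym = 1 - 2 * relay_bits                        # relay decodes and re-encodes
dest_bits = rayleigh_bpsk(relay_sym)                  # relay -> destination hop

print(f"end-to-end SEP at {snr_db} dB: {np.mean(dest_bits != bits):.4f}")
```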

Relevance: 30.00%

Abstract:

In this paper, we introduce a statistical data-correction framework that aims at improving DSP system performance in the presence of unreliable memories. The proposed signal processing framework implements best-effort error mitigation for signals corrupted by defects in unreliable storage arrays, using a statistical correction function extracted from the signal statistics, a data-corruption model, and an application-specific cost function. An application example from communication systems demonstrates the efficacy of the proposed approach.
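
A minimal sketch of the three ingredients named above, under invented assumptions: an 8-bit storage word, i.i.d. bit flips with probability p as the data-corruption model, a Gaussian prior as the signal statistics, and squared error as the cost function, for which the optimal correction function is the posterior mean. None of this reproduces the paper's actual framework.

```python
# Statistical correction table: map each read word to its posterior mean.
import numpy as np

p, levels = 0.01, np.arange(256)
prior = np.exp(-0.5 * ((levels - 128) / 30) ** 2)      # Gaussian signal statistics
prior /= prior.sum()

# P(read = r | stored = s) for independent flips of the 8 bits.
xor = levels[:, None] ^ levels[None, :]
nflips = np.unpackbits(xor.astype(np.uint8)[..., None], axis=-1).sum(-1)
lik = (p ** nflips) * ((1 - p) ** (8 - nflips))        # rows: stored, cols: read

post = lik * prior[:, None]                            # ∝ P(stored | read), per column
correct = (levels[:, None] * post).sum(0) / post.sum(0)  # posterior-mean table

# Usage: pass every word read from the unreliable array through the table.
rng = np.random.default_rng(2)
stored = rng.choice(levels, 10_000, p=prior)
flips = (rng.random((10_000, 8)) < p).dot(1 << np.arange(8))
read = stored ^ flips.astype(int)
print("raw MSE      :", np.mean((read - stored) ** 2))
print("corrected MSE:", np.mean((correct[read] - stored) ** 2))
```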

Relevance: 30.00%

Abstract:

PURPOSE: To evaluate visual acuity, visual function, and the prevalence of refractive error among Chinese secondary-school children in a cross-sectional school-based study. METHODS: Uncorrected, presenting, and best corrected visual acuity, cycloplegic autorefraction with refinement, and self-reported visual function were assessed in a random cluster sample of rural secondary-school students in Xichang, China. RESULTS: Among the 1892 subjects (97.3% of the consenting children, 84.7% of the total sample), mean age was 14.7 ± 0.8 years, 51.2% were female, and 26.4% were wearing glasses. The proportions of children with uncorrected, presenting, and corrected visual disability (≤ 6/12 in the better eye) were 41.2%, 19.3%, and 0.5%, respectively. Myopia < -0.5, < -2.0, and < -6.0 D in both eyes was present in 62.3%, 31.1%, and 1.9% of the subjects, respectively. Among the children with visual disability when tested without correction, 98.7% of cases were due to refractive error, while only 53.8% (414/770) of these children had appropriate correction. The girls had significantly (P < 0.001) more presenting visual disability and myopia < -2.0 D than did the boys. More myopic refractive error was associated with worse self-reported visual function (ANOVA trend test, P < 0.001). CONCLUSIONS: Visual disability in this population was common, highly correctable, and frequently uncorrected. The impact of refractive error on self-reported visual function was significant. Strategies and studies to understand and remove barriers to spectacle wear are needed.

Relevance: 30.00%

Abstract:

OBJECTIVE: To study spectacle wear among rural Chinese children. METHODS: Visual acuity, refraction, spectacle wear, and visual function were measured. RESULTS: Among 1892 subjects (84.7% of the sample), the mean (SD) age was 14.7 (0.8) years. Among 948 children (50.1%) potentially benefiting from spectacle wear, 368 (38.8%) did not own them. Among 580 children owning spectacles, 17.9% did not wear them at school. Among 476 children wearing spectacles, 25.0% had prescriptions that could not improve their visual acuity to better than 6/12. Therefore, 62.3% (591 of 948) of children needing spectacles did not benefit from appropriate correction. Children not owning and not wearing spectacles had better self-reported visual function but worse visual acuity at initial examination than children wearing spectacles, with mean (SD) refractive errors of -2.06 (1.15) diopters (D) and -2.78 (1.32) D, respectively. Girls (P < .001) and older children (P = .03) were more likely to be wearing their spectacles. A common reason for non-wear (17.0%) was the belief that spectacles weaken the eyes. Among children without spectacles, 79.3% said their families would pay for them (mean, US $15). CONCLUSIONS: Although half of the children could benefit from spectacle wear, 62.3% were not wearing appropriate correction. These children have significant uncorrected refractive errors. There is potential to support programs through spectacle sales.

Relevance: 30.00%

Abstract:

Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2014.

Relevance: 30.00%

Abstract:

The UMTS turbo encoder is composed of a parallel concatenation of two Recursive Systematic Convolutional (RSC) encoders which start and end at a known state. This trellis termination directly affects the performance of turbo codes. This paper presents a performance analysis of multi-point trellis termination of turbo codes, which terminates the RSC encoders at more than one point of the current frame while keeping the interleaver length the same. For long interleaver lengths, this approach allows a data frame to be divided into sub-frames which can be treated as independent blocks. A novel decoding architecture using multi-point trellis termination and collision-free interleavers is presented. Collision-free interleavers are used to solve the memory collision problems encountered in parallel decoding of turbo codes. The proposed parallel decoding architecture reduces the decoding delay caused by the iterative nature and forward-backward metric computations of turbo decoding algorithms. Our simulations verified that this turbo encoding and decoding scheme shows Bit Error Rate (BER) performance very close to that of UMTS turbo coding while providing almost 50% time saving for 2-point termination and 80% time saving for 5-point termination.
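
For orientation, a minimal sketch of one UMTS constituent RSC encoder with the standard 3GPP trellis termination (feedback polynomial 1 + D² + D³, feedforward 1 + D + D³, per 3GPP TS 25.212). The multi-point scheme of the paper would apply such a termination at several points of the frame; here a single block is terminated, for illustration only.

```python
# One UMTS constituent RSC encoder, terminated to the all-zero state.
def rsc_encode_terminated(bits):
    r1 = r2 = r3 = 0
    systematic, parity = [], []

    def step(u):
        nonlocal r1, r2, r3
        a = u ^ r2 ^ r3                 # feedback bit (taps D^2, D^3)
        systematic.append(u)
        parity.append(a ^ r1 ^ r3)      # feedforward output (taps 1, D, D^3)
        r1, r2, r3 = a, r1, r2          # shift the register

    for u in bits:
        step(u)
    for _ in range(3):                  # tail: drive the register to zero
        step(r2 ^ r3)                   # this choice makes the feedback bit 0
    assert (r1, r2, r3) == (0, 0, 0)    # encoder ends in the known state
    return systematic, parity

sys_bits, par_bits = rsc_encode_terminated([1, 0, 1, 1, 0, 0, 1, 0])
print("systematic + tail:", sys_bits)
print("parity           :", par_bits)
```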

Relevance: 30.00%

Abstract:

Turbo codes experience a significant decoding delay because of the iterative nature of the decoding algorithms, the high number of metric computations, and the complexity added by the (de)interleaver. The extrinsic information is exchanged sequentially between two Soft-Input Soft-Output (SISO) decoders. Instead of this sequential process, a received frame can be divided into smaller windows to be processed in parallel. In this paper, a novel parallel processing methodology is proposed based on previous parallel decoding techniques. A novel Contention-Free (CF) interleaver is proposed as part of the decoding architecture, which allows extrinsic Log-Likelihood Ratios (LLRs) to be used immediately as a-priori LLRs to start the second half of the iterative turbo decoding. The simulation case studies performed in this paper show that our parallel decoding method can provide 80% time saving compared to standard decoding and 30% time saving compared to previous parallel decoding methods, at the expense of 0.3 dB Bit Error Rate (BER) performance degradation.
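
The paper's own CF interleaver construction is not reproduced here; as a stand-in, the sketch below checks the contention-free property on a QPP interleaver (a well-known contention-free family, used e.g. in LTE): at every local decoding step, the parallel windows must fall into distinct memory banks.

```python
# Contention-free check: parallel windows must never hit the same bank.
def qpp(K, f1, f2):
    """Quadratic permutation polynomial interleaver of length K."""
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

def is_contention_free(perm, W):
    """True if, at every local step j, the M = K/W parallel windows of
    `perm` address distinct memory banks (bank index = position // W)."""
    K = len(perm)
    assert K % W == 0
    M = K // W                              # number of parallel windows
    for j in range(W):
        banks = {perm[j + t * W] // W for t in range(M)}
        if len(banks) != M:                 # two windows hit the same bank
            return False
    return True

perm = qpp(40, f1=3, f2=10)                 # LTE QPP parameters for K = 40
print("contention-free for W = 8:", is_contention_free(perm, 8))
```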

Relevance: 30.00%

Abstract:

The purpose of the present study was to determine which augmented sensory modality would best develop the subjective error-detection capabilities of learners performing a spatial-temporal task on a touch screen monitor. Participants were required to learn a 5-digit key-pressing task with a goal time of 2550 ms over 100 acquisition trials on a touch screen. Participants were randomized into 1 of 4 groups: 1) visual feedback (colour change of the button when selected), 2) auditory feedback (click sound when the button was selected), 3) visual-auditory feedback (both colour change and click sound when the button was selected), and 4) no feedback (neither colour change nor click sound when the button was selected). Following each trial, participants were required to provide a subjective estimate of their performance time in relation to the actual time it took them to complete the 5-digit sequence. A no-KR retention test was conducted approximately 24 hours after the last completed acquisition trial. Results showed that practicing a timing task on a touch screen augmented with both visual and auditory information may have differentially impacted motor skill acquisition, such that removal of one or both sources of augmented feedback did not result in a severe detriment to the timing performance or error-detection capabilities of the learner. The present study reflects the importance of multimodal augmented feedback conditions for maximizing the cognitive abilities that support a stronger motor memory for subjective error detection and correction.

Relevance: 30.00%

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making the static analysis of machine code more effective at the early detection of software bugs that are otherwise hard to detect. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated-peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application program. Incorrect sequences of machine-code patterns are identified using slicing techniques on the control-flow graph generated from the machine code.

An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank-selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly from machine-code patterns, which drastically reduces the state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward correct use of difficult microcontroller features in developing embedded systems.
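
As a toy illustration of the redundancy detection just described: track the active-bank state along a straight-line instruction sequence and flag any bank selection whose state transition is a self-loop. The 'banksel' mnemonic and the instruction list are hypothetical stand-ins for PIC16F87X machine code, and the dissertation's analysis runs over the control-flow graph rather than straight-line code.

```python
# Flag bank selections that re-select the already-active memory bank.
def redundant_bank_switches(instructions):
    bank, redundant = None, []
    for idx, ins in enumerate(instructions):
        if ins[0] == "banksel":
            if ins[1] == bank:              # state transition is a self-loop
                redundant.append(idx)
            bank = ins[1]                   # update the active-bank state
    return redundant

program = [
    ("banksel", 0), ("movwf", "PORTA"),
    ("banksel", 1), ("movwf", "TRISA"),
    ("banksel", 1), ("movwf", "TRISB"),     # redundant: bank 1 already active
    ("banksel", 0), ("movwf", "PORTB"),
]
print("redundant bank switches at indices:", redundant_bank_switches(program))
```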

Relevance: 30.00%

Abstract:

The relevance of the fragment relaxation energy term and the effect of the basis set superposition error on the geometries of the BF3⋯NH3 and C2H4⋯SO2 van der Waals dimers have been analyzed. Second-order Møller-Plesset perturbation theory calculations with the D95(d,p) basis set have been used to calculate the counterpoise-corrected barrier heights for the internal rotations. These barriers have been obtained by relocating the stationary points on the counterpoise-corrected potential energy surface of the processes involved. The fragment relaxation energy can have a large influence on both the intermolecular parameters and the barrier height. The counterpoise correction has proved to be important for these systems.
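
For reference, the standard Boys-Bernardi counterpoise scheme that such calculations build on, written under the usual convention (subscript: system; superscript: basis set; argument: geometry); the bracketed sum is the fragment relaxation energy discussed in the abstract:

```latex
% Counterpoise-corrected interaction energy at the complex geometry G_AB:
\Delta E^{\mathrm{CP}}_{\mathrm{int}}
  = E_{AB}^{AB}(G_{AB}) - E_{A}^{AB}(G_{AB}) - E_{B}^{AB}(G_{AB})
% Fragment relaxation: each monomer at the complex geometry vs. relaxed:
\Delta E_{\mathrm{relax}}
  = \sum_{X = A,B} \left[ E_{X}^{X}(G_{AB}) - E_{X}^{X}(G_{X}) \right]
```

The counterpoise-corrected binding energy is then the sum of the two terms above; relocating stationary points on the counterpoise-corrected surface, as the abstract describes, means optimizing geometries with these corrected energies rather than applying the correction once at the uncorrected geometry.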