986 results for SAMPLE ERROR
Abstract:
A method has been developed for determining heavy metal ions by field-amplified sample injection capillary electrophoresis with contactless conductivity detection. The effects on the enrichment factor of the 2-N-morpholinoethanesulfonic acid/histidine (MES/His) concentration in the sample matrix, the injection time and organic additives were studied. The results showed that a low MES/His concentration in the sample matrix, a longer injection time and the addition of acetonitrile improved the enrichment factor. Four heavy metal ions (Zn2+, Co2+, Cu2+ and Ni2+) were dissolved in deionized water, separated in a 10 mM MES/His running buffer at pH 4.9 and detected by contactless conductivity detection. The detection sensitivity was enhanced by about three orders of magnitude with respect to the non-stacking injection mode. The limits of detection ranged from 5 nM (Zn2+) to 30 nM (Cu2+). The method has been used to determine heavy metal ions in tap water.
Abstract:
N.W. Hardy and M.H. Lee. The effect of the product cost factor on error handling in industrial robots. In Maria Gini, editor, Detecting and Resolving Errors in Manufacturing Systems. Papers from the 1994 AAAI Spring Symposium Series, pages 59-64, Menlo Park, CA, March 1994. The AAAI Press. Technical Report SS-94-04, ISBN 0-929280-60-1.
Abstract:
Lee, M., Barnes, D. P., Hardy, N. (1985). Research into error recovery for sensory robots. Sensor Review, 5 (4), 194-197.
Abstract:
Lee, M., Hardy, N., & Barnes, D. P. (1984). Research into automatic error recovery. Paper presented at 4th International Conference on Robot Vision and Sensory Controls, London, United Kingdom, pp. 65-69.
Abstract:
Lee, M., Hardy, N., & Barnes, D. P. (1983). Error recovery in robot applications. Paper presented at 6th British Robot Association Annual Conference, Birmingham, United Kingdom, pp. 217-222.
Abstract:
M. H. Lee, D. P. Barnes, and N. W. Hardy. Knowledge based error recovery in industrial robots. In Proc. 8th Int. Joint Conf. Artificial Intelligence, pages 824-826, Karlsruhe, FRG, 1983.
Abstract:
Meng Q. and Lee M.H., Automatic Error Recovery in Behaviour-Based Assistive Robots with Learning from Experience, in Proc. INES 2001, 5th IEEE Int. Conf. on Intelligent Engineering Systems, Helsinki, Finland, Sept 2001, pp. 291-296.
Abstract:
Pritchard, L., Corne, D., Kell, D.B., Rowland, J. & Winson, M. (2005) A general model of error-prone PCR. Journal of Theoretical Biology 234, 497-509.
Abstract:
Q. Meng and M.H. Lee, 'Error-driven active learning in growing radial basis function networks for early robot learning', 2006 IEEE International Conference on Robotics and Automation (IEEE ICRA 2006), 2984-90, Orlando, Florida, USA.
Abstract:
Hallux valgus (HV) is a highly prevalent forefoot deformity associated with progressive subluxation and osteoarthritis of the first metatarsophalangeal joint, and it is believed to be associated with depression. The aim of the present study was to determine the association between the degree of HV involvement and depression using the Beck Depression Inventory (BDI). The sample consisted of 102 participants (mean age: 45.1±1.6 years) who attended an outpatient centre, where self-report data were recorded. The degree of HV deformity was determined in both feet and compared with scores on the BDI. A total of 38.24% of the sample had depression, with a mean BDI score of 10.55±12.36 points. There was a statistically significant association between the degree and presence of HV in both feet (P=0.0001). People with a greater degree of HV deformity in either foot also showed a significant increase in depression based on BDI scores, regardless of sex.
Abstract:
Background: Single nucleotide polymorphisms (SNPs) have been used extensively in genetics and epidemiology studies. Traditionally, SNPs that did not pass the Hardy-Weinberg equilibrium (HWE) test were excluded from these analyses. Many investigators have addressed possible causes for departure from HWE, including genotyping errors, population admixture and segmental duplication. Recent large-scale surveys have revealed abundant structural variations in the human genome, including copy number variations (CNVs), which suggests that a significant number of SNPs must lie within these regions and may therefore deviate from HWE. Results: We performed a Bayesian analysis of the potential effect of copy number variation, segmental duplication and genotyping errors on the behavior of SNPs. Our results suggest that copy number variation is a major cause of HWE violation for SNPs with a small minor allele frequency when the sample size is large and the genotyping error rate is 0-1%. Conclusions: Our study provides the posterior probability that a SNP falls in a CNV or a segmental duplication, given the observed allele frequency of the SNP, the sample size and the significance level of the HWE test.
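The traditional HWE filtering described above is usually done with a one-degree-of-freedom chi-square goodness-of-fit test on the observed genotype counts. A minimal sketch of that standard test (this is not the paper's Bayesian analysis; the function name is illustrative):

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit statistic for Hardy-Weinberg equilibrium
    from observed genotype counts (AA, AB, BB)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # sample allele frequency of A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    # compare against a chi-square with 1 df (3.84 at the 5% level)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Counts in perfect HWE proportions (e.g. 25/50/25) give a statistic of zero, while a complete heterozygote deficit is flagged strongly; it is exactly this kind of rejection that the abstract argues can be caused by CNVs rather than genotyping error.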
Abstract:
We consider a mobile sensor network monitoring a spatio-temporal field. Given limited cache sizes at the sensor nodes, the goal is to develop a distributed cache management algorithm to efficiently answer queries with a known probability distribution over the spatial dimension. First, we propose a novel distributed information theoretic approach in which the nodes locally update their caches based on full knowledge of the space-time distribution of the monitored phenomenon. At each time instant, local decisions are made at the mobile nodes concerning which samples to keep and whether or not a new sample should be acquired at the current location. These decisions account for minimizing an entropic utility function that captures the average amount of uncertainty in queries given the probability distribution of query locations. Second, we propose a different correlation-based technique, which only requires knowledge of the second-order statistics, thus relaxing the stringent constraint of having a priori knowledge of the query distribution, while significantly reducing the computational overhead. It is shown that the proposed approaches considerably reduce the average field estimation error by maintaining efficient cache content. It is further shown that the correlation-based technique is robust to model mismatch in case of imperfect knowledge of the underlying generative correlation structure.
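The correlation-based cache decisions described above can be sketched as follows: score a cache by the query-weighted posterior variance it leaves at likely query locations, then greedily evict the sample whose removal hurts least. A unit-variance field with squared-exponential correlation is assumed purely for illustration, and the function names and greedy rule are a simplified stand-in for the paper's utility functions:

```python
import numpy as np

def expected_query_variance(cache_locs, query_locs, query_probs,
                            corr_len=1.0, noise=1e-6):
    """Average posterior variance at the query locations, weighted by the
    query distribution, for a unit-variance Gaussian field with assumed
    squared-exponential correlation, observed at the cached locations."""
    X = np.asarray(cache_locs, float)[:, None]
    Q = np.asarray(query_locs, float)[:, None]
    K = np.exp(-((X - X.T) ** 2) / (2 * corr_len ** 2)) + noise * np.eye(len(X))
    k_q = np.exp(-((Q - X.T) ** 2) / (2 * corr_len ** 2))  # cross-covariances
    post_var = 1.0 - np.einsum('ij,ij->i', k_q @ np.linalg.inv(K), k_q)
    return float(np.dot(query_probs, post_var))

def evict_one(cache_locs, query_locs, query_probs):
    """Greedy eviction: drop the cached sample whose removal increases the
    weighted query variance the least."""
    costs = [expected_query_variance(cache_locs[:i] + cache_locs[i + 1:],
                                     query_locs, query_probs)
             for i in range(len(cache_locs))]
    return int(np.argmin(costs))
```

With queries concentrated near one location, the rule keeps the sample correlated with that location and evicts one of two redundant, far-away samples, which is the qualitative behaviour the abstract claims for its correlation-based technique.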
Abstract:
Two classes of techniques have been developed to whiten the quantization noise in digital delta-sigma modulators (DDSMs): deterministic and stochastic. In this two-part paper, a design methodology for reduced-complexity DDSMs is presented. The design methodology is based on error masking. Rules for selecting the word lengths of the stages in multistage architectures are presented. We show that the hardware requirement can be reduced by up to 20% compared with a conventional design, without sacrificing performance. Simulation and experimental results confirm theoretical predictions. Part I addresses MultistAge noise SHaping (MASH) DDSMs; Part II focuses on single-quantizer DDSMs.
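The MASH architecture named above can be sketched behaviourally as three cascaded first-order accumulators whose carries pass through the standard noise-cancellation network y = c1 + (1 - z^-1)c2 + (1 - z^-1)^2 c3. This is an illustrative textbook model, not the paper's reduced-complexity design:

```python
def mash111(x, nbits, nsteps):
    """Behavioural model of a MASH 1-1-1 digital delta-sigma modulator.
    Each stage is an nbits-wide accumulator; its residue drives the next
    stage, and its carry feeds the noise-cancellation network."""
    mod = 1 << nbits
    a1 = a2 = a3 = 0                      # accumulator states
    c2_prev = c3_prev = c3_prev2 = 0      # delayed carries for cancellation
    out = []
    for _ in range(nsteps):
        a1 += x;  c1, a1 = a1 // mod, a1 % mod
        a2 += a1; c2, a2 = a2 // mod, a2 % mod
        a3 += a2; c3, a3 = a3 // mod, a3 % mod
        # y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3
        out.append(c1 + (c2 - c2_prev) + (c3 - 2 * c3_prev + c3_prev2))
        c3_prev2, c3_prev = c3_prev, c3
        c2_prev = c2
    return out
```

For a constant input x the long-run mean of the output equals x/2^nbits, and the multi-bit output stays within the MASH 1-1-1 range [-3, 4]; word-length selection for the individual stages, the subject of the paper, trades hardware against how faithfully this cancellation holds.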
Abstract:
For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived, and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes.

The quadratic discriminant function is studied, its distribution reviewed and the evaluation of its probabilities of misclassification discussed. For known covariance matrices, the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered.

Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive and kernel methods are compared by evaluating their biases and mean square errors, and algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable; the source of this superiority is investigated by considering its performance for various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of the true log-odds, and the effect of correlation in the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions, the product kernel method is a good estimator of the true log-odds.
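For reference, the sample linear discriminant function against which the likelihood ratio rule is compared can be sketched as the standard pooled-covariance construction (function names are illustrative; this is the textbook rule, not the thesis's derivations):

```python
import numpy as np

def fit_linear_discriminant(X1, X2):
    """Sample linear discriminant for two multinormal populations with a
    pooled covariance estimate; returns an allocation rule x -> 1 or 2."""
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    n1, n2 = len(X1), len(X2)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled within-group covariance
    S = ((n1 - 1) * np.cov(X1, rowvar=False) +
         (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    w = np.linalg.solve(S, m1 - m2)        # discriminant direction
    c = 0.5 * w @ (m1 + m2)                # midpoint threshold
    return lambda x: 1 if np.asarray(x, float) @ w - c > 0 else 2
```

Because w and c are estimated from samples of sizes n1 and n2, the rule's actual error rate is random; the abstract's point is that when n1 ≠ n2 the likelihood ratio alternative degrades less than this plug-in rule.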
Abstract:
The analysis of energy detector systems is a well-studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in closed form, so their evaluation requires numerical methods. These are, in general, computationally expensive, which can cause difficulties in certain scenarios, such as the optimisation of device parameters on low-cost hardware. The problem becomes acute when the signal-to-noise ratio is small and reliable detection must be ensured, or when the number of samples of the received signal is large. Furthermore, owing to the analytic complexity of the models, insight into the behaviour of various system parameters of interest is not readily apparent.

In this thesis, an approximation-based approach is taken to the analysis of such systems. By focusing on the situations where exact analyses become complicated, and making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from the use of the approximations. It is also demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature. Consequently, the regions of applicability of the approximations directly complement those of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal-to-noise ratio and diversity gain, are also derived.

In the course of the analysis, a novel theorem is derived describing the convergence of the chi-square, noncentral chi-square and gamma distributions towards the normal distribution. The theorem describes a tight upper bound on the error resulting from applying the central limit theorem to random variables of these distributions, and gives a much better description of the resulting error than existing Berry-Esseen type bounds. A second novel theorem is also derived, providing an upper bound on the maximum error resulting from using the central limit theorem to approximate the noncentral chi-square distribution when the noncentrality parameter is a multiple of the number of degrees of freedom.
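The kind of central-limit approximation studied here can be illustrated on the false-alarm probability of an energy detector. Assuming N unit-variance complex noise samples, the decision statistic is chi-square with 2N degrees of freedom, whose tail has a closed form for even degrees of freedom; the CLT replaces it with a normal tail of matching mean 2N and variance 4N. A sketch under those assumptions (not the thesis's own approximations or bounds):

```python
from math import erfc, exp, factorial, sqrt

def pfa_exact(t, N):
    """Exact tail P(T > t) for T ~ chi-square with 2N degrees of freedom
    (energy of N unit-variance complex noise samples); the closed form
    e^{-t/2} * sum_{j<N} (t/2)^j / j! holds for even degrees of freedom."""
    half = t / 2.0
    return exp(-half) * sum(half ** j / factorial(j) for j in range(N))

def pfa_clt(t, N):
    """Central-limit approximation: chi-square(2N) ~ Normal(2N, 4N),
    so P(T > t) ~ Q((t - 2N) / (2 sqrt(N))), written via erfc."""
    return 0.5 * erfc((t - 2 * N) / (2 * sqrt(2 * N)))
```

Setting the threshold at the mean (t = 2N) shows the mismatch the thesis's error bounds quantify: the normal approximation gives exactly 0.5, while the skewed chi-square tail sits slightly below it, with the gap shrinking as N grows.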