915 results for standard error
Abstract:
BACKGROUND:
The prevalence of angle-closure glaucoma (ACG) is greater for Eskimos/Inuit than it is for any other ethnic group in the world. Although it has been suggested that this prevalence may be due to a population tendency toward shallower anterior chamber angles, available evidence for other populations such as Chinese with high rates of ACG has not consistently demonstrated such a tendency.
METHODS:
A reticule, slit-lamp, and standard Goldmann one-mirror goniolens were used to make measurements in the anterior chamber (AC) angle according to a previously reported protocol for biometric gonioscopy (BG) (Ophthalmology 1999;106:2161-7). Measurements were made in all four quadrants of one eye among 133 phakic Alaskan Eskimos aged 40 years and older. Automatic refraction, dilated examination of the anterior segment and optic nerve, and A-scan measurements of AC depth, lens thickness, and axial length were also carried out for all subjects.
RESULTS:
Both central and peripheral AC measurements for the Eskimo subjects were significantly lower than those previously reported by us for Chinese, blacks, and whites under the identical protocol. Eskimos also seemed to have somewhat more hyperopia. There were no differences in biometric measurements between men and women in this Eskimo population. Angle measurements by BG seemed to decline more rapidly over life among Eskimos and Chinese than blacks or whites. Although there was a significant apparent decrease in AC depth, increase in lens thickness, and increase in hyperopia with age among Eskimos, all of these trends seemed to reverse in the seventh decade and beyond.
CONCLUSIONS:
Eskimos do seem to have shallower ACs than do other racial groups. Measurements of the AC angle seem to decline more rapidly over life among Eskimos than among blacks or whites, a phenomenon also observed by us among Chinese, another group with high ACG prevalence. This apparently more rapid decline may be due to a cohort effect, with a higher prevalence of myopia and resulting wider angles among younger Eskimos and Chinese.
Abstract:
PURPOSE: To evaluate the hypothesis that changes in nutritional status could be partly responsible for observed increases in myopia prevalence among Chinese children. DESIGN: Cross-sectional cohort study. METHODS: Rural Chinese secondary school children participating in a study of interventions to promote spectacle use were randomly sampled (20% of children with uncorrected vision >6/12 bilaterally, and 100% of remaining children) and underwent cycloplegic refraction with subjective refinement and measurement of height and weight. Stunting was defined according to the World Health Organization standard population. RESULTS: Among 3226 children in the sample, 2905 (90.0%) took part. Among 1477 children undergoing refraction, 1371 (92.8%) had height and weight measurements. These children had a mean age of 14.5 ± 1.4 years, 59.8% were girls, and mean spherical equivalent refraction was -1.93 ± 1.82 diopters. Stunting was present in 87 children (6.4%). While height was inversely associated with refractive error (RE) (taller children were more myopic) among boys (r = -0.147, P = .001), this association disappeared when adjusting for age, and no such association was observed among girls. Neither girls nor boys with stunting differed significantly in refraction from children without stunting, and neither stunting nor height was associated with RE when adjusting for age, height, and parental education. The power of this study to have detected a 0.75 diopter difference in RE between children with and without stunting was 0.96. CONCLUSION: Results from this cross-sectional study are not consistent with the hypothesis that nutritional status is a determinant of RE in this setting.
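The reported power of 0.96 can be roughly reproduced with a standard two-sample power calculation. The sketch below assumes a two-sided t-test at alpha = 0.05, uses the SD of 1.82 D and the group sizes implied by the abstract (87 stunted, 1284 non-stunted), and is only an approximate reconstruction, not the authors' actual calculation.

```python
# Rough check of the reported statistical power (assumes a two-sided
# two-sample t-test; SD and group sizes are taken from the abstract).
from statsmodels.stats.power import TTestIndPower

diff_diopters = 0.75           # difference in RE to detect
sd_diopters = 1.82             # reported SD of spherical equivalent refraction
n_stunted = 87                 # children with stunting
n_not_stunted = 1371 - 87      # remaining children with refraction and anthropometry

power = TTestIndPower().power(
    effect_size=diff_diopters / sd_diopters,
    nobs1=n_stunted,
    ratio=n_not_stunted / n_stunted,
    alpha=0.05,
    alternative="two-sided",
)
print(f"approximate power: {power:.2f}")   # close to the reported 0.96
```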
Abstract:
One of the tasks of teaching (Ball, Thames, & Phelps, 2008) concerns the work of interpreting student error and evaluating alternative algorithms used by students. Teachers' abilities to understand nonstandard student work affect their instructional decisions, the explanations they provide in the classroom, the way they guide their students, and how they conduct mathematical discussions. However, their knowledge, or their perceptions of that knowledge, may not correspond to the actual level of knowledge needed to support flexibility and fluency in a mathematics classroom. In this paper, we focus on Norwegian and Portuguese teachers' reflections when trying to make sense of students' use of nonstandard subtraction algorithms and of the mathematics embedded in them. By discussing teachers' mathematical knowledge associated with these situations and revealed in their reflections, we can perceive the difficulties teachers have in making sense of students' solutions that differ from those most commonly reached.
Abstract:
The Joint Video Team, composed of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), has standardized a scalable extension of the H.264/AVC video coding standard called Scalable Video Coding (SVC). H.264/SVC provides scalable video streams composed of a base layer and one or more enhancement layers. Enhancement layers may improve the temporal, spatial, or signal-to-noise ratio resolution of the content represented by the lower layers. One of the applications of this standard is video transmission in both wired and wireless communication systems, and it is therefore important to analyze how packet losses contribute to the degradation of quality and which mechanisms could be used to improve that quality. This paper provides an analysis and evaluation of H.264/SVC in error-prone environments, quantifying the degradation caused by packet losses in the decoded video. It also proposes and analyzes the consequences of QoS-based discarding of packets through different marking solutions.
Abstract:
In this paper we carry out a detailed performance analysis of a novel blind-source-separation (BSS) based DSP algorithm that tackles the carrier phase synchronization error problem. The results indicate that the mismatch can be effectively compensated during normal operation as well as in rapidly changing environments. Since the compensation is carried out before any modulation-specific processing, the proposed method works with all standard modulation formats and lends itself to efficient real-time custom integrated hardware or software implementations.
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands in network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analysis of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon it is suggested to use instead an overall estimate of the misclassification error, previously suggested and used as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel–Haenszel estimator as a summary measure of the overall misclassification error is suggested, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, in the context of stroke prevention.
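Since Youden's index is the difference between sensitivity and the false-positive rate, one natural way to pool it across studies with a Mantel–Haenszel-type weighting is as a stratified risk difference. The sketch below illustrates that idea with made-up counts; it is a hedged illustration of the general approach, not necessarily the exact estimator proposed in the paper.

```python
# Hedged sketch: Youden's index J = sensitivity - (1 - specificity) is a
# risk difference between diseased and healthy groups, so a classical
# Mantel-Haenszel risk-difference estimator can pool it across studies
# while adjusting for the study effect.  All counts are illustrative.
import numpy as np

# per study: true positives, false negatives, false positives, true negatives
tp = np.array([45, 30, 60])
fn = np.array([5, 10, 15])
fp = np.array([8, 12, 20])
tn = np.array([42, 48, 55])

n_dis = tp + fn                  # diseased group sizes
n_hea = fp + tn                  # healthy group sizes
n_tot = n_dis + n_hea

sens = tp / n_dis                # per-study sensitivity
fpr = fp / n_hea                 # per-study false-positive rate

weights = n_dis * n_hea / n_tot                      # Mantel-Haenszel weights
j_pooled = np.sum(weights * (sens - fpr)) / np.sum(weights)
print(f"pooled Youden index (MH-type): {j_pooled:.3f}")
```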
Abstract:
The convergence speed of the standard Least Mean Square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable-step-size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable-step-size LMS algorithm based on the accumulated instantaneous error concept is proposed. In the proposed algorithm, the accumulated instantaneous error is used to update the step-size parameter of the standard LMS algorithm. Simulation results show that the proposed algorithm is simpler than, and yields better performance than, conventional variable-step-size LMS algorithms.
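The abstract does not give the exact update rule, but the idea of driving the LMS step size with an accumulated instantaneous error can be sketched as follows; the leaky accumulation, clipping bounds, and constants are illustrative assumptions rather than the paper's algorithm.

```python
# Hedged sketch of an LMS filter whose step size is driven by an
# accumulated instantaneous error term; the specific update rule and
# constants here are illustrative, not the paper's exact algorithm.
import numpy as np

def accumulated_error_lms(x, d, n_taps=8, mu_min=1e-3, mu_max=0.05,
                          alpha=0.9, gamma=1e-2):
    """x: input signal, d: desired signal; returns (error, final weights)."""
    w = np.zeros(n_taps)
    acc = 0.0                          # leaky accumulation of instantaneous error power
    err = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]        # most recent samples first
        e = d[n] - w @ u                         # instantaneous error
        err[n] = e
        acc = alpha * acc + (1 - alpha) * e**2   # accumulate error power
        mu = np.clip(gamma * acc, mu_min, mu_max)  # larger error -> larger step
        w = w + mu * e * u                       # standard LMS weight update
    return err, w

# toy usage: identify an unknown FIR response from noisy observations
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
err, w = accumulated_error_lms(x, d)
print("estimated weights:", np.round(w, 3))
```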
Abstract:
We analyze a fully discrete spectral method for the numerical solution of the initial- and periodic boundary-value problem for two nonlinear, nonlocal, dispersive wave equations, the Benjamin–Ono and the Intermediate Long Wave equations. The equations are discretized in space by the standard Fourier–Galerkin spectral method and in time by the explicit leap-frog scheme. For the resulting fully discrete, conditionally stable scheme we prove an L2-error bound of spectral accuracy in space and of second-order accuracy in time.
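A minimal pseudospectral sketch of this type of scheme, written here for the periodic Benjamin–Ono equation u_t + u u_x - H[u_xx] = 0 with Fourier collocation in space and explicit leap-frog in time. The Hilbert-transform sign convention, grid size, and time step are assumptions, and no claim is made that this matches the scheme analysed in the paper.

```python
# Hedged sketch: Fourier pseudospectral discretisation of the periodic
# Benjamin-Ono equation u_t + u*u_x - H[u_xx] = 0 with leap-frog time
# stepping.  Convention assumed: (H f)^(k) = -i*sign(k)*f^(k), so the
# dispersive term H[u_xx] has Fourier symbol i*k*|k|.
import numpy as np

N = 128                                   # grid points (assumed)
L = 2 * np.pi                             # periodic domain length (assumed)
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers
dt = 5e-5                                 # time step (assumed, chosen for stability)

disp = 1j * k * np.abs(k)                 # Fourier symbol of H[u_xx]

def rhs_hat(u):
    """Fourier transform of -u*u_x + H[u_xx], evaluated pseudospectrally."""
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))
    return -np.fft.fft(u * ux) + disp * u_hat

u_prev = np.cos(x)                        # illustrative initial condition
# one explicit Euler step to start the two-level leap-frog scheme
u_curr = np.real(np.fft.ifft(np.fft.fft(u_prev) + dt * rhs_hat(u_prev)))

for _ in range(2000):                     # leap-frog: u^{n+1} = u^{n-1} + 2*dt*F(u^n)
    u_next_hat = np.fft.fft(u_prev) + 2 * dt * rhs_hat(u_curr)
    u_prev, u_curr = u_curr, np.real(np.fft.ifft(u_next_hat))

print("max |u| after 2000 steps:", float(np.max(np.abs(u_curr))))
```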
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log(2) units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
Abstract:
With the development of convection-permitting numerical weather prediction, the efficient use of high-resolution observations in data assimilation is becoming increasingly important. The operational assimilation of these observations, such as Doppler radar radial winds, is now common, though to avoid violating the assumption of uncorrelated observation errors the observation density is severely reduced. To improve the quantity of observations used and the impact that they have on the forecast will require the introduction of the full, potentially correlated, error statistics. In this work, observation error statistics are calculated for the Doppler radar radial winds that are assimilated into the Met Office high-resolution UK model, using a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. This is the first in-depth study using the diagnostic to estimate both horizontal and along-beam correlated observation errors. From the new results it is found that the Doppler radar radial wind error standard deviations are similar to those used operationally and increase as the observation height increases. Surprisingly, the estimated observation error correlation length scales are longer than the operational thinning distance. They depend both on the height of the observation and on the distance of the observation from the radar. Further tests show that the long correlations cannot be attributed to the use of superobservations or to the background error covariance matrix used in the assimilation. The large horizontal correlation length scales are, however, in part a result of using a simplified observation operator.
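The residual-based diagnostic described here (commonly attributed to Desroziers and co-authors) estimates the observation error covariance from averages of observation-minus-analysis and observation-minus-background residuals, R ≈ E[d_a^o (d_b^o)^T]. The sketch below shows that computation on placeholder residuals; the arrays, sample sizes, mean removal, and symmetrisation step are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch of a residual-based observation error covariance estimate:
#     R  ~=  E[ d_a^o (d_b^o)^T ]
# with d_b^o = y - H(x_b) (observation-minus-background) and
#      d_a^o = y - H(x_a) (observation-minus-analysis) residuals.
import numpy as np

def estimate_obs_error_covariance(d_ob, d_oa):
    """d_ob, d_oa: arrays of shape (n_samples, n_obs); returns (n_obs, n_obs)."""
    d_ob = d_ob - d_ob.mean(axis=0)          # remove sample means (simple bias correction)
    d_oa = d_oa - d_oa.mean(axis=0)
    r = d_oa.T @ d_ob / d_ob.shape[0]        # statistical average over samples
    return 0.5 * (r + r.T)                   # symmetrise the estimate

# toy usage with synthetic residuals (illustrative only)
rng = np.random.default_rng(0)
n_samples, n_obs = 5000, 10
d_ob = rng.standard_normal((n_samples, n_obs))
d_oa = 0.5 * d_ob + 0.1 * rng.standard_normal((n_samples, n_obs))

R_hat = estimate_obs_error_covariance(d_ob, d_oa)
sigma_hat = np.sqrt(np.diag(R_hat))                     # error standard deviations
corr_hat = R_hat / np.outer(sigma_hat, sigma_hat)       # error correlations
print("estimated obs error SDs:", np.round(sigma_hat, 2))
```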
Abstract:
We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, the number of observations, and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than about 4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and numbers of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.
Abstract:
Estimates of evapotranspiration on a local scale are important information for agricultural and hydrological practices. However, equations to estimate potential evapotranspiration based only on temperature data, which are simple to use, are usually less trustworthy than the Food and Agriculture Organization (FAO) Penman-Monteith standard method. The present work describes two correction procedures for temperature-based potential evapotranspiration estimates, making the results more reliable. Initially, the standard FAO Penman-Monteith method was evaluated with a complete climatological data set for the period between 2002 and 2006. Then temperature-based estimates obtained with the Camargo and Jensen-Haise methods were adjusted using error autocorrelation evaluated over biweekly and monthly periods. In a second adjustment, simple linear regression was applied. The adjusted equations were validated with climatic data available for the year 2001. Both proposed methodologies showed good agreement with the standard method, indicating that they can be used for local potential evapotranspiration estimates.
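The second correction step, a simple linear regression of the temperature-based estimates onto the FAO Penman-Monteith reference, can be sketched as follows; the paired period totals below are placeholders, not the study's data, and the regression form ET_PM ≈ a·ET_temp + b is an assumption about how the adjustment was applied.

```python
# Hedged sketch: calibrating a temperature-based potential ET estimate
# against the FAO Penman-Monteith reference with simple linear regression,
# one of the two correction strategies described above.  Values are
# placeholders for real biweekly/monthly period totals (mm).
import numpy as np

et_temp = np.array([32.0, 41.5, 55.2, 60.1, 48.3, 35.7, 28.9, 44.0])     # temperature-based
et_fao_pm = np.array([30.1, 44.8, 58.0, 66.3, 51.2, 33.9, 26.5, 46.7])   # FAO-PM reference

# least-squares fit: ET_PM ~= a * ET_temp + b
a, b = np.polyfit(et_temp, et_fao_pm, deg=1)
et_adjusted = a * et_temp + b

rmse_before = np.sqrt(np.mean((et_temp - et_fao_pm) ** 2))
rmse_after = np.sqrt(np.mean((et_adjusted - et_fao_pm) ** 2))
print(f"slope = {a:.2f}, intercept = {b:.2f} mm")
print(f"RMSE before = {rmse_before:.2f} mm, after = {rmse_after:.2f} mm")
```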
Abstract:
We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, true outcome — aka the gold standard — and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and for these estimates to be calibrated and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene mutation prediction scores. In this example, genotype is measured with error by one or more genetic assays. We estimate true genotype for each individual in the dataset, operating characteristics of the commonly used genotyping procedures and a relative weighting of the scores. Finally, we compare the scores against the gold standard genotype and find that Mendelian scores are, on average, the more refined and better calibrated of those considered and that the comparison is sensitive to measurement error in the gold standard.