945 results for ERROR AUTOCORRELATION
Abstract:
BACKGROUND: We aimed to determine the prevalence and associations of refractive error on Norfolk Island. DESIGN: Population-based study on Norfolk Island, South Pacific. PARTICIPANTS: All permanent residents of Norfolk Island aged ≥ 15 years were invited to participate. METHODS: Participants underwent non-cycloplegic autorefraction, slit-lamp biomicroscope examination and biometry assessment. Only phakic eyes were analysed. MAIN OUTCOME MEASURES: Prevalence and multivariate associations of refractive error and myopia. RESULTS: A total of 677 people (645 right phakic eyes, 648 left phakic eyes) aged ≥ 15 years were included in this study. The mean age of participants was 51.1 years (standard deviation 15.7; range 15-81). Three hundred and seventy-six people (55.5%) were female. Adjusted to the 2006 Norfolk Island population, prevalence estimates of refractive error were as follows: myopia (mean spherical equivalent ≤ -1.0 D) 10.1%, hypermetropia (mean spherical equivalent ≥ 1.0 D) 36.6%, and astigmatism 17.7%. Significant independent predictors of myopia in the multivariate model were lower age (P < 0.001), longer axial length (P < 0.001), shallower anterior chamber depth (P = 0.031) and increased corneal curvature (P < 0.001). Significant independent predictors of refractive error were increasing age (P < 0.001), male gender (P = 0.009), Pitcairn ancestry (P = 0.041), cataract (P < 0.001), longer axial length (P < 0.001) and decreased corneal curvature (P < 0.001). CONCLUSIONS: The prevalence of myopia on Norfolk Island is lower than on mainland Australia, and the Norfolk Island population demonstrates ethnic differences in the prevalence estimates. Given the significant associations between refractive error and several ocular biometry characteristics, Norfolk Island may be a useful population in which to investigate the genetic basis of refractive error.
Abstract:
PURPOSE Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care. METHODS Eleven nominal group interviews of patients and primary health care professionals were held in Auckland, New Zealand, during late 2007. Group members reported and helped to classify types of potential error by patients. We synthesized the ideas that emerged from the nominal groups into a taxonomy of patient error. RESULTS Our taxonomy is a 3-level system encompassing 70 potential types of patient error. The first level classifies 8 categories of error into 2 main groups: action errors and mental errors. The action errors, which result in part or whole from patient behavior, are attendance errors, assertion errors, and adherence errors. The mental errors, which are errors in patient thought processes, comprise memory errors, mindfulness errors, misjudgments, and—more distally—knowledge deficits and attitudes not conducive to health. CONCLUSION The taxonomy is an early attempt to understand and recognize how patients may err and what clinicians should aim to influence so they can help patients act safely. This approach begins to balance perspectives on error but requires further research. There is a need to move beyond seeing patient, clinician, and system errors as separate categories of error. An important next step may be research that attempts to understand how patients, clinicians, and systems interact to cocreate and reduce errors.
Abstract:
Purpose: To examine between eye differences in corneal higher order aberrations and topographical characteristics in a range of refractive error groups. Methods: One hundred and seventy subjects were recruited, including 50 emmetropic isometropes, 48 myopic isometropes (spherical equivalent anisometropia ≤ 0.75 D), 50 myopic anisometropes (spherical equivalent anisometropia ≥ 1.00 D) and 22 keratoconics. The corneal topography of each eye was captured using the E300 videokeratoscope (Medmont, Victoria, Australia) and analysed using custom-written software. All left eye data were rotated about the vertical midline to account for enantiomorphism. Corneal height data were used to calculate the corneal wavefront error using a ray tracing procedure and fit with Zernike polynomials (up to and including the eighth radial order). The wavefront was centred on the line of sight using the pupil offset value from the pupil detection function in the videokeratoscope. Refractive power maps were analysed to assess corneal sphero-cylindrical power vectors. Differences between the more myopic (or more advanced eye for keratoconics) and the less myopic (advanced) eye were examined. Results: Over a 6 mm diameter, the cornea of the more myopic eye was significantly steeper (refractive power vector M) compared to the fellow eye in both anisometropes (0.10 ± 0.27 D steeper, p = 0.01) and keratoconics (2.54 ± 2.32 D steeper, p < 0.001), while no significant interocular difference was observed for isometropic emmetropes (-0.03 ± 0.32 D) or isometropic myopes (0.02 ± 0.30 D) (both p > 0.05). In keratoconic eyes, the between eye difference in corneal refractive power was greatest inferiorly (associated with cone location). Similarly, in myopic anisometropes, the more myopic eye displayed a central region of significant inferior corneal steepening (0.15 ± 0.42 D steeper) relative to the fellow eye (p = 0.01).
Significant interocular differences in higher order aberrations were only observed in the keratoconic group, for vertical trefoil C(3,-3), horizontal coma C(3,1), secondary astigmatism along 45° C(4,-2) (p < 0.05) and vertical coma C(3,-1) (p < 0.001). The interocular difference in vertical pupil decentration (relative to the corneal vertex normal) increased with between-eye asymmetry in refraction (isometropia 0.00 ± 0.09, anisometropia 0.03 ± 0.15 and keratoconus 0.08 ± 0.16 mm), as did the interocular difference in corneal vertical coma C(3,-1) (isometropia -0.006 ± 0.142, anisometropia -0.037 ± 0.195 and keratoconus -1.243 ± 0.936 μm), but these differences only reached statistical significance for pair-wise comparisons between the isometropic and keratoconic groups. Conclusions: There is a high degree of corneal symmetry between the fellow eyes of myopic and emmetropic isometropes. Interocular differences in corneal topography and higher order aberrations are more apparent in myopic anisometropes and keratoconics due to regional (primarily inferior) differences in topography and between-eye differences in vertical pupil decentration relative to the corneal vertex normal. Interocular asymmetries in corneal optics appear to be associated with anisometropic refractive development.
Abstract:
Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated can range from 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown zircons and references to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
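The sensor-to-navigation-frame transformation this abstract analyzes can be sketched as two rigid-body transforms chained together: sensor to vehicle (the extrinsic calibration) and vehicle to navigation frame (the vehicle pose). The mounting offset and pose values below are hypothetical, chosen only to illustrate the chain:

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix about the z-axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def sensor_to_nav(points_sensor, R_vs, t_vs, R_nv, t_nv):
    """Map points from the sensor frame into the navigation frame:
    sensor -> vehicle (extrinsic calibration R_vs, t_vs), then
    vehicle -> navigation (vehicle pose R_nv, t_nv)."""
    points_vehicle = points_sensor @ R_vs.T + t_vs
    return points_vehicle @ R_nv.T + t_nv

# A LIDAR return 2 m ahead of a sensor mounted 0.5 m forward of the
# vehicle origin; the vehicle sits at (10, 5) heading 90 degrees.
p = np.array([[2.0, 0.0, 0.0]])
R_vs, t_vs = np.eye(3), np.array([0.5, 0.0, 0.0])
R_nv, t_nv = rot_z(np.pi / 2), np.array([10.0, 5.0, 0.0])
print(sensor_to_nav(p, R_vs, t_vs, R_nv, t_nv))  # ≈ [[10. 7.5 0.]]
```

Errors in any of the four calibration/pose quantities propagate directly into the mapped point, which is why the paper's spatial error model is built around this chain.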
Abstract:
Barmah Forest virus (BFV) disease is the second most common mosquito-borne disease in Australia but few data are available on the risk factors. We assessed the impact of spatial climatic, socioeconomic and ecological factors on the transmission of BFV disease in Queensland, Australia, using spatial regression. All our analyses indicate that spatial lag models provide a superior fit to the data compared to spatial error and ordinary least square models. The residuals of the spatial lag models were found to be uncorrelated, indicating that the models adequately account for spatial and temporal autocorrelation. Our results revealed that minimum temperature, distance from coast and low tide were negatively and rainfall was positively associated with BFV disease in coastal areas, whereas minimum temperature and high tide were negatively and rainfall was positively associated with BFV disease (all P-value.
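Checking that model residuals are spatially uncorrelated, as reported above for the spatial lag models, is commonly done with Moran's I. A minimal sketch of the statistic, using a made-up four-region neighbour structure rather than the Queensland data:

```python
import numpy as np

def morans_i(x, W):
    """Moran's I statistic for spatial autocorrelation of x under a
    spatial weights matrix W (here row-standardised)."""
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# Four regions on a line with rook-style neighbours, row-standardised.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

smooth = np.array([1.0, 2.0, 3.0, 4.0])  # spatially trending values
rough = np.array([1.0, 4.0, 2.0, 3.0])   # no clear spatial pattern
print(morans_i(smooth, W))  # positive: neighbours are similar
print(morans_i(rough, W))   # negative: neighbours alternate
```

Values near zero indicate residuals with no remaining spatial autocorrelation, which is the adequacy check the abstract describes.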
Abstract:
Bayesian networks (BNs) are graphical probabilistic models used for reasoning under uncertainty. These models are becoming increasingly popular in a range of fields including ecology, computational biology, medical diagnosis, and forensics. In most of these cases, the BNs are quantified using information from experts or from user opinions. An interest therefore lies in the way in which multiple opinions can be represented and used in a BN. This paper proposes the use of a measurement error model to combine opinions for use in the quantification of a BN. The multiple opinions are treated as a realisation of measurement error, and the model uses the posterior probabilities ascribed to each node in the BN, which are computed from the prior information given by each expert. The proposed model addresses the issues associated with current methods of combining opinions, such as the absence of a coherent probability model, the failure to maintain the conditional independence structure of the BN, and the provision of only a point estimate for the consensus. The proposed model is applied to an existing Bayesian network and performs well when compared to existing methods of combining opinions.
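As a toy illustration of the measurement-error idea (not the authors' actual formulation), one can treat each expert's elicited probability as a noisy normal measurement of a latent log-odds value and form a conjugate posterior; the prior and noise variances below are arbitrary assumptions:

```python
import numpy as np

def combine_opinions(probs, prior_mean=0.0, prior_var=4.0, obs_var=1.0):
    """Treat each expert probability as a noisy measurement of a latent
    log-odds value and return the posterior mean probability under a
    conjugate normal measurement-error model (a toy sketch)."""
    probs = np.asarray(probs, dtype=float)
    logits = np.log(probs / (1.0 - probs))
    n = len(logits)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + logits.sum() / obs_var)
    return 1.0 / (1.0 + np.exp(-post_mean))

print(combine_opinions([0.6, 0.7, 0.8]))  # a consensus probability
```

Unlike a simple average, this yields a full posterior (here summarised by its mean), echoing the paper's criticism that existing pooling methods provide only a point estimate.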
Abstract:
Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
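The setting — small, statistically independent perturbations of weights and inputs — is easy to probe empirically. A Monte Carlo sketch against which such analytical bounds could be checked (the two-layer tanh network, sizes, and noise scale are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, W2):
    """Tiny two-layer feedforward network with tanh hidden units."""
    return np.tanh(x @ W1) @ W2

# Nominal weights and input.
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 2))
x = rng.normal(size=(1, 3))
y0 = mlp(x, W1, W2)

# Small, independent perturbations of weights and inputs, mirroring
# the paper's assumptions; estimate the output-error moments.
sigma = 1e-3
errors = []
for _ in range(5000):
    y = mlp(x + rng.normal(scale=sigma, size=x.shape),
            W1 + rng.normal(scale=sigma, size=W1.shape),
            W2 + rng.normal(scale=sigma, size=W2.shape))
    errors.append((y - y0).ravel())
errors = np.asarray(errors)
print(errors.mean(axis=0))  # near zero: unbiased to first order
print(errors.var(axis=0))   # scales with sigma**2 for small sigma
```

Repeating this for several values of sigma would show the variance shrinking quadratically, the regime in which the derived bounds apply.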
Abstract:
Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as for the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Therefore, error control mechanisms must be adopted to ensure that the fuzziness of biometric inputs can be sufficiently countered. In this paper, we outline the existing error control techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to be considered when choosing an appropriate error correction mechanism for a particular biometric-based solution.
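One widely used construction in this space is the fuzzy commitment scheme, in which an error-correcting code absorbs the bit-level fuzziness of biometric readings. A toy sketch with a 5-fold repetition code (real systems use far stronger codes such as BCH; all values here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

def rep_encode(bits, n=5):
    """Repetition-code encoder: repeat each key bit n times."""
    return np.repeat(bits, n)

def rep_decode(bits, n=5):
    """Majority-vote decoder for the n-fold repetition code."""
    return (bits.reshape(-1, n).sum(axis=1) > n // 2).astype(int)

# Enrolment: bind a random key to the biometric template and store
# only the commitment (template XOR codeword); the template is discarded.
key = rng.integers(0, 2, size=16)
template = rng.integers(0, 2, size=16 * 5)
commitment = rep_encode(key) ^ template

# Verification: a fresh reading differs from the template in a few bits.
noise = np.zeros_like(template)
noise[[0, 7, 23]] = 1            # three single-bit errors, distinct blocks
reading = template ^ noise
recovered = rep_decode(commitment ^ reading)
print(np.array_equal(recovered, key))  # True: one flip per block is corrected
```

The key is recovered exactly as long as the reading's errors stay within the code's correction capacity, which is precisely the trade-off the survey discusses when matching codes to biometric modalities.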
Abstract:
We find in complementary experiments and event-driven simulations of sheared inelastic hard spheres that the velocity autocorrelation function ψ(t) decays much faster than the t^(-3/2) decay obtained for a fluid of elastic spheres at equilibrium. Particle displacements are measured in experiments inside a gravity-driven flow sheared by a rough wall. The average packing fraction obtained in the experiments is 0.59, and the packing fraction in the simulations is varied between 0.5 and 0.59. The motion is observed to be diffusive over long times except in experiments where there is layering of particles parallel to boundaries, and diffusion is inhibited between layers. Regardless, a rapid decay of ψ(t) is observed, indicating that this is a feature of the sheared dissipative fluid, and is independent of the details of the relative particle arrangements. An important implication of our study is that the non-analytic contribution to the shear stress may not be present in a sheared inelastic fluid, leading to a wider range of applicability of kinetic theory approaches to dense granular matter.
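For readers wanting to reproduce this kind of analysis, the velocity autocorrelation function is a short computation over a trajectory array. The synthetic, uncorrelated velocities below merely stand in for simulation output (so ψ(t) collapses immediately, as it would for a strongly dissipative fluid):

```python
import numpy as np

def vacf(v):
    """Velocity autocorrelation function psi(t) = <v(t0) . v(t0 + t)>,
    averaged over time origins and particles; v has shape
    (timesteps, particles, dims). Normalised so psi(0) = 1."""
    T = v.shape[0]
    psi = np.empty(T)
    for t in range(T):
        psi[t] = np.mean(np.sum(v[:T - t] * v[t:], axis=-1))
    return psi / psi[0]

# Synthetic uncorrelated velocities in place of real simulation data.
rng = np.random.default_rng(2)
v = rng.normal(size=(200, 50, 3))
psi = vacf(v)
print(psi[0])              # 1.0 by construction
print(abs(psi[1]) < 0.05)  # essentially no correlation after one step
```

Applied to elastic-sphere data, the same routine would instead reveal the slow t^(-3/2) hydrodynamic tail that the sheared inelastic system lacks.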
Abstract:
Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of their robustness observed during ageing in rodent models.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
(iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors which are the result of intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2- to 10-fold, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
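Step (iv) of the procedure — integrating the product of predicted flow and predicted concentration over the regular grid — reduces to a single sum once both series are in hand. A sketch with synthetic 10-minute data (the hydrograph shape, rating-curve exponent, and noise level are all made up for illustration, not Burdekin values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 10-minute grid over one day (144 intervals) with a
# single flood peak; flow in m^3/s, concentration in mg/L.
t = np.arange(144)
flow = 5.0 + 4.0 * np.exp(-0.5 * ((t - 60) / 10.0) ** 2)
# Simple rating-curve-style concentration model with lognormal noise.
conc = 20.0 * flow ** 0.8 * np.exp(rng.normal(scale=0.05, size=t.size))

# Step (iv): sum the flow-concentration products over the regular grid.
# mg/L equals g/m^3, so flow * conc * dt accumulates grams.
dt = 600.0  # seconds per 10-minute interval
load_tonnes = np.sum(flow * conc * dt) / 1e6
print(f"estimated load: {load_tonnes:.1f} t")
```

In the paper's full method, both series would come from the fitted time series and regression models of steps (i)-(iii), and the standard error of this sum would be propagated from the model errors.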
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of treatment is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs on error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
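The frequentist operating characteristics of a two-stage design are straightforward to compute by summing binomial probabilities over the stage-1 and stage-2 outcomes. A sketch using an illustrative Simon-style design (r1/n1 = 0/9, r/n = 2/17 with p0 = 0.05, p1 = 0.25; these numbers are chosen for illustration and are not taken from the article):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def reject_prob(p, r1, n1, r, n):
    """Probability of declaring the treatment promising in a Simon-style
    two-stage design: proceed past stage 1 only if responses exceed r1,
    and reject the null at the end only if total responses exceed r."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):        # stage-1 responses
        for x2 in range(n - n1 + 1):        # stage-2 responses
            if x1 + x2 > r:
                total += binom_pmf(x1, n1, p) * binom_pmf(x2, n - n1, p)
    return total

# Type I error under p0 and Type II error under p1 for this design.
alpha = reject_prob(0.05, r1=0, n1=9, r=2, n=17)
beta = 1.0 - reject_prob(0.25, r1=0, n1=9, r=2, n=17)
print(f"type I error {alpha:.3f}, type II error {beta:.3f}")
```

Because the rejection probability is an explicit function of p, the same routine evaluates both frequentist error rates, which is the kind of calculation a Bayesian design must reproduce if it is to control them.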
Abstract:
Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a target function and its derivatives are first approximated via non-uniform rational B-splines (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, due to the variation diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper focuses on demonstrating this ability of the ERKM via numerical examples. Comparisons of some of the results with those obtained via the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
Abstract:
For a wide class of semi-Markov decision processes the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on finite-horizon dynamic programming. This paper provides some results on the accuracy of such approximations and, in particular, gives error bounds for some well-known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
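The calibration idea can be sketched for the Bernoulli reward process: bisect on a retirement reward until continuing and retiring are equally attractive under a finite-horizon dynamic program. The discount factor, horizon, and tolerance below are arbitrary choices, and the horizon truncation is exactly the approximation whose error such bounds quantify:

```python
from functools import lru_cache

def gittins_bernoulli(a, b, gamma=0.9, horizon=60, tol=1e-4):
    """Approximate the Gittins index of a Bernoulli reward process with
    a Beta(a, b) posterior via calibration: bisect on a retirement
    reward lam until pulling once (then acting optimally) is exactly as
    good as retiring, under a horizon-truncated dynamic program."""
    def pull_minus_retire(lam):
        retire = lam / (1.0 - gamma)

        @lru_cache(maxsize=None)
        def value(s, f, h):
            # Optimal value after s extra successes, f extra failures,
            # with h decision stages remaining.
            if h == 0:
                return retire
            p = (a + s) / (a + s + b + f)
            pull = p * (1.0 + gamma * value(s + 1, f, h - 1)) \
                + (1.0 - p) * gamma * value(s, f + 1, h - 1)
            return max(retire, pull)

        p = a / (a + b)
        pull = p * (1.0 + gamma * value(1, 0, horizon - 1)) \
            + (1.0 - p) * gamma * value(0, 1, horizon - 1)
        return pull - retire

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if pull_minus_retire(mid) > 0.0:
            lo = mid   # still worth pulling: index exceeds mid
        else:
            hi = mid
    return (lo + hi) / 2.0

g = gittins_bernoulli(1, 1)
print(g > 0.5)  # True: exploration value lifts the index above the mean
```

Because the truncated program undervalues continuation by at most a geometric tail, lengthening the horizon tightens the approximation, which is the quantity the paper's error bounds control.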