53 results for ERROR AUTOCORRELATION
Abstract:
Integration of biometrics is considered an attractive solution to the problems of password-based human authentication, as well as to the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness inherent in biometric measurements. Error control mechanisms must therefore be adopted to ensure that the fuzziness of biometric inputs can be sufficiently countered. In this paper, we outline the existing techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to consider when choosing an appropriate error correction mechanism for a particular biometric-based solution.
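A minimal sketch of one such error control mechanism, the fuzzy commitment scheme, which binds a key to a biometric template through an error-correcting code so that a noisy query can still release the key. The repetition code, bit lengths and noise rate below are illustrative assumptions, not parameters from the paper:

```python
# Illustrative fuzzy commitment: a trivial repetition code absorbs the
# fuzziness (bit flips) between the enrolled template and a later query.
import numpy as np

def encode_repetition(key_bits: np.ndarray, reps: int = 3) -> np.ndarray:
    """Expand each key bit into `reps` copies (a trivial error-correcting code)."""
    return np.repeat(key_bits, reps)

def decode_repetition(codeword: np.ndarray, reps: int = 3) -> np.ndarray:
    """Majority-vote each block of `reps` bits back to one key bit."""
    return (codeword.reshape(-1, reps).sum(axis=1) > reps // 2).astype(np.uint8)

def commit(key_bits, template_bits, reps=3):
    """Bind the key to the biometric template: store only the XOR offset."""
    return encode_repetition(key_bits, reps) ^ template_bits

def release(offset, query_bits, reps=3):
    """Recover the key from a noisy query; flips within the code's
    correction capacity (one per block) are absorbed by majority voting."""
    return decode_repetition(offset ^ query_bits, reps)

rng = np.random.default_rng(0)
key = rng.integers(0, 2, 16, dtype=np.uint8)
template = rng.integers(0, 2, 48, dtype=np.uint8)   # enrolled biometric bits
noise = (rng.random(48) < 0.05).astype(np.uint8)    # fuzziness at query time
query = template ^ noise
offset = commit(key, template)
recovered = release(offset, query)
# recovery succeeds whenever no 3-bit block suffered two or more flips
print("key recovered:", np.array_equal(recovered, key))
```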
Abstract:
Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) at light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of the robustness of these cells observed during ageing in rodent models.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes trends or optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
(iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) obtain the sum, over the regular time intervals, of the products of the predicted flow and the predicted concentration to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features of the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors, which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate that incorporates model, spatial and/or temporal errors. The method can also incorporate the measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx), together with gauged flow data, from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
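A hedged sketch of steps (iii) and (iv) under simplifying assumptions: a plain log-log rating curve stands in for the paper's generalized regression (no first-flush, hydrograph-position or autocorrelation terms), and both the hydrograph and the concentration samples are synthetic:

```python
# Steps (iii)-(iv) in miniature: fit concentration against flow on log scales,
# predict at every regular interval, then sum flow * concentration as the load.
import numpy as np

rng = np.random.default_rng(1)

# (i)/(ii): flow at regular 10-minute intervals over one synthetic event
dt_seconds = 600.0
t = np.arange(144)                                          # one day of 10-min steps
flow = 5.0 + 40.0 * np.exp(-0.5 * ((t - 50) / 12.0) ** 2)   # hydrograph, m3/s

# sparse concentration samples taken during the event (mg/L)
sample_idx = np.array([30, 40, 48, 55, 65, 80])
conc_samples = np.exp(1.0 + 0.6 * np.log(flow[sample_idx])
                      + rng.normal(0, 0.1, sample_idx.size))

# (iii): rating curve log(C) = b0 + b1 * log(Q), predicted everywhere
b1, b0 = np.polyfit(np.log(flow[sample_idx]), np.log(conc_samples), 1)
conc_pred = np.exp(b0 + b1 * np.log(flow))

# (iv): load = sum of flow * concentration over the regular intervals
# units: (m3/s) * (mg/L = g/m3) * s = g, so divide by 1e6 for tonnes
load_tonnes = np.sum(flow * conc_pred * dt_seconds) / 1e6
print(f"estimated event load: {load_tonnes:.2f} t")
```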
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability of the treatment lies within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. A clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
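As an illustration of the frequentist error rates such a design controls, the following sketch computes the exact operating characteristics of a generic Simon-style two-stage design by binomial enumeration. The design parameters and response rates are hypothetical, not those of the article or its nasopharyngeal carcinoma example:

```python
# Exact Type I error, power and early-stopping probability for a two-stage
# design: enrol n1, stop for futility if responses <= r1, otherwise enrol up
# to n total and declare the treatment promising if total responses > r.
from scipy.stats import binom

def reject_prob(p, n1, r1, n, r):
    """P(declare treatment promising | true response rate p)."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):          # stage-1 outcomes that continue
        # binom.sf(k, m, p) = P(X > k) for X ~ Binomial(m, p)
        total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
    return total

n1, r1, n, r = 13, 3, 43, 12                  # hypothetical design parameters
p0, p1 = 0.20, 0.40                           # null / alternative response rates

alpha = reject_prob(p0, n1, r1, n, r)         # frequentist Type I error
power = reject_prob(p1, n1, r1, n, r)         # 1 - Type II error
pet0 = binom.cdf(r1, n1, p0)                  # P(early futility stop | p0)
print(f"alpha={alpha:.3f}, power={power:.3f}, PET(p0)={pet0:.3f}")
```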
Abstract:
For a wide class of semi-Markov decision processes, the optimal policies are expressible in terms of Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on finite-horizon dynamic programming. This paper provides results on the accuracy of such approximations and, in particular, gives error bounds for some well-known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
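A rough sketch of the calibration idea for a Bernoulli reward process: the Gittins index of a Beta(a, b) posterior state is approximated by bisecting on a retirement reward, with the value function computed by finite-horizon dynamic programming. The discount factor, horizon and tolerance are illustrative choices; the paper's subject is precisely the error such horizon truncation introduces:

```python
# Calibration: the Gittins index is the retirement reward lam at which
# pulling the arm and retiring forever have equal value in state (a, b).
from functools import lru_cache

GAMMA = 0.9      # discount factor
HORIZON = 100    # truncation depth; the truncation error shrinks with the horizon

def value(a, b, lam, depth):
    """Finite-horizon optimal value: at each step, retire on lam/(1-GAMMA)
    forever or pull a Bernoulli arm with posterior Beta(a, b)."""
    @lru_cache(maxsize=None)
    def v(a_, b_, d):
        retire = lam / (1.0 - GAMMA)
        if d == 0:
            return retire
        p = a_ / (a_ + b_)                    # posterior mean success rate
        pull = (p * (1.0 + GAMMA * v(a_ + 1, b_, d - 1))
                + (1.0 - p) * GAMMA * v(a_, b_ + 1, d - 1))
        return max(retire, pull)
    return v(a, b, depth)

def gittins_index(a, b, tol=1e-4):
    """Bisect on lam until pulling and retiring are (nearly) indifferent."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if value(a, b, lam, HORIZON) > lam / (1.0 - GAMMA) + 1e-12:
            lo = lam                          # pulling strictly better: index > lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

print(f"approximate Gittins index of a Beta(1,1) arm: {gittins_index(1, 1):.4f}")
```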
Abstract:
The decision of Henry J in Ginn & Anor v Ginn; ex parte Absolute Law Lawyers & Attorneys [2015] QSC 49 provides clarification of the approach to be taken on a default costs assessment under r708 of the Uniform Civil Procedure Rules 1999.
Abstract:
A 59-year-old man was mistakenly prescribed Slow-Na instead of Slow-K due to incorrect selection from a drop-down list in the prescribing software. This error was identified by a pharmacist during a home medicine review (HMR) before the patient began taking the supplement. The reported error emphasizes the need for vigilance due to the emergence of novel look-alike, sound-alike (LASA) drug pairings. This case highlights the important role of pharmacists in medication safety.
Abstract:
The paper presents an innovative approach to modelling the causal relationships of human errors in rail crack incidents (RCI) from a managerial perspective. A Bayesian belief network is developed to model RCI by considering the human errors of designers, manufacturers, operators and maintainers (DMOM) and the causal relationships involved. A set of dependent variables whose combinations express the relevant functions performed by each DMOM participant is used to model the causal relationships. A total of 14 RCI on Hong Kong’s mass transit railway (MTR) from 2008 to 2011 are used to illustrate the application of the model. Bayesian inference is used to conduct an importance analysis assessing the impact of the participants’ errors. Sensitivity analysis is then employed to gauge the effect of an increased probability of occurrence of human errors on RCI. Finally, strategies for human error identification and RCI mitigation are proposed. The identification of the maintainer’s ability in the case study as the most important factor influencing the probability of RCI implies a priority need to strengthen the maintenance management of the MTR system, and suggests that improving the inspection ability of the maintainer is likely to be an effective strategy for RCI risk mitigation.
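To make the style of inference concrete, here is a toy network in the same spirit: four binary DMOM error nodes feeding an RCI node through a noisy-OR gate, with the importance analysis done by enumerating the joint distribution. All prior and link probabilities are invented for illustration and bear no relation to the paper's network calibrated on the 14 MTR incidents:

```python
# Importance analysis on a toy DMOM -> RCI belief network: rank each
# participant by the posterior probability of their error given an incident.
from itertools import product

priors = {"designer": 0.05, "manufacturer": 0.08,
          "operator": 0.10, "maintainer": 0.15}     # P(participant errs)
link = {"designer": 0.30, "manufacturer": 0.35,
        "operator": 0.25, "maintainer": 0.50}       # P(that error alone causes RCI)
roles = list(priors)

def p_rci_given(errors):
    """Noisy-OR gate: each present error independently fails to trigger
    an RCI with probability 1 - link[role]."""
    p_none = 1.0
    for role, e in zip(roles, errors):
        if e:
            p_none *= 1.0 - link[role]
    return 1.0 - p_none

# enumerate the 2^4 error configurations to get P(RCI) and P(error, RCI)
p_rci = 0.0
posterior = dict.fromkeys(roles, 0.0)
for errors in product([0, 1], repeat=4):
    p_cfg = 1.0
    for role, e in zip(roles, errors):
        p_cfg *= priors[role] if e else 1.0 - priors[role]
    p_joint = p_cfg * p_rci_given(errors)
    p_rci += p_joint
    for role, e in zip(roles, errors):
        if e:
            posterior[role] += p_joint

# importance: which participant's error is most likely given an incident?
for role in roles:
    print(f"P({role} error | RCI) = {posterior[role] / p_rci:.3f}")
```

With these invented numbers the maintainer dominates, mirroring the kind of conclusion the paper draws; sensitivity analysis would then perturb the priors and re-run the same enumeration.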