964 results for Roundness errors


Relevance:

20.00%

Publisher:

Abstract:

Objective: Older driver research has mostly focused on identifying the small proportion of older drivers who are unsafe. Little is known about how normal cognitive changes in aging affect driving in the wider population of adults who drive regularly. We evaluated the association of cognitive function and age with driving errors. Method: A sample of 266 drivers aged 70 to 88 years was assessed on abilities that decline in normal aging (visual attention, processing speed, inhibition, reaction time, task switching) and on the UFOV®, a validated screening instrument for older drivers. Participants completed an on-road driving test. Generalized linear models were used to estimate the associations of cognitive factors with specific driving errors and with the number of errors in self-directed and instructor-navigated conditions. Results: All error types increased with chronological age. Reaction time was not associated with driving errors in multivariate analyses. A cognitive factor measuring Speeded Selective Attention and Switching was uniquely associated with the most error types. The UFOV predicted blindspot errors and errors on dual carriageways. After adjusting for age, education and gender, the cognitive factors explained 7% of the variance in the total number of errors in the instructor-navigated condition and 4% of the variance in the self-navigated condition. Conclusion: We conclude that among older drivers, errors increase with age and are associated with speeded selective attention (particularly when it requires attending to stimuli in the periphery of the visual field), task switching, response inhibition and visual discrimination. These abilities should be the target of cognitive training.


Purpose: To demonstrate that relatively simple third-order theory can provide a framework showing how peripheral refraction can be manipulated by altering the forms of spectacle lenses. Method: Third-order equations were used to yield lens forms that correct peripheral power errors, either for the lenses alone or in combination with typical peripheral refractions of myopic eyes. These results were compared with those of finite ray-tracing. Results: The approximate forms of spherical and conicoidal lenses provided by third-order theory were flatter over a moderate myopic range than the forms obtained by rigorous ray-tracing. Lenses designed to correct peripheral refractive errors produced large errors when used with foveal vision and a rotating eye. Correcting astigmatism tended to give large mean oblique errors, and vice versa. When only spherical lens forms are used, correction of the relative hypermetropic peripheral refractions observed experimentally in myopic eyes, or the provision of relative myopic peripheral refractions in such eyes, seems impossible in the majority of cases. Conclusion: The third-order spectacle lens design approach can readily be used to show trends in peripheral refraction.


Fusion techniques have received considerable attention for achieving lower error rates in biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error in the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters 'N', the number of classifiers, and 'M', the number of attempts/samples, and facilitates the determination of error bounds on false rejects and false accepts for each specific user. Error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved in the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings, such as credit card numbers, in telephone or voice-over-internet-protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
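The trade-off controlled by 'N' and 'M' can be sketched under the independence assumption mentioned in the abstract. The following is a minimal illustration, not the paper's exact model: it assumes each of the N instance stages accepts if any of M sample attempts accepts, and the overall system accepts only if all N stages accept.

```python
def fused_error_rates(far, frr, n_instances, m_samples):
    """Fused error rates for a sequential scheme in which each of N
    instance stages ORs over M sample attempts, and the final decision
    ANDs over the N stages. Assumes independent classifier decisions
    with identical per-attempt FAR and FRR."""
    # OR over M samples within one stage: false rejects shrink,
    # false accepts accumulate.
    stage_far = 1 - (1 - far) ** m_samples
    stage_frr = frr ** m_samples
    # AND over N stages: false accepts shrink, false rejects accumulate.
    fused_far = stage_far ** n_instances
    fused_frr = 1 - (1 - stage_frr) ** n_instances
    return fused_far, fused_frr

# With 5% base error rates, 3 instances and 2 samples per instance
# drive both fused error rates well below the base rates:
far_, frr_ = fused_error_rates(0.05, 0.05, 3, 2)
```

Increasing M alone would only trade false rejects for false accepts; it is the AND across instances that pulls the accumulated false accepts back down, which is why the two parameters jointly control the trade-off.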


Fusion techniques have received considerable attention for achieving performance improvement in biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of subsequent attempts, i.e., random or adaptive. Expressions for error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based digit-dependent speaker models. Analysis incorporating correlation modelling demonstrates that the use of adaptive samples improves overall fusion performance compared with randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be reduced by a further 6% with adaptive samples. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.
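The effect of correlation between classifier decisions can be illustrated with the standard joint probability of two correlated Bernoulli events. This is a generic sketch of that textbook relation, not the paper's specific correlation model:

```python
import math

def joint_error_prob(p1, p2, rho):
    """Probability that two binary classifiers both err, when their
    error indicators are Bernoulli(p1) and Bernoulli(p2) with
    correlation coefficient rho. rho = 0 recovers the independence
    result p1 * p2; positive rho makes joint errors more likely."""
    return p1 * p2 + rho * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
```

For example, two classifiers with 10% error rates have a 1% joint error probability when independent, but 5.5% when their decisions are correlated at rho = 0.5, which is why independence-based error expressions can badly misestimate fused performance.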


Statistical dependence between classifier decisions is often shown to improve performance over statistically independent decisions. Though the solution for favourable dependence between two classifier decisions has been derived, the theoretical analysis for the general case of 'n'-classifier client and impostor decision fusion has not been presented before. This paper presents the expressions developed for favourable dependence of multi-instance and multi-sample fusion schemes that employ 'AND' and 'OR' rules. The expressions are experimentally evaluated by considering the proposed architecture for text-dependent speaker verification using HMM-based digit-dependent speaker models. The improvement in fusion performance is found to be higher when digit combinations with favourable client and impostor decisions are used for speaker verification. The total error rate of 20% for fusion of independent decisions is reduced to 2.1% for fusion of decisions that are favourable for both clients and impostors. The expressions developed here are also applicable to other biometric modalities, such as fingerprints and handwriting samples, for reliable identity verification.
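The independent-decision baseline that the favourable-dependence expressions extend can be sketched for the 'AND' and 'OR' rules over n classifiers. This is a generic illustration of the standard formulas, not the paper's derived expressions for dependent decisions:

```python
from math import prod

def and_rule_errors(fars, frrs):
    """AND rule over independent classifiers: accept only if every
    classifier accepts. A false accept requires all classifiers to
    falsely accept; a false reject requires at least one false reject."""
    far = prod(fars)
    frr = 1 - prod(1 - f for f in frrs)
    return far, frr

def or_rule_errors(fars, frrs):
    """OR rule over independent classifiers: accept if any classifier
    accepts, so the two error types swap roles relative to AND."""
    far = 1 - prod(1 - f for f in fars)
    frr = prod(frrs)
    return far, frr
```

The symmetry is the key point: AND suppresses false accepts at the cost of false rejects, OR does the opposite, and favourable dependence lets a fused system beat both independent baselines at once.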


Stormwater quality modelling results are subject to uncertainty. The variability of input parameters is an important source of overall model error. An in-depth understanding of the variability associated with input parameters can provide knowledge of the uncertainty associated with these parameters and consequently assist in uncertainty analysis of stormwater quality models and in decision making based on modelling outcomes. This paper discusses the outcomes of a research study undertaken to analyse the variability related to pollutant build-up parameters in stormwater quality modelling. The study was based on the analysis of pollutant build-up samples collected from 12 road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary appreciably even within the same land use. Therefore, using land use as a lumped parameter would introduce significant uncertainty into stormwater quality modelling. Additionally, the variability in pollutant build-up can be significant depending on the pollutant type. This underlines the importance of taking into account specific land-use characteristics and targeted pollutant species when undertaking uncertainty analysis of stormwater quality models or interpreting the modelling outcomes.
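One simple way to quantify the within-land-use variability the study describes is the coefficient of variation of build-up loads across sites of the same land use. A minimal sketch with made-up illustrative numbers, not the study's data:

```python
from statistics import mean, stdev

def coefficient_of_variation(loads):
    """Coefficient of variation (sample std / mean) of pollutant
    build-up loads, e.g. solids in g/m^2, measured at several road
    sites belonging to one land use."""
    return stdev(loads) / mean(loads)

# Hypothetical build-up loads from four residential road sites.
# A CV well above zero signals that land use is a poor lumped parameter:
residential = [1.2, 3.5, 0.8, 2.6]
cv = coefficient_of_variation(residential)
```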


Reporting of medication administration errors (MAEs) is one means by which health care facilities monitor their practice in an attempt to maintain the safest patient environment. This study examined the likelihood of registered nurses (RNs) reporting MAEs when working in Saudi Arabia. It also attempted to identify potential barriers to the reporting of MAEs. This study found that 63% of RNs raised concerns about reporting of MAEs in Saudi Arabia; nursing administration was the largest impediment affecting nurses' willingness to report MAEs. Changing attitudes towards a no-blame system and implementing anonymous reporting systems may encourage greater reporting of MAEs.


The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace means that collision avoidance is of primary concern in an uncooperative airspace environment. The ability to replicate a pilot's see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, collision avoidance has no direct way to guarantee a level of safety. Collision-scenario flight tests with two aircraft and a monocular-camera threat detection and tracking system were used to study the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied in simulation. The results show that although large angle measurement errors can significantly affect minimum-range characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
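The image-derived angle measurement at the heart of the study can be illustrated with the standard pinhole-camera relation between a detected target's pixel offset and its bearing. A generic sketch; the focal length and pixel values below are assumptions, not the flight-test system's parameters:

```python
import math

def bearing_from_pixel(u, cx, focal_px):
    """Azimuth bearing (radians) of a detected target from its
    horizontal pixel coordinate u, the image centre cx, and the
    focal length in pixels, under a simple pinhole-camera model.
    Errors in u or focal_px propagate directly into the bearing."""
    return math.atan2(u - cx, focal_px)

# A target detected 100 px right of centre with an 800 px focal length:
angle_deg = math.degrees(bearing_from_pixel(740, 640, 800.0))
```

A detection error of a few pixels therefore maps to a bearing error of a fraction of a degree at this focal length, which is the kind of angle error whose effect on avoidance performance the simulations examine.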


We have previously reported a preliminary taxonomy of patient error. However, approaches to managing patients' contribution to error have received little attention in the literature. This paper aims to assess how patients and primary care professionals perceive the relative importance of different patient errors as a threat to patient safety. It also attempts to suggest what these groups believe may be done to reduce the errors, and how. It addresses these aims through original research that extends the nominal group analysis used to generate the error taxonomy. Interviews were conducted with 11 purposively selected groups of patients and primary care professionals in Auckland, New Zealand, during late 2007. The total number of participants was 83, including 64 patients. Each group ranked the importance of possible patient errors identified through the nominal group exercise. Approaches to managing the most important errors were then discussed. There was considerable variation among the groups in the importance rankings of the errors. Our general inductive analysis of participants' suggestions revealed the content of four inter-related actions to manage patient error: Grow relationships; Enable patients and professionals to recognise and manage patient error; be Responsive to their shared capacity for change; and Motivate them to act together for patient safety. Cultivation of this GERM of safe care was suggested to benefit from 'individualised community care'. In this approach, primary care professionals individualise, in community spaces, population health messages about patient safety events. This approach may help to reduce patient error and the tension between personal and population health-care.


Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent: it is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated by applying it to text-dependent speaker verification using Hidden Markov Model based digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further shown to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples).
The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters, the number of instances and samples, serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.


Every year a number of pedestrians are struck by trains, resulting in death and serious injury. While much research has been conducted on train-vehicle collisions, very little is currently known about the aetiology of train-pedestrian collisions. To date, scant research has been undertaken to investigate the demographics of rule breakers, the frequency of deliberate violations versus errors, and the influence of the classic deterrence approach on subsequent behaviour. Aim: This study aimed to identify pedestrians' self-reported reasons for engaging in violations at crossings, the frequency and nature of rule breaking, and whether the threat of sanctions influences such events. Method: A questionnaire was administered to 511 participants of all ages. Results: Analysis revealed that pedestrians (particularly younger groups) were more likely to commit deliberate violations than to make crossing errors, e.g., mistakes. The most frequent reasons given for deliberate violations were that participants were running late and did not want to miss their train, or that participants believed the gate was taking too long to open and so might be malfunctioning. Regarding classical deterrence, an examination of the perceived threat of being apprehended and fined for a crossing violation revealed that participants reported the highest mean scores for swiftness of punishment, which suggests they were generally aware that they would receive an "on the spot" fine. However, the overall mean scores for certainty and severity of sanctions (for violating the rules) indicate that participants did not perceive these as very high. This paper further discusses the research findings with regard to the development of interventions designed to improve pedestrian crossing safety.


Purpose: To design and manufacture lenses to correct peripheral refraction along the horizontal meridian, and to determine whether these resulted in noticeable improvements in visual performance. Method: Subjective refraction of a low myope was determined on the basis of best peripheral detection acuity along the horizontal visual field out to ±30°, for both horizontal and vertical gratings. Subjective refractions were compared with objective refractions obtained using a COAS-HD aberrometer. Special lenses were made to correct peripheral refraction, based on designs optimized with and without smoothing across a 3 mm square aperture. Grating detection was retested with these lenses. Contrast thresholds for 1.25' spots were determined across the field for the conditions of best correction, on-axis correction, and the special lenses. Results: The participant had high relative peripheral hyperopia, particularly in the temporal visual field (maximum 2.9 D). There were differences > 0.5 D between subjective and objective refractions at a few field angles. On-axis correction reduced peripheral detection acuity and increased peripheral contrast threshold in the peripheral visual field, relative to the best correction, by up to 0.4 and 0.5 log units, respectively. The special lenses restored most of the peripheral vision, although not all of it at angles within ±10°, with the lens optimized with aperture smoothing possibly giving better vision at some angles than the lens optimized without it. Conclusion: It is possible to design and manufacture lenses that give near-optimum peripheral visual performance to at least ±30° along one visual field meridian. The benefit of such lenses is likely to be manifest only if a subject has a considerable relative peripheral refraction, for example of the order of 2 D.
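Peripheral refractions of the kind measured here are commonly compared as power vectors. A sketch of the standard sphero-cylinder to (M, J0, J45) conversion in Thibos notation, included as general background rather than as this study's analysis method:

```python
import math

def power_vector(sphere, cyl, axis_deg):
    """Convert a sphero-cylindrical refraction S / C x axis into
    power-vector components: the spherical equivalent M and the two
    Jackson cross-cylinder terms J0 and J45 (all in dioptres)."""
    a = math.radians(axis_deg)
    m = sphere + cyl / 2.0
    j0 = -(cyl / 2.0) * math.cos(2 * a)
    j45 = -(cyl / 2.0) * math.sin(2 * a)
    return m, j0, j45

# A low myope's refraction of -2.00 / -1.00 x 90:
m, j0, j45 = power_vector(-2.0, -1.0, 90)
```

Relative peripheral refraction is then typically reported as the difference in M between a peripheral field angle and the fovea, which is the quantity the special lenses are designed to correct.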


Texture information in the iris image is not uniform in discriminatory information content for biometric identity verification. The bits in an iris code obtained from the image differ in their consistency from one sample to another for the same identity. In this work, errors in bit strings are systematically analysed in order to investigate the effect of light-induced and drug-induced pupil dilation and constriction on the consistency of iris texture information. The statistics of bit errors are computed for client and impostor distributions as functions of radius and angle. Under normal conditions, a V-shaped radial trend of decreasing bit errors towards the central region of the iris is obtained for client matching, and it is observed that the distribution of errors as a function of angle is uniform. When iris images are affected by pupil dilation or constriction the radial distribution of bit errors is altered. A decreasing trend from the pupil outwards is observed for constriction, whereas a more uniform trend is observed for dilation. The main increase in bit errors occurs closer to the pupil in both cases.
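The radial analysis of bit errors described above can be sketched by grouping iris-code bit disagreements into radial bands. A minimal illustration with plain Python lists; the layout (rows index radius from the pupil boundary outwards, columns index angle) is an assumption for the sketch:

```python
def bit_errors_by_radius(code_a, code_b):
    """Fraction of disagreeing bits in each radial band of two
    equal-sized 2D iris codes given as lists of bit rows. A flat
    profile across rows suggests uniform consistency; a rise near
    the pupil rows is the dilation/constriction signature."""
    rates = []
    for row_a, row_b in zip(code_a, code_b):
        errors = sum(a != b for a, b in zip(row_a, row_b))
        rates.append(errors / len(row_a))
    return rates
```

The same per-row counts, accumulated over many client and impostor pairs, give the radial and angular error distributions whose shapes the abstract describes.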


The term "human error" can simply be defined as an error made by a human. Human error is an explanation for malfunctions and unintended consequences of operating a system. Many factors can cause a person to err. The aim of this paper is to investigate the relationship of human error, as one of those factors, to computer-related abuses. The paper begins by relating computer abuses to human errors, and then discusses mechanisms for mitigating these errors from social and technical perspectives. We present the 25 techniques of computer crime prevention as a heuristic device to assist in this. A final section discusses ways of improving the adoption of security measures, followed by the conclusion.


A Monte Carlo model of an Elekta iViewGT amorphous silicon electronic portal imaging device (a-Si EPID) has been validated for pre-treatment verification of clinical IMRT treatment plans. The simulations used the BEAMnrc and DOSXYZnrc Monte Carlo codes to predict the response of the iViewGT a-Si EPID model. The predicted EPID images were compared with measured images obtained by delivering a photon beam from an Elekta Synergy linac to the Elekta iViewGT a-Si EPID, which was used with no additional build-up material. Frame-averaged EPID images were acquired and processed using in-house software. The agreement between the predicted and measured images was analysed using the gamma analysis technique with acceptance criteria of 3%/3 mm. The results show that the predicted EPID images for four clinical IMRT treatment plans agree well with the measured EPID signal: three prostate IMRT plans had average gamma pass rates of more than 95.0%, and a spinal IMRT plan had an average gamma pass rate of 94.3%. During this work a routine MLC calibration was performed and one of the IMRT treatments was re-measured with the EPID; a change in the gamma pass rate for one field was observed. This motivated a series of experiments to investigate the sensitivity of the method by introducing delivery errors, in MLC position and dosimetric overshoot, into the simulated EPID images. The method was found to be sensitive to 1 mm leaf-position errors and 10% overshoot errors.
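The 3%/3 mm gamma comparison used for the validation can be sketched in one dimension. This is a simplified global-gamma illustration with a brute-force search over the evaluated profile; the profiles and grid spacing in the test are made up, and clinical implementations work on 2D images with interpolation:

```python
import math

def gamma_index_1d(ref, evalu, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Global 1D gamma index of an evaluated dose profile against a
    reference profile sampled on the same grid. dose_tol is a fraction
    of the reference maximum (3%); dist_mm is the distance-to-agreement
    criterion (3 mm). A point passes when its gamma value is <= 1."""
    dose_norm = dose_tol * max(ref)
    gammas = []
    for i, d_ref in enumerate(ref):
        best = math.inf
        for j, d_eval in enumerate(evalu):
            dist = (i - j) * spacing_mm
            # Combined distance in the normalised dose-distance space.
            g2 = (dist / dist_mm) ** 2 + ((d_eval - d_ref) / dose_norm) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

# Identical profiles agree perfectly, so every gamma value is zero:
g = gamma_index_1d([1.0, 2.0, 1.0], [1.0, 2.0, 1.0], 1.0)
```

The quoted pass rates are then simply the fraction of in-field points with gamma at or below 1.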