979 results for Human Errors
Abstract:
The term “human error” can simply be defined as an error made by a human. Human error is often invoked to explain malfunctions and unintended consequences that arise from operating a system, and many factors can lead a person to commit such errors. The aim of this paper is to investigate human error as one of the factors contributing to computer-related abuses. The paper begins by relating human errors to computing, and then discusses mechanisms for mitigating these errors from social and technical perspectives. We present the 25 techniques of computer crime prevention as a heuristic device to assist this discussion. A final section discusses ways of improving the adoption of security measures, followed by the conclusion.
Abstract:
Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system where a human being takes the measurement with an instrument following a preset process, the measurement error could be due to the instrument, the process, or the human being involved. The first part of the study is devoted to understanding human errors in measurement. For that, selected person-related and work-related factors that could affect measurement errors have been identified. Though these factors are well known, the exact extent of the error, and of the effect of different factors on human errors in measurement, is less well reported. Human errors in measurement are characterized through an experimental study using different subjects, in which the factors were changed one at a time and the measurements made by the subjects recorded. From the pre-experiment survey studies, it is observed that respondents could not give correct answers to questions about the extent of human-related measurement errors. This confirmed the concerns expressed regarding the lack of knowledge about the extent of such errors among professionals associated with quality. In the post-experiment phase of the survey, however, answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were based on the experimental study. It is hoped that this work will help users of measurement in practice to better understand and manage the phenomenon of human-related errors in measurement.
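To make the characterization step concrete, here is a minimal Python sketch (the readings, reference value, and factor levels are hypothetical assumptions, not the study's data) of quantifying the extent of human measurement error when one factor is varied at a time:

```python
# Hypothetical sketch: extent of human measurement error per factor level,
# varying one factor at a time as in the experimental design described above.
import numpy as np

TRUE_VALUE = 25.400  # mm; reference dimension of the measured artefact (assumed)

# readings[factor_level] -> repeated measurements by different subjects
readings = {
    "good_lighting": np.array([25.41, 25.39, 25.40, 25.42, 25.38]),
    "poor_lighting": np.array([25.46, 25.33, 25.49, 25.31, 25.44]),
}

for level, xs in readings.items():
    errors = xs - TRUE_VALUE
    print(f"{level}: bias = {errors.mean():+.3f} mm, spread (SD) = {errors.std(ddof=1):.3f} mm")
```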
Abstract:
The paper presents an innovative approach to modelling the causal relationships of human errors in rail crack incidents (RCI) from a managerial perspective. A Bayesian belief network is developed to model RCI by considering the human errors of designers, manufacturers, operators and maintainers (DMOM) and the causal relationships involved. A set of dependent variables whose combinations express the relevant functions performed by each DMOM participant is used to model the causal relationships. A total of 14 RCI on Hong Kong’s mass transit railway (MTR) from 2008 to 2011 are used to illustrate the application of the model. Bayesian inference is used to conduct an importance analysis assessing the impact of the participants’ errors. Sensitivity analysis is then employed to gauge the effect of an increased probability of occurrence of human errors on RCI. Finally, strategies for human error identification and mitigation of RCI are proposed. The case study identifies the maintainer’s ability as the most important factor influencing the probability of RCI, implying both a priority need to strengthen the maintenance management of the MTR system and that improving the inspection ability of maintainers is likely to be an effective strategy for RCI risk mitigation.
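As a rough illustration of the modelling approach, the sketch below builds a toy Bayesian network with the pgmpy library. It is not the paper's calibrated model: the node set is reduced to two DMOM participants and every probability is an invented placeholder.

```python
# Toy Bayesian network in the spirit of the RCI model; all numbers are
# illustrative assumptions, not values from the paper.
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

# Human-error nodes for two of the DMOM participants feed the incident node.
model = BayesianNetwork([("OperatorError", "RCI"), ("MaintainerError", "RCI")])

model.add_cpds(
    TabularCPD("OperatorError", 2, [[0.97], [0.03]]),    # P(no error), P(error)
    TabularCPD("MaintainerError", 2, [[0.95], [0.05]]),
    TabularCPD(                                          # P(RCI | parent errors)
        "RCI", 2,
        [[0.999, 0.90, 0.85, 0.60],                      # row 0: no incident
         [0.001, 0.10, 0.15, 0.40]],                     # row 1: incident
        evidence=["OperatorError", "MaintainerError"],
        evidence_card=[2, 2],
    ),
)
assert model.check_model()

infer = VariableElimination(model)
baseline = infer.query(["RCI"]).values[1]
# Sensitivity in the spirit of the paper: condition on a maintainer error.
conditioned = infer.query(["RCI"], evidence={"MaintainerError": 1}).values[1]
print(f"P(RCI) = {baseline:.4f};  P(RCI | maintainer error) = {conditioned:.4f}")
```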
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption continues to grow. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms that avoid the manual invocation of preprocessing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench, which automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionality available in the KNIME workbench.
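To make the automated-import step concrete, here is a rough Python sketch, not K-Surfer itself (which is a KNIME plug-in), of reading a FreeSurfer stats table such as aseg.stats into a data frame; the file path is hypothetical:

```python
# Sketch of importing a FreeSurfer *.stats file: '#' lines carry metadata and
# the '# ColHeaders' line names the whitespace-separated data columns.
import pandas as pd

def read_freesurfer_stats(path: str) -> pd.DataFrame:
    columns, rows = [], []
    with open(path) as fh:
        for line in fh:
            if line.startswith("# ColHeaders"):
                columns = line.split()[2:]      # drop '#' and 'ColHeaders'
            elif line.startswith("#") or not line.strip():
                continue                        # skip metadata and blank lines
            else:
                rows.append(line.split())
    return pd.DataFrame(rows, columns=columns)  # values kept as strings for brevity

# df = read_freesurfer_stats("subject01/stats/aseg.stats")  # hypothetical path
# print(df[["StructName", "Volume_mm3"]].head())
```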
Abstract:
The term human factors is used by professionals in various fields to understand the behavior of human beings at work. A human being carrying out a cooperative activity with a computer system may cause an undesirable situation in his or her task. This paper starts from the principle that human errors may be considered a cause of, or a factor contributing to, a series of accidents and incidents in the many diverse fields in which human beings interact with automated systems. We propose a simulator of erroneous performance with the potential to assist the Human-Computer Interaction (HCI) project manager in the construction of critical systems.
Abstract:
One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of medical errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.
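The sketch below illustrates, with assumed categories rather than the authors' published taxonomy, how an action-based taxonomy keyed to Norman's stages could drive a reporting system that both categorizes an error and suggests an intervention:

```python
# Illustrative mapping from stages of Norman's action cycle to example error
# types and interventions; the entries are assumptions for demonstration.
NORMAN_STAGES = {
    "goal":           "wrong therapeutic goal chosen",
    "intention":      "correct goal, wrong intention formed",
    "action_spec":    "intention right, action sequence mis-specified",
    "execution":      "slip while carrying out the action (e.g. wrong dose keyed in)",
    "perception":     "relevant signal on the monitor not noticed",
    "interpretation": "signal noticed but misread",
    "evaluation":     "outcome checked against the wrong expectation",
}

INTERVENTIONS = {
    "execution":  "forcing functions and confirmation dialogs",
    "perception": "more salient alarms and displays",
}

def triage(stage: str) -> str:
    """Categorize a reported error by stage and suggest an intervention."""
    desc = NORMAN_STAGES.get(stage, "unclassified")
    fix = INTERVENTIONS.get(stage, "review with domain experts")
    return f"{stage}: {desc} -> suggested intervention: {fix}"

print(triage("execution"))
```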
Abstract:
Experiments with simulators allow psychologists to better understand the causes of human errors and build models of cognitive processes to be used in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, in contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system which incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra. Patterns of behaviour are expressed as temporal logic properties. A model-checking technique is then used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete, and a new behavioural pattern is identified which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
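The paper performs this check with CSP and a model checker; the toy Python sketch below conveys only the underlying idea of a completeness check, with behaviour patterns modelled as predicates over finite traces and all event names invented for illustration:

```python
# Toy completeness check: find model traces matched by no behaviour pattern.
from typing import Callable, Dict, List

Trace = List[str]

def act_then_confirm(t: Trace) -> bool:
    return "act" in t and "confirm" in t and t.index("act") < t.index("confirm")

def timeout_recovery(t: Trace) -> bool:
    return "timeout" in t and "retry" in t

patterns: Dict[str, Callable[[Trace], bool]] = {
    "act_then_confirm": act_then_confirm,
    "timeout_recovery": timeout_recovery,
}

model_traces: List[Trace] = [
    ["act", "confirm"],
    ["timeout", "retry", "act", "confirm"],
    ["act", "abort"],   # matched by no pattern: the decomposition is incomplete
]

uncovered = [t for t in model_traces if not any(p(t) for p in patterns.values())]
print("uncovered traces:", uncovered)   # evidence that a pattern is missing
```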
Abstract:
This research was concerned with identifying factors which may influence human reliability within chemical process plants; these factors are referred to as Performance Shaping Factors (PSFs). Following a period of familiarization within the industry, a number of case studies were undertaken covering a range of basic influencing factors. Plant records and site 'lost time incident reports' were also used as supporting evidence for identifying and classifying PSFs. In parallel to the investigative research, the available literature pertaining to human reliability assessment and PSFs was considered in relation to the chemical process plant environment. As a direct result of this work, a PSF classification structure has been produced with an accompanying detailed listing. Phase two of the research considered the identification of important individual PSFs for specific situations. Based on the experience and data gained during phase one, it emerged that certain generic features of a task influenced PSF relevance. This led to the establishment of a finite set of generic task groups and response types. Similarly, certain PSFs influence some human errors more than others. The result was a set of error-type keywords, plus the identification and classification of error causes with their underlying error mechanisms. By linking all these aspects together, a comprehensive methodology has been put forward as the basis of a computerized aid for system designers. To recapitulate, the major results of this research have been: first, the development of a comprehensive PSF listing specifically for the chemical process industries, with a classification structure that facilitates future updates; and second, a method for identifying relevant PSFs and their order of priority. Future requirements are the evaluation of the PSF listing and of the identification method. The latter must be considered both in terms of usability and in terms of its success as a design enhancer, that is, an observable reduction in important human errors.
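As a sketch of the kind of computerized aid envisaged, the linkage from generic task groups to prioritized PSFs can be represented as a simple queryable structure; the entries below are illustrative, not the thesis's actual listing:

```python
# Illustrative linkage of generic task groups to PSFs in priority order.
PSF_BY_TASK_GROUP = {
    "monitoring":          ["vigilance support", "display design", "shift length"],
    "manual_control":      ["control layout", "feedback immediacy", "training"],
    "procedure_following": ["procedure quality", "labelling", "time pressure"],
}

def relevant_psfs(task_group: str, top_n: int = 2) -> list:
    """Return the highest-priority PSFs recorded for a generic task group."""
    return PSF_BY_TASK_GROUP.get(task_group, [])[:top_n]

print(relevant_psfs("monitoring"))   # ['vigilance support', 'display design']
```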
Abstract:
A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object with known geometry, which enables the assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from reference values of −1.07±0.52 mm standard deviation (SD) for edge distances and −0.647±0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables the assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are about the same size as the scan resolution.
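A minimal sketch of the reported evaluation step follows; the reference length and measured distances below are made up for illustration (the paper itself reports −1.07±0.52 mm SD for edge distances):

```python
# Deviations of measured control-object distances from the known reference,
# summarised as mean +/- sample standard deviation.
import numpy as np

reference_mm = 40.0                                       # assumed edge length of the control object
measured_mm = np.array([38.9, 39.1, 38.8, 39.0, 38.95])   # hypothetical values from the 3D models

deviation = measured_mm - reference_mm
print(f"mean deviation: {deviation.mean():+.2f} mm ± {deviation.std(ddof=1):.2f} mm SD")
```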
Abstract:
Maintenance trains travel in convoy. In Australia, only the first train of the convoy pays attention to the track signalization (the other convoy vehicles simply follow the preceding vehicle). Because of human errors, collisions can happen between the maintenance vehicles. Although an anti-collision system based on a laser distance meter is already in operation, the existing system has a limited range due to the curvature of the tracks. In this paper, we introduce an anti-collision system based on vision. The two main ideas are (1) to warp the camera image into an image where the rails are parallel, through a projective transform, and (2) to track the two rail curves simultaneously by evaluating small parallel segments. The performance of the system is demonstrated on an image dataset.
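A minimal OpenCV sketch of idea (1) follows; the four point correspondences are assumptions for illustration, since in practice they would come from calibrating the camera on the vehicle:

```python
# Warp the camera image with a projective transform so the rails map to two
# parallel vertical lines ("bird's-eye" rectification).
import cv2
import numpy as np

img = cv2.imread("track.png")                  # hypothetical input frame
h, w = img.shape[:2]

# Image positions of the left/right rails near and far from the camera...
src = np.float32([[430, 700], [850, 700], [560, 300], [720, 300]])
# ...mapped to two parallel vertical lines in the warped image.
dst = np.float32([[450, 700], [830, 700], [450, 0], [830, 0]])

M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (w, h))   # rails now (ideally) parallel
cv2.imwrite("track_warped.png", warped)
```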
Abstract:
Vigilance declines when people are exposed to highly predictable and uneventful tasks. Monotonous tasks provide little cognitive and motor stimulation and contribute to human errors. This paper aims to model and detect vigilance decline in real time through participants’ reaction times during a monotonous task. A lab-based experiment adapting the Sustained Attention to Response Task (SART) is conducted to quantify the effect of monotony on overall performance. Relevant parameters are then used to build a model detecting hypovigilance throughout the experiment. The accuracy of different mathematical models in detecting lapses in vigilance in real time, minute by minute, is compared. We show that monotonous tasks can lead to an average decline in performance of 45%. Furthermore, vigilance modelling can detect vigilance decline through reaction times with an accuracy of 72% and a 29% false alarm rate. Bayesian models are identified as better at detecting lapses in vigilance than Neural Networks and Generalised Linear Mixed Models. This modelling could be used as a framework to detect vigilance decline in any human performing monotonous tasks.
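As a hedged sketch of the detection idea, not the authors' model, reaction times can be summarised over sliding windows and each window classified with a simple Bayesian classifier; all data below are simulated:

```python
# Window-level vigilance classification from reaction times (simulated data).
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def window_features(rts: np.ndarray) -> list:
    # mean RT, RT variability, and proportion of slow (>0.5 s) responses
    return [rts.mean(), rts.std(), (rts > 0.5).mean()]

# Simulated windows: vigilant (fast, stable) vs hypovigilant (slow, variable).
vigilant = [window_features(rng.normal(0.35, 0.05, 60)) for _ in range(100)]
hypo     = [window_features(rng.normal(0.55, 0.15, 60)) for _ in range(100)]

X = np.array(vigilant + hypo)
y = np.array([0] * 100 + [1] * 100)

clf = GaussianNB().fit(X[::2], y[::2])        # train on every other window
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```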
Abstract:
Intelligent Transport Systems (ITS) have the potential to substantially reduce the number of crashes caused by human errors at railway level crossings. Such systems, however, will only exert an influence on driving behaviour if they are accepted by the driver. This study aimed to assess driver acceptance of different ITS interventions designed to enhance driver behaviour at railway crossings. Fifty-eight participants, divided into three groups, took part in a driving simulator study in which three ITS devices were tested: an in-vehicle visual ITS, an in-vehicle audio ITS, and an on-road valet system. Driver acceptance of each ITS intervention was assessed with a questionnaire guided by the Technology Acceptance Model and the Theory of Planned Behaviour. Overall, results indicated that the strongest intentions to use the ITS devices belonged to participants exposed to the on-road valet system at passive crossings. The utility of both models in explaining drivers’ intention to use the systems is discussed, with results showing greater support for the Theory of Planned Behaviour. Directions for future studies, along with strategies that target attitudes and subjective norms to increase drivers’ behavioural intentions, are also discussed.
Abstract:
Intelligent Transport Systems (ITS) have the potential to substantially reduce the number of crashes caused by human errors at railway level crossings. However, such systems could overwhelm drivers, generate different types of driver errors, and have negative effects on safety at level crossings. The literature shows increasing interest in new ITS for improving driver situational awareness at level crossings, as well as evaluations of the effects of such systems on compliance. To our knowledge, the potential negative effects of such technologies have not yet been comprehensively evaluated. This study aimed to assess the effect on drivers’ cognitive load of different ITS interventions designed to enhance driver behaviour at railway crossings. Fifty-eight participants took part in a driving simulator study in which three ITS devices were tested: an in-vehicle visual ITS, an in-vehicle audio ITS, and an on-road valet system. Driver cognitive load was objectively and subjectively assessed for each ITS intervention. Objective data were collected from a heart rate monitor and an eye tracker, while subjective data were collected with the NASA-TLX questionnaire. Overall, results indicated that the three trialled technologies did not result in significant changes in cognitive load while approaching crossings.
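For concreteness, the subjective measure mentioned above, the NASA-TLX, reduces in its 'raw' form to the mean of six 0-100 subscale ratings; the ratings in this sketch are hypothetical, and the full TLX additionally weights the subscales by 15 pairwise comparisons:

```python
# Raw NASA-TLX workload score: the unweighted mean of six subscale ratings.
tlx_ratings = {
    "mental_demand": 55, "physical_demand": 10, "temporal_demand": 60,
    "performance": 30, "effort": 50, "frustration": 25,
}
raw_tlx = sum(tlx_ratings.values()) / len(tlx_ratings)
print(f"raw NASA-TLX workload: {raw_tlx:.1f} / 100")
```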
Abstract:
Substation Automation Systems have undergone many transformational changes triggered by improvements in technology. Prior to the digital era, it made sense to confirm that the physical wiring matched the schematic design by meticulous and laborious point-to-point testing. In this way, human errors in either the design or the construction could be identified and fixed prior to entry into service. However, even though modern secondary systems are now largely computerised, we still undertake commissioning testing using the same philosophy, as if each signal were hard-wired. This is slow and tedious and does not do justice to modern computer systems and software automation. One of the major architectural advantages of the IEC 61850 standard is that it “abstracts” the definition of data and services independently of any protocol, allowing them to be mapped to any protocol that can meet the modelling and performance requirements. On this basis, any substation element can be defined using these common building blocks and made available at the design, configuration and operational stages of the system. The primary advantage of accessing data using this methodology, rather than the traditional position-based method (such as DNP 3.0), is that generic tools can be created to manipulate data. Self-describing data contains the information that these tools need to manipulate different data types correctly. More importantly, self-describing data makes the interface between programs robust and flexible. This paper proposes that the improved data definitions and the methods for dealing with this data within a tightly bound and compliant IEC 61850 Substation Automation System could largely remove the need for traditional point-to-point testing. Using the outcomes of an undergraduate thesis project, we demonstrate with some certainty that it is possible to automatically test the configuration of a protection relay by comparing the IEC 61850 configuration extracted from the relay against its SCL file, for multiple relay vendors. The software tool provides a quick and automatic check that the data sets on a particular relay are correct according to its CID file, thus ensuring that no unexpected modifications have been made at any stage of the commissioning process. This tool has been implemented in a Java programming environment using an open-source IEC 61850 library to facilitate the server-client association with the relay.
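The paper's tool is implemented in Java against an open-source IEC 61850 library; the Python sketch below shows only the offline half of the comparison idea, extracting the data sets declared in an SCL/CID file so they can be diffed against those read back from the relay (stubbed here as a hypothetical call):

```python
# Extract DataSet definitions (name -> set of FCDA members) from an SCL/CID
# file, ready to be compared with the data sets reported by the relay.
import xml.etree.ElementTree as ET

SCL_NS = "{http://www.iec.ch/61850/2003/SCL}"

def datasets_from_cid(path: str) -> dict:
    root = ET.parse(path).getroot()
    out = {}
    for ds in root.iter(SCL_NS + "DataSet"):
        members = {
            f'{f.get("ldInst")}/{f.get("lnClass")}.{f.get("doName")}[{f.get("fc")}]'
            for f in ds.findall(SCL_NS + "FCDA")
        }
        out[ds.get("name")] = members
    return out

# expected = datasets_from_cid("relay.cid")   # hypothetical file
# actual = read_datasets_from_relay()         # hypothetical MMS client call
# for name in expected.keys() | actual.keys():
#     if expected.get(name) != actual.get(name):
#         print(f"mismatch in data set {name!r}")
```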