957 results for Human Error


Relevance:

60.00%

Publisher:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance:

60.00%

Abstract:

Universities are under no less pressure to adopt risk management strategies than other public and private organisations. The risk management of doctoral education is a particularly important issue, given that a doctorate is the highest academic qualification a university offers and the stakes are high in terms of assuring its quality. However, intense risk management can interfere with the intellectual and pedagogical work that is an essential part of doctoral education. This paper seeks to understand how the culture of risk meets the culture of doctoral education, and with what effect. The authors draw on sociological understandings of risk in the work of Anthony Giddens (2002) and Ulrich Beck (1992), the anthropological focus on liminality in the work of Mary Douglas (1990), and the psychological theorising of human error in the work of James Reason (1990). The paper concludes that risk consciousness brings its own risks: in particular, the potential transformation of a culture based on intellect into a culture based on compliance.

Relevance:

60.00%

Abstract:

We carried out a retrospective review of the videoconference activity records in a university-run hospital telemedicine studio. Usage records describing videoconferencing activity in the studio were compared with the billing records provided by the telecommunications company. During a seven-month period there were 211 entries in the studio log: 108 calls made from the studio and 103 calls made from a far-end location. Only 103 of the 195 calls reported by the telecommunications company were recorded in the usage log. The remaining 92 calls probably went unrecorded for one of several reasons: failed calls (a large proportion of the unrecorded calls, 57%, lasted less than 2 min, with a median of 1.6 min); misattributed call origin (calls recorded in the usage diary as initiated from the far end when actually initiated from the studio); and human error. Our study showed that manual recording of videoconference activity may not accurately reflect the actual activity taking place. Those responsible for recording and analysing videoconference activity, particularly in large telemedicine networks, should do so with care.
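
As an editorial sketch of the reconciliation performed here (the authors' actual record formats are not given, so every field and value below is invented), billed calls are matched to usage-log entries by approximate start time, and short unmatched calls are flagged as probable failures:

```python
from datetime import datetime, timedelta

# Hypothetical record format: (start_time, duration_minutes). Real
# billing and log exports will differ; this only illustrates the logic.
billing = [
    (datetime(2024, 1, 8, 9, 0), 1.4),   # short call: likely failed
    (datetime(2024, 1, 8, 10, 0), 35.0),
    (datetime(2024, 1, 9, 14, 2), 28.5),
]
usage_log = [
    (datetime(2024, 1, 8, 10, 1), 34.0),
    (datetime(2024, 1, 9, 14, 0), 28.0),
]

TOLERANCE = timedelta(minutes=5)  # allow clock skew between the two systems

def matches(billed, logged):
    """A billed call matches a logged call if their start times are close."""
    return abs(billed[0] - logged[0]) <= TOLERANCE

unmatched = [b for b in billing if not any(matches(b, u) for u in usage_log)]
probable_failures = [b for b in unmatched if b[1] < 2.0]

print(f"{len(unmatched)} billed call(s) missing from the usage log")
print(f"{len(probable_failures)} lasted under 2 min (probable failed calls)")
```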

Relevance:

60.00%

Abstract:

This dissertation investigates the important and timely problem of modelling human expertise. The issue arises in any computer system that emulates human decision making, and it is prominent in Clinical Decision Support Systems (CDSS) because of the complexity of the induction process and, in most cases, the vast number of parameters. Other issues, such as human error and missing or incomplete data, present further challenges. In this thesis, the Galatean Risk Screening Tool (GRiST) is used as an example of modelling clinical expertise and parameter elicitation. The tool is a mental health clinical record management system with a top layer of decision support capabilities, and it is currently being deployed by several NHS mental health trusts across the UK. The aim of the research is to investigate the problem of parameter elicitation by inducing parameters from real clinical data rather than from the human experts who provided the decision model. The induced parameters provide an insight both into the data relationships and into how experts make decisions themselves. The outcomes help further our understanding of human decision making and, in particular, help GRiST provide more accurate emulations of risk judgements. Although the algorithms and methods presented in this dissertation are applied to GRiST, they can be adopted for other human knowledge engineering domains.
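
As an editorial illustration only (GRiST's actual model is a hierarchical "galatean" tree, and its induction algorithms are not reproduced here), the sketch below shows the general idea of inducing parameters from records of clinicians' judgements rather than eliciting them from experts, using an off-the-shelf logistic regression; all cue names and data are invented:

```python
# Generic illustration of parameter induction from clinical records
# rather than expert elicitation; NOT GRiST's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical anonymised records: each row holds three risk cues,
# each label is the clinician's binary risk judgement for that case.
X = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.7],
              [0.8, 0.2, 0.4],
              [0.1, 0.9, 0.6],
              [0.7, 0.3, 0.2],
              [0.3, 0.7, 0.8]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The fitted coefficients play the role of induced parameters: their
# relative magnitudes hint at which cues drive the recorded judgements.
for i, w in enumerate(model.coef_[0], start=1):
    print(f"cue_{i}: induced weight = {w:+.3f}")
```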

Relevance:

60.00%

Abstract:

It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately, and with what degree of flexibility, parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling product and process verification systems. The aim of this research is to develop a system for capturing a company's knowledge and expertise and then reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict trends in the manufacture of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems; measurement is, like any other process, subject to variability, and reducing this variation via an automated machine probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same characteristics, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, owing to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have distinctive features such as low thermal conductivity, due to the high proportion of nickel in their composition. This causes a high surface temperature on the workpiece at the machining stage, which leads to deformation in the final product.

Like every process, material variations have a significant impact on machining quality. The main causes of variation are chemical composition and mechanical hardness; the non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat treatment standards are designed for processing the material to the desired hardness levels based on the application. In order to take corrective actions, a study of the material aspects of superalloys has been conducted. In this study, samples from different batches of material have been analysed, involving material preparation for microscopy analysis and the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
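
As a worked illustration of the SPC element (a sketch under assumptions, not the authors' actual chart design), the snippet below computes Shewhart X-bar and R control limits from hypothetical subgroups of five in-line probe readings, using the standard tabulated constants for n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114):

```python
# Shewhart X-bar and R control limits for subgroups of size n = 5.
# The measurements are hypothetical in-line probe readings (mm).
subgroups = [
    [10.01, 10.03, 9.98, 10.00, 10.02],
    [10.02, 9.99, 10.01, 10.03, 10.00],
    [9.97, 10.00, 10.02, 9.99, 10.01],
]
A2, D3, D4 = 0.577, 0.0, 2.114  # standard tabulated constants for n = 5

xbars = [sum(g) / len(g) for g in subgroups]   # subgroup means
ranges = [max(g) - min(g) for g in subgroups]  # subgroup ranges
xbarbar = sum(xbars) / len(xbars)              # grand mean (X-bar centre line)
rbar = sum(ranges) / len(ranges)               # mean range (R centre line)

print(f"X-bar chart: centre {xbarbar:.4f}, "
      f"UCL {xbarbar + A2 * rbar:.4f}, LCL {xbarbar - A2 * rbar:.4f}")
print(f"R chart:     centre {rbar:.4f}, "
      f"UCL {D4 * rbar:.4f}, LCL {D3 * rbar:.4f}")
```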

Relevance:

60.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

60.00%

Abstract:

Human operators are unique in their decision-making capability, judgment, and nondeterminism. Their sense of judgment, unpredictable decision procedures, and susceptibility to environmental factors can cause them to erroneously execute a given task description when operating a computer system. Usually, a computer system is protected against some erroneous human behaviors by safeguard mechanisms put in place, but other erroneous behaviors can lead to severe or even fatal consequences, especially in safety-critical systems. A generalized methodology for modeling and analyzing the interactions between computer systems and human operators, in which the operators are allowed to deviate from their prescribed behaviors, provides a formal understanding of the robustness of a computer system against possible aberrant behaviors by its operators.

We provide several methodologies to assist in modeling and analyzing human behaviors exhibited while operating computer systems. Every human operator is usually given a specific recommended set of guidelines for operating a system. We first present a process-algebraic methodology for modeling and verifying recommended human task execution behavior, and we show how one can perform runtime monitoring of a computer system operated by a human for violations of temporal safety properties. We next consider the concept of a protection envelope, a wider class of behaviors than those strictly prescribed by a human task that can still be tolerated by a system, and provide a framework for determining whether a computer system can maintain its guarantees when its operators stay within their protection envelopes. The framework also helps to determine the robustness of the computer system under weakening of the protection envelopes. In this regard, we present a tool called Tutela that assists in implementing the framework.

We then examine the ability of a system to remain safe under broad classes of variations of the prescribed human task, developing a framework that addresses two issues. First, given a human task specification and a protection envelope, will the protection envelope properties still hold under standard erroneous executions of that task by the human operators? In other words, how robust is the protection envelope? Second, in the absence of a protection envelope, can we approximate a protection envelope encompassing those standard erroneous human behaviors that can be safely endured by the system? We present an extension of Tutela that implements this framework.

Both frameworks use Concurrent Game Structures (CGS) as models for computer systems and their human operators. This formalism, however, has shortcomings for our purposes. We add incomplete-information concepts to CGS to achieve better modularity for the players, and we introduce nondeterminism in both the transition system and the strategies of the players modeling human operators and computer systems. Nondeterministic action strategies for players in an incomplete-information Nondeterministic CGS (iNCGS) give a more precise formalism for modeling human behaviors exhibited while operating a computer system. We show how to reason about a human behavior satisfying a guarantee by providing a semantics of Alternating-Time Temporal Logic based on iNCGS player strategies.

In a nutshell, this dissertation provides formal methodology for modeling and analyzing system robustness against both expected and erroneous human operator behaviors.
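
As a purely editorial sketch of the runtime-monitoring idea described above (not the Tutela tool, and not the dissertation's formalism), the snippet below checks a stream of operator actions against one invented temporal safety property: the operator must never issue "start" while the machine guard is open.

```python
# Toy runtime monitor for a temporal safety property. The property,
# action names, and traces are all invented for illustration.
def monitor(actions):
    guard_closed = False
    for step, action in enumerate(actions):
        if action == "close_guard":
            guard_closed = True
        elif action == "open_guard":
            guard_closed = False
        elif action == "start" and not guard_closed:
            # Safety violated: report where the erroneous step occurred.
            return f"violation at step {step}: start issued with guard open"
    return "trace satisfies the safety property"

# A prescribed execution, then an erroneous one (a skipped step, one
# of the standard classes of operator error).
print(monitor(["open_guard", "load_part", "close_guard", "start"]))
print(monitor(["open_guard", "load_part", "start"]))
```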