5 results for SET SUPERPOSITION ERROR
in Aston University Research Archive
Abstract:
This research was concerned with identifying factors which may influence human reliability within chemical process plants; these factors are referred to as Performance Shaping Factors (PSFs). Following a period of familiarization within the industry, a number of case studies were undertaken covering a range of basic influencing factors. Plant records and site 'lost time incident reports' were also used as supporting evidence for identifying and classifying PSFs. In parallel with the investigative research, the available literature pertaining to human reliability assessment and PSFs was considered in relation to the chemical process plant environment. As a direct result of this work, a PSF classification structure has been produced with an accompanying detailed listing. Phase two of the research considered the identification of important individual PSFs for specific situations. Based on the experience and data gained during phase one, it emerged that certain generic features of a task influenced PSF relevance. This led to the establishment of a finite set of generic task groups and response types. Similarly, certain PSFs influence some human errors more than others. The result was a set of error type key words, plus the identification and classification of error causes with their underlying error mechanisms. By linking all these aspects together, a comprehensive methodology has been put forward as the basis of a computerized aid for system designers. To recapitulate, the major results of this research have been: one, the development of a comprehensive PSF listing specifically for the chemical process industries, with a classification structure that facilitates future updates; and two, a model for identifying relevant PSFs and their order of priority. Future requirements are the evaluation of the PSF listing and the identification method. The latter must be considered both in terms of 'usability' and in terms of its success as a design enhancer, i.e. an observable reduction in important human errors.
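The linkage the abstract describes (generic task groups and error-type key words indexing into a PSF listing to give an order of priority) can be pictured with a small sketch. All of the table entries and names below are invented for illustration; they are not taken from the thesis's actual listing.

```python
# Hypothetical sketch of the PSF-identification linkage: generic task groups
# and error-type key words each point to PSFs, and PSFs implicated by both
# are ranked first. Every entry here is an invented example.

PSF_BY_TASK_GROUP = {
    "monitoring": ["display layout", "alarm salience", "fatigue"],
    "manual control": ["valve accessibility", "labelling", "time pressure"],
}
PSF_BY_ERROR_TYPE = {
    "omission": ["procedure quality", "fatigue", "time pressure"],
}

def relevant_psfs(task_group, error_type):
    """Rank PSFs: those implicated by both the task group and the error
    type come first, followed by the remaining task-group PSFs."""
    task = PSF_BY_TASK_GROUP.get(task_group, [])
    err = set(PSF_BY_ERROR_TYPE.get(error_type, []))
    shared = [p for p in task if p in err]
    rest = [p for p in task if p not in err]
    return shared + rest

print(relevant_psfs("monitoring", "omission"))
```

The point of the sketch is only the lookup-and-prioritise structure: the same PSF listing can be re-indexed as new task groups or error types are added, which is what makes the classification structure easy to update.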
Abstract:
The efficacy of a specially constructed Gallager-type error-correcting code for communication over a Gaussian channel is examined. The construction is based on the introduction of complex matrices, used in both encoding and decoding, which comprise sub-matrices of cascading connection values. The finite-size effects are estimated in order to compare the results with the bounds set by Shannon. The critical noise level achieved for certain code rates and infinitely large systems nearly saturates Shannon's bounds even when the connectivity used is low.
Abstract:
The performance of "typical set (pairs) decoding" for ensembles of Gallager's linear codes is investigated using statistical physics. In this decoding method, errors occur either when the information transmission is corrupted by atypical noise, or when multiple typical sequences satisfy the parity-check equations for the received corrupted codeword. We show that the average error rate for the second type of error over a given code ensemble can be accurately evaluated using the replica method, including its sensitivity to message length. Our approach generally improves on the existing analysis known in the information theory community, which was recently reintroduced in IEEE Trans. Inf. Theory 45, 399 (1999) and is believed to be the most accurate to date. © 2002 The American Physical Society.
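The parity-check mechanism underlying both Gallager-code abstracts can be illustrated with a toy example. This is only a generic sketch of a linear code's syndrome test, not the codes, ensembles, or decoding method analysed in the papers; the matrix below is an arbitrary small example.

```python
# Toy sketch of a Gallager-style linear code's parity check: a vector x is a
# codeword iff H x = 0 (mod 2). A nonzero syndrome flags channel corruption;
# decoding ambiguity arises when several plausible sequences share a syndrome.
# H is an arbitrary illustrative matrix, not one from the papers above.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, x):
    """Mod-2 syndrome H x; all-zero means x satisfies every parity check."""
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

codeword = [1, 0, 1, 1, 1, 0]          # satisfies all three checks
assert syndrome(H, codeword) == [0, 0, 0]

# The channel flips bit 0; the nonzero syndrome exposes the corruption.
received = [0, 0, 1, 1, 1, 0]
print(syndrome(H, received))           # → [1, 0, 1]
```

In the sparse, low-connectivity regime the abstracts discuss, each row of H touches only a few bits, which is what makes large-system analysis (and near-saturation of Shannon's bounds) tractable.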
Abstract:
Regression problems are concerned with predicting the values of one or more continuous quantities, given the values of a number of input variables. For virtually every application of regression, however, it is also important to have an indication of the uncertainty in the predictions. Such uncertainties are expressed in terms of error bars, which specify the standard deviation of the distribution of predictions about the mean. Accurate estimation of error bars is of practical importance, especially when safety and reliability are at issue. The Bayesian view of regression leads naturally to two contributions to the error bars. The first arises from the intrinsic noise on the target data, while the second comes from the uncertainty in the values of the model parameters, which manifests itself in the finite width of the posterior distribution over the space of these parameters. The Hessian matrix, which involves the second derivatives of the error function with respect to the weights, is needed for implementing the Bayesian formalism in general and for estimating the error bars in particular. A study of different methods for evaluating this matrix is given, with special emphasis on the outer product approximation method. The contribution of the uncertainty in model parameters to the error bars is a finite data size effect, which becomes negligible as the number of data points in the training set increases. A study of this contribution is given in relation to the distribution of data in input space. It is shown that the addition of data points to the training set can only reduce the local magnitude of the error bars or leave it unchanged. Using the asymptotic limit of an infinite data set, it is shown that the error bars have an approximate relation to the density of data in input space.
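The two error-bar contributions the abstract describes can be made concrete for the simplest possible case. The sketch below assumes a hypothetical one-parameter linear model y = w·x with a Gaussian prior; the precisions alpha and beta and the training inputs are invented for illustration, and the scalar "Hessian" uses the outer-product (Gauss-Newton) form mentioned in the abstract.

```python
# Hedged sketch of Bayesian error bars for an assumed one-parameter model
# y = w * x. Symbols: beta = inverse noise variance, alpha = prior precision;
# both values are illustrative, not from the thesis.

beta, alpha = 4.0, 1.0                 # noise precision, prior precision
xs = [0.5, 1.0, 1.5, 2.0]              # hypothetical training inputs

# Outer-product approximation to the Hessian of the regularized error
# function: A = alpha + beta * sum_n g_n^2, where g_n = dy/dw = x_n.
A = alpha + beta * sum(x * x for x in xs)

def error_bar_sq(x):
    """sigma^2(x) = intrinsic noise (1/beta) + parameter-uncertainty
    term g * A^{-1} * g, with g = dy/dw = x for this linear model."""
    g = x
    return 1.0 / beta + g * g / A

# Adding data points grows A, shrinking the second term toward zero and
# leaving only the intrinsic-noise floor 1/beta - the finite-data-size
# effect described in the abstract.
print(error_bar_sq(1.0))
```

Note how the parameter-uncertainty term grows with the gradient g, i.e. with distance from the region where the model is well constrained, which is the link to the density of data in input space mentioned at the end of the abstract.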
Abstract:
An intelligent agent, operating in an external world which cannot be fully described in its internal world model, must be able to monitor the success of a previously generated plan and to respond to any errors which may have occurred. The process of error analysis requires the ability to reason in an expert fashion about time and about processes occurring in the world. Reasoning about time is needed to deal with causality. Reasoning about processes is needed since the direct effects of a plan action can be completely specified when the plan is generated, but the indirect effects cannot. For example, the action 'open tap' leads with certainty to 'tap open', whereas whether there will be a fluid flow, and how long it might last, is more difficult to predict. The majority of existing planning systems cannot handle these kinds of reasoning, thus limiting their usefulness. This thesis argues that both kinds of reasoning require a complex internal representation of the world. The use of Qualitative Process Theory and an interval-based representation of time are proposed as a representation scheme for such a world model. The planning system which was constructed has been tested on a set of realistic planning scenarios. It is shown that even simple planning problems, such as making a cup of coffee, require extensive reasoning if they are to be carried out successfully. The final chapter concludes that the planning system described does allow the correct solution of planning problems involving complex side effects, which planners up to now have been unable to solve.
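The direct/indirect effect distinction drawn in the abstract can be sketched in a few lines. This is an assumed minimal encoding, not the thesis's Qualitative Process Theory system: the action asserts its direct effect, while indirect effects are derived by process rules evaluated against the current world state.

```python
# Minimal sketch (invented encoding, not the thesis's planner) of direct
# versus indirect effects: 'open tap' directly asserts 'tap open', while
# fluid flow is an indirect effect produced by an active process.

state = {"tap_open": False, "supply_present": True, "fluid_flow": False}

def run_processes(state):
    """Indirect effects: the fluid-flow process is active only while its
    preconditions (tap open, supply present) hold in the world model."""
    state = dict(state)
    state["fluid_flow"] = state["tap_open"] and state["supply_present"]
    return state

def apply_open_tap(state):
    """Direct effect: 'open tap' leads with certainty to 'tap open';
    any indirect consequences are left to the process rules."""
    state = dict(state)
    state["tap_open"] = True
    return run_processes(state)

after = apply_open_tap(state)
print(after["fluid_flow"])             # flows only because supply is present
```

Whether the flow actually occurs depends on facts outside the action's own specification (here, `supply_present`), which is exactly why the abstract argues the indirect effects cannot be fixed at plan-generation time.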