25 results for Railroad safety, Bayesian methods, Accident modification factor, Countermeasure selection
Abstract:
Bayesian techniques have been developed over many years in a range of different fields, but have only recently been applied to the problem of learning in neural networks. As well as providing a consistent framework for statistical pattern recognition, the Bayesian approach offers a number of practical advantages including a potential solution to the problem of over-fitting. This chapter aims to provide an introductory overview of the application of Bayesian methods to neural networks. It assumes the reader is familiar with standard feed-forward network models and how to train them using conventional techniques.
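The over-fitting control mentioned above can be illustrated in miniature: under a Gaussian prior on the weights, maximum a posteriori training reduces to penalised least squares, which shrinks the weights relative to the maximum-likelihood fit. A minimal numpy sketch (the polynomial degree and the precisions alpha and beta are illustrative choices, not values from the chapter):

```python
import numpy as np

# MAP estimate for a linear-in-features model y = Phi @ w + noise.
# A Gaussian prior w ~ N(0, I/alpha) and Gaussian noise of precision beta
# give the penalised solution w = (alpha/beta I + Phi^T Phi)^-1 Phi^T t.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)

Phi = np.vander(x, 9, increasing=True)   # degree-8 polynomial features
alpha, beta = 1.0, 25.0                  # illustrative prior/noise precisions

w_ml = np.linalg.lstsq(Phi, t, rcond=None)[0]           # maximum likelihood
A = (alpha / beta) * np.eye(Phi.shape[1]) + Phi.T @ Phi
w_map = np.linalg.solve(A, Phi.T @ t)                   # MAP / weight decay

# The prior shrinks the weight vector, taming the wild ML polynomial fit.
print(np.linalg.norm(w_ml), np.linalg.norm(w_map))
```

The ratio alpha/beta plays the role of the classical weight-decay coefficient, which is the practical link between the Bayesian prior and conventional regularised training.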
Abstract:
Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models. Elementary demonstrations are then given of the use of maximum likelihood and maximum entropy methods for tuning the model parameters and assisting their interpretation. One of the models can be used to illustrate the significance of overlapping n-tuple samples with respect to correlations in the patterns.
Abstract:
The problem of evaluating different learning rules and other statistical estimators is analysed. A new general theory of statistical inference is developed by combining Bayesian decision theory with information geometry. It is coherent and invariant. For each sample a unique ideal estimate exists and is given by an average over the posterior. An optimal estimate within a model is given by a projection of the ideal estimate. The ideal estimate is a sufficient statistic of the posterior, so practical learning rules are functions of the ideal estimator. If the sole purpose of learning is to extract information from the data, the learning rule must also approximate the ideal estimator. This framework is applicable to both Bayesian and non-Bayesian methods, with arbitrary statistical models, and to supervised, unsupervised and reinforcement learning schemes.
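The claim that the ideal estimate is an average over the posterior can be made concrete in the simplest conjugate setting. The Beta-Bernoulli numbers below are purely illustrative:

```python
import numpy as np

# Toy illustration of "the ideal estimate is an average over the posterior":
# for coin flips with a Beta(a, b) prior on the bias p, the posterior is
# Beta(a + heads, b + tails) and its mean has a closed form.
a, b = 1.0, 1.0                 # uniform prior (illustrative choice)
flips = [1, 0, 1, 1, 0, 1, 1]  # observed data: 5 heads, 2 tails
heads, tails = sum(flips), len(flips) - sum(flips)

post_mean = (a + heads) / (a + b + len(flips))  # average over the posterior

# Monte Carlo check: sampling from the posterior and averaging agrees.
rng = np.random.default_rng(0)
samples = rng.beta(a + heads, b + tails, size=200_000)
print(post_mean, samples.mean())
```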
Abstract:
We present results comparing the performance of neural networks trained with two Bayesian methods, (i) the Evidence Framework of MacKay (1992) and (ii) a Markov Chain Monte Carlo method due to Neal (1996), on a task of classifying segmented outdoor images. We also investigate the use of the Automatic Relevance Determination method for input feature selection.
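Neither paper's code is reproduced here, but the idea behind Automatic Relevance Determination can be sketched for a linear model: each input carries its own prior precision, and evidence-style re-estimation drives the precisions of irrelevant inputs to large values, effectively pruning them. The data, noise precision, and clipping bounds are invented for illustration:

```python
import numpy as np

# Sketch of Automatic Relevance Determination (ARD) for a linear model.
# Each input feature i gets its own prior precision alpha_i; MacKay-style
# re-estimation lets the data decide which inputs are relevant.
rng = np.random.default_rng(1)
N, D = 200, 5
X = rng.standard_normal((N, D))
t = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(N)  # 2-4 irrelevant

beta = 100.0                 # noise precision, assumed known for simplicity
alpha = np.ones(D)           # one prior precision per input
for _ in range(30):
    A = np.diag(alpha) + beta * X.T @ X            # posterior precision
    Sigma = np.linalg.inv(A)
    m = beta * Sigma @ X.T @ t                     # posterior mean
    gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-12, 1.0)  # effective params
    alpha = np.clip(gamma / np.maximum(m ** 2, 1e-12), 1e-6, 1e8)  # re-estimate

print(alpha)  # small alpha: relevant inputs 0 and 1; large alpha: the rest
```

The clipping is a numerical safeguard of this sketch, not part of the original formulation; the qualitative behaviour (irrelevant precisions diverging) is the point.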
Abstract:
Following adaptation to an oriented (1-d) signal in central vision, the orientation of subsequently viewed test signals may appear repelled away from or attracted towards the adapting orientation. Small angular differences between the adaptor and test yield 'repulsive' shifts, while large angular differences yield 'attractive' shifts. In peripheral vision, however, both small and large angular differences yield repulsive shifts. To account for these tilt after-effects (TAEs), a cascaded model of orientation estimation that is optimized using hierarchical Bayesian methods is proposed. The model accounts for orientation bias through adaptation-induced losses in information that arise because of signal uncertainties and neural constraints placed upon the propagation of visual information. Repulsive (direct) TAEs arise at early stages of visual processing from adaptation of orientation-selective units with peak sensitivity at the orientation of the adaptor (theta). Attractive (indirect) TAEs result from adaptation of second-stage units with peak sensitivity at theta and theta + 90 degrees, which arise from an efficient stage of linear compression that pools across the responses of the first-stage orientation-selective units. A spatial orientation vector is estimated from the transformed oriented unit responses. The change from attractive to repulsive TAEs in peripheral vision can be explained by the differing harmonic biases resulting from constraints on signal power (in central vision) versus signal uncertainties in orientation (in peripheral vision). The proposed model is consistent with recent work by computational neuroscientists in supposing that visual bias reflects the adjustment of a rational system in the light of uncertain signals and system constraints.
Abstract:
Background: Atrophy of skeletal muscle in cancer cachexia has been attributed to a tumour-produced highly glycosylated peptide called proteolysis-inducing factor (PIF). The action of PIF is mediated through a high-affinity membrane receptor in muscle. This study investigates the ability of peptides derived from the 20 N-terminal amino acids of the receptor to neutralise PIF action both in vitro and in vivo. Methods: Proteolysis-inducing factor was purified from the MAC16 tumour using an initial pronase digestion, followed by binding on DEAE cellulose, and the pronase was inactivated by heating to 80°C, before purification of the PIF using affinity chromatography. In vitro studies were carried out using C2C12 murine myotubes, while in vivo studies employed mice bearing the cachexia-inducing MAC16 tumour. Results: The process resulted in almost a 23,000-fold purification of PIF, but with a recovery of only 0.004%. Both the D- and L-forms of the 20mer peptide attenuated PIF-induced protein degradation in vitro through the ubiquitin-proteasome proteolytic pathway and increased expression of myosin. In vivo studies showed that neither the D- nor the L-peptides significantly attenuated weight loss, although the D-peptide did show a tendency to increase lean body mass. Conclusion: These results suggest that the peptides may be too hydrophilic to be used as therapeutic agents, but confirm the importance of the receptor in the action of the PIF on muscle protein degradation.
Abstract:
We propose a Bayesian framework for regression problems, which covers areas which are usually dealt with by function approximation. An online learning algorithm is derived which solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting. In the infinite dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some of the commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets are used to demonstrate the method and highlight important issues concerning the choice of priors.
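The Kalman-filter view of Bayesian regression can be sketched for the linear-Gaussian case, where the online updates provably reproduce the batch posterior. The prior and noise precisions below are illustrative, not the paper's settings:

```python
import numpy as np

# Online Bayesian linear regression: processing examples one at a time with
# Kalman-filter updates yields exactly the same posterior as a batch
# conjugate update. Model assumptions: w ~ N(0, I/alpha), noise precision beta.
rng = np.random.default_rng(2)
alpha, beta = 1.0, 25.0
w_true = np.array([0.5, -2.0])
X = rng.standard_normal((100, 2))
t = X @ w_true + rng.standard_normal(100) / np.sqrt(beta)

m = np.zeros(2)                  # posterior mean
P = np.eye(2) / alpha            # posterior covariance
for x, y in zip(X, t):
    s = x @ P @ x + 1.0 / beta   # predictive variance of this observation
    k = P @ x / s                # Kalman gain
    m = m + k * (y - x @ m)      # correct the mean with the prediction error
    P = P - np.outer(k, x @ P)   # shrink the covariance

# Batch posterior mean for comparison.
m_batch = np.linalg.solve(alpha * np.eye(2) / beta + X.T @ X, X.T @ t)
print(m, m_batch)
```

Because the model is linear-Gaussian, the order of the data does not matter and no information is lost by the sequential pass, which is what makes the online algorithm attractive.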
Abstract:
Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic that has a rough interpretation as the number of data points fitted by the model. Rather than seeking optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience. The cost function is called the Naive Description Length cost function. Finding minimum description models will be shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, rather than the mean approximated by the sum-of-squares cost function. The new cost function is shown to provide information about the structure of the data. This is done by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem. It is also tested on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
Abstract:
The literature discusses several methods to control for self-selection effects but provides little guidance on which method to use in a setting with a limited number of variables. The authors theoretically compare and empirically assess the performance of different matching methods and instrumental variable and control function methods in this type of setting by investigating the effect of online banking on product usage. Hybrid matching in combination with the Gaussian kernel algorithm outperforms the other methods with respect to predictive validity. The empirical finding of large self-selection effects indicates the importance of controlling for these effects when assessing the effectiveness of marketing activities.
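One of the compared techniques, kernel matching on a propensity score, can be sketched on synthetic data: each treated unit (e.g. an online-banking adopter) is compared with a Gaussian-kernel-weighted average of similar untreated units. The data-generating process, bandwidth h, and logistic fit are illustrative assumptions, not the authors' specification:

```python
import numpy as np

# Propensity-score matching with a Gaussian kernel, one way to control for
# self-selection. True treatment effect is set to 1.0; a naive comparison
# would be biased because x drives both selection and the outcome.
rng = np.random.default_rng(4)
n = 1000
x = rng.standard_normal(n)                      # observed covariate
p = 1 / (1 + np.exp(-0.8 * x))                  # self-selection into treatment
d = rng.random(n) < p                           # treatment indicator
y = 1.0 * d + 0.5 * x + rng.standard_normal(n)  # outcome, true effect = 1.0

# Fit the propensity model P(d=1 | x) by logistic regression (gradient ascent).
w = np.zeros(2)
Xd = np.column_stack([np.ones(n), x])
for _ in range(500):
    score = 1 / (1 + np.exp(-Xd @ w))
    w += 0.1 * Xd.T @ (d - score) / n
ps = 1 / (1 + np.exp(-Xd @ w))

# Gaussian-kernel matching on the propensity score (bandwidth is a guess).
h = 0.05
att = []
for i in np.where(d)[0]:
    k = np.exp(-(((ps[~d] - ps[i]) / h) ** 2) / 2)
    att.append(y[i] - np.average(y[~d], weights=k))
print(np.mean(att))   # close to the true effect of 1.0
```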
Abstract:
Tool life is an important factor to be considered during the optimisation of a machining process, since cutting parameters can be adjusted to optimise tool changing, reducing cost and time of production. The performance of a tool is also directly linked to the generated surface roughness, which is important in cases where there are strict surface quality requirements. The prediction of tool life and the resulting surface roughness in milling operations has attracted considerable research effort. The research reported herein is focused on defining the influence of milling cutting parameters, such as cutting speed, feed rate and axial depth of cut, on three major tool performance parameters, namely tool life, material removal and surface roughness. The research seeks to define methods that will allow the selection of optimal parameters for best tool performance when face milling 416 stainless steel bars. For this study the Taguchi method was applied in a special design of an orthogonal array that allows studying the entire parameter space with only a small number of experiments, representing savings in experimental cost and time. The findings were that cutting speed has the most influence on tool life and surface roughness and very limited influence on material removal. Lastly, tool performance can be judged from either tool life or the volume of material removed.
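A Taguchi-style analysis of the kind described above can be sketched as follows. The L9 level assignments and tool-life figures are invented for illustration, not the paper's measurements:

```python
import numpy as np

# Taguchi analysis sketch: for a "larger is better" response such as tool
# life, each parameter level is scored by its mean signal-to-noise ratio
# S/N = -10 * log10(mean(1 / y^2)); the factor whose level means spread
# most has the greatest influence on the response.
levels = np.array([  # columns: cutting speed, feed rate, axial depth (levels 0-2)
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
tool_life = np.array([41.0, 38.0, 35.0, 30.0, 28.0, 26.0, 20.0, 18.0, 15.0])

sn = -10.0 * np.log10(1.0 / tool_life ** 2)      # larger-is-better S/N, in dB

# Mean S/N per level for each factor; the widest spread marks the dominant
# factor (cutting speed here, by construction of the invented data).
effect = {name: [sn[levels[:, j] == lv].mean() for lv in range(3)]
          for j, name in enumerate(["speed", "feed", "depth"])}
spread = {k: max(v) - min(v) for k, v in effect.items()}
print(spread)
```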
Abstract:
The manufacture of copper alloy flat rolled metals involves hot and cold rolling operations, together with annealing and other secondary processes, to transform castings (mainly slabs and cakes) into such shapes as strip, plate, sheet, etc. Production is mainly to customer orders in a wide range of specifications for dimensions and properties. However, order quantities are often small and so process planning plays an important role in this industry. Much research work has been done in the past in relation to the technology of flat rolling and the details of the operations, however, there is little or no evidence of any research in the planning of processes for this type of manufacture. Practical observation in a number of rolling mills has established the type of manual process planning traditionally used in this industry. This manual approach, however, has inherent drawbacks, being particularly dependent on the individual planners who gain their knowledge over a long span of practical experience. The introduction of the retrieval CAPP approach to this industry was a first step to reduce these problems. But this could not provide a long-term answer because of the need for an experienced planner to supervise generation of any plan. It also fails to take account of the dynamic nature of the parameters involved in the planning, such as the availability of resources, operation conditions and variations in the costs. The other alternative is the use of a generative approach to planning in the rolling mill context. In this thesis, generative methods are developed for the selection of optimal routes for single orders and then for batches of orders, bearing in mind equipment restrictions, production costs and material yield. The batch order process planning involves the use of a special cluster analysis algorithm for optimal grouping of the orders. This research concentrates on cold-rolling operations. 
A prototype model of the proposed CAPP system, including both single order and batch order planning options, has been developed and tested on real order data in the industry. The results were satisfactory and compared very favourably with the existing manual and retrieval methods.
Abstract:
Purpose: To investigate whether modification of liver complement factor H (CFH) production, by alteration of liver CFH Y402H genotype through liver transplantation (LT), influences the development of age-related macular degeneration (AMD). Design: Multicenter, cross-sectional study. Participants: We recruited 223 Western European patients ≥55 years old who had undergone LT ≥5 years previously. Methods: We determined AMD status using a standard grading system. Recipient CFH Y402H genotype was obtained from DNA extracted from recipient blood samples. Donor CFH Y402H genotype was inferred from recipient plasma CFH Y402H protein allotype, measured using enzyme-linked immunosorbent assays. This approach was verified by genotyping donor tissue from a subgroup of patients. Systemic complement activity was ascertained by measuring levels of plasma complement proteins using an enzyme-linked immunosorbent assay, including substrates (C3, C4), activation products (C3a, C4a, and terminal complement complex), and regulators (total CFH, C1 inhibitor). Main Outcome Measures: We evaluated AMD status and recipient and donor CFH Y402H genotype. Results: In LT patients, AMD was associated with recipient CFH Y402H genotype (P = 0.036; odds ratio [OR], 1.6; 95% confidence interval [CI], 1.0-2.4) but not with donor CFH Y402H genotype (P = 0.626), after controlling for age, sex, smoking status, and body mass index. Recipient plasma CFH Y402H protein allotype predicted donor CFH Y402H genotype with 100% accuracy (n = 49). Plasma complement protein or activation product levels were similar in LT patients with and without AMD. Compared with previously reported prevalence figures (Rotterdam Study), LT patients demonstrated a high prevalence of both AMD (64.6% vs 37.1%; OR, 3.09; P<0.001) and the CFH Y402H sequence variation (41.9% vs 36.2%; OR, 1.27; P = 0.014). Conclusions: Presence of AMD is not associated with modification of hepatic CFH production. 
In addition, AMD is not associated with systemic complement activity in LT patients. These findings suggest that local intraocular complement activity is of greater importance in AMD pathogenesis. The high AMD prevalence observed in LT patients may be associated with the increased frequency of the CFH Y402H sequence variation. © 2013 by the American Academy of Ophthalmology. Published by Elsevier Inc.
Abstract:
Principal component analysis/factor analysis (PCA/FA) is a method of analyzing complex data sets in which there are no clearly defined X or Y variables. It has multiple uses, including the study of the pattern of variation between individual entities, such as patients with particular disorders, and the detailed study of descriptive variables. In most applications, variables are related to a smaller number of 'factors' or PCs that account for the maximum variance in the data and hence may explain important trends among the variables. An increasingly important application of the method is in the 'validation' of questionnaires that attempt to relate subjective aspects of a patient's experience with more objective measures of vision.
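The mechanics the abstract describes, standardising the variables and projecting them onto the directions of maximum variance, can be sketched minimally; the three-variable data set below is synthetic:

```python
import numpy as np

# Minimal PCA sketch: standardise the variables, eigendecompose the
# correlation matrix, and rank components by explained variance.
# Two variables share one latent factor; the third is unrelated noise.
rng = np.random.default_rng(3)
latent = rng.standard_normal(300)
X = np.column_stack([
    latent + 0.3 * rng.standard_normal(300),
    -latent + 0.3 * rng.standard_normal(300),
    rng.standard_normal(300),            # unrelated variable
])

Z = (X - X.mean(0)) / X.std(0)           # standardise each variable
corr = Z.T @ Z / len(Z)                  # correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]        # largest variance first
explained = eigvals[order] / eigvals.sum()
print(explained)   # the first PC captures the shared latent factor
```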