149 results for probabilistic tests
Abstract:
An extensive chloride profiling program was undertaken on concrete pier stems erected in the vicinity of the Dornoch Bridge, located at the Dornoch Firth in northeast Scotland. The pier stems were 2 m (6.562 ft) high and octagonal in plan, with 0.66 m (2.165 ft) wide faces. The piers were constructed in sets of three, with the lowest of each set in the tidal zone and the highest in the atmospheric zone. The pier stems were placed so as to represent the exposure conditions of the actual bridge piers of the Dornoch Bridge. In all, six of the pier stems were made using plain ordinary portland cement (OPC) concrete (with three of these having the surface treated with silane); the remaining three pier stems used a concrete containing Caltite as an additive. Three exposure zones were studied: the tidal zone, the splash zone, and the atmospheric zone. The tidal zone was further subdivided into two levels, defined as low-level and high-level. Chloride profiles were obtained from the different regimes over a period of 7 years for all nine pier stems. This paper describes the nature of chloride ingress and the usefulness of diffusion parameters in classifying each exposure regime. Furthermore, the effectiveness of silane and Caltite in protecting concrete from chloride ingress in the different exposure zones was studied.
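The diffusion parameters referred to here are conventionally the surface chloride level C_s and the apparent diffusion coefficient D_a, obtained by fitting each measured profile to the error-function solution of Fick's second law. A minimal curve-fitting sketch of that step (the depths and chloride contents below are invented, not the Dornoch data):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def chloride_profile(x, Cs, Da, t=7 * 365.25 * 24 * 3600):
    """Error-function solution of Fick's second law for a 7-year exposure:
    C(x, t) = Cs * (1 - erf(x / (2 * sqrt(Da * t))))."""
    return Cs * (1.0 - erf(x / (2.0 * np.sqrt(Da * t))))

# Hypothetical profile: depths (m) and chloride contents (% by mass of cement)
depth = np.array([0.005, 0.010, 0.020, 0.030, 0.040, 0.050])
chloride = np.array([0.55, 0.42, 0.24, 0.12, 0.06, 0.03])

(Cs, Da), _ = curve_fit(chloride_profile, depth, chloride, p0=[0.6, 1e-12])
print(f"surface concentration Cs = {Cs:.3f} %")
print(f"apparent diffusion coefficient Da = {Da:.3e} m^2/s")
```

Fitted D_a values of this kind are what allow the tidal, splash and atmospheric regimes to be compared on a common scale.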
Abstract:
This paper discusses the relations between extended incidence calculus and assumption-based truth maintenance systems (ATMSs). We first prove that managing labels for statements (nodes) in an ATMS is equivalent to producing incidence sets for these statements in extended incidence calculus. We then demonstrate that the justification set for a node is functionally equivalent to the implication relation set for the same node in extended incidence calculus. As a consequence, extended incidence calculus can provide justifications for an ATMS, because implication relation sets are discovered by the system automatically. We also show that extended incidence calculus provides a theoretical basis for constructing a probabilistic ATMS by associating proper probability distributions with the assumptions. In this way, we can not only produce labels for all nodes in the system, but also calculate the probability of any such node. The nogood environments can also be obtained automatically. Therefore, extended incidence calculus and the ATMS are equivalent in carrying out inferences at both the symbolic level and the numerical level. This extends a result due to Laskey and Lehner.
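In a probabilistic ATMS of this kind, the probability of a node follows from its label: enumerate the possible worlds (truth assignments to the assumptions) and sum the mass of those worlds in which at least one environment in the label holds. A small sketch with an invented label and independent assumptions (not the paper's notation or algorithm):

```python
from itertools import product

# Hypothetical independent assumptions with their probabilities
assumptions = {"A1": 0.8, "A2": 0.6, "A3": 0.5}

# Hypothetical label for a node: each environment is a set of
# assumptions that jointly supports the node.
label = [{"A1", "A2"}, {"A3"}]

def node_probability(label, assumptions):
    """Sum the probability of every world (truth assignment to the
    assumptions) in which at least one environment of the label holds."""
    names = list(assumptions)
    total = 0.0
    for world in product([True, False], repeat=len(names)):
        truth = dict(zip(names, world))
        p = 1.0
        for name, value in truth.items():
            p *= assumptions[name] if value else 1 - assumptions[name]
        if any(all(truth[a] for a in env) for env in label):
            total += p
    return total

print(node_probability(label, assumptions))  # P(A1&A2 or A3) = 0.74
```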
Abstract:
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
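A modern sketch of the winning pipeline: Fisher discriminant preprocessing followed by logistic regression, scored with accuracy and the Brier score. The data below are synthetic stand-ins for the five-marker panel, and scikit-learn's LinearDiscriminantAnalysis plays the FDA role; this is not the authors' code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for the five marker concentrations
# (FABP, GPBB, cTnI, CKMB, myoglobin); not the study data.
n = 400
y = rng.integers(0, 2, n)                      # 1 = AMI, 0 = no AMI
X = rng.normal(size=(n, 5)) + y[:, None] * np.array([1.2, 1.0, 1.5, 0.8, 0.6])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fisher discriminant analysis as a supervised preprocessing step,
# followed by logistic regression on the discriminant score.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                    LogisticRegression())
clf.fit(X_tr, y_tr)

p = clf.predict_proba(X_te)[:, 1]              # estimated P(AMI)
print("accuracy:", clf.score(X_te, y_te))
print("Brier score:", brier_score_loss(y_te, p))
```

The Brier score is the mean squared difference between the predicted probability and the 0/1 outcome, which is why it captures reliability of the probability estimates rather than discrimination alone.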
Abstract:
This paper presents the experimental results of loading tests on two 18 m span tapered-member portal frames designed to BS 5950. Deflection test results for vertical, lateral and combined loading cases are compared with the predictions given by elastic analysis to BS 5950 and shown to be favourable. The predicted ultimate capacities and modes of failure, which were by lateral-torsional buckling of the columns, were also found to agree with the experimental behaviour. The method of modelling the tapered members as a series of prismatic elements was found to give good agreement with the test results.
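Representing a tapered member as a chain of prismatic elements amounts to holding the section properties constant over each short segment. A toy illustration of the idea on a tapered cantilever (invented geometry, simple rectangular section, not the test frames): the unit-load deflection integral is evaluated segment by segment with each segment's midpoint second moment of area, and converges as the discretisation is refined.

```python
import numpy as np

E = 205e9            # steel Young's modulus (Pa)
L = 5.0              # cantilever length (m)
P = 10e3             # tip point load (N)
b = 0.15             # section width (m)
h0, h1 = 0.60, 0.30  # section depth at the support and at the tip (m)

def tip_deflection(n_segments):
    """Unit-load integral delta = int_0^L M(x) m(x) / (E I(x)) dx,
    with x measured from the tip so M(x) = P*x and m(x) = x,
    evaluated piecewise using the constant midpoint I of each
    prismatic segment."""
    x = np.linspace(0.0, L, n_segments + 1)
    xm = 0.5 * (x[:-1] + x[1:])          # segment midpoints
    h = h1 + (h0 - h1) * xm / L          # linear taper in depth
    I = b * h**3 / 12.0                  # midpoint second moment of area
    return np.sum(P * xm**2 / (E * I) * np.diff(x))

for n in (2, 4, 8, 16, 64):
    print(n, tip_deflection(n))          # converges as segments refine
```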
Abstract:
The LifeShirt is a novel ambulatory monitoring system that records cardiorespiratory measurements outside the laboratory. Validity and reliability of cardiorespiratory measurements recorded by the LifeShirt were assessed and two methods of calibrating the LifeShirt were compared. Participants performed an incremental treadmill test and a constant work rate test (65% peak oxygen uptake) on four occasions (>48 h apart) and wore the LifeShirt, COSMED system and Polar Sport Tester simultaneously. The LifeShirt was calibrated using two methods: comparison with a spirometer, and an 800 ml fixed-volume bag. Ventilation, respiratory rate, expiratory time and heart rate recorded by the LifeShirt were compared with measurements recorded by laboratory equipment. Sixteen adults participated (6M:10F); mean (SD) age 23.1 (2.9) years. Agreement between the LifeShirt and laboratory equipment was acceptable. Agreement for ventilation was improved by calibrating the LifeShirt using a spirometer. Reliability was similar for the LifeShirt and the laboratory equipment. This study suggests that the LifeShirt provides a valid and reliable method of ambulatory monitoring. © 2009 Elsevier B.V. All rights reserved.
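Both calibration methods reduce to choosing gains that map the garment's band excursions onto known volumes. A schematic sketch of the two approaches (synthetic numbers, not LifeShirt data; the two-gain ribcage/abdomen model is a common respiratory inductance plethysmography convention, not necessarily the device's internal one):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simultaneous recordings: ribcage and abdomen band
# excursions (arbitrary units) and spirometer breath volumes (litres).
n = 50
rib = rng.uniform(0.5, 2.0, n)
abd = rng.uniform(0.5, 2.0, n)
volume = 0.9 * rib + 0.6 * abd + rng.normal(0, 0.05, n)

# Spirometer method: least-squares gains mapping band excursions
# onto the spirometer volumes.
A = np.column_stack([rib, abd])
gains, *_ = np.linalg.lstsq(A, volume, rcond=None)
print("rib gain %.3f, abdomen gain %.3f" % tuple(gains))

# Fixed-volume-bag alternative: a single scale factor chosen so that
# the mean calibrated breath equals the 800 ml bag volume.
raw_breath = rib + abd
scale = 0.8 / raw_breath.mean()      # litres per arbitrary unit
print("bag scale factor:", scale)
```

The per-breath spirometer fit uses more information than the single bag constant, which is consistent with the abstract's finding that ventilation agreement improved with spirometer calibration.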
Abstract:
Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands, and by the commercial imperative to develop new drugs. Most currently available software systems are complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool.
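The abstract does not spell out the inference mechanism, but reasoning with imprecise probabilities characteristically yields interval answers. As a toy illustration only (invented intervals, not the paper's method): Fréchet bounds give the tightest interval for the joint event that a compound carries two elicited substructure features, when nothing is known about their dependence.

```python
def frechet_and(a, b):
    """Tightest bounds on P(A and B) given only the marginal
    intervals [a_lo, a_hi] and [b_lo, b_hi] (Frechet-Hoeffding)."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return (max(0.0, a_lo + b_lo - 1.0), min(a_hi, b_hi))

# Hypothetical elicited intervals from past screening runs for two
# substructure features of the base compound.
p_f1 = (0.70, 0.90)
p_f2 = (0.60, 0.80)

# Interval for a compound exhibiting both features, with no
# dependence information between the two.
print(frechet_and(p_f1, p_f2))   # -> (0.30, 0.80)
```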
Abstract:
This paper examines the finite sample properties of three testing regimes for the null hypothesis of a panel unit root against stationary alternatives in the presence of cross-sectional correlation. The tests of Bai and Ng (2004), Moon and Perron (2004), and Pesaran (2007) are assessed in the presence of multiple factors and also other non-standard situations. The behaviour of some information criteria used to determine the number of factors in a panel is examined, and new information criteria with improved properties in small-N panels are proposed. An application to the efficient markets hypothesis is also provided. The null hypothesis of a panel random walk is not rejected by any of the tests, supporting the efficient markets hypothesis in the financial services sector of the Australian Stock Exchange.
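For context, the canonical factor-number criteria in this literature are those of Bai and Ng (2002), which choose the k minimising a penalised residual variance from a principal-components fit. A sketch of the IC_p2 variant on a synthetic panel (illustrative only; this is the textbook criterion, not the new small-N criteria proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic N x T panel with r = 2 common factors (illustrative only)
N, T, r = 30, 100, 2
F = rng.normal(size=(T, r))                 # common factors
Lam = rng.normal(size=(N, r))               # factor loadings
X = (F @ Lam.T).T + rng.normal(size=(N, T))

def ic_p2(X, kmax=8):
    """Bai-Ng (2002) IC_p2: choose k minimising
    log(V(k)) + k * ((N+T)/(N*T)) * log(min(N, T)),
    where V(k) is the mean squared residual of the rank-k
    principal-components approximation."""
    N, T = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    crit = []
    for k in range(1, kmax + 1):
        fit = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k approximation
        V = np.mean((Xc - fit) ** 2)        # residual variance V(k)
        pen = k * (N + T) / (N * T) * np.log(min(N, T))
        crit.append(np.log(V) + pen)
    return int(np.argmin(crit)) + 1

print("estimated number of factors:", ic_p2(X))
```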