929 results for Probabilistic Error Correction
Model-based procedure for scale-up of wet, overflow ball mills - Part III: Validation and discussion
Abstract:
A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction which relates to the mill diameter. This suggests that, within the range of mill diameter studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter.
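The abstract benchmarks the new procedure against Bond's method; for context, Bond's classical relation for ball-mill specific grinding energy is sketched below. This is the standard, well-known formula, not the new scale-up procedure the paper develops, and the example values are illustrative.

```python
# Bond's "third theory" relation for specific grinding energy:
#   W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80))
# with W in kWh/t, work index Wi in kWh/t, and the 80%-passing feed (F80)
# and product (P80) sizes in micrometres.
def bond_specific_energy(work_index_kwh_t: float, f80_um: float, p80_um: float) -> float:
    return 10.0 * work_index_kwh_t * (p80_um ** -0.5 - f80_um ** -0.5)

# Example with illustrative values: Wi = 14 kWh/t, F80 = 2000 um, P80 = 150 um.
print(round(bond_specific_energy(14.0, 2000.0, 150.0), 2), "kWh/t")  # ~8.3 kWh/t
```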
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, there are many people who need to code models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors. Identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
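The residual-localisation step lends itself to a short illustration. The sketch below is a minimal, hypothetical reading of that step, assuming each error class i comes with a feature matrix F_i whose columns span the residual subspace that class imposes; the paper's observer design is not reproduced, and all names are illustrative.

```python
import numpy as np

def isolate_error_class(residual, feature_matrices):
    """Return the index of the feature matrix whose column space best explains
    the residual, together with the remaining projection error."""
    best_class, best_err = None, np.inf
    for i, F in enumerate(feature_matrices):
        # Least-squares projection of the residual onto span(F).
        coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
        err = np.linalg.norm(residual - F @ coeffs)
        if err < best_err:
            best_class, best_err = i, err
    return best_class, best_err
```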
Abstract:
The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models do not appear to be readily available to the corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases where the understanding of the underlying processes is poor. The second problem arises in cases when not all the required inputs for a model are known, or when they can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters.
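A minimal sketch of the Monte Carlo idea follows: each uncertain input is given a range rather than a point value, the ranges are sampled repeatedly, and the spread of predictions is summarised. `predict_corrosion_rate` stands in for any trained point-prediction model (such as the neural network above) and is hypothetical, as is the uniform sampling choice.

```python
import random
import statistics

def monte_carlo_prediction(predict_corrosion_rate, input_ranges, n_samples=10_000):
    """Propagate uniform input ranges through a point-prediction model."""
    predictions = []
    for _ in range(n_samples):
        # Draw one value per input from its (lo, hi) range.
        sample = {name: random.uniform(lo, hi)
                  for name, (lo, hi) in input_ranges.items()}
        predictions.append(predict_corrosion_rate(**sample))
    return statistics.mean(predictions), statistics.stdev(predictions)
```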
Abstract:
Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third order statistics of these points of attraction are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
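As a rough illustration of what an ultrametricity test looks like (not the paper's third-order statistics): in an ultrametric space every triple of points forms an isosceles triangle whose two largest pairwise distances are equal, so the fraction of triples satisfying d(a, c) <= max(d(a, b), d(b, c)) within a tolerance gives a crude score. Here `points` would be weight vectors at the points of attraction; all names are illustrative.

```python
import itertools
import numpy as np

def ultrametric_fraction(points, tol=1e-6):
    """Fraction of point triples whose sorted distances d1 <= d2 <= d3
    satisfy the ultrametric condition d3 ~= d2."""
    satisfied, total = 0, 0
    for a, b, c in itertools.combinations(points, 3):
        d = sorted([np.linalg.norm(a - b),
                    np.linalg.norm(b - c),
                    np.linalg.norm(a - c)])
        total += 1
        if d[2] - d[1] <= tol * max(d[2], 1.0):  # two largest sides equal
            satisfied += 1
    return satisfied / total if total else float("nan")
```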
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly-used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease the accuracy of haplotype frequency estimation and reconstruction, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype, but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
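A toy sketch of the kind of perturbation studied above, purely for illustration: random genotyping error is injected into a 0/1 allele matrix at a fixed rate, mimicking the noise whose downstream effect on haplotype estimation the paper quantifies. The coding scheme and names are invented, not taken from the study.

```python
import random

def inject_genotyping_error(genotypes, error_rate=0.01, seed=0):
    """Return a copy of a 0/1 allele matrix with calls flipped at error_rate.
    Rows are chromosomes, columns are SNPs."""
    rng = random.Random(seed)
    return [[1 - g if rng.random() < error_rate else g for g in row]
            for row in genotypes]
```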
Abstract:
The purpose of this study was threefold. First, the study was designed to illustrate the use of data and information collected in food safety surveys in a quantitative risk assessment; in this case the focus was on the food service industry, but similar data from other parts of the food chain could be incorporated in the same way. The second objective was to quantitatively describe and better understand the role that the food service industry plays in the safety of food. The third objective was to illustrate the additional decision-making information that becomes available when uncertainty and variability are incorporated into the modelling of such systems.
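A common way to incorporate uncertainty and variability separately is a two-dimensional Monte Carlo: an outer loop samples uncertain parameters, an inner loop samples variability across individual servings. The sketch below is a generic illustration of that idea under invented, toy distributions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_uncertainty, n_variability = 200, 1000

risk_p95 = []
for _ in range(n_uncertainty):
    # Outer loop: uncertain parameter (imperfect knowledge of the true mean).
    mean_log_dose = rng.normal(loc=2.0, scale=0.5)
    # Inner loop: variability across individual servings.
    log_dose = rng.normal(loc=mean_log_dose, scale=1.0, size=n_variability)
    risk = 1.0 - np.exp(-(10 ** log_dose) * 1e-4)  # toy exponential dose-response
    risk_p95.append(np.percentile(risk, 95))

print("95th-percentile serving risk: median %.3g, 90%% interval (%.3g, %.3g)"
      % (np.median(risk_p95), np.percentile(risk_p95, 5), np.percentile(risk_p95, 95)))
```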
Abstract:
Regional commodity forecasts are being used increasingly in agricultural industries to enhance their risk management and decision-making processes. These commodity forecasts are probabilistic in nature and are often integrated with a seasonal climate forecast system. The climate forecast system is based on a subset of analogue years drawn from the full climatological distribution. In this study we sought to measure forecast quality for such an integrated system. We investigated the quality of a commodity (i.e. wheat and sugar) forecast based on a subset of analogue years in relation to a standard reference forecast based on the full climatological set. We derived three key dimensions of forecast quality for such probabilistic forecasts: reliability, distribution shift, and change in dispersion. A measure of reliability was required to ensure no bias in the forecast distribution. This was assessed via the slope of the reliability plot, which was derived from examination of probability levels of forecasts and associated frequencies of realizations. The other two dimensions related to changes in features of the forecast distribution relative to the reference distribution. The relationship of 13 published accuracy/skill measures to these dimensions of forecast quality was assessed using principal component analysis in case studies of commodity forecasting using seasonal climate forecasting for the wheat and sugar industries in Australia. There were two orthogonal dimensions of forecast quality: one associated with distribution shift relative to the reference distribution and the other associated with relative distribution dispersion. Although the conventional quality measures aligned with these dimensions, none measured both adequately. We conclude that a multi-dimensional approach to assessment of forecast quality is required and that simple measures of reliability, distribution shift, and change in dispersion provide a means for such assessment. The analysis presented was also relevant to measuring quality of probabilistic seasonal climate forecasting systems. The importance of retaining a focus on the probabilistic nature of the forecast and avoiding simplifying, but erroneous, distortions was discussed in relation to applying this new forecast quality assessment paradigm to seasonal climate forecasts.
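The three quality dimensions can be illustrated with a short sketch, assuming a forecast issued as a set of analogue-year values and a reference climatological set. Reliability is summarised here by the least-squares slope of realised outcomes against forecast exceedance probabilities (a slope near 1 is ideal); shift and dispersion compare the forecast and reference distributions. This illustrates the dimensions only, not the paper's exact formulation.

```python
import numpy as np

def forecast_quality(forecasts, realisations, reference):
    """forecasts: (n_events, n_analogues) array; realisations, reference: 1-D arrays."""
    threshold = np.median(reference)
    # Reliability: forecast probability of exceeding the reference median
    # versus the observed outcome, summarised by a regression slope.
    p_forecast = (forecasts > threshold).mean(axis=1)
    outcome = (realisations > threshold).astype(float)
    slope = np.polyfit(p_forecast, outcome, 1)[0]
    # Distribution shift and relative dispersion, pooled over events.
    shift = forecasts.mean() - reference.mean()
    dispersion_ratio = forecasts.std() / reference.std()
    return slope, shift, dispersion_ratio
```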
Abstract:
The association between adverse childhood experiences and the onset of depression or chronic pain in adult life has been documented, as has the relationship between symptoms of chronic pain and depression. However, few studies have evaluated the role of exposure to adverse childhood experiences in the occurrence of this comorbidity. The objective of this work is to evaluate the influence of exposure to adverse childhood experiences on the occurrence of chronic pain, of depression, and of the chronic pain and depression comorbidity in adult life, in a sample of the general adult population (over 18 years of age) living in the metropolitan region of São Paulo, Brazil. The data come from the São Paulo Megacity Mental Health Survey. Respondents were assessed using the version of the World Health Organization Composite International Diagnostic Interview developed for the World Mental Health Survey (WMH-CIDI), which comprises clinical and non-clinical modules providing diagnoses according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV). A total of 5,037 individuals were interviewed, with an overall response rate of 81.3%. Descriptive analyses were performed for means and proportions, and associations (odds ratios, OR) between adverse childhood experiences, chronic pain and depression were estimated through logistic regression. All analyses were performed with the statistical package Data Analysis and Statistical Software version 12.0 (STATA 12.0), using two-tailed tests at a 5% significance level. A high prevalence of chronic pain was found (31%, standard error [SE] = 0.8). Chronic pain was associated with anxiety disorders (OR = 2.3; 95% CI = 1.9–3.0), mood disorders (OR = 3.3; 95% CI = 2.6–4.4) and any mental disorder (OR = 2.7; 95% CI = 2.3–3.3). Childhood adversities were strongly associated with respondents presenting comorbid chronic pain and depression, particularly physical abuse (OR = 2.7; 95% CI = 2.1–3.5) and sexual abuse (OR = 7.4; 95% CI = 3.4–16.1).
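For readers unfamiliar with how the reported odds ratios arise: for a single binary exposure, the odds ratio from a 2x2 table equals the exponentiated coefficient of a logistic regression on that exposure. The sketch below demonstrates this on invented data; the study itself used STATA 12.0, and every value here is a toy.

```python
import numpy as np

rng = np.random.default_rng(1)
exposed = rng.integers(0, 2, size=5000).astype(bool)  # toy childhood-adversity flag
p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * exposed)))     # true log-odds effect of 1.0
outcome = rng.random(5000) < p                         # toy comorbidity indicator

# 2x2 table odds ratio; for one binary predictor this equals exp(beta)
# from the corresponding logistic regression.
a = np.sum(exposed & outcome);  b = np.sum(exposed & ~outcome)
c = np.sum(~exposed & outcome); d = np.sum(~exposed & ~outcome)
print("OR =", (a * d) / (b * c))  # should be near e ~ 2.72
```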
Abstract:
Pectus excavatum is the most common congenital deformity of the anterior thoracic wall. The surgical correction of this deformity, using the Nuss procedure, consists of placing a personalized convex prosthesis in a sub-sternal position to correct the deformity. The aim of this work is to substitute ultrasound imaging for the CT scan in the pre-operative diagnosis and pre-modeling of the prosthesis, in order to avoid exposing the patient to radiation. To accomplish this, ultrasound images are acquired along an axial plane, and a rigid registration method is applied to obtain the spatial transformation between subsequent images. These images are overlapped to reconstruct an axial plane equivalent to a CT slice. A phantom was used to conduct preliminary experiments, and the achieved results were compared with the corresponding CT data, showing that the proposed methodology is capable of creating a valid approximation of the anterior thoracic wall, which can be used to model/bend the prosthesis.
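A simplified sketch of the stitching idea described above: the shift between each pair of consecutive frames is estimated (here by phase correlation, a translation-only stand-in for the paper's rigid registration) and the pairwise shifts are accumulated to place every frame in a common coordinate frame. Function and variable names are illustrative.

```python
import cv2

def accumulate_offsets(frames):
    """frames: list of 2-D float32 arrays; returns a global (dx, dy) per frame."""
    offsets = [(0.0, 0.0)]
    for prev, curr in zip(frames, frames[1:]):
        (dx, dy), _response = cv2.phaseCorrelate(prev, curr)
        ox, oy = offsets[-1]
        offsets.append((ox + dx, oy + dy))  # compose consecutive pairwise shifts
    return offsets
```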
Abstract:
Pectus Carinatum (PC) is a chest deformity consisting of the anterior protrusion of the sternum and adjacent costal cartilages. Non-operative corrections, such as the orthotic compression brace, require prior information about the patient's chest surface to improve the overall brace fit. This paper focuses on the validation of the Kinect scanner for the modelling of an orthotic compression brace for the correction of Pectus Carinatum. To this end, a phantom chest wall surface was acquired using two scanner systems, Kinect and Polhemus FastSCAN, and compared against CT data. The results show an RMS error of 3.25 mm between the CT data and the surface mesh from the Kinect sensor, and of 1.5 mm for the FastSCAN sensor.
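The RMS figure quoted above can be computed, once the scanned surface is aligned to the CT reference, as the root mean square of nearest-neighbour distances from scanned vertices to reference points. The sketch below shows that standard computation; it is not the paper's pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_error(scanned_points, reference_points):
    """RMS nearest-neighbour distance from scanned vertices (M, 3)
    to samples of the reference (CT) surface (N, 3)."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(scanned_points)  # nearest reference point per vertex
    return float(np.sqrt(np.mean(distances ** 2)))
```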
Abstract:
Pectus Carinatum is a deformity of the chest wall, characterized by an anterior protrusion of the sternum, and is often corrected surgically for cosmetic reasons. This work presents an alternative to the current open surgery option, proposing a novel technique based on a personalized orthosis. Two different processes for the personalization of the orthosis are presented: one based on a 3D laser scan of the patient's chest, followed by reconstruction of the thoracic wall mesh using a radial basis function, and a second based on a computed tomography scan followed by a neighbouring-cells algorithm. The axial position where the orthosis is to be located is calculated automatically using a ray-triangle intersection method, whose outcome is input to a pseudo-Kochanek interpolating spline method to define the orthosis curvature. Results show no significant differences between the patient's chest physiognomy and the curvature angle and size of the orthosis, allowing a better cosmetic outcome and less initial discomfort.
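The ray-triangle intersection step mentioned above is commonly implemented with the Moller-Trumbore algorithm; the sketch below is that standard algorithm, not code from the paper. It returns the distance t along the ray to the hit point, or None on a miss.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test for a ray against triangle (v0, v1, v2)."""
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det  # distance along the ray
    return t if t > eps else None
```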