51 results for Constraint based modelling
Abstract:
The principle of using induction rules based on spatial environmental data to model a soil map has previously been demonstrated. Whilst the general pattern of classes of large spatial extent, and of those with a close association with geology, was delineated, small classes and the detailed spatial pattern of the map were less well rendered. Here we examine several strategies to improve the quality of the soil map models generated by rule induction. Terrain attributes that are better suited to landscape description at a resolution of 250 m are introduced as predictors of soil type. A map sampling strategy is developed. Classification error is reduced by using boosting rather than cross-validation to improve the model. Further, the benefit of incorporating the local spatial context for each environmental variable into the rule induction is examined. The best model was achieved by sampling in proportion to the spatial extent of the mapped classes, boosting the decision trees, and using spatial contextual information extracted from the environmental variables.
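To make the described strategies concrete, the sketch below shows one way such a workflow could be assembled in Python with scikit-learn and SciPy: training cells are drawn by simple random sampling (which is automatically proportional to the spatial extent of each mapped class), neighbourhood means stand in for the local spatial context of each environmental variable, and boosted decision trees are fitted. All rasters, parameter values and function names are hypothetical placeholders, not the authors' data or code.

```python
# Hypothetical sketch: proportional sampling, contextual features, boosted trees.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def add_local_context(grids, size=5):
    """Append a neighbourhood mean for each environmental grid (local spatial context)."""
    return grids + [uniform_filter(g, size=size) for g in grids]

def sample_proportional(class_map, n_samples, rng):
    """Simple random sample of cells: classes appear in proportion to their mapped extent."""
    rows, cols = np.nonzero(np.isfinite(class_map))
    idx = rng.choice(len(rows), size=n_samples, replace=False)
    return rows[idx], cols[idx]

rng = np.random.default_rng(0)
shape = (200, 200)
# Placeholder rasters standing in for 250 m terrain attributes and a mapped soil class raster.
elevation = rng.random(shape)
slope = rng.random(shape)
soil_map = rng.integers(0, 5, size=shape).astype(float)

grids = add_local_context([elevation, slope], size=5)
r, c = sample_proportional(soil_map, n_samples=2000, rng=rng)
X = np.column_stack([g[r, c] for g in grids])
y = soil_map[r, c].astype(int)

# Boosted decision trees as the rule-induction model.
model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50, random_state=0)
model.fit(X, y)
```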
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, computer modelling must be treated as an experiment analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
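As a minimal illustration of the quantitative-matrix approach mentioned above, the sketch below scores 9-mer peptides with a position-specific scoring matrix and ranks them against a threshold. The matrix values, peptides and threshold are invented placeholders; a real application would use an allele-specific matrix derived from binding data.

```python
# Illustrative sketch of quantitative-matrix scoring for 9-mer MHC binders.
# The matrix values below are invented placeholders, not a real allele-specific matrix.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(1)
# One score per amino acid per peptide position (9 positions for a 9-mer).
matrix = {pos: dict(zip(AMINO_ACIDS, rng.normal(0, 1, len(AMINO_ACIDS))))
          for pos in range(9)}

def score_peptide(peptide, matrix):
    """Sum the position-specific contributions; higher scores suggest stronger binding."""
    return sum(matrix[i][aa] for i, aa in enumerate(peptide))

def predict_binders(peptides, matrix, threshold=0.0):
    """Rank candidate peptides and keep those above a chosen score threshold."""
    scored = sorted(((score_peptide(p, matrix), p) for p in peptides), reverse=True)
    return [(s, p) for s, p in scored if s >= threshold]

candidates = ["SIINFEKLV", "ALAKAAAAM", "GLCTLVAML"]
print(predict_binders(candidates, matrix))
```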
Abstract:
We suggest a new notion of behaviour-preserving refinement based on partial-order semantics, which we call transition refinement. We introduced transition refinement for elementary (low-level) Petri nets earlier. For modelling and verifying complex distributed algorithms, high-level (algebraic) Petri nets are usually used. In this paper, we define transition refinement for algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary Petri net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We study how to prove that the replacement of a transition is a transition refinement.
Abstract:
This paper describes a practical application of MDA and reverse engineering based on a domain-specific modelling language. A well-defined metamodel of a domain-specific language is useful for verification and validation of associated tools. We apply this approach to SIFA, a security analysis tool. SIFA has evolved as requirements have changed, and it has no metamodel; hence, testing SIFA's correctness is difficult. We introduce a formal metamodelling approach to develop a well-defined metamodel of the domain. Initially, we develop a domain model in EMF by reverse engineering the SIFA implementation. Then we transform the EMF model to Object-Z using model transformation. Finally, we complete the Object-Z model by specifying system behaviour. The outcome is a well-defined metamodel that precisely describes the domain and the security properties that it analyses. It also provides a reliable basis for testing the current SIFA implementation and forward engineering its successor.
Abstract:
The anisotropic norm of a linear discrete-time-invariant system measures the sensitivity of the system output to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of the input disturbances is not exactly known. This paper develops a method for the numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
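For orientation, the a-anisotropic norm is commonly formulated in the anisotropy-based control literature roughly as follows, stated here under the usual assumptions of a stable system F with m-dimensional input and stationary Gaussian inputs W whose mean anisotropy does not exceed a; the paper's exact definitions may differ in detail.

```latex
% a-anisotropic norm and its limiting behaviour (common formulation, hedged):
\[
  {|\!|\!| F |\!|\!|}_{a}
  \;=\; \sup_{\bar{A}(W)\,\le\, a}
        \frac{\lVert F W \rVert_{2}}{\lVert W \rVert_{2}},
  \qquad
  {|\!|\!| F |\!|\!|}_{0} \;=\; \frac{\lVert F \rVert_{2}}{\sqrt{m}},
  \qquad
  {|\!|\!| F |\!|\!|}_{a} \;\longrightarrow\; \lVert F \rVert_{\infty}
  \ \text{ as } a \to \infty .
\]
```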
Abstract:
A combination of modelling and analysis techniques was used to design a six-component force balance. The balance was designed specifically for the measurement of impulsive aerodynamic forces and moments characteristic of hypervelocity shock tunnel testing using the stress wave force measurement technique. Aerodynamic modelling was used to estimate the magnitude and distribution of forces, and finite element modelling to determine the mechanical response of proposed balance designs. Simulation of balance performance was based on aerodynamic loads and mechanical responses using convolution techniques. Deconvolution was then used to assess balance performance and to guide further design modifications leading to the final balance design. (C) 2001 Elsevier Science Ltd. All rights reserved.
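The convolution/deconvolution idea can be sketched with synthetic signals: the measured output is modelled as the convolution of the applied load with the balance's impulse response, and the load is recovered by a simple Tikhonov-regularized frequency-domain deconvolution. The impulse response, load shape, noise level and regularization constant below are arbitrary illustrative choices, not the deconvolution scheme used in the paper.

```python
# Synthetic sketch: recover an impulsive load from a measured signal by
# regularized frequency-domain deconvolution (not the paper's exact method).
import numpy as np

n = 1024
t = np.arange(n) * 1e-5                                               # 10 us sampling interval (arbitrary)
impulse_response = np.exp(-t / 2e-3) * np.sin(2 * np.pi * 2000 * t)   # assumed balance response
true_load = np.where((t > 1e-3) & (t < 3e-3), 1.0, 0.0)               # step-like aerodynamic load

# Forward model: measured output is the convolution of load and impulse response, plus noise.
measured = np.convolve(true_load, impulse_response)[:n]
measured += 0.01 * np.random.default_rng(2).normal(size=n)

# Tikhonov-regularized deconvolution in the frequency domain.
H = np.fft.rfft(impulse_response)
Y = np.fft.rfft(measured)
lam = 1e-2 * np.max(np.abs(H)) ** 2
recovered_load = np.fft.irfft(np.conj(H) * Y / (np.abs(H) ** 2 + lam), n=n)
```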
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills or of ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data; their accuracy is therefore questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
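For comparison, Bond's method referred to above rests on his standard specific-energy relation, reproduced here in its usual textbook form (W the specific energy in kWh/t, Wi the Bond work index, F80 and P80 the 80% passing sizes of feed and product in micrometres, Q the throughput in t/h and P the resulting power draw in kW).

```latex
\[
  W \;=\; 10\, W_i \left( \frac{1}{\sqrt{P_{80}}} - \frac{1}{\sqrt{F_{80}}} \right),
  \qquad
  P \;=\; W \, Q .
\]
```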
Abstract:
A new ball mill scale-up procedure is developed which uses laboratory data to predict the performance of full-scale ball mill circuits. This procedure contains two laboratory tests. These laboratory tests give the data for the determination of the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of the full-scale mill circuit. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw. A worked example shows how the new ball mill scale-up procedure is executed. This worked example uses laboratory data to predict the performance of a full-scale re-grind mill circuit. This circuit consists of a ball mill in closed circuit with hydrocyclones. The full-scale ball mill has a diameter (inside liners) of 1.85 m. The scale-up procedure shows that the full-scale circuit produces a product (hydrocyclone overflow) that has an 80% passing size of 80 μm. The circuit has a recirculating load of 173%. The calculated power draw of the full-scale mill is 92 kW. (C) 2001 Elsevier Science Ltd. All rights reserved.
Model-based procedure for scale-up of wet, overflow ball mills - Part III: Validation and discussion
Abstract:
A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction which relates to the mill diameter. This suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models are not readily available to a corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases where the understanding of the underlying processes is poor. The second problem arises in cases where not all the required inputs for a model are known, or can be estimated only with a limited degree of accuracy. It then seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
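The Monte Carlo idea described above can be sketched as follows: a small feed-forward network is fitted to synthetic data, and uncertain inputs are then supplied as ranges or distributions rather than single values, with the spread of the resulting predictions summarising the model's sensitivity. The input variables, ranges and underlying relationship are invented for illustration and do not represent a real CO2 corrosion dataset.

```python
# Hypothetical sketch: propagate input uncertainty through a fitted neural
# network by Monte Carlo sampling (synthetic data, not a real CO2 corrosion model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic training set: inputs (temperature, pH, CO2 partial pressure) -> corrosion rate.
X = rng.uniform([20.0, 3.5, 0.1], [90.0, 6.5, 2.0], size=(500, 3))
y = 0.05 * X[:, 0] - 1.2 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.1, 500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
).fit(X, y)

# Monte Carlo: the engineer supplies ranges instead of single values for uncertain inputs.
n_mc = 10_000
samples = np.column_stack([
    rng.uniform(55.0, 65.0, n_mc),   # temperature known only to within a range
    rng.normal(5.0, 0.2, n_mc),      # pH with measurement uncertainty
    rng.uniform(0.8, 1.2, n_mc),     # CO2 partial pressure range
])
pred = model.predict(samples)
print(f"predicted rate: median={np.median(pred):.2f}, "
      f"5-95% interval=({np.percentile(pred, 5):.2f}, {np.percentile(pred, 95):.2f})")
```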
Abstract:
This paper proposes an integrated methodology for modelling froth zone performance in batch and continuously operated laboratory flotation cells. The methodology is based on a semi-empirical approach which relates the overall flotation rate constant to the froth depth (FD) in the flotation cell; from this relationship, a froth zone recovery (Rf) can be extracted. Froth zone recovery, in turn, may be related to the froth retention time (FRT), defined as the ratio of froth volume to the volumetric flow rate of concentrate from the cell. An expansion of this relationship to account for particles recovered both by true flotation and by entrainment provides a simple model that may be used to predict the froth performance in continuous tests from the results of laboratory batch experiments. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
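One commonly used parameterisation of these quantities, shown for illustration only (the paper's semi-empirical relationships may take a different functional form), is:

```latex
% FRT: froth retention time, V_f: froth volume, Q_c: volumetric concentrate flow rate,
% k: overall rate constant at froth depth FD, k_c: collection zone rate constant
% (estimated by extrapolating k(FD) to zero froth depth), R_f: froth zone recovery.
\[
  \mathrm{FRT} \;=\; \frac{V_f}{Q_c},
  \qquad
  R_f \;=\; \frac{k(\mathrm{FD})}{k_c} \;\approx\; \exp\!\left(-\beta\,\mathrm{FRT}\right).
\]
```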
Abstract:
A comprehensive probabilistic model for simulating microstructure formation and evolution during solidification has been developed, based on coupling a Finite Difference Method (FDM) for macroscopic modelling of heat diffusion to a modified Cellular Automaton (mCA) for microscopic modelling of nucleation, growth of microstructures and solute diffusion. The mCA model is similar to Nastac's model in its handling of solute redistribution in the liquid and solid phases, curvature and growth anisotropy, but differs in the treatment of nucleation and growth. The aim is to improve understanding of the relationship between the solidification conditions and microstructure formation and evolution. A numerical algorithm coupling the FDM and the mCA was developed. At the coarser scale, temperatures at the FDM nodes were calculated, while the nucleation-growth simulation was done at a finer scale, with the temperatures at the cell locations being interpolated from those at the coarser volumes. This model takes account of thermal, curvature and solute diffusion effects. Therefore, it can not only simulate the microstructures of alloys on both the scale of the grain size (macroscopic level) and that of the dendrite tip length (mesoscopic level), but also investigate nucleation mechanisms and growth kinetics of alloys solidified with various solute concentrations and solidification morphologies. The calculated results are compared with the grain sizes and solidification morphologies of microstructures obtained from a set of casting experiments on Al-Si alloys in graphite crucibles.
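The coupling pattern described above can be illustrated with a deliberately minimal two-scale sketch: an explicit finite-difference step advances a coarse temperature field, the temperatures are interpolated onto a finer cellular-automaton grid, and undercooled cells nucleate stochastically and grow into neighbouring liquid cells. All physical constants, probabilities and grid sizes are arbitrary placeholders; the paper's mCA additionally tracks solute diffusion, curvature and growth anisotropy, which are omitted here.

```python
# Minimal coupling sketch (arbitrary parameters): coarse FD heat diffusion drives
# nucleation and growth on a finer cellular-automaton grid. Not the paper's mCA model.
import numpy as np
from scipy.ndimage import zoom

nx_coarse, refine, steps = 20, 4, 200
alpha, dt, dx = 1e-5, 0.01, 1e-3                 # thermal diffusivity, time step, cell size
T_liq, dT_nuc = 660.0, 3.0                       # liquidus and nucleation undercooling (made up)

T = np.full((nx_coarse, nx_coarse), 665.0)       # coarse temperature field (deg C)
solid = np.zeros((nx_coarse * refine,) * 2, dtype=bool)   # fine CA grid: solid/liquid state
rng = np.random.default_rng(4)

for _ in range(steps):
    # Coarse-scale explicit finite-difference step for heat diffusion, with crude boundary cooling.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T += dt * alpha * lap
    T[0, :] -= 0.05

    # Interpolate coarse temperatures onto the fine CA grid.
    T_fine = zoom(T, refine, order=1)

    # Nucleation: undercooled liquid cells may switch to solid with a small probability.
    undercooled = (~solid) & (T_fine < T_liq - dT_nuc)
    solid |= undercooled & (rng.random(solid.shape) < 0.001)

    # Growth: liquid cells below the liquidus with a solid neighbour solidify.
    neighbours = (np.roll(solid, 1, 0) | np.roll(solid, -1, 0) |
                  np.roll(solid, 1, 1) | np.roll(solid, -1, 1))
    solid |= (~solid) & neighbours & (T_fine < T_liq)
```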
Abstract:
Many granulation plants operate well below design capacity, suffering from high recycle rates and even periodic instabilities. This behaviour cannot be fully predicted using the present models. The main objective of the paper is to provide an overview of the current status of model development for granulation processes and suggest future directions for research and development. The end-use of the models is focused on the optimal design and control of granulation plants using the improved predictions of process dynamics. The development of novel models involving mechanistically based structural switching methods is proposed in the paper. A number of guidelines are proposed for the selection of control relevant model structures. (C) 2002 Published by Elsevier Science B.V.
Abstract:
A technique based on laser light diffraction is shown to be successful in collecting on-line experimental data. Time series of floc size distributions (FSD) under different shear rates (G) and calcium additions were collected. The steady state mass mean diameter decreased with increasing shear rate G and increased when calcium additions exceeded 8 mg/l. A so-called population balance model (PBM) was used to describe the experimental data. This kind of model describes both aggregation and breakage through birth and death terms. A discretised PBM was used since analytical solutions of the integro-partial differential equations do not exist. Despite the complexity of the model, only two parameters need to be estimated: the aggregation rate and the breakage rate. The model seems, however, to lack flexibility, and its description of the floc size distribution (FSD) over time is not accurate.
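For reference, the continuous aggregation-breakage population balance underlying such models has the standard form below; the paper works with a discretised version of it, fitted with a single aggregation and a single breakage rate parameter.

```latex
% n(v,t): number density of flocs of volume v at time t, beta: aggregation kernel,
% S: breakage rate, b(v|u): distribution of fragments of size v from a parent of size u.
\[
  \frac{\partial n(v,t)}{\partial t}
  \;=\; \tfrac{1}{2}\int_{0}^{v} \beta(v-u,u)\, n(v-u,t)\, n(u,t)\, \mathrm{d}u
  \;-\; n(v,t)\int_{0}^{\infty} \beta(v,u)\, n(u,t)\, \mathrm{d}u
  \;+\; \int_{v}^{\infty} b(v \mid u)\, S(u)\, n(u,t)\, \mathrm{d}u
  \;-\; S(v)\, n(v,t).
\]
```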
Abstract:
We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. Working in this reduced space allows a model to be fitted for each component-covariance matrix whose complexity lies between that of the isotropic and the full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
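The parameter reduction comes from the factor-analytic form of each component-covariance matrix: with a p x q loading matrix B_i (q the latent factor dimension) and a diagonal matrix D_i, each of the g components needs roughly pq + p - q(q-1)/2 free covariance parameters instead of the p(p+1)/2 of an unrestricted covariance matrix (standard form; the paper's notation may differ).

```latex
\[
  \Sigma_i \;=\; B_i B_i^{\top} + D_i ,
  \qquad i = 1, \dots, g .
\]
```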