40 results for: 3D printing, steel bars, calibration of design values, correlation
Abstract:
The power required to operate large gyratory mills often exceeds 10 MW. Hence, optimisation of the power consumption will have a significant impact on the overall economic performance and environmental impact of the mineral processing plant. Most published models of tumbling mills (e.g. [Morrell, S., 1996. Power draw of wet tumbling mills and its relationship to charge dynamics, Part 2: An empirical approach to modelling of mill power draw. Trans. Inst. Mining Metall. (Section C: Mineral Processing Ext. Metall.) 105, C54-C62; Austin, L.G., 1990. A mill power equation for SAG mills. Miner. Metall. Process. 57-62]) do not incorporate the effect of lifter design and its interaction with mill speed and filling. Recent experience suggests that there is an opportunity for improving grinding efficiency by choosing the appropriate combination of these variables. However, it is difficult to determine the interactions of these variables experimentally in a full-scale mill. Although some work using DEM simulations has recently been published, it was basically limited to 2D. The discrete element code Particle Flow Code 3D (PFC3D) has been used in this work to model the effects of lifter height (5-25 cm) and mill speed (50-90% of critical) on the power draw and on the frequency distribution of the specific energy (J/kg) of normal impacts in a 5 m diameter autogenous (AG) mill. It was found that the distribution of impact energy is affected by the number of lifters, lifter height, mill speed and mill filling. Interactions of lifter design, mill speed and mill filling are demonstrated through three-dimensional discrete element method (3D DEM) modelling. The intensity of the induced stresses (shear and normal) on lifters, and hence the lifter wear, is also simulated. (C) 2004 Elsevier Ltd. All rights reserved.
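As a rough illustration of two quantities this abstract refers to, the sketch below (not taken from the paper) converts a percentage of critical speed into an absolute mill speed using the standard formula N_c = 42.3/sqrt(D) rpm, and bins synthetic impact energies into a specific-energy (J/kg) frequency distribution of the kind a DEM contact log might be post-processed into. The particle mass, energy values and bin edges are all invented for illustration.

```python
# Illustrative post-processing sketch for a DEM mill study (assumptions, not the paper's code).
import math
import numpy as np

def critical_speed_rpm(diameter_m: float) -> float:
    """Critical speed of a tumbling mill, N_c = 42.3 / sqrt(D) rpm, with D in metres."""
    return 42.3 / math.sqrt(diameter_m)

def impact_energy_spectrum(impact_energies_j, particle_mass_kg, bins):
    """Frequency distribution of specific impact energy (J/kg) from a list of impact energies."""
    specific = np.asarray(impact_energies_j) / particle_mass_kg
    counts, edges = np.histogram(specific, bins=bins)
    return counts, edges

if __name__ == "__main__":
    d = 5.0                       # mill diameter in metres (from the abstract)
    for frac in (0.5, 0.7, 0.9):  # 50-90% of critical, the range studied
        print(f"{frac:.0%} of critical = {frac * critical_speed_rpm(d):.1f} rpm")

    # Hypothetical impact energies standing in for a DEM contact log (illustrative only)
    rng = np.random.default_rng(0)
    energies = rng.exponential(scale=0.5, size=10_000)   # J
    counts, edges = impact_energy_spectrum(energies, particle_mass_kg=2.0,
                                           bins=np.logspace(-3, 1, 20))
    print("counts in first bins:", counts[:5])
```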
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. Using the first of these methods, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed, either by numerical instability incurred through problem ill-posedness or because a local objective function minimum has been encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This provides a useful means of inquiring into the well-posedness of a parameter estimation problem, and of detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
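The restart strategy in the second enhancement can be sketched conceptually as follows. This is not the authors' implementation: SciPy's Levenberg-Marquardt solver stands in for GML, only the visited parameter vectors (rather than full trajectories) are recorded, and new starting points are chosen by a simple maximin rule over random candidates. The toy residual function and parameter bounds are invented for illustration.

```python
# Conceptual multi-start sketch: restart a Levenberg-Marquardt fit from points that are
# maximally removed from parameter values already visited (assumptions, not the paper's code).
import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, obs):
    """Toy model with several local minima: obs ~ p0 * sin(p1 * t) + p2."""
    return p[0] * np.sin(p[1] * t) + p[2] - obs

def maximally_removed_start(visited, bounds, n_candidates=500, rng=None):
    """Pick the random candidate whose nearest visited point is farthest away (maximin rule)."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    dists = np.min(np.linalg.norm(candidates[:, None, :] - visited[None, :, :], axis=2), axis=1)
    return candidates[np.argmax(dists)]

def multistart_lm(t, obs, bounds, n_runs=5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    visited = [rng.uniform(lo, hi)]          # first start is random within the bounds
    best = None
    for _ in range(n_runs):
        res = least_squares(residuals, visited[-1], args=(t, obs), method="lm")
        visited.append(res.x)
        if best is None or res.cost < best.cost:
            best = res
        # next run starts far from everything visited so far
        visited.append(maximally_removed_start(np.array(visited), bounds, rng=rng))
    return best

if __name__ == "__main__":
    t = np.linspace(0, 10, 200)
    true = np.array([2.0, 1.3, 0.5])
    obs = true[0] * np.sin(true[1] * t) + true[2]
    fit = multistart_lm(t, obs, bounds=(np.array([0.1, 0.1, -2.0]), np.array([5.0, 5.0, 2.0])))
    print("best parameters:", fit.x, "objective:", fit.cost)
```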
Abstract:
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
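The statement that each estimated value is a weighted average of the true property field can be made concrete for a linearised problem. In the sketch below (a generic Tikhonov example, not the paper's pilot-point setup), the resolution matrix R = (J^T J + lambda^2 I)^{-1} J^T J maps true parameters to estimated ones, so each row of R holds the averaging weights for one estimated parameter, and the same generalised inverse yields a simplified post-calibration parameter covariance. The Jacobian, dimensions and noise level are invented for illustration.

```python
# Generic regularised-inversion sketch: resolution matrix and propagated parameter covariance
# for a linear model d = J p with zero-preferred-value Tikhonov regularisation (assumptions only).
import numpy as np

def tikhonov_resolution(J, lam):
    """Return (R, G): R = G J is the resolution matrix, G the regularised generalised inverse."""
    n = J.shape[1]
    G = np.linalg.solve(J.T @ J + lam**2 * np.eye(n), J.T)
    return G @ J, G

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    J = rng.normal(size=(15, 40))          # 15 observations, 40 pilot-point-like parameters (toy)
    R, G = tikhonov_resolution(J, lam=1.0)
    print("averaging weights for parameter 0 sum to:", float(R[0].sum()))
    sigma2 = 0.1**2                        # measurement noise variance (illustrative)
    post_cov = sigma2 * G @ G.T            # noise-propagation part of the parameter covariance
    print("post-calibration covariance diagonal (first 5):", np.diag(post_cov)[:5])
```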
Abstract:
The Fornax Spectroscopic Survey will use the Two-degree Field spectrograph (2dF) of the Anglo-Australian Telescope to obtain spectra for a complete sample of all 14,000 objects with 16.5 ≤ b_j ≤ 19.7 in a 12 square degree area centred on the Fornax Cluster. The aims of this project include the study of dwarf galaxies in the cluster (both known low surface brightness objects and putative normal surface brightness dwarfs) and a comparison sample of background field galaxies. We will also measure quasars and other active galaxies, any previously unrecognised compact galaxies and a large sample of Galactic stars. By selecting all objects, both stars and galaxies, independent of morphology, we cover a much larger range of surface brightness and scale size than previous surveys. In this paper we first describe the design of the survey. Our targets are selected from UK Schmidt Telescope sky survey plates digitised by the Automated Plate Measuring (APM) facility. We then describe the photometric and astrometric calibration of these data and show that the APM astrometry is accurate enough for use with the 2dF. We also describe a general approach to object identification using cross-correlations, which allows us to identify and classify both stellar and galaxy spectra. We present results from the first 2dF field. Redshift distributions and velocity structures are shown for all observed objects in the direction of Fornax, including Galactic stars, galaxies in and around the Fornax Cluster, and the background galaxy population. The velocity data for the stars show the contributions from the different Galactic components, plus a small tail to high velocities. We find no galaxies in the foreground to the cluster in our 2dF field. The Fornax Cluster is clearly defined kinematically. The mean velocity from the 26 cluster members having reliable redshifts is 1560 +/- 80 km s^-1. They show a velocity dispersion of 380 +/- 50 km s^-1. Large-scale structure can be traced behind the cluster to a redshift beyond z = 0.3. Background compact galaxies and low surface brightness galaxies are found to follow the general galaxy distribution.
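The cross-correlation approach to identifying and redshifting spectra can be sketched schematically. The code below is not the survey pipeline: it assumes spectra rebinned onto a logarithmic wavelength grid, so that a redshift becomes a pure pixel shift and the lag of the cross-correlation peak gives z = exp(lag * dlnlam) - 1; the template and "observed" spectrum are single fake emission lines invented for illustration.

```python
# Schematic redshift-by-cross-correlation sketch on a log-wavelength grid (assumptions only).
import numpy as np

def xcorr_redshift(obs_flux, template_flux, dlnlam):
    """Estimate z from the peak lag of the cross-correlation of mean-subtracted spectra."""
    o = obs_flux - obs_flux.mean()
    t = template_flux - template_flux.mean()
    cc = np.correlate(o, t, mode="full")
    lag = np.argmax(cc) - (len(t) - 1)      # positive lag = observed spectrum shifted redward
    return np.exp(lag * dlnlam) - 1.0

if __name__ == "__main__":
    dlnlam = 1e-4                            # log-wavelength pixel size (illustrative)
    n = 4000
    lnlam = np.log(4000.0) + dlnlam * np.arange(n)
    # One fake emission line as the "template", and the same line redshifted as the "observation"
    template = np.exp(-0.5 * ((lnlam - np.log(5000.0)) / 5e-4) ** 2)
    z_true = 0.05
    observed = np.exp(-0.5 * ((lnlam - np.log(5000.0 * (1 + z_true))) / 5e-4) ** 2)
    print("recovered z ~", round(xcorr_redshift(observed, template, dlnlam), 4))
```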
Abstract:
Epidemiological studies report confidence or uncertainty intervals around their estimates. Estimates of the burden of diseases and risk factors are subject to a broader range of uncertainty because of the combination of multiple data sources and value choices. Sensitivity analysis can be used to examine the effects of the social values that have been incorporated into the design of the disability-adjusted life year (DALY). Age weighting, whereby a year of healthy life lived at one age is valued differently from one lived at another age, is the most controversial value built into the DALY. The discount rate, which addresses the difference in value between current and future health benefits, has also been criticized. The distribution of the global disease burden and the rankings of various conditions are largely insensitive to alternative assumptions about the discount rate and age weighting. The major effects of discounting and age weighting are to enhance the importance of neuropsychiatric conditions and sexually transmitted infections. The Global Burden of Disease study has also been criticized for estimating mortality and disease burden for regions using incomplete and uncertain data. Including uncertain results, with uncertainty quantified to the extent possible, is preferable, however, to leaving blank cells in tables intended to provide policy makers with an overall assessment of the burden of disease. No estimate is generally interpreted as no problem. Greater investment in getting the descriptive epidemiology of diseases and injuries correct in poor countries will do vastly more to reduce uncertainty in disease burden assessments than a philosophical debate about the appropriateness of social value choices.
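The two value choices discussed here, age weighting and discounting, enter the DALY through a standard integrand, which the hedged sketch below evaluates numerically for years of life lost. The constants C = 0.1658 and beta = 0.04 are the published GBD age-weighting defaults and K switches age weighting on or off; the example age at death and remaining life expectancy are illustrative only, and the code is a generic sketch rather than the GBD software.

```python
# Numerical sketch of discounting and age weighting in years of life lost (illustrative only).
import numpy as np

def yll(age_at_death, life_expectancy, r=0.03, K=1.0, C=0.1658, beta=0.04, n=10_000):
    """Years of life lost with GBD-style age weighting (switched on by K) and discount rate r."""
    a, L = age_at_death, life_expectancy
    x = np.linspace(a, a + L, n)                    # ages spanned by the lost years
    w = K * C * x * np.exp(-beta * x) + (1.0 - K)   # age weight; reduces to 1 when K = 0
    d = np.exp(-r * (x - a))                        # continuous discounting to the age of death
    y = w * d
    dx = x[1] - x[0]
    return float(((y[:-1] + y[1:]) * dx / 2.0).sum())   # trapezoidal integration

if __name__ == "__main__":
    # Same death (age 30, 50 remaining years, illustrative numbers), different value choices
    for r, K in [(0.0, 0.0), (0.03, 0.0), (0.03, 1.0)]:
        print(f"r={r:.2f}, K={K:.0f} -> YLL = {yll(30, 50, r=r, K=K):.1f}")
```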
Abstract:
The contingent valuation method is often used for valuing environmental goods possessing use as well as non-use values. This paper investigates the relative importance of these values in relation to the existence of the wild Asian elephant. It does so by analysing results from a contingent valuation survey of a sample of urban residents living in three selected housing schemes in Colombo. We find that the major proportion of the respondents’ willingness to pay (WTP) for conservation of wild elephants is attributable to the non-use values of the elephant. However, differences in the relative importance of these values exist between those who visit national parks and those who do not. Differences in respondents’ WTP for conservation of elephants are found to be largely influenced by attitudinal and behavioural factors rather than socio-economic ones. We conclude that policymakers must recognise and take account of the importance of non-use values of the Asian elephant, if this endangered species is to survive in the long run. Nevertheless, the non-consumptive use value of elephants in Sri Lanka is also found to be substantial.
Abstract:
Design of liquid retaining structures involves many decisions to be made by the designer based on rules of thumb, heuristics, judgment, codes of practice and previous experience. The design parameters to be chosen include configuration, material, loading, etc. A novice engineer may face many difficulties in the design process. Recent developments in artificial intelligence and the emerging field of knowledge-based systems (KBS) have found widespread application in different fields. However, no attempt has been made to apply this intelligent system to the design of liquid retaining structures. The objective of this study is, thus, to develop a KBS that can assist engineers in the preliminary design of liquid retaining structures. Moreover, it can provide expert advice to the user on the selection of design criteria, design parameters and the optimum configuration based on minimum cost. The development of a prototype KBS for the design of liquid retaining structures (LIQUID), using a blackboard architecture with hybrid knowledge representation techniques including a production rule system and an object-oriented approach, is presented in this paper. An expert system shell, Visual Rule Studio, is employed to facilitate the development of this prototype system. (C) 2002 Elsevier Science Ltd. All rights reserved.
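The production-rule component of such a system can be illustrated with a toy forward-chaining sketch. This is not the LIQUID system (which was built in Visual Rule Studio on a blackboard architecture); the facts, rules and thresholds below are invented purely to show the IF-THEN mechanism.

```python
# Toy production-rule / blackboard sketch for preliminary tank design advice (invented rules).
from dataclasses import dataclass, field

@dataclass
class Blackboard:
    facts: dict = field(default_factory=dict)
    advice: list = field(default_factory=list)

RULES = [
    # (condition on facts, action on blackboard, rule name)
    (lambda f: f.get("capacity_m3", 0) > 500,
     lambda bb: bb.advice.append("consider a circular tank for large capacity"),
     "large capacity -> circular shape"),
    (lambda f: f.get("capacity_m3", 0) <= 500,
     lambda bb: bb.advice.append("a rectangular tank is usually economical"),
     "small capacity -> rectangular shape"),
    (lambda f: f.get("water_table_high", False),
     lambda bb: bb.advice.append("check flotation and use a thicker base slab"),
     "high water table -> flotation check"),
]

def forward_chain(bb: Blackboard) -> Blackboard:
    """Fire every rule whose condition holds against the asserted facts."""
    for cond, action, _name in RULES:
        if cond(bb.facts):
            action(bb)
    return bb

if __name__ == "__main__":
    bb = forward_chain(Blackboard(facts={"capacity_m3": 800, "water_table_high": True}))
    print(bb.advice)
```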
Abstract:
Supporting student learning can be difficult, especially within open-ended or loosely structured activities, which are often seen as valuable for promoting student autonomy in many curriculum areas and contexts. This paper reports an investigation into the experiences of three teachers who implemented design and technology education ideas in their primary school classrooms for the first time. The teachers did not capitalise upon many of the opportunities for scaffolding their students' learning within the open-ended activities they implemented. Limitations in the teachers' conceptual and procedural knowledge of design and technology influenced their early experiences. The study has implications for professional developers planning programs in newly introduced areas of the curriculum that aim to help teachers support learning within open-ended and loosely structured problem-solving activities. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
This paper describes a coupled knowledge-based system (KBS) for the design of liquid-retaining structures, which can handle both the symbolic knowledge processing based on engineering heuristics in the preliminary synthesis stage and the extensive numerical processing involved in the detailed analysis stage. The prototype system is developed by employing a blackboard architecture and a commercial shell, VISUAL RULE STUDIO. Its present scope covers the design of three types of liquid-retaining structures, namely a rectangular shape with one compartment, a rectangular shape with two compartments and a circular shape. Through custom-built interactive graphical user interfaces, the user is guided throughout the design process, which includes preliminary design, load specification, model generation, finite element analysis, code compliance checking and member sizing optimization. The system is also integrated with various relational databases that provide it with sectional properties, moment and shear coefficients and final member details. It can act as a consultant to assist novice designers in the design of liquid-retaining structures, with increased efficiency, optimized design output and automated record keeping. The design of a typical liquid-retaining structure is also illustrated. (C) 2003 Elsevier B.V. All rights reserved.
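The coupling of heuristic synthesis with numerical checking and member re-sizing can be caricatured in a few lines. The sketch below is an assumption-laden stand-in, not the published system: a rule of thumb proposes a wall thickness, a crude cantilever bending check under hydrostatic load substitutes for the finite element stage, and the thickness is increased until the check passes. Both the heuristic and the capacity formula are invented for illustration.

```python
# Schematic symbolic/numeric design loop for a water-retaining wall (invented heuristics and checks).
GAMMA_W = 9.81   # kN/m^3, unit weight of water

def heuristic_thickness(height_m: float) -> float:
    """Rule-of-thumb starting thickness (illustrative heuristic, not a code requirement)."""
    return max(0.2, height_m / 12.0)

def bending_ok(height_m: float, thickness_m: float, allowable_kNm: float = 150.0) -> bool:
    """Crude check: cantilever base moment M = gamma_w * H^3 / 6 per metre run versus an
    allowable moment scaled with thickness (purely illustrative capacity model)."""
    moment = GAMMA_W * height_m ** 3 / 6.0
    capacity = allowable_kNm * (thickness_m / 0.3) ** 2
    return moment <= capacity

def design_wall(height_m: float) -> float:
    t = heuristic_thickness(height_m)      # symbolic / heuristic synthesis stage
    while not bending_ok(height_m, t):     # numerical checking stage
        t += 0.05                          # member re-sizing loop
    return round(t, 2)

if __name__ == "__main__":
    for h in (3.0, 5.0, 7.0):
        print(f"wall height {h} m -> thickness {design_wall(h)} m (illustrative)")
```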