949 results for Entity-relationship Models
Abstract:
Blast fragmentation can have a significant impact on the profitability of a mine. An optimum run of mine (ROM) size distribution is required to maximise the performance of downstream processes. If this fragmentation size distribution can be modelled and controlled, the operation will have made a significant advancement towards improving its performance. Blast fragmentation modelling is an important step in Mine to Mill™ optimisation. It allows the estimation of blast fragmentation distributions for a number of different rock mass, blast geometry, and explosive parameters. These distributions can then be modelled in downstream mining and milling processes to determine the optimum blast design. When a blast hole is detonated, rock breakage occurs in two different stress regions - compressive and tensile. In the first region, compressive stress waves form a 'crushed zone' directly adjacent to the blast hole. The second region, termed the 'cracked zone', occurs outside the crushed zone. The widely used Kuz-Ram model does not recognise these two blast regions. In the Kuz-Ram model the mean fragment size from the blast is approximated and is then used to estimate the remaining size distribution. Experience has shown that this model predicts the coarse end reasonably accurately, but it can significantly underestimate the amount of fines generated. As part of the Australian Mineral Industries Research Association (AMIRA) P483A Mine to Mill™ project, the Two-Component Model (TCM) and Crush Zone Model (CZM), developed by the Julius Kruttschnitt Mineral Research Centre (JKMRC), were compared and evaluated against measured ROM fragmentation distributions. An important criterion for this comparison was the variation of model results from measured ROM in the fine to intermediate section (1-100 mm) of the fragmentation curve. This region of the distribution is important for Mine to Mill™ optimisation.
The comparison of modelled and Split ROM fragmentation distributions has been conducted in harder ores (UCS greater than 80 MPa). Further work involves modelling softer ores. The comparisons will be continued with future site surveys to increase confidence in the comparison of the CZM and TCM to Split results. Stochastic fragmentation modelling will then be conducted to take into account variation of input parameters. A window of possible fragmentation distributions can be compared to those obtained by Split. Following this work, an improved fragmentation model will be developed in response to these findings.
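The Kuz-Ram approach summarised above - approximate the mean fragment size, then extend it to a full size distribution - can be sketched numerically. This is only an illustration of the commonly published form of the model (Kuznetsov equation plus a Rosin-Rammler distribution), not the AMIRA/JKMRC code, and all blast parameter values below are hypothetical.

```python
import math

def kuznetsov_mean_size(A, K, Q, E):
    """Kuznetsov equation: mean fragment size x50 (cm).
    A: rock factor, K: powder factor (kg/m^3),
    Q: explosive mass per hole (kg), E: relative weight strength (ANFO = 100)."""
    return A * K**-0.8 * Q**(1 / 6) * (115.0 / E)**(19 / 30)

def rosin_rammler_passing(x, x50, n):
    """Fraction passing size x for a Rosin-Rammler distribution whose
    median is x50; n is the uniformity index."""
    xc = x50 / (math.log(2.0) ** (1.0 / n))  # characteristic size: 50% passing at x50
    return 1.0 - math.exp(-((x / xc) ** n))

# hypothetical blast parameters
x50 = kuznetsov_mean_size(A=7.0, K=0.6, Q=120.0, E=100.0)
for x_mm in (1, 10, 100):
    p = rosin_rammler_passing(x_mm / 10.0, x50, n=1.4)  # convert mm to cm
    print(f"{x_mm:>4} mm: {100 * p:5.1f}% passing")
```

The coarse end of such a curve is often acceptable, but, as the abstract notes, this single-distribution form tends to underestimate fines - which is what motivates the two-region (crushed/cracked zone) models being compared.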
Abstract:
A new algebraic Bethe ansatz scheme is proposed to diagonalize classes of integrable models relevant to the description of Bose-Einstein condensation in dilute alkali gases. This is achieved by introducing the notion of Z-graded representations of the Yang-Baxter algebra. (C) 2003 American Institute of Physics.
Abstract:
The standard mathematical models in population ecology assume that a population's growth rate is a function of its environment. In this paper we investigate an alternative proposal according to which the rate of change of the growth rate is a function of the environment and of environmental change. We focus on the philosophical issues involved in such a fundamental shift in theoretical assumptions, as well as on the explanations the two theories offer for some of the key data such as cyclic populations. We also discuss the relationship between this move in population ecology and a similar move from first-order to second-order differential equations championed by Galileo and Newton in celestial mechanics.
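The shift described above can be written out explicitly; the notation below is illustrative rather than taken from the paper. In the standard first-order framework the per-capita growth rate depends on the environment, whereas in the alternative it is the rate of change of the growth rate that does:

```latex
\[
\underbrace{\frac{1}{N}\frac{dN}{dt} = f(E)}_{\text{standard (first-order)}}
\qquad\text{versus}\qquad
\underbrace{\frac{d}{dt}\!\left(\frac{1}{N}\frac{dN}{dt}\right) = g\!\left(E,\ \frac{dE}{dt}\right)}_{\text{alternative (second-order)}}
\]
```

Here \(N\) is population size and \(E\) the environment; the move from the left form to the right is structurally analogous to the velocity-to-acceleration shift in mechanics that the paper discusses.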
Abstract:
Despite the strong influence of plant architecture on crop yield, most crop models either ignore it or deal with it in a very rudimentary way. This paper demonstrates the feasibility of linking a model that simulates the morphogenesis and resultant architecture of individual cotton plants with a crop model that simulates the effects of environmental factors on critical physiological processes and resulting yield in cotton. First the varietal parameters of the models were made concordant. Then routines were developed to allocate the flower buds produced each day by the crop model amongst the potential positions generated by the architectural model. This allocation is done according to a set of heuristic rules. The final weight of individual bolls and the shedding of buds and fruit caused by water, N, and C stresses are processed in a similar manner. Observations of the positions of harvestable fruits, both within and between plants, made under a variety of agronomic conditions that had resulted in a broad range of plant architectures were compared to those predicted by the model with the same environmental inputs. As illustrated by comparisons of plant maps, the linked models performed reasonably well, though performance of the fruiting point allocation and shedding algorithms could probably be improved by further analysis of the spatial relationships of retained fruit. (C) 2002 Elsevier Science Ltd. All rights reserved.
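The central linking step above - allocating the crop model's daily flower buds among the architectural model's potential positions by heuristic rules - can be sketched as follows. The rule used here (fill the lowest-ranked vacant main-stem positions first) is a hypothetical stand-in for the paper's actual rule set, and the function and data names are illustrative.

```python
def allocate_buds(n_buds, positions):
    """Allocate the day's flower buds (from the crop model) to potential
    fruiting positions (from the architectural model).
    positions: list of (node_rank, occupied) pairs; the heuristic here
    simply fills the lowest-ranked vacant positions first."""
    allocated = []
    for rank, occupied in sorted(p for p in positions if not p[1]):
        if len(allocated) == n_buds:
            break
        allocated.append(rank)
    return allocated

# hypothetical day: 3 buds to place, five candidate positions on the main stem
positions = [(5, False), (2, False), (7, True), (1, False), (4, False)]
print(allocate_buds(3, positions))  # lowest-ranked vacant positions first
```

Boll weight assignment and stress-induced shedding would, per the abstract, be handled by analogous rules applied to the same position list.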
Abstract:
We present two integrable spin ladder models which possess a general free parameter besides the rung coupling J. The models are exactly solvable by means of the Bethe ansatz method and we present the Bethe ansatz equations. We analyze the elementary excitations of the models which reveal the existence of a gap for both models that depends on the free parameter. (C) 2003 American Institute of Physics.
Abstract:
Cropp and Gabric [Ecosystem adaptation: do ecosystems maximise resilience? Ecology. In press] used a simple phytoplankton-zooplankton-nutrient model and a genetic algorithm to determine the parameter values that would maximize the value of certain goal functions. These goal functions were to maximize biomass, maximize flux, maximize flux to biomass ratio, and maximize resilience. It was found that maximizing goal functions maximized resilience. The objective of this study was to investigate whether the Cropp and Gabric result was indicative of a general ecosystem principle, or peculiar to the model and parameter ranges used. This study successfully replicated the Cropp and Gabric experiment for a number of different model types; however, a different interpretation of the results is made. A new metric, concordance, was devised to describe the agreement between goal functions. It was found that resilience has the highest concordance of all goal functions trialled for most model types. This implies that resilience offers a compromise between the established ecological goal functions. The parameter value range used is found to affect the parameter versus goal function relationships. Local maxima and minima affected the relationship between parameters and goal functions, and between goal functions. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
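The core of back-to-back testing in these two abstracts can be sketched simply: drive two independent implementations of the same model with identical inputs and examine the residuals, which stay at zero only while the implementations agree. The sketch below shows only this basic residual-generation step with a hypothetical one-line model and a deliberately seeded bug; the papers' modified observer, feature matrices, and subspace-based isolation are not reproduced here.

```python
def model_a(x, k):
    """First implementation: one step of first-order decay (hypothetical model)."""
    return x - k * x

def model_b(x, k):
    """Second, nominally independent implementation of the same step;
    the sign flip below is a deliberately introduced coding error."""
    return x + k * x  # bug: should be x - k * x

def residuals(impl_a, impl_b, x0, k, steps):
    """Run both implementations on identical inputs and return the
    per-step output differences (the back-to-back residuals)."""
    res, xa, xb = [], x0, x0
    for _ in range(steps):
        xa, xb = impl_a(xa, k), impl_b(xb, k)
        res.append(xa - xb)
    return res

r = residuals(model_a, model_b, x0=1.0, k=0.1, steps=3)
print(r)  # nonzero residuals flag a discrepancy between the implementations
```

In the papers' scheme the residuals are engineered so that each error type leaves a recognisable geometric signature; here they merely signal that a discrepancy exists.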
Abstract:
Background Latent left ventricular (LV) dysfunction in patients with valvular or myocardial disease may be identified by loss of contractile reserve (CR) at exercise echocardiography. Contraction in the LV longitudinal axis may be more sensitive than radial contraction to minor disturbances of LV function. We sought to determine whether tissue Doppler measurement of longitudinal function could be used to identify CR. Methods Exercise echocardiography was performed in 86 patients (20 women, age 53 ± 18 years), 72 with asymptomatic or minimally symptomatic mitral regurgitation, and 14 normal controls. Pulsed-wave tissue Doppler imaging (DTI) was used to measure maximum annular systolic velocity at rest and stress. Inducible ischemia was excluded by analysis of wall motion by an experienced observer. CR was defined by ≥5% improvement of stress compared with rest ejection fraction (EF). Exercise capacity was assessed from expired gas analysis. Results CR was present in 34 patients with mitral regurgitation (47%); peak EF in patients with and without CR was 74% ± 11% versus 54% ± 15% (P
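The study's contractile-reserve criterion (at least a 5% improvement of stress over rest ejection fraction) is simple to state in code. The function below is an illustration only, not the study's analysis software, and it reads the criterion as an absolute difference in EF percentage points; a relative-improvement reading would use (stress - rest) / rest instead.

```python
def has_contractile_reserve(rest_ef, stress_ef, threshold=5.0):
    """Contractile reserve per the paper's definition: stress ejection
    fraction at least `threshold` EF percentage points above rest EF
    (absolute-difference reading; an assumption of this sketch)."""
    return (stress_ef - rest_ef) >= threshold

print(has_contractile_reserve(rest_ef=60.0, stress_ef=74.0))  # True
print(has_contractile_reserve(rest_ef=60.0, stress_ef=62.0))  # False
```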
Abstract:
Activated sludge flocs are a flocculated mass of microorganisms, extracellular polymeric substances (EPS) and adsorbed organic and inorganic material. The structure of the flocs is very heterogeneous and flocs with very different properties and morphologies may occur, depending on the conditions in the activated sludge treatment plant and wastewater composition. Present thinking suggests that cations, such as calcium, create cationic bridges with EPS excreted by the bacteria and thereby hold the various floc constituents together. However, due to the complex and heterogeneous nature of activated sludge, the mechanisms have neither been thoroughly investigated nor successfully quantified. A better understanding and description of the biological flocculation process is necessary in order to establish more efficient operational strategies. The main aim of this study was to get a comprehensive and unique insight into the floc properties of activated sludge and to assess the relative impact of chemical and physical parameters. A variety of sludges from full scale treatment plants with different settling properties were characterised. The interrelationships between floc parameters such as composition of EPS, surface properties and floc structure, and their effect on the flocculation and separation properties were assessed. The results indicate that the EPS, both in terms of quantity and quality, are very important for the floc properties of the activated sludge. However, presence of filaments may alter the physical properties of the flocs considerably. The EPS showed positive correlations to sludge volume index (SVI) if only sludges with low or moderate numbers of filaments were included. The surface properties were more affected by the composition of the EPS than by the number of filaments. The EPS showed positive correlation to negative surface charge and a negative correlation to relative hydrophobicity and flocculation ability.
The negative correlation between flocculation ability and amount of EPS was surprising. The shear sensitivity, measured as degree of erosion of flocs when subjected to shear, was more affected by floc size and number of filaments than amount of EPS.
Abstract:
Twenty-one strains of Bacillus (10 B. stearothermophilus, 3 B. cereus, and 8 B. licheniformis strains) were assayed for spore surface hydrophobicity on the basis of three measures: contact angle measurement (CAM), microbial adhesion to hydrocarbons (MATH), and hydrophobic interaction chromatography (HIC). On the basis of the spore surface characteristics obtained from these assays, along with data on the heat resistance of these spores in water, eight strains of Bacillus (three B. stearothermophilus, three B. cereus, and two B. licheniformis strains) either suspended in water or adhering to stainless steel were exposed to sublethal heat treatments at 90 to 110°C to determine heat resistance (D-value). Significant increases in heat resistance (ranging from 3 to 400%) were observed for the eight strains adhering to stainless steel. No significant correlation was found between these heat resistance increases and spore surface characteristics as determined by the three hydrophobicity assays. There was a significant positive correlation between the hydrophobicity data obtained by the MATH assay and those obtained by the HIC assay, but these data did not correlate with those obtained by the CAM assay.
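The assay-comparison step above - checking whether MATH and HIC scores for the strains vary together - amounts to a correlation test on paired measurements. A minimal sketch is given below; the per-strain scores are explicitly hypothetical placeholders, not the paper's data, and the paper's own statistical test may differ.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical per-strain hydrophobicity scores (NOT the paper's data)
math_scores = [0.10, 0.35, 0.40, 0.65, 0.80]
hic_scores = [0.15, 0.30, 0.45, 0.60, 0.85]
print(round(pearson_r(math_scores, hic_scores), 3))  # near 1: assays agree
```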
Abstract:
High concentrations of ammonium (up to 270 kg N/ha) have been observed in a Vertisol soil below 1 m depth near Warra in south-east Queensland. This study examined the possibility that increased water movement into the subsoil after the removal of native vegetation, and a subsequent increase in periods of waterlogging, could have triggered nitrate ammonification and be responsible for the production of ammonium. Two incubation experiments were conducted to test this hypothesis. The first involved the incubation of repacked cores that had been amended with 30 mg N/kg of 5 atom% N-15 nitrate under low oxygen conditions for a period of 360 days. Over this time period the N-15 enrichment of the exchangeable ammonium fraction was monitored in order to detect any reduction of nitrate to ammonium. The second experiment involved the incubation of soil amended with 30 mg N/kg of 5 atom% N-15 nitrate under waterlogged and low oxygen conditions for 75 days. During this period the redox potential of the soil was monitored using a field test to determine if reducing conditions would develop in this soil over a period of waterlogging, combined with the monitoring of any nitrate reduction to ammonium. The results of these experiments indicated that a small amount of nitrate ammonification (<0.1 mg N/kg) could be observed in the Warra subsoil, but that unless the rate of reduction were to significantly increase with time, this could not account for the accumulation of ammonium observed in the field. The environmental conditions that would make either dissimilatory or abiotic nitrate ammonification favourable were not observed to develop. Consequently, it has been concluded that the observed nitrate ammonification occurred via an assimilatory pathway. Due to the low rate of microbial activity in this subsoil it is considered unlikely that this process was responsible for the subsoil ammonium accumulation at Warra.
Abstract:
Regional commodity forecasts are being used increasingly in agricultural industries to enhance their risk management and decision-making processes. These commodity forecasts are probabilistic in nature and are often integrated with a seasonal climate forecast system. The climate forecast system is based on a subset of analogue years drawn from the full climatological distribution. In this study we sought to measure forecast quality for such an integrated system. We investigated the quality of a commodity (i.e. wheat and sugar) forecast based on a subset of analogue years in relation to a standard reference forecast based on the full climatological set. We derived three key dimensions of forecast quality for such probabilistic forecasts: reliability, distribution shift, and change in dispersion. A measure of reliability was required to ensure no bias in the forecast distribution. This was assessed via the slope of the reliability plot, which was derived from examination of probability levels of forecasts and associated frequencies of realizations. The other two dimensions related to changes in features of the forecast distribution relative to the reference distribution. The relationship of 13 published accuracy/skill measures to these dimensions of forecast quality was assessed using principal component analysis in case studies of commodity forecasting using seasonal climate forecasting for the wheat and sugar industries in Australia. There were two orthogonal dimensions of forecast quality: one associated with distribution shift relative to the reference distribution and the other associated with relative distribution dispersion. Although the conventional quality measures aligned with these dimensions, none measured both adequately. We conclude that a multi-dimensional approach to assessment of forecast quality is required and that simple measures of reliability, distribution shift, and change in dispersion provide a means for such assessment. 
The analysis presented was also relevant to measuring quality of probabilistic seasonal climate forecasting systems. The importance of retaining a focus on the probabilistic nature of the forecast and avoiding simplifying, but erroneous, distortions was discussed in relation to applying this new forecast quality assessment paradigm to seasonal climate forecasts. Copyright (C) 2003 Royal Meteorological Society.
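Reliability as described in this abstract - agreement between forecast probability levels and the observed frequencies of realisations - is summarised by the slope of a reliability plot, with a slope near 1 indicating an unbiased forecast distribution. The sketch below fits that slope by ordinary least squares; the probability levels and frequencies are hypothetical, and the papers' exact construction of the plot is not reproduced.

```python
def reliability_slope(forecast_probs, observed_freqs):
    """Least-squares slope of observed relative frequency against forecast
    probability level; a slope near 1 indicates a reliable (unbiased)
    probabilistic forecast."""
    n = len(forecast_probs)
    mp = sum(forecast_probs) / n
    mf = sum(observed_freqs) / n
    num = sum((p - mp) * (f - mf) for p, f in zip(forecast_probs, observed_freqs))
    den = sum((p - mp) ** 2 for p in forecast_probs)
    return num / den

# hypothetical: forecast probability levels vs frequencies of realisations
probs = [0.1, 0.3, 0.5, 0.7, 0.9]
freqs = [0.12, 0.28, 0.52, 0.69, 0.88]
print(round(reliability_slope(probs, freqs), 3))  # close to 1 -> reliable
```

Distribution shift and change in dispersion, the study's other two quality dimensions, would be assessed separately against the reference climatological distribution.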