74 results for Network scale-up method
in University of Queensland eSpace - Australia
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills or of ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data, so their accuracy is questionable. Some of these methods also require expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady-state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
Model-based procedure for scale-up of wet, overflow ball mills - Part III: Validation and discussion
Abstract:
A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction related to the mill diameter, which suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
A new ball mill scale-up procedure is developed which uses laboratory data to predict the performance of full-scale ball mill circuits. This procedure comprises two laboratory tests, which provide the data for determining the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady-state performance of the full-scale mill circuit. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw. A worked example shows how the new ball mill scale-up procedure is executed. This worked example uses laboratory data to predict the performance of a full-scale re-grind mill circuit, consisting of a ball mill in closed circuit with hydrocyclones. The full-scale ball mill has a diameter (inside liners) of 1.85 m. The scale-up procedure shows that the full-scale circuit produces a product (hydrocyclone overflow) with an 80% passing size of 80 μm. The circuit has a recirculating load of 173%. The calculated power draw of the full-scale mill is 92 kW. (C) 2001 Elsevier Science Ltd. All rights reserved.
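The 173% recirculating load quoted in the worked example is a simple mass ratio: the solids returned to the mill (hydrocyclone underflow) relative to the circuit product (hydrocyclone overflow). A minimal sketch of that relationship, with illustrative flow values that are not taken from the paper:

```python
def recirculating_load(underflow_t_per_h: float, product_t_per_h: float) -> float:
    """Recirculating load (%) = mass flow returned to the mill divided by
    the circuit product mass flow. Flow values below are hypothetical."""
    return 100.0 * underflow_t_per_h / product_t_per_h

# A circuit returning 17.3 t/h to the mill while producing 10 t/h of
# final product has a recirculating load of 173%, i.e. the mill itself
# processes 2.73x the fresh feed rate.
print(recirculating_load(17.3, 10.0))  # -> 173.0
```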
Abstract:
There is considerable anecdotal evidence from industry that poor wetting and liquid distribution can lead to broad granule size distributions in mixer granulators. Current scale-up scenarios lead to poor liquid distribution and a wider product size distribution. There are two issues to consider when scaling up: the size and nature of the spray zone, and the powder flow patterns as a function of granulator scale. Short, nucleation-only experiments in a 25 L PMA Fielder mixer using lactose powder with water and HPC solutions demonstrated the existence of different nucleation regimes depending on the spray flux Psi(a), ranging from drop-controlled nucleation to caking. In the drop-controlled regime at low Psi(a) values, each drop forms a single nucleus and the nuclei distribution is controlled by the spray droplet size distribution. As Psi(a) increases, the distribution broadens rapidly as the droplets overlap and coalesce in the spray zone. The results are in excellent agreement with previous experiments and confirm that for drop-controlled nucleation, Psi(a) should be less than 0.1. Granulator flow studies showed that there are two powder flow regimes: bumping and roping. The powder flow goes through a transition from bumping to roping as impeller speed is increased. The roping regime gives good bed turnover and stable flow patterns, and is recommended for good liquid distribution and nucleation. Powder surface velocities as a function of impeller speed were measured using high-speed video equipment and MetaMorph image analysis software. Powder surface velocities were 0.2 to 1 m s(-1), an order of magnitude lower than the impeller tip speed. Assuming geometrically similar granulators, impeller speed should be set to maintain constant Froude number during scale-up, rather than constant tip speed, to ensure operation in the roping regime. (C) 2002 Published by Elsevier Science B.V.
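The recommendation to hold the Froude number Fr = N²D/g constant fixes how impeller speed must change with scale: N scales with D^(-1/2), versus D^(-1) for constant tip speed. A minimal sketch contrasting the two rules; the numerical impeller diameters and speed are illustrative, not from the study:

```python
import math

def speed_constant_froude(n1_rpm: float, d1_m: float, d2_m: float) -> float:
    """Impeller speed at the new scale that keeps Fr = N^2 * D / g constant,
    i.e. N2 = N1 * sqrt(D1 / D2)."""
    return n1_rpm * math.sqrt(d1_m / d2_m)

def speed_constant_tip_speed(n1_rpm: float, d1_m: float, d2_m: float) -> float:
    """For comparison: speed that keeps tip speed (pi * N * D) constant,
    i.e. N2 = N1 * D1 / D2."""
    return n1_rpm * d1_m / d2_m

# Hypothetical example: scaling from a 0.25 m to a 1.0 m impeller at 300 rpm.
# Constant-Fr scale-up halves the speed; constant tip speed quarters it,
# risking a fall back into the bumping regime.
print(speed_constant_froude(300.0, 0.25, 1.0))     # -> 150.0
print(speed_constant_tip_speed(300.0, 0.25, 1.0))  # -> 75.0
```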
Abstract:
This paper summarises test results that were used to validate a model and scale-up procedure for the high pressure grinding roll (HPGR) developed at the JKMRC by Morrell et al. [Morrell, Lim, Tondo, David, 1996. Modelling the high pressure grinding rolls. In: Mining Technology Conference, pp. 169-176.]. Verification of the model is based on results from four data sets that describe the performance of three industrial-scale units fitted with both studded and smooth roll surfaces. The industrial units are currently in operation within the diamond mining industry and are represented by De Beers, BHP Billiton and Rio Tinto. Ore samples from the De Beers and BHP Billiton operations were sent to the JKMRC for ore characterisation and HPGR laboratory-scale tests. Rio Tinto contributed a historical data set of tests completed during a previous research project. The results show that the modelling of the HPGR process has matured to the point where the model may be used to evaluate new, and to optimise existing, comminution circuits. The model prediction of product size distribution is good and has been found to be strongly dependent on the characteristics of the material being tested. The prediction of throughput and corresponding power draw (based on throughput) is sensitive to inconsistent gap/diameter ratios observed between laboratory-scale tests and full-scale operations. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional index attempting to scale up to high dimensions. One well-known approach to overcoming the degradation in performance with increasing dimensionality is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlation among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimensionality. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
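The Mahalanobis metric that gives MMDR its name weights distances by a cluster's covariance, which is what makes elliptical (rather than spherical) clusters natural. A toy sketch of the metric itself, not of the MMDR algorithm; the data values are invented for illustration:

```python
import numpy as np

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Mahalanobis distance of point x from a distribution with the given
    mean and covariance: sqrt((x - mean)^T * cov^-1 * (x - mean))."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Toy 2-D cluster elongated along the first axis (variance 4 vs 1).
mean = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])

# Two points at the same Euclidean distance (2.0) from the mean get
# different Mahalanobis distances, reflecting the cluster's shape.
d_along = mahalanobis(np.array([2.0, 0.0]), mean, cov)   # ≈ 1.0
d_across = mahalanobis(np.array([0.0, 2.0]), mean, cov)  # ≈ 2.0
```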
Abstract:
Two methods were compared for determining the concentration of penetrative biomass during growth of Rhizopus oligosporus on an artificial solid substrate consisting of an inert gel and starch as the sole source of carbon and energy. The first method was based on the use of a hand microtome to cut sections of approximately 0.2- to 0.4-mm thickness parallel to the substrate surface, followed by determination of the glucosamine content of each slice. Using glucosamine measurements to estimate biomass concentrations was shown to be problematic due to the large variation in glucosamine content with mycelial age. The second was a novel method based on the use of confocal scanning laser microscopy to estimate the fractional volume occupied by the biomass. Although it is not simple to translate fractional volumes into dry weights of hyphae, due to the lack of experimentally determined conversion factors, measurement of the fractional volumes is in itself useful for characterizing fungal penetration into the substrate. Growth of penetrative biomass in the artificial model substrate took two forms: an indistinct mass in the region close to the substrate surface, and a few hyphae penetrating perpendicularly to the surface in regions further from the substrate surface. The biomass profiles against depth obtained from the confocal microscopy showed two linear regions on log-linear plots, which are possibly related to different oxygen availability at different depths within the substrate. Confocal microscopy has the potential to be a powerful tool in the investigation of fungal growth mechanisms in solid-state fermentation. (C) 2003 Wiley Periodicals, Inc.
Abstract:
Objective: To examine the impact of a multi-component health assessment on mortality and morbidity in Kimberley Aboriginal residents during a 13-year follow-up. Method: A population-based randomised controlled trial using linked hospital, cancer and death records to evaluate outcomes in 620 intervention and 6,736 control subjects. Results: The intervention group had a higher rate of first-time hospitalisation for any reason (IRR = 1.37; 95% CI 1.25-1.50), a higher rate of injury-related hospital episodes (IRR = 1.31; 95% CI 1.15-1.48) and a higher notification rate of alcohol-related cancers. There was a smaller difference in the rates of multiple hospitalisations (IRR = 1.14; 95% CI 0.75-1.74) and no improvement in overall mortality compared with controls (IRR = 1.08; 95% CI 0.91-1.29). Conclusions: There was no overall mortality benefit despite the increased health service contact associated with the intervention. Implications: Although it did not influence mortality rates, a multi-component health assessment may result in a period of increased health service use in Aboriginal and Torres Strait Islander populations, thus constituting an 'intervention'. However, this should not be confused with the systematic and sustained interventions, and investment in community development, needed to achieve better health outcomes.
Abstract:
Virus-like particles (VLPs) are of interest in vaccination, gene therapy and drug delivery, but their potential has yet to be fully realized. This is because existing laboratory processes, when scaled, do not easily give a compositionally and architecturally consistent product. Research suggests that new process routes might ultimately be based on chemical processing by self-assembly, involving the precision manufacture of precursor capsomeres followed by in vitro VLP self-assembly and scale-up to required levels. A synergistic interaction of biomolecular design and bioprocess engineering (i.e. biomolecular engineering) is required if these alternative process routes and, thus, the promise of new VLP products, are to be realized.
Abstract:
The development of large-scale solid-state fermentation (SSF) processes is hampered by the lack of simple tools for the design of SSF bioreactors. The use of semifundamental mathematical models to design and operate SSF bioreactors can be complex. In this work, dimensionless design factors are used to predict the effects of scale and of operational variables on the performance of rotating drum bioreactors. The dimensionless design factor (DDF) is the ratio of the rate of heat generation to the rate of heat removal at the time of peak heat production. It can be used to predict the maximum temperature reached within the substrate bed for given operational variables. Alternatively, given the maximum temperature that can be tolerated during the fermentation, it can be used to explore the combinations of operating variables that prevent that temperature from being exceeded. Comparison of the predictions of the DDF approach with literature data for the operation of rotating drums suggests that the DDF is a useful tool. The DDF approach was used to explore the consequences of three scale-up strategies for the required air flow rates and the maximum temperatures reached in the substrate bed as the bioreactor size was increased on the basis of geometric similarity. The first strategy was to keep the superficial flow rate of the process air through the drum constant. The second was to keep the ratio of volumes of air per volume of bioreactor constant. The third was to adjust the air flow rate with increasing scale so as to hold constant the maximum temperature attained in the substrate bed during the fermentation. (C) 2000 John Wiley & Sons, Inc.
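Under geometric similarity, the first two scale-up strategies imply simple power-law scalings of air flow rate with the linear scale factor: constant superficial velocity scales flow with cross-sectional area (square), while constant air-volume-per-bioreactor-volume scales it with drum volume (cube). A minimal sketch of these two rules; the lab-scale flow rate is hypothetical, and the third strategy is omitted because it requires the full DDF heat balance:

```python
def airflow_constant_superficial_velocity(q_lab: float, scale: float) -> float:
    """Strategy 1: keep the superficial air velocity constant. Flow rate
    scales with cross-sectional area, i.e. the square of the linear
    scale factor."""
    return q_lab * scale ** 2

def airflow_constant_vvm(q_lab: float, scale: float) -> float:
    """Strategy 2: keep the ratio of air volume to bioreactor volume
    constant. Flow rate scales with drum volume, i.e. the cube of the
    linear scale factor."""
    return q_lab * scale ** 3

# Hypothetical example: doubling every linear dimension of a drum that
# uses 10 L/min of process air at lab scale.
print(airflow_constant_superficial_velocity(10.0, 2.0))  # -> 40.0
print(airflow_constant_vvm(10.0, 2.0))                   # -> 80.0
```

Note that strategy 1 supplies proportionally less air per unit of substrate as scale increases, which is why the DDF analysis predicts higher peak bed temperatures under that strategy.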