41 results for optimising compiler


Relevance: 10.00%

Abstract:

In spite of their wide application in comminution circuits, hydrocyclones have at least one significant disadvantage in that their operation inherently tends to return the fine, dense liberated minerals to the grinding mill. This results in unnecessary overgrinding which adds to the milling cost and can adversely affect the efficiency of downstream processes. In an attempt to solve this problem, a three-product cyclone has been developed at the Julius Kruttschnitt Mineral Research Centre (JKMRC) to generate a second overflow in which the fine dense liberated minerals can be selectively concentrated for further treatment. In this paper, the design and operation of the three-product cyclone are described. The influence of the length of the second vortex finder on the performance of a 150-mm unit treating a mixture of magnetite and silica is investigated. Conventional cyclone tests were also conducted under similar conditions. Using the operational performance data of the three-product and conventional cyclones, it is shown that by optimising the length of the second vortex finder, the amount of fine dense mineral particles that reports to the three-product cyclone underflow can be reduced. In addition, the three-product cyclone can be used to generate a middlings stream that may be more suitable for flash flotation than the conventional cyclone underflow, or alternatively, could be classified with a microscreen to separate the valuables from the gangue. At the same time, a fines stream having similar properties to those of the conventional overflow can be obtained. Hence, if the middlings stream was used as feed for flash flotation or microscreening, the fines stream could be used in lieu of the conventional overflow without compromising the feed requirements for the conventional flotation circuit. Some of the other potential applications of the new cyclone are described. (C) 2003 Elsevier Science B.V. All rights reserved.
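The classification behaviour discussed above is conventionally summarised by a partition (efficiency) curve. The sketch below uses the standard Whiten efficiency-curve model, which is textbook material rather than anything taken from this paper; the cut sizes and sharpness parameter are illustrative values only. It shows why a denser mineral, which sees a smaller effective cut size, reports preferentially to the underflow and back to the mill.

```python
import math

def whiten_partition(d, d50c, alpha=3.0, bypass=0.0):
    """Whiten efficiency-curve model for cyclone classification.

    Returns the fraction of feed particles of size d (m) reporting to
    underflow. d50c is the corrected cut size, alpha controls the sharpness
    of separation, and bypass is the water-split fraction that
    short-circuits directly to underflow.
    """
    x = d / d50c
    e = (math.exp(alpha * x) - 1.0) / (math.exp(alpha * x) + math.exp(alpha) - 2.0)
    return bypass + (1.0 - bypass) * e

# At the cut size exactly half the particles report to underflow.
print(whiten_partition(50e-6, d50c=50e-6))   # 0.5
# A denser mineral has a smaller effective d50c, so at the same particle
# size more of it reports to underflow -- the overgrinding problem.
print(whiten_partition(50e-6, d50c=30e-6))
```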

Relevance: 10.00%

Abstract:

One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an aspect of biological informatics that is the marriage of information technology and biology, involving the study of real-world phenomena using virtual plants derived from L-systems simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Not much consideration has been given to the problem of persistent storage for these simulations. Current procedures for querying data generated by L-systems for scientific experiments, simulations and measurements are also inadequate. To address these problems the research in this paper presents a generic process for data-modeling tools (L-DBM) between L-systems and database systems. This paper shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. This paper further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation. A method to allow a correspondence between biologists' terms and compiler-generated terms in a biologist computing environment is supplied. Given a specific L-system's productions and declarations, the L-DBM can generate the corresponding schema, covering both the simple correspondence terminology and the complex recursive-structure data attributes and relationships.
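The core idea of deriving an L-system and making the resulting strings queryable in a database can be sketched in a few lines. The schema below is a hypothetical one in the spirit of L-DBM, not the paper's actual schema: one row per module occurrence, keyed by derivation step and position, so that structural questions become ordinary SQL queries.

```python
import sqlite3

def derive(axiom, productions, steps):
    """Apply context-free L-system productions in parallel for `steps` steps."""
    s = axiom
    for _ in range(steps):
        s = "".join(productions.get(ch, ch) for ch in s)
    return s

# Hypothetical L-DBM-style schema: one row per module occurrence.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE module (step INTEGER, pos INTEGER, symbol TEXT)")

productions = {"A": "AB", "B": "A"}   # Lindenmayer's algae model
for step in range(5):
    s = derive("A", productions, step)
    conn.executemany("INSERT INTO module VALUES (?, ?, ?)",
                     [(step, i, ch) for i, ch in enumerate(s)])

# SQL query over the simulation data: string length per derivation step
# (for this L-system, the Fibonacci sequence).
lengths = [n for (n,) in conn.execute(
    "SELECT COUNT(*) FROM module GROUP BY step ORDER BY step")]
print(lengths)   # [1, 2, 3, 5, 8]
```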

Relevance: 10.00%

Abstract:

In standard cylindrical gradient coils consisting of a single layer of wires, a limiting factor in achieving very large magnetic field gradients is the rapid increase in coil resistance with efficiency. This is a particular problem in small-bore scanners, such as those used for MR microscopy. By adopting a multi-layer design in which the coil wires are allowed to spread out into multiple layers wound at increasing radii, a more favourable scaling of resistance with efficiency is achieved, thus allowing the design of more powerful gradient coils with acceptable resistance values. Previously this approach has been applied to the design of unshielded, longitudinal, and transverse gradient coils. Here, the multi-layer approach has been extended to allow the design of actively shielded multi-layer gradient coils, and also to produce coils exhibiting enhanced cooling characteristics. An iterative approach to modelling the steady-state temperature distribution within the coil has also been developed. Results indicate that a good level of screening can be achieved in multi-layer coils, that small versions of such coils can yield higher efficiencies at fixed resistance than conventional two-layer (primary and screen) coils, and that performance improves as the number of layers increases. Simulations show that by optimising multi-layer coils for cooling it is possible to achieve significantly higher gradient strengths at a fixed maximum operating temperature. A four-layer coil of 8 mm inner diameter has been constructed and used to test the steady-state temperature model. (C) 2003 Elsevier Inc. All rights reserved.
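The need for an iterative steady-state temperature model comes from a feedback loop: copper resistance rises with temperature, which raises the dissipated power, which raises the temperature again. The fixed-point iteration below is a minimal lumped-parameter sketch of that coupling, with an assumed thermal resistance to ambient; it is far simpler than the distributed model the paper develops.

```python
def steady_state_temperature(current, r20, t_amb=20.0, theta=1.5,
                             alpha=3.9e-3, tol=1e-6, max_iter=100):
    """Fixed-point iteration for a lumped coil temperature (degrees C).

    Copper resistance rises with temperature (coefficient alpha per K),
    which raises the dissipated power I^2*R, which raises the temperature
    again; iterate until the state is self-consistent. r20 is the
    resistance at 20 C (ohm); theta is thermal resistance to ambient (K/W).
    """
    t = t_amb
    for _ in range(max_iter):
        r = r20 * (1.0 + alpha * (t - 20.0))
        t_new = t_amb + theta * current**2 * r
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    raise RuntimeError("did not converge")

# Illustrative values: 5 A through a 0.8-ohm coil converges to ~54 C,
# noticeably above the 50 C a temperature-independent resistance predicts.
print(round(steady_state_temperature(current=5.0, r20=0.8), 1))
```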

Relevance: 10.00%

Abstract:

Aim: To develop a population pharmacokinetic model for mycophenolic acid in adult kidney transplant recipients, quantifying average population pharmacokinetic parameter values and between- and within-subject variability, and to evaluate the influence of covariates on the pharmacokinetic variability. Methods: Pharmacokinetic data for mycophenolic acid and covariate information were previously available from 22 patients who underwent kidney transplantation at the Princess Alexandra Hospital. All patients received mycophenolate mofetil 1 g orally twice daily. A total of 557 concentration-time points were available. Data were analysed using the first-order method in NONMEM (version 5 level 1.1) using the G77 FORTRAN compiler. Results: The best base model was a two-compartment model with a lag time (apparent oral clearance 27 l h(-1), and apparent volume of the central compartment 98 l). There was visual evidence of complex absorption and time-dependent clearance processes, but they could not be successfully modelled in this study. Weight was investigated as a covariate, but no significant relationship was determined. Conclusions: The complexity in determining the pharmacokinetics of mycophenolic acid is currently underestimated. More complex pharmacokinetic models, though not supported by the limited data collected for this study, may prove useful in the future. The large between-subject and between-occasion variability and the possibility of nonlinear processes associated with the pharmacokinetics of mycophenolic acid raise questions about the value of the use of therapeutic monitoring and limited sampling strategies.
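The structure of the base model (two compartments, first-order absorption, lag time) can be sketched with a simple Euler simulation. Assuming the abstract's parameter values are an apparent clearance of 27 l/h and a central volume of 98 l, and with the absorption rate, peripheral volume, inter-compartmental clearance and lag time chosen as illustrative placeholders rather than fitted values, a concentration-time profile looks like this:

```python
def simulate_mpa(dose_mg, cl=27.0, vc=98.0, vp=200.0, q=20.0,
                 ka=1.5, tlag=0.3, t_end=12.0, dt=0.001):
    """Euler simulation of a two-compartment model with first-order
    absorption and a lag time. cl and vc follow the abstract (27 l/h,
    98 l); ka, vp, q and tlag are illustrative placeholders only.
    Returns (times in h, central-compartment concentrations in mg/l).
    """
    gut, c_amt, p_amt = dose_mg, 0.0, 0.0
    times, conc = [], []
    t = 0.0
    while t <= t_end:
        times.append(t)
        conc.append(c_amt / vc)
        absorbed = ka * gut * dt if t >= tlag else 0.0  # lag delays absorption
        dist = (q / vc * c_amt - q / vp * p_amt) * dt   # central <-> peripheral
        elim = cl / vc * c_amt * dt                     # first-order elimination
        gut -= absorbed
        c_amt += absorbed - dist - elim
        p_amt += dist
        t += dt
    return times, conc

times, conc = simulate_mpa(1000.0)   # 1 g mycophenolate mofetil dose
print(max(conc))                     # peak central concentration, mg/l
```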

Relevance: 10.00%

Abstract:

The influence of a new aeration system on the biopile performance was investigated. The purpose was to increase biodegradation efficiency by optimising airflow through the pile. During a 1-month field trial, the performance of a new system using two perforated vertical pipes with wind-driven turbines was compared with that of a standard pile configuration with two horizontal perforated pipes. Both piles were composed of a similar mix of diesel-contaminated soils, woodchips, compost and NPK fertiliser. Hydrocarbons were recovered using solvent extraction, and determined both gravimetrically and by gas chromatography. Total heterotrophs, pH and moisture content were also assessed. Air pressure measurements were made to compare the efficiency of suction in the pipes. Results at the end of the experiment showed that there was no significant difference between the two piles in the total amount of hydrocarbon biodegradation. The normalised degradation rate was, however, considerably higher in the new system than in the standard one, suggesting that the vertical venting method may have improved the efficiency of the biological reactions in the pile. The pressure measurements showed a significant improvement in the suction produced by the new aeration system. However, many factors other than the airflow (oxygen supply) may influence and limit the biodegradation rates, including moisture content, age of contaminants and the climatic conditions. Additional experiments and modelling need to be carried out to explore further the new aeration method and to develop criteria and guidelines for engineering design of optimal aeration schemes in order to achieve maximum biodegradation in biopiles. (C) 2003 Elsevier Ltd. All rights reserved.
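Normalised degradation rates of the kind compared above are commonly derived from a first-order decay fit to the hydrocarbon concentrations. The snippet below shows that calculation; the concentration values are hypothetical, chosen only to illustrate the comparison, not data from the trial.

```python
import math

def first_order_rate(c0, ct, days):
    """First-order biodegradation rate constant k (per day), from
    C(t) = C0 * exp(-k t) given initial and final concentrations."""
    return math.log(c0 / ct) / days

# Hypothetical total petroleum hydrocarbon values (mg/kg dry soil)
# over a 30-day trial, for the two aeration configurations.
k_vertical = first_order_rate(12000.0, 7200.0, 30)   # wind-driven vertical pipes
k_standard = first_order_rate(12000.0, 8400.0, 30)   # horizontal perforated pipes
print(k_vertical > k_standard)   # True: higher normalised rate in the new system
```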

Relevance: 10.00%

Abstract:

The real-time refinement calculus is a formal method for the systematic derivation of real-time programs from real-time specifications in a style similar to the non-real-time refinement calculi of Back and Morgan. In this paper we extend the real-time refinement calculus with procedures and provide refinement rules for refining real-time specifications to procedure calls. A real-time specification can include constraints on not only what outputs are produced, but also when they are produced. The derived programs can also include time constraints on when certain points in the program must be reached; these are expressed in the form of deadline commands. Such programs are machine independent. An important consequence of the approach taken is that not only are the specifications machine independent, but the whole refinement process is machine independent. To implement the machine independent code on a target machine one has a separate task of showing that the compiled machine code will reach all its deadlines before they expire. For real-time programs, externally observable input and output variables are essential. These differ from local variables in that their values are observable over the duration of the execution of the program. Hence procedures require input and output parameter mechanisms that are references to the actual parameters so that changes to external inputs are observable within the procedure and changes to output parameters are externally observable. In addition, we allow value and result parameters. These may be auxiliary parameters, which are used for reasoning about the correctness of real-time programs as well as in the expression of timing deadlines, but do not lead to any code being generated for them by a compiler. (c) 2006 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

This paper describes a relatively simple and quick method for implementing aerodynamic heating models into a finite element code for non-linear transient thermal-structural and thermal-structural-vibrational analyses of a Mach 10 generic HyShot scramjet engine. The thermal-structural-vibrational response of the engine was studied for the descent trajectory from 60 to 26 km. Aerodynamic heating fluxes, as a function of spatial position and time for varying trajectory points, were implemented in the transient heat analysis. Additionally, the combined effect of varying dynamic pressure and thermal loads with altitude was considered. This aero-thermal-structural analysis capability was used to assess the temperature distribution, engine geometry distortion and yielding of the structural material due to aerodynamic heating during the descent trajectory, and for optimising the wall thickness, nose radius of leading edge, etc. of the engine intake. A structural vibration analysis was also performed following the aero-thermal-structural analysis to determine the changes in natural frequencies of the structural vibration modes that occur at the various temperatures associated with the descent trajectory. This analysis provides a unique and relatively simple design strategy for predicting and mitigating the thermal-structural-vibrational response of hypersonic engines. (C) 2006 Elsevier SAS. All rights reserved.
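The central coupling described here, a time- and position-dependent aerodynamic heat flux driving a transient thermal analysis of the structure, can be illustrated with a one-dimensional explicit finite-difference sketch. The material values below are generic steel and the flux profile is an arbitrary stand-in for the descent trajectory; this is a sketch of the coupling, not a validated model of the HyShot engine.

```python
def transient_wall_temperature(q_flux, t_end, dx=0.002, n=10,
                               k=16.0, rho=8000.0, cp=500.0, dt=0.01,
                               t0=300.0):
    """Explicit 1-D finite-difference conduction through a wall.

    A time-varying aerodynamic heat flux q_flux(t) [W/m^2] is applied at
    the outer face (node 0); the inner face is insulated. Material values
    are generic steel. Returns the final nodal temperatures (K).
    """
    T = [t0] * n
    alpha = k / (rho * cp)          # thermal diffusivity
    r = alpha * dt / dx**2
    assert r < 0.5, "explicit scheme stability limit violated"
    t = 0.0
    while t < t_end:
        Tn = T[:]
        # Outer node: conduction plus the imposed aerodynamic flux.
        Tn[0] = T[0] + 2 * r * (T[1] - T[0]) + 2 * dt * q_flux(t) / (rho * cp * dx)
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
        Tn[-1] = T[-1] + 2 * r * (T[-2] - T[-1])   # insulated inner face
        T = Tn
        t += dt
    return T

# A heating flux that grows with time, mimicking descent into denser air.
T = transient_wall_temperature(lambda t: 5e5 * (1 + t / 10), t_end=5.0)
print(T[0] > T[-1])   # True: the heated outer face is hottest
```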

Relevance: 10.00%

Abstract:

Proprioceptive neuromuscular facilitation (PNF) stretching techniques are commonly used in the athletic and clinical environments to enhance both active and passive range of motion (ROM) with a view to optimising motor performance and rehabilitation. PNF stretching is positioned in the literature as the most effective stretching technique when the aim is to increase ROM, particularly in respect to short-term changes in ROM. With due consideration of the heterogeneity across the applied PNF stretching research, a summary of the findings suggests that an 'active' PNF stretching technique achieves the greatest gains in ROM, e.g. utilising a shortening contraction of the opposing muscle to place the target muscle on stretch, followed by a static contraction of the target muscle. The inclusion of a shortening contraction of the opposing muscle appears to have the greatest impact on enhancing ROM. When including a static contraction of the target muscle, this needs to be held for approximately 3 seconds at no more than 20% of a maximum voluntary contraction. The greatest changes in ROM generally occur after the first repetition and in order to achieve more lasting changes in ROM, PNF stretching needs to be performed once or twice per week. The superior changes in ROM that PNF stretching often produces compared with other stretching techniques have traditionally been attributed to autogenic and/or reciprocal inhibition, although the literature does not support this hypothesis. Instead, and in the absence of a biomechanical explanation, the contemporary view proposes that PNF stretching influences the point at which stretch is perceived or tolerated. The mechanism(s) underpinning the change in stretch perception or tolerance are not known, although pain modulation has been suggested.

Relevance: 10.00%

Abstract:

Most practitioners teaching English to speakers of other languages (TESOL) will agree that students come with some expectations about course content and teaching methodology and that these expectations play a vital role in student motivation and learning. However, the study of student expectations has been a surprising omission from Second Language Acquisition research. In the studies reported here, the authors develop a model of student expectations by adapting the Expectation Disconfirmation paradigm, widely used in consumer psychology. Student and teacher perspectives on student expectations were gathered by interviews. Responses shed light on the nature of expectations, factors causing expectations and effects of expectation fulfilment (or lack of it). The findings provide new avenues for research on affective factors as well as clarify some ambiguities in motivational research in second language acquisition. The model presented here can be used by teachers or institutions to conduct classroom-based research, thus optimising students' learning and performance, and enhancing student morale.

Relevance: 10.00%

Abstract:

Methods of analysing and optimising flotation circuits have improved significantly over the last 15 years. Mineral flotation is now generally better understood through major advances in measuring and modelling the sub-processes within the flotation system. JKSimFloat V6 is a user-friendly Windows-based software package incorporating simulation, mass balancing, and, currently under development, liberation data viewing and model fitting. This paper presents an overview of the development of the program up to its current status, and the plans established for the future. The application of the simulator, in particular, at various operations is also discussed with emphasis on the use of the program in improving flotation circuit performance.
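Mass balancing of the kind JKSimFloat performs rests on standard metallurgical accounting identities. The snippet below shows the classic two-product formula for recovery from assays alone; it is textbook material, not code from JKSimFloat, and the assay values are hypothetical.

```python
def two_product_recovery(f, c, t):
    """Two-product formula: fraction of the valuable mineral recovered to
    concentrate, from feed (f), concentrate (c) and tailings (t) assays.

    Derived from the mass balance F = C + T and F*f = C*c + T*t.
    """
    return c * (f - t) / (f * (c - t))

# Hypothetical copper assays (%): feed 1.2, concentrate 25, tailings 0.15.
r = two_product_recovery(1.2, 25.0, 0.15)
print(round(100 * r, 1))   # 88.0 (% recovery)
```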

Relevance: 10.00%

Abstract:

Previous work on formally modelling and analysing program compilation has shown the need for a simple and expressive semantics for assembler level programs. Assembler programs contain unstructured jumps and previous formalisms have modelled these by using continuations, or by embedding the program in an explicit emulator. We propose a simpler approach, which uses techniques from compiler theory in a formal setting. This approach is based on an interpretation of programs as collections of program paths, each of which has a weakest liberal precondition semantics. We then demonstrate, by example, how we can use this formalism to justify the compilation of block-structured high-level language programs into assembler.
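The path-based weakest-liberal-precondition idea can be made concrete in a few lines. The sketch below is a minimal illustration of the general technique, not the paper's formalism: predicates are Python callables over a state dictionary, wlp of an assignment is substitution (evaluating the postcondition in the updated state), and a branch guard contributes an implication, composed backwards along one path.

```python
def wlp_assign(var, expr):
    """wlp of the assignment  var := expr  -- substitute expr for var,
    i.e. evaluate the postcondition in the updated state."""
    return lambda post: (lambda s: post({**s, var: expr(s)}))

def wlp_guard(cond):
    """wlp contribution of a branch guard: this path only executes when
    cond holds, so wlp(post) = cond implies post."""
    return lambda post: (lambda s: (not cond(s)) or post(s))

def wlp_path(steps, post):
    """Compose wlp backwards along one program path."""
    for step in reversed(steps):
        post = step(post)
    return post

# Straight-line path  y := x - 1  with postcondition  y >= 0:
# the computed precondition is  x - 1 >= 0.
pre = wlp_path([wlp_assign("y", lambda s: s["x"] - 1)],
               post=lambda s: s["y"] >= 0)
print(pre({"x": 1}), pre({"x": 0}))   # True False

# The 'then' path of  if x >= 0 then y := x:  the guard ensures the
# postcondition  y >= 0  holds from every state, vacuously when the
# path is not taken.
then_pre = wlp_path([wlp_guard(lambda s: s["x"] >= 0),
                     wlp_assign("y", lambda s: s["x"])],
                    post=lambda s: s["y"] >= 0)
print(then_pre({"x": -3}))   # True
```

A jump-laden assembler program is then handled by enumerating its paths and taking the conjunction of the per-path preconditions, which is the interpretation the paper proposes.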