467 results for Scale Composition

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

To date, the formation of deposits on heat exchanger surfaces remains the least understood problem in the design of heat exchangers for the processing industries. Dr East has related the structure of the deposits to solution composition and has developed predictive models for the composite fouling of calcium oxalate and silica in sugar factory evaporators.

Relevance: 70.00%

Abstract:

Cleaning of sugar mill evaporators is an expensive exercise. Identifying the scale components assists in determining which chemical cleaning agents would result in effective evaporator cleaning. The current methods (based on x-ray diffraction techniques, ion exchange/high performance liquid chromatography and thermogravimetry/differential thermal analysis) used for scale characterisation are difficult, time-consuming and expensive, and cannot be performed in a conventional analytical laboratory or by mill staff. The present study has examined the use of simple descriptor tests for the characterisation of Australian sugar mill evaporator scales. Scale samples were obtained from seven Australian sugar mill evaporators by mechanical means. The appearance, texture and colour of the scale were noted before the samples were characterised using x-ray fluorescence and x-ray powder diffraction to determine the compounds present. A number of commercial analytical test kits were used to determine the phosphate and calcium contents of scale samples. Dissolution experiments were carried out on the scale samples with selected cleaning agents to provide relevant information about the effect the cleaning agents have on different evaporator scales. Results have shown that a prediction of the scale composition can be made simply from the colour and appearance of the scale, its elemental composition, and knowledge of which effect (evaporator stage) the scale originates from. These descriptors and dissolution experiments on scale samples can be used to provide factory staff with a rapid on-site process to predict the most effective chemicals for chemical cleaning of the evaporators.
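
As an illustration of the kind of on-site descriptor test the study proposes, the sketch below maps colour, texture and originating effect to a predicted composition and a candidate cleaning agent. The mapping itself is hypothetical and invented for illustration; it is not the correlation published in the study.

```python
# Illustrative sketch of a descriptor-based lookup for scale composition.
# The mapping below is hypothetical and NOT the correlation published in
# the study; it only shows how colour/appearance/effect descriptors could
# drive an on-site prediction and cleaning-agent suggestion.

SCALE_RULES = [
    # (colour, texture, originating effects) -> (likely composition, candidate cleaning agent)
    ({"colour": "white", "texture": "hard", "effects": range(3, 6)},
     ("silica-rich scale", "hot caustic soda")),
    ({"colour": "grey", "texture": "soft", "effects": range(1, 3)},
     ("calcium phosphate scale", "acid-based cleaner")),
]

def predict_scale(colour: str, texture: str, effect: int):
    """Return (composition, cleaning agent) for the first matching rule."""
    for descriptors, prediction in SCALE_RULES:
        if (descriptors["colour"] == colour
                and descriptors["texture"] == texture
                and effect in descriptors["effects"]):
            return prediction
    return ("unknown - run dissolution test", None)

print(predict_scale("white", "hard", 4))
```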

Relevance: 70.00%

Abstract:

Developments in evaporator cleaning have accelerated in the past 10 years as a result of an extended period of research into scale formation and scale composition. Chemical cleaning still provides the most cost-effective method of cleaning the evaporators. The paper describes a system designed to obtain on-line samples of evaporator scale, negating the need to open hot evaporator vessels for scale collection. This system was successfully implemented in a number of evaporators at a sugar mill. The paper also describes a recent experience in a sugar factory in which the cleaning procedure was slightly modified, resulting in effective removal of intractable scale.

Relevance: 40.00%

Abstract:

This study decomposed the determinants of environmental quality into scale, technique, and composition effects. We applied a semiparametric method of generalized additive models, which enabled us to use flexible functional forms and include several independent variables in the model. Differences in the technique effect were found to play a crucial role in reducing pollution: the technique effect was sufficient to reduce sulfur dioxide emissions. It was not, however, sufficient to reduce carbon dioxide (CO2) emissions or energy use, except in the case of CO2 emissions in high-income countries.
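
As a minimal sketch of the semiparametric approach described above, the following fits a generalized additive model with one smooth term per effect, assuming the third-party pygam package. The variables and synthetic data are illustrative stand-ins, not the study's dataset or specification.

```python
# Minimal sketch of a generalized additive model (GAM) decomposition,
# assuming the third-party `pygam` package. The variables and synthetic
# data are illustrative; they are not the study's dataset or specification.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 500
scale = rng.uniform(1, 10, n)        # economic-scale proxy (e.g., GDP)
technique = rng.uniform(0, 1, n)     # technology/abatement proxy
composition = rng.uniform(0, 1, n)   # industry-share proxy
emissions = 2 * scale - 5 * technique**2 + composition + rng.normal(0, 0.5, n)

X = np.column_stack([scale, technique, composition])
# One smooth term per effect, so each partial dependence is a flexible
# (semiparametric) function rather than a fixed parametric form.
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, emissions)
gam.summary()
```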

Relevance: 40.00%

Abstract:

PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). The approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought, so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller creating ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is expected, however, that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; and verification and validation of models are facilitated by quickly setting up alternative simulations.
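
A minimal sketch of the dynamic agent composition idea follows: atomic components are registered by name and assembled into agents at runtime, so a modeller can mix and match behaviours without writing new agent code. The component names and electricity-network details are illustrative assumptions, not MODAM's actual API.

```python
# Minimal sketch of dynamic agent composition: agents are assembled at
# runtime from atomic components registered by name. The component names
# and domain details below are illustrative, not MODAM's API.

COMPONENT_REGISTRY = {}

def component(name):
    """Register an atomic behaviour/attribute unit under a name."""
    def wrap(cls):
        COMPONENT_REGISTRY[name] = cls
        return cls
    return wrap

@component("solar_panel")
class SolarPanel:
    def step(self, agent, hour):
        agent.load_kw -= 1.5 if 9 <= hour <= 15 else 0.0  # daytime generation

@component("household_demand")
class HouseholdDemand:
    def step(self, agent, hour):
        agent.load_kw += 2.0 if 17 <= hour <= 21 else 0.5  # evening peak

class Agent:
    def __init__(self, component_names):
        self.load_kw = 0.0
        self.components = [COMPONENT_REGISTRY[n]() for n in component_names]

    def step(self, hour):
        for c in self.components:
            c.step(self, hour)

# A modeller mixes and matches components without writing new agent code.
house = Agent(["household_demand", "solar_panel"])
house.step(hour=12)
print(house.load_kw)
```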

Relevance: 30.00%

Abstract:

A mathematical model is developed to simulate the discharge of a LiFePO4 cathode. The model contains three size scales, which match experimental observations reported in the literature on the multi-scale nature of LiFePO4 material. A shrinking core is used on the smallest scale to represent the phase transition of LiFePO4 during discharge. The model is validated against existing experimental data, and the validated model is then used to investigate parameters that influence active material utilisation. Specifically, the size and composition of agglomerates of LiFePO4 crystals are discussed, and we investigate and quantify the relative effects that the ionic and electronic conductivities within the oxide have on oxide utilisation. We find that agglomerates of crystals can be tolerated under low discharge rates. The role of the electrolyte in limiting (cathodic) discharge is also discussed, and we show that electrolyte transport does limit performance at high discharge rates, confirming the conclusions of recent literature.
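
For readers unfamiliar with the shrinking-core representation, the standard geometric relation for a spherical particle is that the unreacted core radius at fractional conversion x is r_core = R(1 - x)^(1/3). The sketch below evaluates this relation; the crystal radius is an illustrative value, not a fitted parameter of the authors' model.

```python
# Worked sketch of the shrinking-core picture used at the smallest scale:
# as a LiFePO4 crystal discharges, the transformed shell grows inward and
# the untransformed core shrinks. For a sphere, a mass balance gives
#   r_core = R * (1 - x)**(1/3),  x = local fractional conversion.
# The radius below is illustrative, not a fitted model parameter.

def core_radius(R_nm: float, conversion: float) -> float:
    """Radius (nm) of the unreacted core at fractional conversion x in [0, 1]."""
    if not 0.0 <= conversion <= 1.0:
        raise ValueError("conversion must lie in [0, 1]")
    return R_nm * (1.0 - conversion) ** (1.0 / 3.0)

R = 50.0  # crystal radius in nm (illustrative)
for x in (0.0, 0.5, 0.9, 1.0):
    print(f"x = {x:.1f}: core radius = {core_radius(R, x):5.1f} nm")
```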

Relevance: 30.00%

Abstract:

Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition, that is, by composing existing individual web services in accordance with the business process of the application. This means the application is provided to customers in the form of a value-added composite web service. An important and challenging issue of web service composition is how to meet Quality-of-Service (QoS) requirements. These include customer-focused elements such as response time, price, throughput and reliability, as well as how best to provide QoS results for the composites. This in turn best fulfils customers' expectations and achieves their satisfaction. Fulfilling these QoS requirements, that is, addressing the QoS-aware web service composition problem, is the focus of this project.

From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems. These problems are characterised as complex, large-scale, highly constrained and multi-objective. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service accommodating constraints on inter-service dependence and conflict; QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds; and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively. We then present novel GAs to address these problems, conduct experiments to evaluate their performance, and perform verification experiments to show their correctness.

The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and that can lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for handling the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems.

The second problem resulted in two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, while no other algorithm demonstrates this ability.

The findings from the third problem resulted in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
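
As a concrete illustration of the penalty-based constraint handling described above, the sketch below runs a toy GA over candidate services for three abstract tasks, adding a penalty to the fitness of solutions that violate an inter-service dependence constraint. The QoS values, constraint and GA settings are invented toy values, not those used in the thesis.

```python
# Toy penalty-based GA for QoS-based web service selection. Candidate QoS
# tables, the dependence constraint, and GA settings are all illustrative.
import random

random.seed(1)

# candidates[i][j] = (price, response_time) of candidate j for abstract task i
candidates = [
    [(5, 200), (8, 120), (3, 400)],
    [(2, 300), (6, 100), (4, 250)],
    [(7, 150), (5, 220), (9, 90)],
]

def penalty(sol):
    # Hypothetical inter-service dependence: candidate 1 for task 0
    # requires candidate 1 for task 1.
    return 50.0 if sol[0] == 1 and sol[1] != 1 else 0.0

def fitness(sol):
    price = sum(candidates[i][j][0] for i, j in enumerate(sol))
    time = sum(candidates[i][j][1] for i, j in enumerate(sol))
    return price + 0.01 * time + penalty(sol)  # lower is better

def evolve(pop_size=20, generations=50, mutation=0.2):
    pop = [[random.randrange(3) for _ in candidates] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mutation:        # point mutation
                child[random.randrange(len(child))] = random.randrange(3)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```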

Relevance: 30.00%

Abstract:

1. Marine ecosystems provide critically important goods and services to society, and hence their accelerated degradation underpins an urgent need to take rapid, ambitious and informed decisions regarding their conservation and management.
2. The capacity, however, to generate the detailed field data required to inform conservation planning at appropriate scales is limited by time- and resource-consuming methods for collecting and analysing field data at the large scales required.
3. The 'Catlin Seaview Survey', described here, introduces a novel framework for large-scale monitoring of coral reefs using high-definition underwater imagery collected using customized underwater vehicles in combination with computer vision and machine learning. This enables quantitative and geo-referenced outputs of coral reef features such as habitat types, benthic composition, and structural complexity (rugosity) to be generated across multiple kilometre-scale transects with a spatial resolution ranging from 2 to 6 m².
4. The novel application of technology described here has enormous potential to contribute to our understanding of coral reefs and associated impacts by underpinning management decisions with kilometre-scale measurements of reef health.
5. Imagery datasets from an initial survey of 500 km of seascape are freely available through an online tool called the Catlin Global Reef Record. Outputs from the image analysis using the technologies described here will be updated on the online repository as work progresses on each dataset.
6. Case studies illustrate the utility of the outputs as well as their potential to link to information from remote sensing. The potential implications of the innovative technologies for marine resource management and conservation are also discussed, along with the accuracy and efficiency of the methodologies deployed.
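
As a rough illustration of how per-point classifier outputs might be rolled up into a benthic composition estimate, consider the sketch below. The labels and counts are invented; this is not the survey's actual computer vision pipeline.

```python
# Illustrative sketch: rolling per-point classifier labels up into a
# percent-cover estimate of benthic composition for one image.
# Labels and counts are invented, not the survey's data or pipeline.
from collections import Counter

# Hypothetical per-point labels predicted by a classifier for one image
point_labels = ["hard_coral"] * 42 + ["algae"] * 31 + ["sand"] * 27

cover = {k: v / len(point_labels) for k, v in Counter(point_labels).items()}
for label, frac in sorted(cover.items(), key=lambda kv: -kv[1]):
    print(f"{label:10s} {frac:6.1%}")
```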

Relevance: 30.00%

Abstract:

This research develops a better understanding of how large-scale, complex, IT-enabled business transformations are managed. Evidence from three global case studies suggests that business transformations can be composed and orchestrated like a jazz band, where improvisation plays a fundamental role in maintaining the melody, harmony and rhythm of such initiatives. The thesis details how the jazz metaphor can assist senior management in reusing and reconfiguring capabilities as services for transforming organizations. To the academic body of knowledge, the thesis contributes a study on the use of management services as a theoretical lens for examining Business Transformation Management.

Relevance: 30.00%

Abstract:

Context: Older oncology patients have unique needs associated with the many physical, psychological, and social changes associated with the aging process. The mechanisms underpinning these changes, and their impact, are not well understood. Identification of clusters of symptoms is one approach that has been used to elicit hypotheses about the biological and/or psychological basis for variations in symptom experiences. Objectives: The purposes of this study were to identify and compare symptom clusters in younger (<60 years) and older (≥60 years) patients undergoing cancer treatment. Methods: Symptom data from one Australian study and two U.S. studies were combined to conduct this analysis. A total of 593 patients receiving active treatment were dichotomized into younger (<60 years) and older (≥60 years) groups. Separate exploratory factor analyses (EFAs) were undertaken within each group to identify symptom clusters from occurrence ratings of the 32 symptoms assessed by the Memorial Symptom Assessment Scale. Results: In both groups, a seven-factor solution was selected. Four partially concordant symptom clusters emerged in both groups (i.e., mood/cognitive, malaise, body image, and genitourinary). In the older patients, the three unique clusters reflected physiological changes associated with aging, whereas in the younger group the three unique clusters reflected treatment-related effects. Conclusion: The symptom clusters identified in older patients typically included a larger and more diverse range of physical and psychological symptoms. Differences may also reflect variations in treatment approaches between age groups. Findings highlight the need for a better understanding of variation in treatment and symptom burden between younger and older adults with cancer.
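
A minimal sketch of the EFA step, using scikit-learn on simulated occurrence data, is shown below. The real analysis used occurrence ratings of the 32 Memorial Symptom Assessment Scale items; the simulated data here is purely illustrative.

```python
# Minimal sketch of the exploratory factor analysis (EFA) step, using
# scikit-learn. The symptom-occurrence matrix is simulated with a latent
# structure for illustration; it is not the study's data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_patients, n_symptoms, n_factors = 300, 32, 7

# Simulated data: latent factor scores times loadings, plus noise.
loadings = rng.normal(0, 1, (n_factors, n_symptoms))
scores = rng.normal(0, 1, (n_patients, n_factors))
X = scores @ loadings + rng.normal(0, 1, (n_patients, n_symptoms))

efa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(X)
# Symptoms loading strongly (|loading| > 0.4) on factor 0 form one "cluster".
cluster_0 = np.where(np.abs(efa.components_[0]) > 0.4)[0]
print("symptoms in cluster 0:", cluster_0)
```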

Relevance: 30.00%

Abstract:

Pilot and industrial scale dilute acid pretreatment data can be difficult to obtain because of the significant infrastructure investment required. Consequently, models of dilute acid pretreatment by necessity use laboratory scale data to determine kinetic parameters and make predictions about optimal pretreatment conditions at larger scales. For these recommendations to be meaningful, the ability of laboratory scale models to predict pilot and industrial scale yields must be investigated. A mathematical model of the dilute acid pretreatment of sugarcane bagasse has previously been developed by the authors. This model was able to successfully reproduce the experimental yields of xylose and short-chain xylooligomers obtained at the laboratory scale. In this paper, the ability of the model to reproduce pilot scale yield and composition data is examined. It was found that, in general, the model over-predicted the pilot scale reactor yields by a significant margin. Models that appear very promising at the laboratory scale may have limitations when predicting yields at pilot or industrial scale. It is difficult to comment on whether there are any consistent trends in optimal operating conditions between reactor scale and laboratory scale hydrolysis because of the limited reactor datasets available. Further investigation is needed to determine whether the model has some efficacy when the kinetic parameters are re-evaluated by parameter fitting to reactor scale data; however, this requires the compilation of larger datasets. Alternatively, laboratory scale mathematical models may have enhanced utility for predicting larger scale reactor performance if bulk mass transport and fluid flow considerations are incorporated into the fibre scale equations. This work reinforces the need for appropriate attention to be paid to pilot scale experimental development when moving from laboratory to pilot and industrial scales for new technologies.
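
As an illustration of the kind of kinetic scheme dilute acid pretreatment models commonly build on, the sketch below integrates Saeman-type first-order kinetics (xylan to xylose to degradation products). The rate constants are invented, not the authors' fitted parameters, and the authors' actual model is more detailed (it resolves the fibre scale).

```python
# Illustrative sketch of Saeman-type kinetics often used for dilute acid
# pretreatment: xylan -> xylose -> degradation products, two first-order
# steps. Rate constants are invented, not the authors' fitted values.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.05, 0.01  # 1/min, hypothetical hydrolysis and degradation rates

def rhs(t, y):
    xylan, xylose = y
    return [-k1 * xylan, k1 * xylan - k2 * xylose]

# Start from 100% xylan, no xylose; integrate over a 120 min pretreatment.
sol = solve_ivp(rhs, (0, 120), [100.0, 0.0], t_eval=np.linspace(0, 120, 5))
for t, xylose in zip(sol.t, sol.y[1]):
    print(f"t = {t:5.1f} min: xylose yield = {xylose:5.1f} %")
```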

Relevance: 30.00%

Abstract:

Species identification based on short sequences of DNA markers, that is, DNA barcoding, has emerged as an integral part of modern taxonomy. However, software for the analysis of large and multilocus barcoding data sets is scarce. The Basic Local Alignment Search Tool (BLAST) is currently the fastest tool capable of handling large databases (e.g. >5000 sequences), but its accuracy is a concern and it has been criticized for its local optimization. Current, more accurate software requires sequence alignment or complex calculations, which are time-consuming when dealing with large data sets during data preprocessing or during the search stage. It is therefore imperative to develop a practical program for both accurate and scalable species identification for DNA barcoding. In this context, we present VIP Barcoding: user-friendly software with a graphical user interface for rapid DNA barcoding. It adopts a hybrid, two-stage algorithm. First, an alignment-free composition vector (CV) method is utilized to reduce the search space by screening a reference database. The alignment-based K2P distance nearest-neighbour method is then employed to analyse the smaller data set generated in the first stage. In comparison with other software, we demonstrate that VIP Barcoding has (i) higher accuracy than Blastn and several alignment-free methods and (ii) higher scalability than alignment-based distance methods and character-based methods. These results suggest that the platform is able to deal with both large-scale and multilocus barcoding data with accuracy, and can contribute to DNA barcoding for modern taxonomy. VIP Barcoding is free and available at http://msl.sls.cuhk.edu.hk/vipbarcoding/.
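
The two-stage idea can be sketched as follows: an alignment-free k-mer composition vector screen shortlists reference sequences cheaply, and the Kimura two-parameter (K2P) distance then picks the nearest neighbour among the shortlist. The toy sequences below assume a pre-aligned, equal-length form for the K2P stage; this is a sketch of the general technique, not VIP Barcoding's implementation.

```python
# Sketch of the two-stage identification idea: (1) alignment-free k-mer
# composition vectors to shortlist references cheaply, then (2) K2P
# distance nearest neighbour on the shortlist. Toy sequences only.
import math
from collections import Counter

def kmer_vector(seq, k=4):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm

def k2p(a, b):
    """K2P distance for two aligned, equal-length sequences."""
    transitions = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}
    pairs = [(x, y) for x, y in zip(a, b) if x != y]
    P = sum((x, y) in transitions for x, y in pairs) / len(a)  # transitions
    Q = (len(pairs) / len(a)) - P                              # transversions
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

query = "ACGTACGTGGCCTTAA"
refs = {"sp1": "ACGTACGTGGCCTTAA", "sp2": "ACGAACGTGGCCATAA", "sp3": "TTTTGGGGCCCCAAAA"}

# Stage 1: keep the two references with the most similar composition vectors.
qv = kmer_vector(query)
shortlist = sorted(refs, key=lambda r: -cosine(qv, kmer_vector(refs[r])))[:2]
# Stage 2: nearest neighbour by K2P distance among the shortlist.
print(min(shortlist, key=lambda r: k2p(query, refs[r])))
```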