299 results for 259903 Industrial Chemistry
Abstract:
China is experiencing rapid progress in industrialization, with its own rationale for industrial land development based on a deliberate change from an extensive to an intensive form of urban land use. One result has been concerted attempts by local governments to attract foreign investment through a low industrial land price strategy, which has produced a disproportionately large share of industrial land within the total urban land use structure and fuelled the urban sprawl of many cities. This paper first examines the “Comparable Benchmark Price as Residential land use” (CBPR) as the theoretical basis of the low industrial land price phenomenon. Empirical findings are presented from a case study based on data from Jinyun County, China. These data are analyzed to reveal the rationale of industrial land pricing from 2000 to 2010 with reference to the CBPR model. We then explore the causes of low industrial land prices through a “Centipede Game Model” in which two neighboring regions act as the major players making a set of moves (or strategies). When one player unilaterally reduces the land price to attract investment, aiming to maximize profits from the revenues generated by foreign investment and land premiums, a two-player price war begins in the form of a dynamic game, the effect of which is a downward spiral of prices. In this context, profit maximization is not achieved by either player: inter-regional competition for investment leads to a lose-lose situation for both sides in competing for land premium revenues. A short-term solution to the problem is offered involving the establishment of inter-regional cooperative partnerships.
For the longer term, however, a comprehensive reform of the local financial system, more adroit regional planning and improved means of evaluating government performance are needed to ensure that the government's role in securing public goods is not abandoned in favor of one solely concerned with revenue generation.
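The price-war dynamic described in the abstract can be illustrated with a toy simulation. This is our own sketch, not the authors' game-theoretic model: two regions alternately undercut each other's industrial land price to win a fixed pool of foreign investment, and all parameters (starting price, undercut factor, price floor) are illustrative assumptions.

```python
# Toy sketch of a two-region land-price war (illustrative parameters only).

def price_war(p0=100.0, undercut=0.9, floor=20.0, rounds=20):
    """Each round, the region currently losing investment undercuts the
    other by a fixed factor until both prices hit a floor (lose-lose)."""
    prices = {"A": p0, "B": p0}
    history = []
    loser = "B"
    for _ in range(rounds):
        other = "A" if loser == "B" else "B"
        prices[loser] = max(floor, prices[other] * undercut)
        history.append((prices["A"], prices["B"]))
        if prices[loser] == floor and prices[other] == floor:
            break
        loser = other  # the undercut region now loses investment and responds
    return history

h = price_war()
print(h[-1])  # both regions end at the price floor: neither profits
```

The downward spiral terminates only at the floor, mirroring the lose-lose outcome the abstract attributes to inter-regional competition; a cooperative partnership corresponds to both players agreeing not to undercut.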
Abstract:
The overall aim of our research was to characterize airborne particles from selected nanotechnology processes and to utilize the data to develop and test quantitative particle concentration-based criteria that can be used to trigger an assessment of particle emission controls. We investigated particle number concentration (PNC), particle mass (PM) concentration, count median diameter (CMD), alveolar deposited surface area, elemental composition, and morphology from sampling of aerosols arising from six nanotechnology processes. These included fibrous and non-fibrous particles, including carbon nanotubes (CNTs). We adopted standard occupational hygiene principles in relation to controlling peak emissions and exposures, as outlined by both Safe Work Australia (1) and the American Conference of Governmental Industrial Hygienists (ACGIH®). (2) The results from the study were used to analyse peak (highest value recorded) and 30-minute averaged particle number and mass concentration values measured during the operation of the nanotechnology processes. This analysis revealed that peak PNC20–1000 nm emitted from the nanotechnology processes was up to three orders of magnitude greater than the local background particle concentration (LBPC), peak PNC300–3000 nm was up to an order of magnitude greater, and PM2.5 concentrations were up to four orders of magnitude greater. For three of these nanotechnology processes, the 30-minute averaged particle number and mass concentrations were also significantly different from the LBPC (p-value < 0.001).
We propose that emission or exposure controls may need to be implemented or modified, or further assessment of the controls undertaken, if concentrations exceed three times the LBPC, which is also used as the local particle reference value, for more than a total of 30 minutes during a workday, and/or if a single short-term measurement exceeds five times the local particle reference value. The use of these quantitative criteria, which we term the universal excursion guidance criteria, will account for the typical variation in LBPC and the inaccuracy of instruments, while remaining precautionary enough to highlight peaks in particle concentration likely to be associated with particle emission from the nanotechnology process. Recommendations on when to utilize local excursion guidance criteria are also provided.
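The excursion guidance criteria above reduce to two numeric checks, which can be sketched as follows. The function name, sampling interval and concentration values are our own illustrative choices, not part of the study.

```python
# Hedged sketch of the universal excursion guidance criteria: trigger a
# control assessment if readings exceed 3x the local background (LBPC)
# for more than 30 min total in a workday, or if any single short-term
# reading exceeds 5x LBPC. Names and values are illustrative.

def excursion_triggered(measurements, lbpc, interval_min=1.0,
                        avg_factor=3.0, peak_factor=5.0,
                        max_minutes=30.0):
    """measurements: concentrations sampled every interval_min minutes.
    Returns True if either excursion criterion is met."""
    minutes_above = sum(interval_min for c in measurements
                        if c > avg_factor * lbpc)
    any_peak = any(c > peak_factor * lbpc for c in measurements)
    return minutes_above > max_minutes or any_peak

background = 1000.0  # particles/cm^3, illustrative LBPC
readings = [1200.0] * 20 + [3500.0] * 25 + [1500.0] * 15  # 60 one-minute readings
print(excursion_triggered(readings, background))  # → False: 3x exceeded for only 25 min, no 5x peak
```

A single reading of 6000.0 against the same background would trigger the short-term criterion immediately, as would 31 one-minute readings above 3000.0.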
Abstract:
The main contribution of this project was to investigate power electronics technology for designing and developing high-frequency, high-power converters for industrial applications. The research was therefore conducted at two levels: first, at the system level, which mainly encompassed the circuit topology and control scheme; and second, at the application level, which involved real-world applications. Pursuing these objectives, various topologies were developed and proposed within this research. The main aim was to overcome the limited power rating and operating speed of solid-state switches while increasing system flexibility in view of the application characteristics. The newly developed power converter configurations were applied to pulsed power and high-power ultrasound applications for experimental validation.
Abstract:
Generating nano-sized materials of a controlled size and chemical composition is essential for the manufacturing of materials with enhanced properties on an industrial scale, as well as for research purposes, such as toxicological studies. Among the generation methods for airborne nanoparticles (also known as aerosolisation methods), liquid-phase techniques have been widely applied due to their simplicity of use and high particle production rate. The use of a Collison nebulizer is one such technique, in which atomisation takes place as a result of the liquid being sucked into the air stream and injected toward the inner walls of the nebulizer reservoir via nozzles, before the solution is dispersed. Despite the above-mentioned benefits, this method is also subject to various sources of impurities (Knight and Petrucci 2003; LaFranchi, Knight et al. 2003). Since these impurities can affect the characterization of the generated nanoparticles, it is crucial to understand and minimize their effect.
Abstract:
Over the past ten years, scaled-up utilisation of a previously under-exploited zeolite, Zeolite N1, has been demonstrated for selective ion exchange of ammonium and other ions in aqueous environments. As with many zeolite syntheses, the source material should contain predictable levels of aluminium and silicon and, for full-scale industrial applications, kaolin and/or montmorillonite serve such a purpose. Field, pilot and commercial scale trials of kaolin-derived Zeolite N have focused on applications in agriculture and water treatment, as these sectors are primary producers or users of ammonium. The format of the material – as fine powders, granules or extrudates – depends on the specific application, although each format has been evaluated.
Abstract:
The prime objective of drying is to enhance the shelf life of perishable food materials. As the process is very energy-intensive, researchers are trying to minimise energy consumption in drying. To determine the exact amount of energy needed to dry a food product, it is essential to understand the physics of moisture distribution and the bond strength of water within the food material. Thermogravimetric analysis (TGA) can be usefully applied to determine the critical moisture content, moisture distribution and water bond strength in food materials. This work investigated moisture distribution and water bond strength in selected food materials: apple, banana and potato. It was found that moisture distribution and water bond strength influence moisture migration from the food materials. In addition, the proportions of different types of water (bound, free and surface water) were readily identified using TGA. This study provides a better understanding of water content and its role in drying rate and energy consumption.
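The distinction the abstract draws between water types rests on the drying-rate curve derived from a TGA mass-loss trace: free (surface) water leaves at a near-constant rate, while bound water leaves during the falling-rate period. The sketch below uses synthetic data of our own, not the study's measurements, to show how the transition point can be located from the rate curve.

```python
# Illustrative sketch (synthetic data, not the study's measurements):
# the drying-rate curve -dM/dt from a TGA trace separates the
# constant-rate period (free water) from the falling-rate period
# (bound water); the transition marks the critical moisture content.
import numpy as np

t = np.linspace(0.0, 100.0, 201)                      # time, min
# synthetic moisture trace: linear constant-rate period, then exponential tail
moisture = np.where(t < 40.0,
                    4.0 - 0.05 * t,
                    2.0 * np.exp(-0.08 * (t - 40.0)))  # kg water / kg solids

rate = -np.gradient(moisture, t)        # drying rate, 1/min
critical_idx = int(np.argmax(rate))     # rate peaks at the regime change
print(t[critical_idx])                  # time near the 40 min transition
```

On real TGA data the rate curve is noisy, so smoothing before locating the transition would be needed; the principle of reading water populations off the rate curve is the same.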
Abstract:
Scores of well-researched individual papers and posters specifically or indirectly addressing the occurrence, measurement or exposure impacts of chemicals in buildings were presented at the 2012 Healthy Buildings Conference. Many of these presentations offered advances in the sampling and characterisation of chemical pollutants, while others extended the frontiers of knowledge on the emission, adsorption, risk, fate and compositional levels of chemicals in indoor and outdoor microenvironments. Several presentations modelled or monitored indoor chemistry, including processes that generate secondary pollutants. This article provides an overview of the state of knowledge on healthy buildings based on papers presented in the chemistry sessions at the Healthy Buildings 2012 (HB2012) Conference. It also suggests future directions in healthy buildings research.
Abstract:
Despite the existence of air quality guidelines in Australia and New Zealand, the concentrations of particulate matter have exceeded these guidelines on several occasions. To identify the sources of particulate matter, examine the contributions of the sources to the air quality at specific areas and estimate the most likely locations of the sources, a growing number of source apportionment studies have been conducted. This paper provides an overview of the locations of the studies, salient features of the results obtained and offers some perspectives for the improvement of future receptor modelling of air quality in these countries. The review revealed that because of its advantages over alternative models, Positive Matrix Factorisation (PMF) was the most commonly applied model in the studies. Although there were differences in the sources identified in the studies, some general trends were observed. While biomass burning was a common problem in both countries, the characteristics of this source varied from one location to another. In New Zealand, domestic heating was the highest contributor to particle levels on days when the guidelines were exceeded. On the other hand, forest back-burning was a concern in Brisbane, while marine aerosol was a major source in most studies. Secondary sulphate, traffic emissions, industrial emissions and re-suspended soil were also identified as important sources. Distinctive data, for example volatile organic compounds and particle size distributions, were incorporated into some of the studies, with results that have significant ramifications for the improvement of air quality. Overall, the application of source apportionment models provided useful information that can assist the design of epidemiological studies and refine air pollution reduction strategies in Australia and New Zealand.
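PMF decomposes a samples-by-species concentration matrix into non-negative source contributions and source profiles. The sketch below is a simplified stand-in, not the PMF algorithm itself: real PMF weights residuals by per-measurement uncertainties, whereas the plain multiplicative-update NMF shown here minimises an unweighted least-squares fit. All data are synthetic.

```python
# Simplified illustration of the PMF idea: X (samples x species) is
# factored as X ~ G F with G, F non-negative (source contributions and
# source profiles). These are the plain Lee-Seung NMF updates, an
# unweighted analogue of PMF, shown for illustration only.
import numpy as np

def nmf(X, n_sources, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_sources)) + 0.1
    F = rng.random((n_sources, m)) + 0.1
    eps = 1e-12
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)  # multiplicative updates keep
        G *= (X @ F.T) / (G @ F @ F.T + eps)  # both factors non-negative
    return G, F

# two synthetic "sources" mixed into 50 samples of 8 species
rng = np.random.default_rng(1)
true_F = rng.random((2, 8))   # source profiles
true_G = rng.random((50, 2))  # source contributions
X = true_G @ true_F
G, F = nmf(X, 2)
print(np.abs(X - G @ F).max())  # reconstruction error should be small
```

Receptor-modelling practice adds uncertainty weighting, rotational tools and diagnostics on top of this core factorisation to arrive at physically interpretable sources.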
Abstract:
Particles of two isolates of subterranean clover red leaf virus were purified by a method in which infected plant tissue was digested with an industrial-grade cellulase, Celluclast® 2.0 L type X. The yields of virus particles using this enzyme were comparable with those obtained using either of two laboratory-grade cellulases, Cellulase type 1 (Sigma) and Driselase®. However, the specific infectivity or aphid transmissibility of the particles purified using Celluclast® was 10–100 times greater than that of preparations obtained using laboratory-grade cellulases or no enzyme. The main advantage of using Celluclast® is that, at present in Australia, its cost is only ca. 1% of that of laboratory-grade cellulases.
Abstract:
Diagnostics of rotating machinery has developed significantly in recent decades, and industrial applications are spreading across different sectors. Most applications are characterized by varying shaft velocities, and in many cases transients are the most critical conditions to monitor. In these variable speed conditions, fault symptoms are clearer in the angular/order domains than in the common time/frequency ones. In the past, this issue was often solved by synchronously sampling data by means of phase-locked circuits governing the acquisition; however, thanks to the spread of cheap and powerful microprocessors, this procedure is nowadays rare: sampling is usually performed at constant time intervals, and the conversion to the order domain is made by means of digital signal processing techniques. Different algorithms have been proposed for the extraction of an order spectrum from a signal sampled asynchronously with respect to the shaft rotational velocity; many of them (the so-called computed order tracking family) use interpolation techniques to resample the signal at constant angular increments, followed by a common discrete Fourier transform to shift from the angular domain to the order domain. A less exploited family of techniques shifts directly from the time domain to the order spectrum, by means of modified Fourier transforms. This paper proposes a new transform, named the velocity synchronous discrete Fourier transform, which takes advantage of the instantaneous velocity to improve the quality of its result, reaching performances that can challenge computed order tracking.
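The computed order tracking family the abstract refers to can be sketched in a few lines. This is a generic minimal illustration with synthetic data, not the paper's velocity synchronous transform: a signal sampled at constant time steps is interpolated onto a constant shaft-angle grid, after which an ordinary FFT yields an order spectrum despite the speed ramp.

```python
# Minimal computed-order-tracking sketch (synthetic data, illustrative
# values): resample a time-sampled signal at constant shaft-angle
# increments, then FFT to obtain an order spectrum.
import numpy as np

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
speed_hz = 10.0 + 2.0 * t                     # shaft speed ramps 10 -> 30 Hz
phase = 2.0 * np.pi * np.cumsum(speed_hz) / fs  # shaft angle, rad
x = np.sin(3.0 * phase)                       # 3rd-order component locked to shaft

revs = phase / (2.0 * np.pi)                  # shaft revolutions
n_resample = 4096
rev_grid = np.linspace(revs[0], revs[-1], n_resample)  # constant angle steps
x_ang = np.interp(rev_grid, revs, x)          # angular-domain resampling

spectrum = np.abs(np.fft.rfft(x_ang)) / n_resample
orders = np.fft.rfftfreq(n_resample, d=rev_grid[1] - rev_grid[0])
print(orders[np.argmax(spectrum)])            # peak near order 3 despite the ramp
```

An FFT of the raw time signal would smear this component across 30–90 Hz; in the order domain it collapses to a single line, which is what makes the angular/order domain the natural one for variable-speed diagnostics.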
Abstract:
Cyclostationary models for the diagnostic signals measured on faulty rotating machinery have proved successful in many laboratory tests and industrial applications. The squared envelope spectrum has been identified as the most efficient indicator for the assessment of second-order cyclostationary symptoms of damage, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is to reach higher levels of automation of condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed in recent years. The statistical thresholds proposed in the past for the identification of cyclostationary components were obtained under the hypothesis of a white noise signal when the component is healthy. This assumption, coupled with the non-white nature of real signals, implies the necessity of pre-whitening or filtering the signal in optimal narrow bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or introducing biases in the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the beginning. The effect of first-order and second-order cyclostationary components on the distribution of the squared envelope spectrum will be quantified and the effectiveness of the newly proposed threshold verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results will be verified by means of numerical simulations and by using experimental vibration data of rolling element bearings.
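The squared envelope spectrum at the centre of this abstract can be computed with an FFT-based analytic signal, as sketched below. The synthetic signal and all parameter values are our own illustration of a second-order cyclostationary symptom: a resonance carrier amplitude-modulated at a fault frequency.

```python
# Sketch of the squared envelope spectrum (SES) on a synthetic
# bearing-like signal (illustrative frequencies, not from the paper).
import numpy as np

def analytic(x):
    """Analytic signal via a one-sided spectrum (even-length x assumed)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0       # double positive frequencies
    h[N // 2] = 1.0         # keep Nyquist bin once
    return np.fft.ifft(X * h)

fs = 10000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
fault, carrier = 87.0, 3000.0   # Hz, illustrative values
x = (1.0 + 0.8 * np.cos(2.0 * np.pi * fault * t)) * np.cos(2.0 * np.pi * carrier * t)

env2 = np.abs(analytic(x)) ** 2        # squared envelope
env2 -= env2.mean()                    # drop the DC component
ses = np.abs(np.fft.rfft(env2))
freqs = np.fft.rfftfreq(len(env2), 1.0 / fs)
print(freqs[np.argmax(ses)])           # peak at the fault (modulation) frequency
```

A raw spectrum of x shows only the carrier and its sidebands; the SES demodulates the signal so the cyclic fault frequency appears directly, which is why it is the indicator of choice for second-order cyclostationary symptoms.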
Abstract:
The diagnostics of mechanical components operating in transient conditions is still an open issue, in both research and industrial fields. Indeed, the signal processing techniques developed to analyse stationary data are not applicable, or suffer a loss of effectiveness, when applied to signals acquired in transient conditions. In this paper, a suitable and original signal processing tool (named EEMED), which can be used for mechanical component diagnostics in any operating condition and noise level, is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition (EMD) and Minimum Entropy Deconvolution (MED), together with the analytical approach of the Hilbert transform. The proposed tool is able to supply diagnostic information on the basis of experimental vibrations measured in transient conditions. The tool was originally developed to detect localized faults on bearings installed in high-speed train traction equipment, and it is more effective at detecting a fault in non-stationary conditions than signal processing tools based on spectral kurtosis or envelope analysis, which have until now represented the benchmark for bearing diagnostics.
Abstract:
In the field of rolling element bearing diagnostics, envelope analysis, and in particular the squared envelope spectrum, has in recent years gained a leading role among the different digital signal processing techniques. The original constraint of constant operating speed has been relaxed thanks to the combination of this technique with computed order tracking, which resamples signals at constant angular increments. In this way, the field of application of the squared envelope spectrum has been extended to cases in which small speed fluctuations occur, maintaining the effectiveness and efficiency that characterize this successful technique. However, to implement an algorithm valid for all industrial applications, the constraint on speed has to be removed completely, making envelope analysis suitable also for speed and load transients. In fact, in many applications, the coincidence of high bearing loads, and therefore high diagnostic capability, with acceleration-deceleration phases represents a further incentive in this direction. This paper is aimed at providing and testing a procedure for the application of envelope analysis to speed transients. The effect of load variation on the proposed technique will also be qualitatively addressed.
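The combination of envelope analysis with angular resampling can be sketched compactly. This is our own illustration of the general order-tracked envelope procedure on synthetic data, not the paper's specific algorithm: the squared envelope is computed in the time domain, resampled at constant shaft-angle increments, and Fourier-transformed so the fault signature stays at a fixed order throughout a speed ramp.

```python
# Order-tracked envelope sketch for a speed ramp (synthetic, illustrative
# values): envelope first, angular resampling second, FFT to orders last.
import numpy as np

fs = 20000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
speed = 10.0 + 5.0 * t                          # shaft speed ramp, 10 -> 30 Hz
phase = 2.0 * np.pi * np.cumsum(speed) / fs     # shaft angle, rad
fault_order = 3.3                               # fault events per shaft rev
x = (1.0 + np.cos(fault_order * phase)) * np.cos(2.0 * np.pi * 4000.0 * t)

# squared envelope via a one-sided (analytic) spectrum
N = len(x)
X = np.fft.fft(x)
X[N // 2 + 1:] = 0.0
X[1:N // 2] *= 2.0
env2 = np.abs(np.fft.ifft(X)) ** 2
env2 -= env2.mean()

# resample the squared envelope at constant angle increments
revs = phase / (2.0 * np.pi)
n_resample = 16384
grid = np.linspace(revs[0], revs[-1], n_resample)
e_ang = np.interp(grid, revs, env2)

spec = np.abs(np.fft.rfft(e_ang))
orders = np.fft.rfftfreq(n_resample, d=grid[1] - grid[0])
print(orders[np.argmax(spec)])                  # peak near the fault order 3.3
```

In the time-domain envelope spectrum the fault frequency would sweep from 33 to 99 Hz and smear; in the order domain it remains a single line, which is the point of removing the speed constraint.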
Abstract:
Diagnostics of rolling element bearings involves a combination of different techniques of signal enhancement and analysis. The most common procedure comprises a first step of order tracking and synchronous averaging, able to remove from the signal the undesired components synchronous with the shaft harmonics, and a final step of envelope analysis to obtain the squared envelope spectrum. This indicator has been studied thoroughly, and statistically based criteria have been obtained to identify damaged bearings. The statistical thresholds are valid only if all the deterministic components in the signal have been removed. Unfortunately, in various industrial applications characterized by heterogeneous vibration sources, the first step of synchronous averaging is not sufficient to eliminate the deterministic components completely, and an additional step of pre-whitening is needed before the envelope analysis. Different techniques have been proposed in the past with this aim: the most widespread are linear prediction filters and spectral kurtosis. Recently, a new technique for pre-whitening has been proposed, based on cepstral analysis: the so-called cepstrum pre-whitening. Owing to its low computational requirements and its simplicity, it is a good candidate to perform the intermediate pre-whitening step in an automatic damage recognition algorithm. In this paper, the effectiveness of the new technique will be tested on data measured on a full-scale industrial bearing test rig, able to reproduce harsh operating conditions. A benchmark comparison with the traditional pre-whitening techniques will be made, as a final step in verifying the potential of cepstrum pre-whitening.
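Cepstrum pre-whitening, as commonly formulated in the literature, amounts to dividing the spectrum by its own magnitude (equivalently, zeroing the whole real cepstrum), which flattens deterministic tones while preserving the phase that carries the impulsive fault content. The sketch below demonstrates this on a synthetic signal of our own; the tone, impulse rate and noise level are illustrative.

```python
# Sketch of cepstrum pre-whitening: divide the spectrum by its magnitude
# so every bin has (near) unit amplitude, suppressing strong deterministic
# tones relative to the broadband impulsive content. Synthetic example.
import numpy as np

def cepstrum_prewhiten(x):
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(X / (np.abs(X) + 1e-12)))

fs = 5000.0
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1.0 / fs)
tone = 5.0 * np.sin(2.0 * np.pi * 50.0 * t)   # strong shaft harmonic
impulses = np.zeros_like(t)
impulses[::125] = 1.0                          # 40 Hz fault impulse train
x = tone + impulses + 0.05 * rng.standard_normal(len(t))

y = cepstrum_prewhiten(x)
Y = np.abs(np.fft.rfft(y))
print(Y.std() / Y.mean())   # magnitude spectrum is (almost) perfectly flat
```

After pre-whitening, the dominant 50 Hz tone no longer masks the rest of the spectrum, and envelope analysis of y can proceed against the statistical thresholds that assume no residual deterministic components.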
Abstract:
This paper combines the analysis of a case history with a simplified theoretical model of a rather singular phenomenon that may occur in rotating machinery. Starting from the case history, a small industrial steam turbine experienced very strange behavior during operation at megawatt load: when the unit was approaching the maximum allowed power, the temperature of the babbitt metal of the thrust bearing pads showed a constant increase with an unrecoverable drift. Bearing inspection showed that the pad trailing edges had the typical appearance of electrical pitting. This kind of damage was not repairable and the bearing pads had to be replaced. The problem occurred several times in sequence and was solved only by adding further grounding brushes to the shaft line. Failure analysis indicated electro-discharge machining as the root cause. A specific model, able to take into account the effect of electrical pitting and the decrease in load capacity caused by the damage to the babbitt metal, is proposed in the paper and shows that the phenomenon causes irretrievable failure of the thrust bearing.