1000 results for 670706 Organic industrial chemicals not elsewhere classified
Abstract:
Electrostatic discharge is the sudden and brief electric current that flashes between two objects at different voltages. It is a serious issue in applications ranging from solid-state electronics to spectacular and dangerous lightning strikes and arc flashes. The research herein presents work on the experimental simulation and measurement of the energy in an electrostatic discharge. The energy released in these discharges has been linked to ignitions and fires in a number of documented disasters and can be extremely hazardous in many other industrial scenarios. Simulations of electrostatic discharges were designed to the specifications of IEC standards. Energy estimates are typically based on the residual voltage/charge on the discharge capacitor, whereas this research examines the voltage and current in the actual spark in order to obtain a more precise comparative measurement of the energy dissipated.
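For context, a minimal worked comparison of the two energy estimates mentioned above (the symbols C, V_0, V_r, v(t), i(t) and t_d are illustrative and not taken from the study):

% Residual-charge estimate vs. direct measurement of the spark energy
\[
  E_{\mathrm{cap}} = \tfrac{1}{2}\,C\left(V_0^{2} - V_r^{2}\right),
  \qquad
  E_{\mathrm{spark}} = \int_{0}^{t_d} v(t)\, i(t)\, \mathrm{d}t
\]
% C is the discharge capacitance, V_0 the voltage before the discharge and V_r the
% residual voltage after it; v(t) and i(t) are the voltage across and current through
% the spark gap over the discharge duration t_d. E_cap also includes energy lost in
% the external circuit resistance, so it tends to overstate the energy released in
% the spark itself; integrating the measured v(t) and i(t) captures only the energy
% dissipated in the gap.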
Abstract:
Many industrial processes and systems can be modelled mathematically by a set of Partial Differential Equations (PDEs). Finding a solution to such a PDE model is essential for system design, simulation, and process control purposes. However, major difficulties appear when solving PDEs with singularities. Traditional numerical methods, such as finite difference, finite element, and polynomial-based orthogonal collocation, not only have limitations in fully capturing the process dynamics but also demand enormous computational power due to the large number of elements or mesh points needed to accommodate sharp variations. To tackle this challenging problem, wavelet-based approaches and high resolution methods have recently been developed, with successful applications to a fixed-bed adsorption column model. Our investigation has shown that recent advances in wavelet-based approaches and high resolution methods have the potential to be adopted for solving more complicated dynamic system models. This chapter highlights the successful applications of these new methods in solving complex models of simulated-moving-bed (SMB) chromatographic processes. An SMB process is a distributed parameter system and can be mathematically described by a set of partial/ordinary differential equations and algebraic equations. These equations are highly coupled, exhibit wave propagation with steep fronts, and require significant numerical effort to solve. To demonstrate the numerical computing power of the wavelet-based approaches and high resolution methods, a single-column chromatographic process modelled by a Transport-Dispersive-Equilibrium linear model is investigated first. Numerical solutions from the upwind-1 finite difference, wavelet-collocation, and high resolution methods are evaluated by quantitative comparison with the analytical solution for a range of Peclet numbers. After that, the advantages of the wavelet-based approaches and high resolution methods are further demonstrated through applications to a dynamic SMB model for an enantiomer separation process. This research has revealed that for a PDE system with a low Peclet number, all existing numerical methods work well, but the upwind finite difference method consumes the most time for the same degree of accuracy of the numerical solution. The high resolution method provides an accurate numerical solution for a PDE system with a medium Peclet number. The wavelet collocation method is capable of capturing steep changes in the solution, and thus can be used for solving PDE models with high singularity. For the complex SMB system models under consideration, both the wavelet-based approaches and high resolution methods are good candidates in terms of computational demand and prediction accuracy on the steep front. The high resolution methods showed better stability in achieving steady state in the specific case studied in this chapter.
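As an illustration of the baseline scheme referred to above (a minimal sketch only; the simplified linear equation dc/dt = -u*dc/dz + D*d2c/dz2, the parameter values, grid and boundary conditions are assumptions, not the chapter's actual model or code), a first-order upwind discretisation with explicit time stepping can be written as:

# Minimal upwind-1 finite difference sketch for a linear transport-dispersive
# equation dc/dt = -u*dc/dz + D*d2c/dz2 (illustrative parameters only).
import numpy as np

def upwind1_step(c, u, D, dz, dt, c_in):
    """Advance the concentration profile c by one explicit Euler time step."""
    c_new = c.copy()
    conv = -u * (c[1:-1] - c[:-2]) / dz                    # first-order upwind convection (u > 0)
    disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dz**2    # central dispersion
    c_new[1:-1] = c[1:-1] + dt * (conv + disp)
    c_new[0] = c_in                                        # Dirichlet inlet
    c_new[-1] = c_new[-2]                                  # zero-gradient outlet
    return c_new

# Illustrative run at a high Peclet number Pe = u*L/D, where upwind-1 visibly
# smears the concentration front through numerical dispersion.
L, n = 1.0, 200
u, Pe = 1.0, 500.0
D = u * L / Pe
dz = L / (n - 1)
dt = 0.4 * min(dz / u, dz**2 / (2.0 * D))                  # explicit stability restriction
c = np.zeros(n)
for _ in range(int(0.5 / dt)):                             # integrate to t = 0.5
    c = upwind1_step(c, u, D, dz, dt, c_in=1.0)
print("concentration at mid-column:", round(float(c[n // 2]), 3))

The high resolution and wavelet-collocation methods discussed in the chapter aim to resolve the same steep front with less numerical smearing and fewer grid points than this first-order scheme.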
Abstract:
A 4-week intensive measurement campaign was conducted in March–April 2007 at Agnes Water, a remote coastal site on the east coast of Australia. A Volatility-Hygroscopicity-Tandem Differential Mobility Analyser (VH-TDMA) was used to investigate changes in the hygroscopic properties of ambient particles as volatile components were progressively evaporated. Nine out of 18 VH-TDMA volatility scans detected internally mixed multi-component particles in the nucleation and Aitken modes in clean marine air. Evaporation of a volatile, organic-like component in the VH-TDMA caused significant increases in particle hygroscopicity. In three scans the increase in hygroscopicity was so large that it could only be explained by an increase in the absolute volume of water uptake by the particle residuals, and not merely an increase in their relative hygroscopicity. This indicates the presence of organic components that were suppressing the hygroscopic growth of the mixed particles on the timescale of humidification in the VH-TDMA (6.5 s). This observation was supported by ZSR calculations for one scan, which showed that the measured growth factors of the mixed particles were up to 18% below those predicted assuming independent water uptake of the individual particle components. The observed suppression of water uptake could be due to a reduced rate of hygroscopic growth caused by the presence of organic films, or to organic-inorganic interactions in solution droplets that had a negative effect on hygroscopicity.
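For reference, the independent-water-uptake prediction against which the measured growth factors were compared is the ZSR-type volume-additivity rule; the form below is the standard one, and the component set and volume fractions used in the actual calculation are not reproduced here.

% ZSR-type mixing rule: each component is assumed to take up water independently
\[
  \mathrm{GF}_{\mathrm{mixed}}(\mathrm{RH})
  = \Bigl( \sum_{i} \varepsilon_i \, \mathrm{GF}_i(\mathrm{RH})^{3} \Bigr)^{1/3}
\]
% epsilon_i : volume fraction of component i in the dry particle
% GF_i      : hygroscopic growth factor of pure component i at the same RH
% A measured growth factor below GF_mixed, as in the scans described above, indicates
% that water uptake is being suppressed relative to independent behaviour.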
Abstract:
This study reports the potential toxicological impact of particles produced during biomass combustion by an automatic pellet boiler and a traditional logwood stove under various combustion conditions, using the novel profluorescent nitroxide probe BPEAnit. This probe is weakly fluorescent but yields strong fluorescence emission upon radical trapping or redox activity. Samples were collected by bubbling aerosol through an impinger containing BPEAnit solution, followed by fluorescence measurement. The fluorescence of BPEAnit was measured for particles produced during different combustion phases: at the beginning of burning (cold start), during stable combustion after refilling with fuel (warm start), and under poor burning conditions. For particles produced by the logwood stove under cold-start conditions, significantly higher amounts of reactive species per unit of particulate mass were observed compared to emissions produced during a warm start. In addition, sampling of logwood burning emissions after passing through a thermodenuder at 250 °C resulted in an 80–100% reduction of the fluorescence signal of the BPEAnit probe, indicating that the majority of reactive species were semivolatile. Moreover, the amount of reactive species showed a strong correlation with the amount of particulate organic material, indicating the importance of semivolatile organics in particle-related toxicity. Particle emissions from the pellet boiler, although of similar mass concentration, did not lead to an increase in fluorescence signal during any of the combustion phases.
Abstract:
This overview focuses on the application of chemometrics techniques to the investigation of soils contaminated by polycyclic aromatic hydrocarbons (PAHs) and metals, because these two important and very diverse groups of pollutants are ubiquitous in soils. The salient features of various studies carried out in the micro- and recreational environments of humans are highlighted in the context of the various multivariate statistical techniques available across discipline boundaries that have been effectively used in soil studies. Particular attention is paid to techniques employed in the geosciences that may be effectively utilized for environmental soil studies; classical multivariate approaches that may be used in isolation or as complementary methods to these are also discussed. Chemometrics techniques widely applied in atmospheric studies for identifying sources of pollutants or for determining the importance of contaminant source contributions to a particular site have seen little use in soil studies, but may be effectively employed in such investigations. Suitable programs are also available for suggesting mitigating measures in cases of soil contamination, and these are also considered. Specific techniques reviewed include pattern recognition techniques such as Principal Components Analysis (PCA), Fuzzy Clustering (FC) and Cluster Analysis (CA); geostatistical tools include variograms, Geographical Information Systems (GIS), contour mapping and kriging; source identification and contribution estimation methods reviewed include Positive Matrix Factorisation (PMF) and Principal Component Analysis on Absolute Principal Component Scores (PCA/APCS). Mitigating measures to limit or eliminate pollutant sources may be suggested through the use of ranking analysis and multi-criteria decision-making (MCDM) methods. These methods are mainly represented in this review by studies employing the Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and its associated graphic output, Geometrical Analysis for Interactive Aid (GAIA).
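As a generic illustration of the pattern-recognition step common to several of the reviewed techniques (a sketch only; the data, variable names and autoscaling choice are assumptions, not drawn from any cited study), PCA of a sites-by-analytes concentration matrix can be computed from a singular value decomposition:

# Generic PCA sketch for a soil survey matrix X (rows = sampling sites,
# columns = analyte concentrations such as individual PAHs and metals).
import numpy as np

def pca(X, n_components=2):
    """Return site scores, analyte loadings and explained variance ratios."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # autoscale each analyte
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]    # coordinates of sites
    loadings = Vt[:n_components].T                     # contribution of each analyte
    explained = (s**2 / np.sum(s**2))[:n_components]   # variance explained per PC
    return scores, loadings, explained

# Illustrative call with synthetic data standing in for measured concentrations.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(30, 8))   # 30 sites, 8 analytes
scores, loadings, explained = pca(X)
print("variance explained by PC1 and PC2:", np.round(explained, 2))

Sites that cluster together in the resulting score plot share similar contamination patterns, which is the usual starting point for the source identification methods (PMF, PCA/APCS) mentioned above.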
Abstract:
The economiser is a critical component for efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from the exhaust gases. When it fails, usually due to erosion causing a leak, the entire power station must be shut down to effect repairs. Not only are such repairs highly expensive, but the overall repair costs are significantly affected by fluctuations in electricity market prices, due to revenue lost during the outage. As a result, decisions about when to repair an economiser can alter the repair costs by millions of dollars. Therefore, economiser repair decisions are critical and must be optimised. However, making optimal repair decisions is difficult because economiser leaks are a type of interactive failure. If left unfixed, a leak in a tube can cause additional leaks in adjacent tubes which will need more time to repair. In addition, when choosing repair times, one also needs to consider a number of other uncertain inputs such as future electricity market prices and demands. Although many different decision models and methodologies have been developed, an effective decision-making method specifically for economiser repairs has yet to be defined. In this paper, we describe a Decision Tree based method to meet this need. An industrial case study is presented to demonstrate the application of our method.
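To make the decision framing concrete, a toy expected-cost comparison in the spirit of a decision tree is sketched below; every probability and cost figure, and the two-branch structure itself, is invented for illustration and does not come from the paper or its case study.

# Toy decision-tree style comparison: repair at the next planned opportunity vs.
# defer the repair. All probabilities and costs (in $M) are illustrative only.
def expected_cost(branches):
    """branches: list of (probability, cost) pairs for one decision option."""
    return sum(p * c for p, c in branches)

repair_now = [
    (0.7, 1.2),   # outage falls in a period of low electricity prices
    (0.3, 2.5),   # outage coincides with a price spike, so lost revenue dominates
]
defer = [
    (0.5, 0.8),   # leak stays confined and is repaired cheaply later
    (0.5, 4.0),   # leak spreads to adjacent tubes, forcing a longer outage
]

options = {"repair now": repair_now, "defer": defer}
for name, branches in options.items():
    print(f"{name}: expected cost = {expected_cost(branches):.2f} $M")
print("lowest expected cost:", min(options, key=lambda k: expected_cost(options[k])))

In a realistic tree the branches would be populated with forecast electricity prices, demand scenarios and leak-propagation probabilities rather than fixed numbers.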
Abstract:
Many cities worldwide face the prospect of major transformation as the world moves towards a global information order. In this new era, urban economies are being radically altered by dynamic processes of economic and spatial restructuring. The result is the creation of 'informational cities', now more popularly known as 'knowledge cities'. For the last two centuries, social production had been primarily understood and shaped by neo-classical economic thought that recognized only three factors of production: land, labor and capital. Knowledge, education, and intellectual capacity were secondary, if not incidental, factors. Human capital was assumed to be either embedded in labor or just one of numerous categories of capital. In recent decades, it has become apparent that knowledge is sufficiently important to deserve recognition as a fourth factor of production. Knowledge and information, and the social and technological settings for their production and communication, are now seen as keys to development and economic prosperity. The rise of knowledge-based opportunity has, in many cases, been accompanied by a concomitant decline in traditional industrial activity. The replacement of physical commodity production by more abstract forms of production (e.g. information, ideas, and knowledge) has, however paradoxically, reinforced the importance of central places and led to the formation of knowledge cities. Knowledge is produced, marketed and exchanged mainly in cities. Knowledge cities therefore aim to assist decision-makers in making their cities compatible with the knowledge economy and thus able to compete with other cities. Knowledge cities enable their citizens to foster knowledge creation, knowledge exchange and innovation. They also encourage the continuous creation, sharing, evaluation, renewal and updating of knowledge. To compete nationally and internationally, cities need knowledge infrastructure (e.g. universities, research and development institutes); a concentration of well-educated people; technological, mainly electronic, infrastructure; and connections to the global economy (e.g. international companies and finance institutions for trade and investment). Moreover, they must possess the people and things necessary for the production of knowledge and, as importantly, function as breeding grounds for talent and innovation. The economy of a knowledge city creates high value-added products using research, technology, and brainpower. The private and public sectors value knowledge, spend money on its discovery and dissemination and, ultimately, harness it to create goods and services. Although many cities call themselves knowledge cities, currently only a few cities around the world (e.g. Barcelona, Delft, Dublin, Montreal, Munich, and Stockholm) have earned that label. Many other cities aspire to the status of knowledge city through urban development programs that target knowledge-based urban development. Examples include Copenhagen, Dubai, Manchester, Melbourne, Monterrey, Singapore, and Shanghai.
Knowledge-Based Urban Development
To date, the development of most knowledge cities has proceeded organically, as a dependent and derivative effect of global market forces. Urban and regional planning has responded slowly, and sometimes not at all, to the challenges and opportunities of the knowledge city. That is changing, however. Knowledge-based urban development potentially brings both economic prosperity and a sustainable socio-spatial order. Its goal is to produce and circulate abstract work. The globalization of the world in the last decades of the twentieth century was a dialectical process. On the one hand, as the tyranny of distance was eroded, economic networks of production and consumption were constituted at a global scale. At the same time, spatial proximity remained as important as ever, if not more so, for knowledge-based urban development. Mediated by information and communication technology, personal contact, and the medium of tacit knowledge, organizational and institutional interactions are still closely associated with spatial proximity. The clustering of knowledge production is essential for fostering innovation and wealth creation. The social benefits of knowledge-based urban development extend beyond aggregate economic growth. On the one hand is the possibility of a particularly resilient form of urban development, secured in a network of connections anchored at local, national, and global coordinates. On the other hand, quality of place and life, defined by the level of public services (e.g. health and education) and by the conservation and development of the cultural, aesthetic and ecological values that give cities their character and attract or repel the creative class of knowledge workers, is a prerequisite for successful knowledge-based urban development. The goal is a secure economy in a human setting: in short, smart growth or sustainable urban development.
Abstract:
The enhanced social profile of not-for-profit organisations (NFPs) and the role of volunteers have resulted in calls for NFPs to be more accountable and to disclose information relating to volunteer contributions. In this study we identify, locate and categorise the extent of disclosures made in relation to volunteer contributions. We find that disclosure was more prevalent on NFP websites than in digital annual report disclosures. We also find that more NFPs provided disclosure on the activities of their volunteers than on other items pertaining to volunteers. The valuation of volunteer contributions was the least likely to be disclosed. The findings contribute to international debate over the inclusion of volunteer contributions in the assessment of an NFP's accountability for its resources and, ultimately, the enhancement of its sustainability.
Abstract:
An experimental set-up was used to visually observe the characteristics of bubbles as they moved up a column holding xanthan gum crystal suspensions. The bubble rise characteristics in xanthan gum solutions with suspended crystals are presented in this paper. The suspensions were made by adding polystyrene crystal particles of 0.23 mm mean diameter to different concentrations of xanthan gum solution. The influence of the dimensionless quantities, namely the Reynolds number, Re, the Weber number, We, and the drag coefficient, cd, is identified for the determination of the bubble rise velocity. The effect of these dimensionless groups, together with the Eötvös number, Eo, the Froude number, Fr, and the bubble deformation parameter, D, on the bubble rise velocity and bubble trajectory is analysed. The experimental results show that the average bubble velocity increases with increasing bubble volume in xanthan gum crystal suspensions. At high We, Eo and Re, bubbles are spherical-capped and their velocities are found to be very high. At low We and Eo, the surface tension force is significant compared to the inertia force. The viscous forces were shown to have no substantial effect on the bubble rise velocity for 45 < Re < 299. The results show that the drag coefficient decreases with increasing bubble velocity and Re. The trajectory analysis showed that small bubbles followed a zigzag motion while larger bubbles followed a spiral motion. The smaller bubbles experienced less horizontal motion in crystal-suspended xanthan gum solutions, while larger bubbles exhibited a greater degree of spiral motion than seen in previous studies of bubble rise in xanthan gum solutions without crystals.
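For reference, the dimensionless groups named above are conventionally defined as follows for a bubble of equivalent diameter d rising at velocity U in a liquid of density rho, surface tension sigma and (apparent) viscosity mu; the precise forms adopted in the paper, in particular the apparent viscosity used for the shear-thinning xanthan gum suspension, may differ.

\[
  \mathrm{Re} = \frac{\rho U d}{\mu}, \qquad
  \mathrm{We} = \frac{\rho U^{2} d}{\sigma}, \qquad
  \mathrm{Eo} = \frac{g\,\Delta\rho\, d^{2}}{\sigma}, \qquad
  \mathrm{Fr} = \frac{U}{\sqrt{g d}}, \qquad
  c_d = \frac{4\, g\, d\,\Delta\rho}{3\, \rho\, U^{2}}
\]
% Delta rho is the liquid-gas density difference; the drag coefficient expression
% follows from a force balance on a bubble rising at its terminal velocity.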
Abstract:
This report presents the findings of an exploratory study into the perceptions held by students regarding the use of criterion-referenced assessment in an undergraduate differential equations class. Students in the class were largely unaware of the concept of criterion referencing and of the various interpretations that this concept has among mathematics educators. Our primary goal was to investigate whether explicitly presenting assessment criteria to students was useful to them and guided them in responding to assessment tasks. Quantitative data and qualitative feedback from students indicate that while students found the criteria easy to understand and useful in informing them how they would be graded, the manner in which they actually approached the assessment activity was not altered by the explicitly communicated grading criteria.
Abstract:
A computational fluid dynamics (CFD) analysis has been performed for a flat plate photocatalytic reactor using the CFD code FLUENT. Under the simulated conditions (Reynolds number, Re, around 2650), a detailed time-accurate computation shows the different stages of flow evolution and the effects of the finite length of the reactor in creating flow instability, which is important for improving the performance of the reactor for storm and wastewater reuse. The efficiency of a photocatalytic reactor for pollutant decontamination depends on the reactor hydrodynamics and configuration. This study aims to investigate the role of different parameters in optimising the reactor design for improved performance. In this regard, further modelling and experimental efforts are ongoing to better understand the interplay of the parameters that influence the performance of the flat plate photocatalytic reactor.
Abstract:
Urban water quality can be significantly impaired by the build-up of pollutants such as heavy metals and volatile organics on urban road surfaces due to vehicular traffic. Any control strategy for mitigating the traffic-related build-up of heavy metals and volatile organic pollutants should be based on knowledge of their build-up processes. This paper presents the outcomes of a detailed experimental investigation into the build-up processes of heavy metals and volatile organics. It was found that traffic parameters such as average daily traffic, volume-over-capacity ratio and surface texture depth had similarly strong correlations with the build-up of heavy metals and volatile organics. Multicriteria decision analyses revealed that the 1–74 µm particulate fraction of total suspended solids (TSS) could be regarded as a surrogate indicator for particulate heavy metals in build-up, and that the same fraction of total organic carbon could be regarded as a surrogate indicator for particulate volatile organics in build-up. In terms of pollutant affinity, TSS was found to be the predominant parameter for particulate heavy metals build-up, and total dissolved solids was found to be the predominant parameter for the potential dissolved fraction of heavy metals build-up. It was also found that land use did not play a significant role in the build-up of traffic-generated heavy metals and volatile organics.
Abstract:
The heterogeneous photocatalytic oxidation process offers versatile promise for the detoxification and disinfection of wastewater containing hazardous organic compounds, such as pesticides and phenolic compounds, in storm and wastewater effluent. This process has gained wide attention due to its effectiveness in degrading and mineralizing organic compounds into harmless and often useful components. To develop an efficient photocatalytic process, titanium dioxide has been actively studied in recent years due to its excellent performance as a photocatalyst under UV light irradiation. This paper critically evaluates and highlights recent developments in heterogeneous photocatalytic systems, with a special focus on storm and wastewater treatment applications.