941 results for 290601 Chemical Engineering Design
Abstract:
In the modern engineering design cycle, the use of computational tools has become a necessity. The complexity of the engineering systems considered for design increases dramatically as the demand for advanced and innovative design concepts and engineering products expands. At the same time, advances in available technology, both in computational resources and power and in the intelligence of design software, accommodate these demands and make computational design a viable approach to real-world engineering problems. This class of design optimisation problems is by nature multi-disciplinary. In the present work we establish enhanced optimisation capabilities within the Nimrod/O tool for massively distributed execution of computational tasks on cluster and computational grid resources, and develop the potential to combine and benefit from available technological advances in both software and hardware. We develop the interface between an in-house Free Form Deformation geometry-management code, the 2D airfoil aerodynamic efficiency evaluation tool XFoil, and the well-established multi-objective heuristic optimisation algorithm NSGA-II. A simple airfoil design problem is defined to demonstrate the functionality of the design system and to provide a framework for future development and testing with other state-of-the-art optimisation algorithms such as the Multi-Objective Genetic Algorithm (MOGA) and Multi-Objective Tabu Search (MOTS). Ultimately, computationally expensive industrial design cases that could not be investigated before can be realised within the presented framework. ©2012 AIAA.
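A minimal sketch of the Pareto-selection core of such a multi-objective loop, with toy objective functions standing in for the FFD/XFoil evaluation (the objectives, parameter names and population settings are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(x):
    """Toy stand-in for the FFD + XFoil evaluation: two competing
    objectives (e.g. a drag proxy and a negated lift proxy),
    both to be minimised. Purely illustrative."""
    drag = np.sum(x**2)
    lift = -np.sum(np.sin(3 * x))
    return np.array([drag, lift])

def pareto_front(F):
    """Return indices of non-dominated points (minimisation),
    the selection criterion at the heart of NSGA-II."""
    n = len(F)
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dominated[i] = True
                break
    return np.where(~dominated)[0]

# Random "population" of 8-parameter shape candidates.
pop = rng.uniform(-1.0, 1.0, size=(200, 8))
F = np.array([evaluate(x) for x in pop])
front = pareto_front(F)
print(f"{len(front)} non-dominated designs out of {len(pop)}")
```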
Abstract:
Engineering changes (ECs) are essential in complex product development, and their management is a crucial discipline for engineering industries. Numerous methods have been developed to support EC management (ECM), of which the change prediction method (CPM) is one of the most established. This article contributes a requirements-based benchmarking approach to assess and improve existing methods; the CPM is selected for improvement. First, based on a comprehensive literature survey and insights from industrial case studies, a set of 25 requirements for change management methods is developed. Second, these requirements are used as benchmarking criteria to assess the CPM against seven other promising methods. Third, the best-in-class solutions for each requirement are investigated to derive improvement suggestions for the CPM. Finally, an enhanced ECM method implementing these improvements is presented. © 2013 The Author(s). Published by Taylor & Francis.
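A minimal sketch of the benchmarking step, assuming a simple ordinal score per requirement (the requirement labels, method names and scores below are placeholders, not data from the article):

```python
# Hypothetical ordinal scores: 0 = not satisfied, 1 = partial, 2 = satisfied.
scores = {
    "CPM":      {"R1": 2, "R2": 1, "R3": 0},
    "Method_A": {"R1": 1, "R2": 2, "R3": 1},
    "Method_B": {"R1": 0, "R2": 1, "R3": 2},
}

requirements = ["R1", "R2", "R3"]

# Best-in-class method(s) per requirement, i.e. where to look for
# improvement suggestions for the CPM.
for req in requirements:
    best = max(s[req] for s in scores.values())
    leaders = [m for m, s in scores.items() if s[req] == best]
    print(f"{req}: best score {best} held by {', '.join(leaders)}")
```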
Oxygen carrier dispersion in inert packed beds to improve performance in chemical looping combustion
Abstract:
Various packed beds of copper-based oxygen carriers (CuO on Al2O3) were tested over 100 cycles of low-temperature (673 K) Chemical Looping Combustion (CLC) with H2 as the fuel gas. The oxygen carriers were uniformly mixed with alumina (Al2O3) in order to investigate the level of separation necessary to prevent agglomeration. It was found that a mass ratio of 1:6 oxygen carrier to alumina gave the best performance in terms of stable, repeating hydrogen breakthrough curves over 100 cycles. In order to quantify the average separation achieved in the mixed packed beds, two sphere-packing models were developed. The hexagonal close-packing model assumed a uniform spherical packing structure and based the separation calculations on a hypergeometric probability distribution. The more computationally intensive full-scale model used discrete element modelling to simulate random packing arrangements governed by gravity and contact dynamics. Both models predicted that the average 'nearest neighbour' particle separation drops to near zero for oxygen carrier mass fractions of x ≥ 0.25. For the packed bed systems studied, agglomeration was observed when the mass fraction of oxygen carrier was above this threshold. © 2013 Elsevier B.V.
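A minimal sketch of the hexagonal close-packing estimate described above, assuming equal-sized spheres with 12 nearest-neighbour sites, treating mass fraction as number fraction, and modelling neighbour identity as a hypergeometric draw (the particle counts are illustrative):

```python
from scipy.stats import hypergeom

def isolation_probability(n_total, n_carrier, k_neighbours=12):
    """Probability that a given oxygen-carrier particle has no other
    carriers among its k nearest neighbours: sample k neighbours
    without replacement from the remaining n_total - 1 particles,
    of which n_carrier - 1 are carriers."""
    return hypergeom.pmf(0, n_total - 1, n_carrier - 1, k_neighbours)

n_total = 10_000
for frac in (0.05, 0.14, 0.25, 0.40):   # 1:6 mixing is roughly 0.14
    p = isolation_probability(n_total, int(frac * n_total))
    print(f"carrier fraction {frac:.2f}: P(isolated) = {p:.3f}")
```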
Abstract:
The concepts of reliability, robustness, adaptability, versatility, resilience and flexibility have been used to describe how a system design can mitigate the likely impact of uncertainties without removing their sources. With the increasing number of publications on designing systems to have such ilities, there is a need to clarify the relationships between the different ideas. This short article introduces a framework to compare these different ways in which a system can be insensitive to uncertainty, clarifying their meaning in the context of complex system design. We focus on relationships between the ilities listed above and do not discuss in detail methods to design-for-ilities. © 2013 The Author(s). Published by Taylor & Francis.
Abstract:
Engineering changes are essential in any product development, and their management has become a crucial discipline. Research in engineering change management has produced methods and tools to support dealing with changes. This work extends the change prediction method through the incorporation of a function–behaviour–structure (FBS) scheme. The additional levels of detail provide the rationale for change propagation and allow more proactive management of changes. First, we develop the ontology of the method based on a comprehensive comparison of three seminal functional reasoning schemes. Then, we demonstrate the FBS Linkage technique by applying it to a diesel engine. Finally, we evaluate the method. © 2014, Springer-Verlag London.
Abstract:
As a waste collected from restaurants, trap grease is a chemically challenging feedstock for biodiesel production because of its high free fatty acid (FFA) content. A central composite design was used to evaluate the effect of methanol quantity, acid concentration and reaction time on the synthesis of biodiesel from trap grease with 50% FFA, with the reaction temperature fixed at 95 °C. Using response surface methodology, a quadratic polynomial equation for ester content was obtained by multiple regression analysis. Verification experiments confirmed the validity of the predicted model. To achieve the highest ester content of crude biodiesel (89.67%), the critical values of the three variables were a methanol-to-oil molar ratio of 35.00, a catalyst concentration (based on trap grease) of 11.27 wt% and a reaction time of 4.59 h. The crude biodiesel could be purified by a second distillation to meet the Korean biodiesel specification.
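A minimal sketch of the response-surface step, fitting a full quadratic model in three factors by least squares (the data points below are synthetic placeholders, not the study's measurements):

```python
import numpy as np

def quadratic_design_matrix(X):
    """Full quadratic model in 3 factors: intercept, linear,
    two-factor interaction and squared terms (10 coefficients)."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1**2, x2**2, x3**2,
    ])

# Synthetic stand-in for central-composite-design runs:
# columns = molar ratio, catalyst wt%, time (h); y = ester content (%).
rng = np.random.default_rng(1)
X = rng.uniform([20, 5, 2], [40, 15, 6], size=(20, 3))
y = (90 - 0.01 * (X[:, 0] - 35)**2 - 0.2 * (X[:, 1] - 11)**2
        - 1.0 * (X[:, 2] - 4.6)**2 + rng.normal(0, 0.5, 20))

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```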
Abstract:
We have demonstrated the design of a new type of fluorescent assay based on the inner filter effect (IFE) of metal nanoparticles (NPs), which is conceptually different from previously reported metal-NP-based fluorescent assays. With a high extinction coefficient and a tunable plasmon absorption feature, metal NPs are expected to function as a powerful absorber for tuning the emission of the fluorophore in IFE-based fluorescent assays. In this work, we present two proof-of-concept examples based on the IFE of Au NPs, choosing MDMO-PPV as a model fluorophore whose fluorescence could be tuned by the absorbance of the Au NPs with much higher sensitivity than the corresponding absorbance approach.
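The inner filter effect is commonly approximated to first order by F = F0 · 10^(−(A_ex + A_em)/2), where the absorber screens both the excitation and the emission. A minimal sketch of this relation, with hypothetical Au NP absorbance values (not the paper's data), shows the exponential dependence of the fluorescence signal on absorbance that underlies the claimed sensitivity gain:

```python
import numpy as np

def ife_attenuated_fluorescence(F0, A_ex, A_em):
    """First-order inner-filter-effect attenuation: the absorber
    (here, Au NPs) screens both excitation and emission, each
    weighted by half the optical path on average."""
    return F0 * 10 ** (-(A_ex + A_em) / 2)

F0 = 1.0                              # arbitrary unquenched intensity
for A in np.linspace(0.0, 1.0, 6):    # hypothetical Au NP absorbance
    F = ife_attenuated_fluorescence(F0, A, A)
    print(f"A = {A:.1f}: F/F0 = {F:.3f}")
```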
Abstract:
The pyrolytic and kinetic characteristics of Enteromorpha prolifera from the Yellow Sea were evaluated at heating rates of 10, 20 and 50 °C min⁻¹. The results indicated that three stages appeared during pyrolysis: dehydration, primary devolatilization and residual decomposition. Differences in heating rate resulted in considerable differences in the pyrolysis of E. prolifera. Specifically, increasing the heating rate shifted the initial temperature, the peak temperature and the maximum weight loss to higher values. The average activation energy of E. prolifera was 228.1 kJ mol⁻¹, the pre-exponential factors ranged from 49.93 to 63.29 and the reaction orders ranged from 2.2 to 3.7. In addition, there were kinetic compensation effects between the pre-exponential factors and the activation energy. Finally, the minimum activation energy was obtained at a heating rate of 20 °C min⁻¹. © 2009 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
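A minimal sketch of the nth-order Arrhenius rate form, dα/dt = A·exp(−Ea/RT)·(1−α)ⁿ, that underlies such kinetic fits. The activation energy is taken from the abstract; the pre-exponential factor is assumed to be reported as ln A (the abstract gives no units), and the remaining values are illustrative:

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
Ea = 228.1e3       # J mol^-1, average activation energy from the abstract
lnA = 55.0         # assumed ln A, within the reported 49.93-63.29 range
n = 3.0            # illustrative, within the reported 2.2-3.7 range

def rate(T, alpha):
    """nth-order Arrhenius devolatilization rate, d(alpha)/dt."""
    return np.exp(lnA) * np.exp(-Ea / (R * T)) * (1 - alpha) ** n

for T in (450.0, 500.0, 550.0):   # K
    print(f"T = {T:.0f} K: d(alpha)/dt = {rate(T, alpha=0.2):.3e} s^-1")
```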
Abstract:
Book review - GREEN, Don W.; PERRY, Robert H., eds. - Perry's Chemical Engineers' Handbook. 8th ed. New York: McGraw-Hill, 2008. (Chemical Engineering Series). ISBN 978-0-07142-294-9
Abstract:
This study considered the optimisation of granola breakfast cereal manufacturing processes by wet granulation and pneumatic conveying. Granola is an aggregated food product used as a breakfast cereal and in cereal bars. Processing of granola involves mixing the dry ingredients (typically oats, nuts, etc.) followed by the addition of a binder, which can contain honey, water and/or oil. In this work, two parallel wet granulation processes for producing aggregate granola products were designed and operated: (a) a high shear mixing granulation process followed by drying/toasting in an oven, and (b) a continuous fluidised bed process followed by drying/toasting in an oven.

In high shear granulation, the influence of process parameters on key granule aggregate quality attributes, such as granule size distribution and the textural properties of granola, was investigated. The experimental results show that impeller rotational speed is the single most important process parameter influencing granola's physical and textural properties; binder addition rate and wet massing time also have significant effects on granule properties. Increasing the impeller speed and wet massing time increases the median granule size and correlates positively with density. The combination of high impeller speed and low binder addition rate produced granules with the highest levels of hardness and crispness. In the fluidised bed granulation process, the effect of nozzle air pressure and binder spray rate on key aggregate quality attributes was studied. The results show that a decrease in nozzle air pressure leads to a larger mean granule size, and that the combination of the lowest nozzle air pressure and the lowest binder spray rate yields granules with the highest levels of hardness and crispness. Overall, the high shear granulation process produced larger, denser, less porous and stronger (less likely to break) aggregates than the fluidised bed process.

The study also examined particle breakage, during pneumatic conveying, of granola produced by both granulation processes. Products were conveyed in a purpose-built rig designed to mimic product conveying and packaging, using three configurations: a straight pipe, a rig with two 45° bends, and a rig with one 90° bend. Particle breakage increases with applied pressure drop, and the 90° bend causes more attrition at all conveying velocities than the other pipe geometries. For the granules produced in the high shear granulator, those produced at the highest impeller speed, while the largest, show the lowest proportional breakage, whereas the smaller granules produced at the lowest impeller speed show the highest breakage. This clearly shows the importance of shear history (during granule production) on breakage during subsequent processing. For fluidised bed granulation, no single operating parameter was found to have a significant effect on breakage during subsequent conveying.

Finally, a simple power law breakage model based on process input parameters was developed for both manufacturing processes. It was found suitable for predicting the breakage of granola breakfast cereal at various applied air velocities and for a number of pipe configurations, taking shear history into account.
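A minimal sketch of the kind of power law breakage model described, assuming the breakage fraction scales as a power of conveying air velocity with a pipe-geometry prefactor (the coefficients and functional form are illustrative, not the thesis's fitted model):

```python
# Hypothetical geometry prefactors: bends increase attrition.
GEOMETRY_FACTOR = {"straight": 1.0, "two_45_bends": 1.4, "one_90_bend": 1.8}

def breakage_fraction(velocity_ms, geometry, k=1e-4, n=2.5):
    """Illustrative power law: breakage fraction = c_geom * k * v**n."""
    return GEOMETRY_FACTOR[geometry] * k * velocity_ms ** n

for geom in GEOMETRY_FACTOR:
    for v in (10.0, 20.0, 30.0):   # conveying air velocity, m/s
        print(f"{geom:>13s}, v = {v:4.1f} m/s: "
              f"breakage = {breakage_fraction(v, geom):.4f}")
```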
Abstract:
New product design challenges, related to customer needs, product usage and environments, face companies when they expand their product offerings to new markets. Among the main challenges are the lack of quantifiable information, product experience and field data. Designing reliable products under such challenges requires flexible reliability assessment processes that can capture the variables and parameters affecting overall product reliability and allow different design scenarios to be assessed. These challenges also suggest that a mechanistic (physics-of-failure, PoF) reliability approach is a suitable framework for reliability assessment, since mechanistic reliability recognizes the primary factors affecting design reliability. This research views the designed entity as a “system of components required to deliver specific operations” and addresses the above challenges by, first, developing a design synthesis that allows descriptive operations/system-component relationships to be realized; second, developing mathematical damage models that evaluate each component's time-to-failure (TTF) distribution given (1) the descriptive design model, (2) customer usage knowledge and (3) design material properties; and, last, developing a procedure that integrates the components' damage models to assess the mechanical system's reliability over time. Analytical and numerical simulation models were developed to capture the relationships between operations and components, the mathematical damage models and the assessment of system reliability. The process was able to affect the design form during the conceptual design phase by providing stress goals to meet component reliability targets, and to numerically assess the reliability of a system based on components' mechanistic TTF distributions while informing component design during the embodiment phase. The process was used to assess the reliability of an internal combustion engine manifold during the design phase; the results were compared with field reliability data and found to be conservative. The research focused on mechanical systems affected by independent mechanical failure mechanisms that are influenced by the design process; the influence of assembly and manufacturing stresses and defects is not a focus of this research.
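A minimal sketch of the final integration step, assuming independent failure mechanisms so that system reliability is the product of component reliabilities, with Weibull TTF distributions standing in for the mechanistically derived damage models (component names and shape/scale values are hypothetical):

```python
import numpy as np

# Hypothetical Weibull parameters (shape beta, scale eta in hours)
# per component, standing in for mechanistic TTF distributions.
components = {
    "runner_wall": (2.1, 8.0e4),
    "flange":      (1.6, 1.2e5),
    "gasket_seat": (3.0, 6.0e4),
}

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta)."""
    return np.exp(-(t / eta) ** beta)

def system_reliability(t):
    """Series system of independent failure mechanisms:
    R_sys(t) = product of component reliabilities."""
    r = 1.0
    for beta, eta in components.values():
        r *= weibull_reliability(t, beta, eta)
    return r

for t in (1e4, 3e4, 5e4):   # hours
    print(f"t = {t:.0e} h: R_sys = {system_reliability(t):.4f}")
```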
Abstract:
Logic-based models are thriving within artificial intelligence. A great number of new logics have been defined and their theory investigated. Epistemic logics introduce modal operators for knowledge or belief; deontic logics are about norms and introduce operators of deontic necessity and possibility (i.e., obligation or prohibition). Then there is a much-investigated class, temporal logics, to whose application in engineering this special issue is devoted. This kind of formalism deserves wider recognition and application in engineering, a domain where other kinds of temporal models (e.g., Petri nets) are by now a fairly standard part of the modelling toolbox.
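As a brief illustration of the kind of temporal-logic statement meant here (the property itself is invented for the example): in linear temporal logic, a safety-monitoring requirement such as "every alarm is eventually followed by a shutdown" is written G(alarm → F shutdown), where G means "globally/always" and F means "finally/eventually"; a model checker can then verify this formula against a state-transition model of the controller.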
Abstract:
The demands of the engineering design process, particularly for structural integrity, have exploited computational modelling techniques and software tools for decades. Frequently, the shape of structural components or assemblies is determined to optimise flow distribution or heat transfer characteristics and to ensure that structural performance in service is adequate. From the perspective of computational modelling, these activities are typically separated into:
• fluid flow and associated heat transfer analysis (possibly with chemical reactions), based upon Computational Fluid Dynamics (CFD) technology;
• structural analysis, again possibly with heat transfer, based upon finite element analysis (FEA) techniques.
Abstract:
Today most IC and board designs are undertaken using two-dimensional graphics tools and rule checks. System-in-package is driving three-dimensional design concepts, and this poses a number of challenges for electronic design automation (EDA) software vendors. System-in-package requires three-dimensional EDA tools and design collaboration systems with appropriate manufacturing and assembly rules for these expanding technologies. Simulation and analysis tools today focus on one aspect of the design requirement, for example thermal, electrical or mechanical; system-in-package requires analysis and simulation tools that can easily capture complex three-dimensional structures and provide integrated, fast solutions to issues such as thermal management, reliability and electromagnetic interference. This paper discusses some of the challenges faced by the design and analysis community in providing appropriate tools to engineers for system-in-package design.
Abstract:
The design of a differential amplifier with high gain accuracy and high linearity is presented in this paper. The amplifier design is based on the negative impedance compensation technique reported by the authors in [1]. A negative impedance with high precision, low sensitivity, wide input signal range and a simple structure is used to compensate the differential amplifier. Analysis and simulation results show that gain accuracy and linearity can be improved significantly with negative impedance compensation.
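A minimal sketch of the basic principle behind such compensation, assuming the compensating network acts as a negative resistance in parallel with the amplifier's finite output resistance, so the effective load (and hence gm·R_eff gain) rises as the negative resistance approaches the output resistance (component values are illustrative, not from the paper):

```python
def parallel(r1, r2):
    """Parallel combination of two (possibly negative) resistances."""
    return r1 * r2 / (r1 + r2)

def diff_pair_gain(gm, r_out, r_neg=None):
    """Small-signal gain magnitude gm * R_eff; a negative resistance
    -|r_neg| in parallel with r_out raises R_eff and hence the gain."""
    r_eff = r_out if r_neg is None else parallel(r_out, -abs(r_neg))
    return gm * r_eff

gm = 2e-3          # S, illustrative transconductance
r_out = 50e3       # ohm, illustrative output resistance
print(f"uncompensated gain: {diff_pair_gain(gm, r_out):.1f}")
for r_neg in (100e3, 60e3, 55e3):   # approaching r_out boosts gain
    g = diff_pair_gain(gm, r_out, r_neg)
    print(f"R_neg = {r_neg/1e3:.0f} kΩ: gain = {g:.1f}")
```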