917 results for Statistical mixture-design optimization
Abstract:
The background to this review paper is research we have performed over recent years aimed at developing a simulation system capable of handling large-scale, real-world applications implemented in an end-to-end parallel, scalable manner. The particular focus of this paper is the use of a Level Set solid modeling geometry kernel within this parallel framework to enable automated design optimization without topological restrictions and on geometries of arbitrary complexity. Also described is another interesting application of Level Sets: their use in guiding the export of a body-conformal mesh from our basic cut-Cartesian background octree mesh, which permits third-party flow solvers to be deployed. As practical demonstrations, meshes of guaranteed quality are generated and flow-solved for a B747 in full landing configuration, and an automated optimization is performed on a cooled turbine tip geometry. Copyright © 2009 by W.N. Dawes.
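For readers unfamiliar with the cut-Cartesian idea, the sketch below shows how the sign of a level-set function at cell corners classifies background octree cells as inside, outside, or cut. The spherical signed-distance function and the cell-classification names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Minimal sketch: classify axis-aligned Cartesian cells against a Level Set
# geometry (here an illustrative sphere). phi < 0 is taken as inside the solid.

def phi_sphere(p, centre=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - centre) - radius

def classify_cell(lo, hi, phi=phi_sphere):
    """Label a cell by the sign of phi at its eight corners."""
    corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                                  for y in (lo[1], hi[1])
                                  for z in (lo[2], hi[2])])
    signs = np.sign([phi(c) for c in corners])
    if np.all(signs < 0):
        return "inside"     # fully in the solid: discard from the flow mesh
    if np.all(signs > 0):
        return "outside"    # fully in the fluid: keep as a Cartesian cell
    return "cut"            # intersects the zero level set: needs cutting

print(classify_cell(np.array([0.5, 0.5, 0.5]), np.array([1.5, 1.5, 1.5])))  # "cut"
```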
Abstract:
This paper reports on the design, optimization and testing of a self-regulating valve for single-phase liquid cooling of microelectronics. Its purpose is to maintain the integrated circuit (IC) at constant temperature and to reduce power consumption by diminishing the flow generated by the pump as a function of the cooling requirements. It uses a thermopneumatic actuation principle that combines zero power consumption and small size with a high flow rate and low manufacturing cost. The valve actuation is provided by the thermal expansion of a liquid (the actuation fluid) which actuates the valve and, at the same time, provides feedback sensing. A maximum flow rate of 38 kg h−1 passes through the valve for heat loads up to 500 W. The valve is able to reduce the pumping power by up to 60% and can maintain the IC at a more uniform temperature. © 2011 IOP Publishing Ltd.
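The thermopneumatic principle lends itself to a back-of-the-envelope check: the membrane stroke comes from the volumetric thermal expansion of the trapped actuation fluid. All numbers below (expansion coefficient, trapped volume, membrane area) are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope model of thermopneumatic actuation. Illustrative
# assumptions only, not the paper's data.

beta = 9.5e-4   # 1/K, volumetric expansion coefficient (e.g. an alcohol)
V0 = 50e-9      # m^3, trapped actuation-fluid volume (50 microlitres)
dT = 20.0       # K, temperature rise sensed from the IC
A = 20e-6       # m^2, effective membrane area

dV = beta * V0 * dT        # expanded volume, m^3
stroke = dV / A            # membrane deflection opening/closing the valve, m
print(f"dV = {dV * 1e9:.2f} uL, stroke = {stroke * 1e6:.1f} um")
```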
Abstract:
Two main perspectives have been developed within the Multidisciplinary Design Optimization (MDO) literature for classifying and comparing MDO architectures: a numerical point of view and a formulation/data-flow point of view. Although significant work has been done from both perspectives, neither has provided much in the way of a priori information or predictive power about architecture performance. In this report, we outline a new perspective, called the geometric perspective, which we believe will be able to provide such predictive power. Using tools from differential geometry, we take several prominent architectures and describe mathematically how each constructs the space through which it moves. We then consider how each architecture moves through the space it has constructed. Taken together, these investigations show how each architecture relates to the original feasible design manifold, how the architectures relate to each other, and how each architecture deals with the design coupling inherent in the original system. This in turn lays the groundwork for further theoretical comparisons and analyses of MDO architectures and their behaviour using tools and techniques derived from differential geometry. © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
Abstract:
Multidisciplinary Design Optimization (MDO) is a methodology for optimizing large coupled systems. Over the years, a number of different MDO decomposition strategies, known as architectures, have been developed, and various pieces of analytical work have been done on MDO and its architectures. However, MDO lacks an overarching paradigm that would unify the field and promote cumulative research. In this paper, we propose a differential-geometry framework as such a paradigm: differential geometry comes with its own set of analysis tools and a long history of use in theoretical physics. We begin by outlining some of the mathematics behind differential geometry and then translate MDO into that framework. This initial work yields new tools and techniques for studying MDO and its architectures, while producing a naturally arising measure of design coupling. The framework also suggests several new areas for the exploration and analysis of MDO systems. At this point, analogies with particle dynamics and with systems of differential equations look particularly promising, both for the wealth of background theory they bring and for the predictive and evaluative power they hold. © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
Abstract:
This paper presents the research and development of a large-workspace, high-precision 5-axis laser machining robot system. With the aim of improving its absolute accuracy, the structure of the large-workspace gantry-type robot and error-compensation methods for high-precision robots are discussed. Finite element analysis was used to optimize the design of the robot body, ensuring the soundness of the design of this high-precision, large-scale laser machining robot. Based on measurement data, a robot error model was established and the systematic errors of the robot were compensated, with good results, guaranteeing the laser machining accuracy of the robot system.
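One common form of measurement-based error compensation is to fit an affine error model to commanded-versus-measured positions by least squares and then pre-correct the commands. The sketch below illustrates that idea on synthetic data; the model form and all numbers are illustrative assumptions, since the paper's actual error model is not given here.

```python
import numpy as np

# Fit an affine error model e(p) = A @ p + b to (commanded, measured) pairs
# by least squares, then pre-correct commands by subtracting the predicted
# error. Synthetic data; illustrative assumptions only.

rng = np.random.default_rng(0)
commanded = rng.uniform(0.0, 2.0, size=(200, 3))           # m, workspace points
A_true = np.array([[1e-4, 0, 0], [0, 2e-4, 0], [0, 0, -1e-4]])
b_true = np.array([5e-5, -3e-5, 8e-5])
measured = commanded + commanded @ A_true.T + b_true       # synthetic errors

X = np.hstack([commanded, np.ones((len(commanded), 1))])   # regressors [p, 1]
coef, *_ = np.linalg.lstsq(X, measured - commanded, rcond=None)
A_hat, b_hat = coef[:3].T, coef[3]

def compensate(p):
    """Pre-correct a commanded point so the realised point lands on target."""
    return p - (A_hat @ p + b_hat)

target = np.array([1.0, 1.0, 1.0])
cmd = compensate(target)
realised = cmd + A_true @ cmd + b_true                     # what the robot reaches
print(f"residual error: {np.abs(realised - target).max():.1e} m")
```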
Abstract:
Understanding and modeling the factors that underlie the growth and evolution of network topologies are basic questions that impact capacity planning, forecasting, and protocol research. Early topology generation work focused on generating network-wide connectivity maps, either at the AS level or the router level, typically with an eye towards reproducing abstract properties of observed topologies. More recently, advocates of an alternative "first-principles" approach have questioned the feasibility of realizing representative topologies with simple generative models that do not explicitly incorporate real-world constraints, such as the relative costs of router configurations, into the model. Our work synthesizes these two lines by designing a topology generation mechanism that incorporates first-principles constraints. Our goal is more modest than constructing an Internet-wide topology: we aim to generate representative topologies for single ISPs. However, our methods also go well beyond previous work, as we annotate these topologies with representative capacity and latency information. Taking only the demand for network services over a given region as input, we propose a natural cost model for building and interconnecting PoPs and formulate the resulting optimization problem faced by an ISP. We devise hill-climbing heuristics for this problem and demonstrate that the solutions we obtain are quantitatively similar to those in measured router-level ISP topologies, with respect to both topological properties and fault tolerance.
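The flavour of such a hill-climbing heuristic fits in a few lines. The toy below places k PoPs to minimise a simple nearest-PoP distance cost over demand points; the cost model and local move are illustrative assumptions, not the paper's formulation.

```python
import random, math

# Toy hill-climbing in the spirit of the paper's heuristics: place k PoPs to
# serve demand points, minimising total distance from each demand point to
# its nearest PoP. Illustrative assumptions only.

random.seed(1)
demand = [(random.random(), random.random()) for _ in range(200)]
k = 5

def cost(pops):
    return sum(min(math.dist(d, p) for p in pops) for d in demand)

pops = [(random.random(), random.random()) for _ in range(k)]
best = cost(pops)
for step in range(2000):
    i = random.randrange(k)
    candidate = list(pops)
    # Local move: jitter one PoP location.
    candidate[i] = (pops[i][0] + random.gauss(0, 0.05),
                    pops[i][1] + random.gauss(0, 0.05))
    c = cost(candidate)
    if c < best:                 # greedy: accept only improving moves
        pops, best = candidate, c
print(f"final cost: {best:.2f}")
```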
Abstract:
A supported ionic liquid phase (SILP) catalyst prepared from [PrMIM][Ph2P(3-C6H4SO3)] (PrMIM = 1-propyl-3-methylimidazolium), [Rh(CO)2(acac)] (acacH = 2,4-pentanedione), [OctMIM]NTf2 (OctMIM = 1-n-octyl-3-methylimidazolium, Tf = CF3SO2) and microporous silica has been used for the continuous-flow hydroformylation of 1-octene in the presence of compressed CO2. Statistical experimental design was used to show that the reaction rate is not much affected by either the film thickness (IL loading) or the syngas:substrate ratio; however, an interaction between the syngas:substrate ratio and the film thickness affecting the reaction rate was revealed. Increasing the substrate flow led to increased reaction rates but lower overall yields. One of the most important parameters proved to be the phase behaviour of the mobile phase, which was studied by varying the reaction pressure. At low CO2 pressures, or when N2 was used instead of CO2, rates were low because of poor gas diffusion to the catalytic sites in the SILP. Furthermore, leaching of IL and Rh was high because the substrate is a liquid and the IL had been designed to dissolve in it. As the CO2 pressure was increased, the reaction rate increased and the IL and Rh leaching were reduced, because an expanded liquid phase developed. Owing to its lower viscosity, the expanded liquid allows better transport of gases to the catalyst, and its reduced polarity makes it a poorer solvent for the IL and the catalyst. Above 100 bar (close to the transition to a single phase at 106 bar), the rate of reaction dropped again with increasing pressure because the flowing phase becomes a progressively better solvent for the alkene, reducing its partitioning into the IL film. Under optimised conditions, the catalyst was shown to be stable over at least 40 h of continuous catalysis, with a steady-state turnover frequency (TOF, mol product (mol Rh)−1) of 500 h−1 at low Rh leaching (0.2 ppm). The selectivity of the catalyst was not much affected by variation of the process parameters; the linear:branched (l:b) ratios were ca. 3, similar to those obtained using the very same catalyst in conventional organic solvents.
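The main-effect-versus-interaction distinction reported above is exactly what a two-level factorial design separates. A minimal sketch with made-up rate values (the coded −1/+1 levels stand for low/high film thickness and syngas:substrate ratio):

```python
import numpy as np

# How a 2^2 factorial design separates main effects from the interaction.
# Rate values are made up for illustration.

#              film  syngas:substrate  rate
runs = np.array([[-1, -1, 10.0],
                 [+1, -1,  9.0],
                 [-1, +1,  9.2],
                 [+1, +1, 11.8]])
A, B, y = runs[:, 0], runs[:, 1], runs[:, 2]

effect_A  = (y * A).mean() * 2        # main effect of film thickness
effect_B  = (y * B).mean() * 2        # main effect of syngas:substrate ratio
effect_AB = (y * A * B).mean() * 2    # interaction effect

print(effect_A, effect_B, effect_AB)  # 0.8 1.0 1.8
# Small main effects with a sizeable interaction reproduce the reported
# pattern: neither factor matters much alone, but their combination does.
```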
Abstract:
In this work, we report on the significance of optimizing the gate-source/drain extension region (also known as the underlap design) in double-gate (DG) FETs to improve the performance of an operational transconductance amplifier (OTA). It is demonstrated that a high intrinsic voltage gain (A_VO_OTA) above 55 dB and a unity-gain frequency (f_T_OTA) of about 57 GHz can be achieved in a folded-cascode OTA with a gate-underlap channel design in 60 nm DG MOSFETs. These values correspond to a 15 dB improvement in A_VO_OTA and a threefold enhancement in f_T_OTA over a conventional non-underlap design. OTA performance based on underlap single-gate SOI MOSFETs realized in ultra-thin body (UTB) and ultra-thin body and BOX (UTBB) technologies is also evaluated. A_VO_OTA values exhibited by a DG MOSFET-based OTA are 1.3-1.6 times higher than those of a conventional UTB/UTBB single-gate OTA. f_T_OTA values for the DG OTA are 10 GHz higher than for UTB OTAs, whereas a twofold improvement is observed with respect to UTBB OTAs. The simultaneous improvement in A_VO_OTA and f_T_OTA highlights the usefulness of the underlap channel architecture in improving the gain-bandwidth trade-off in analog circuit design. Underlap channel OTAs demonstrate a high degree of tolerance to misalignment/oversize between the front and back gates without compromising performance, thus relaxing crucial process/technology-dependent requirements for achieving 'idealized' DG MOSFETs. Results show that underlap OTAs designed with a spacer-to-straggle (s/σ) ratio of 3.2 and operated below a bias current (I_BIAS) of 80 µA demonstrate optimum performance. The present work provides new opportunities for realizing future ultra-wideband OTA designs with underlap DG MOSFETs.
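The gain-bandwidth trade-off quoted above follows from the familiar single-stage relations A_V0 ≈ gm·r_out and f_T ≈ gm/(2πC_L). The figures below are illustrative assumptions, not the paper's devices, but give magnitudes in the same range as the reported 55 dB / 57 GHz:

```python
import math

# Single-stage OTA rule of thumb: DC gain ~ gm * rout, unity-gain frequency
# ~ gm / (2*pi*C_L). Illustrative values only.

gm = 1.2e-3     # S, transconductance
rout = 1.0e6    # ohm, output resistance (boosted by cascoding)
C_L = 5e-15     # F, load capacitance

A_v0_dB = 20 * math.log10(gm * rout)
f_T = gm / (2 * math.pi * C_L)
print(f"A_v0 = {A_v0_dB:.1f} dB, f_T = {f_T / 1e9:.1f} GHz")  # ~61.6 dB, ~38 GHz
```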
Abstract:
Background: In this study, the efficiency of Guar gum, a biopolymer, has been compared with that of two widely used inorganic coagulants, ferric chloride (FeCl3) and aluminum chloride (AlCl3), for the treatment of effluent collected from the rubber-washing tanks of a rubber concentrate factory. Settling velocity distribution curves were plotted to demonstrate the flocculating effect of FeCl3, AlCl3 and Guar gum. FeCl3 and AlCl3 displayed better turbidity removal than Guar gum at all settling velocities.
Result: FeCl3, AlCl3 and Guar gum removed 92.8%, 88.2% and 88.1% of the turbidity of raw wastewater, respectively, at a settling velocity of 0.1 cm min−1. A scanning electron microscopy (SEM) study conducted on the flocs revealed that Guar gum and FeCl3 produced a strong, intercoiled, honeycomb-patterned floc structure capable of entrapping suspended particulate matter. Statistical experimental design with Response Surface Methodology (RSM) was used to design all experiments, with the type and dosage of flocculant, pH and mixing speed taken as control factors, and an optimum operational setting was proposed.
Conclusion: Given biodegradability concerns, the use of Guar gum as a flocculating agent for wastewater treatment in industry is highly recommended.
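The RSM workflow mentioned in the Result section amounts to fitting a quadratic response surface over coded control factors and locating its stationary point. A minimal two-factor sketch on synthetic data (the real study also varied flocculant type and mixing speed):

```python
import numpy as np

# Minimal RSM sketch: fit a quadratic surface for turbidity removal over two
# coded factors (dosage, pH) and locate its optimum. Synthetic data only.

rng = np.random.default_rng(3)
dose, pH = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
removal = 90 - 5*dose**2 - 3*pH**2 + 2*dose*pH + rng.normal(0, 0.3, 30)

# Design matrix for the full quadratic model.
X = np.column_stack([np.ones_like(dose), dose, pH,
                     dose**2, pH**2, dose*pH])
b, *_ = np.linalg.lstsq(X, removal, rcond=None)

# Stationary point of b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y.
H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
opt = np.linalg.solve(H, -np.array([b[1], b[2]]))
print(f"optimum (coded): dose={opt[0]:.2f}, pH={opt[1]:.2f}")
```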
Abstract:
In this paper, we have developed a low-complexity algorithm for epileptic seizure detection with a high degree of accuracy. The algorithm has been designed to be feasibly implementable as a battery-powered, low-power implantable epileptic seizure detection system, or epilepsy prosthesis. This is achieved by utilizing design optimization techniques at different levels of abstraction. In particular, user-specific critical parameters are identified at the algorithmic level and are exploited, along with multiplier-less implementations, at the architecture level. The system has been tested on neural data obtained from in-vivo animal recordings and has been implemented in 90 nm bulk-Si technology. The results show up to 90% savings in power compared with a prevalent wavelet-based seizure detection technique, while achieving a 97% average detection rate. Copyright 2010 ACM.
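A "multiplier-less implementation" typically replaces constant multiplications with shift-and-add networks, which is what makes the datapath cheap in silicon. The sketch below is an illustrative assumption rather than the paper's filter: it scales a sample by 19/32 using only shifts and adds.

```python
# Approximate y = 0.59375 * x as x * (1/2 + 1/16 + 1/32) using only
# shift-and-add operations: no hardware multiplier needed.

def scale_19_over_32(x: int) -> int:
    return (x >> 1) + (x >> 4) + (x >> 5)

x = 1024
print(scale_19_over_32(x), 0.59375 * x)   # 608 608.0
```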
Abstract:
Numerous experimental studies of damage in composite laminates have shown that intralaminar (in-plane) matrix cracks lead to interlaminar delamination (out-of-plane) at ply interfaces. The smearing of in-plane cracks over a volume, a consequence of using continuum damage mechanics, does not always capture the full extent of the interaction between the two failure mechanisms. A more accurate representation is obtained by adopting a discrete-crack approach, via the use of cohesive elements, for both in-plane and out-of-plane damage. The difficulty with cohesive elements is that their location must be determined a priori in order to generate the model, whereas ideally the position of crack migration, and more generally the propagation path, should be obtained as part of the problem's solution. With the aim of endowing current modelling tools with truly predictive capabilities, a concept of automatic insertion of interface elements is utilized: a simple traction criterion relative to the material strength, evaluated at each node of the model (or of the regions of the model where it is estimated that cracks might form), determines the initial crack location and the subsequent propagation, via the insertion of cohesive elements during the course of the analysis. Several experimental results are modelled using the commercial package ABAQUS/Standard with an automatic insertion subroutine developed in this work, and the results are presented to demonstrate the capabilities of this technique.
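The insertion logic can be sketched schematically; the data structures, criterion form and numbers below are illustrative assumptions, not the authors' ABAQUS/Standard subroutine.

```python
from dataclasses import dataclass, field

# Schematic of automatic cohesive-element insertion: wherever a quadratic
# traction criterion exceeds the material strength, the node is split and a
# zero-thickness cohesive element is inserted. Illustrative assumptions only.

@dataclass
class Strength:
    normal: float   # interlaminar normal strength
    shear: float    # interlaminar shear strength

@dataclass
class Mesh:
    tractions: dict                      # node id -> (t_n, t_s), from the FE solve
    cohesive_elements: list = field(default_factory=list)

    def split_and_insert(self, node):
        # A real mesh would duplicate the node and rewire connectivity;
        # here we only record the inserted interface element.
        self.cohesive_elements.append(("cohesive", node))

def insert_where_critical(mesh, strength):
    inserted = []
    for node, (t_n, t_s) in mesh.tractions.items():
        # Quadratic stress criterion; compression (t_n < 0) does not open a crack.
        f = (max(t_n, 0.0) / strength.normal) ** 2 + (t_s / strength.shear) ** 2
        if f >= 1.0:
            mesh.split_and_insert(node)
            inserted.append(node)
    return inserted

mesh = Mesh(tractions={1: (20.0, 10.0), 2: (70.0, 30.0), 3: (-40.0, 5.0)})
print(insert_where_critical(mesh, Strength(normal=60.0, shear=80.0)))  # [2]
```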
Abstract:
In this study, efforts were made to put forward an integrated recycling approach for thermoset-based glass fibre reinforced polymer (GFRP) rejects derived from the pultrusion manufacturing industry. Both the recycling process and the development of a new cost-effective end-use application for the recyclates were considered. For this purpose, (i) among the several available recycling techniques for thermoset-based composite materials, the most suitable one for the envisaged application (mechanical recycling) was selected; and (ii) an experimental programme was carried out to assess the added value of the obtained recyclates as aggregate and reinforcement replacements in concrete-polymer composite materials. The potential recycling solution was assessed through the mechanical behaviour of the resultant GFRP-waste-modified concrete-polymer composites relative to the unmodified materials. In the mix-design process of the new GFRP-waste-based composite material, the recyclate content and size grade, and the effect of incorporating an adhesion promoter, were considered as material factors and systematically tested within reasonable ranges. The optimization of the modified formulations was supported by the Fuzzy Boolean Nets methodology, which allowed the best balance between material parameters to be found, maximizing both the flexural and compressive strengths of the final composite. Compared with related end-use applications of GFRP wastes in cementitious concrete materials, the proposed solution overcomes some of the problems previously found, namely the possible incompatibilities arising from the alkali-silica reaction and the decrease in mechanical properties due to the high water-cement ratio required to achieve the desired workability. The results obtained were very promising towards a global, cost-effective waste-management solution for GFRP industrial wastes and end-of-life products, leading to a more sustainable composite materials industry.
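The paper drives this mix-design trade-off with Fuzzy Boolean Nets; as a plain substitute, the sketch below runs a weighted-sum grid search over the same material factors, with made-up strength responses, just to show the shape of the problem.

```python
import itertools

# Weighted-sum grid search over recyclate content and size grade, balancing
# flexural against compressive strength. Responses and weights are made-up
# assumptions; the paper's actual optimizer is Fuzzy Boolean Nets.

contents = [0, 4, 8, 12]          # % GFRP recyclate replacing aggregate
grades = ["fine", "coarse"]

def flexural(c, g):               # illustrative response, MPa
    return 22 + 1.2*c - 0.08*c**2 + (1.5 if g == "fine" else 0.0)

def compressive(c, g):            # illustrative response, MPa
    return 75 - 0.4*c + (0.0 if g == "fine" else 2.0)

best = max(itertools.product(contents, grades),
           key=lambda cg: 0.5*flexural(*cg) + 0.5*compressive(*cg))
print(best)                       # (4, 'coarse'): a nonzero recyclate content wins
```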