915 results for process optimization
Abstract:
This paper investigated the influence of three micro-electrodischarge milling process parameters: feed rate, capacitance, and voltage. The response variables were average surface roughness (Ra), maximum peak-to-valley roughness height (Ry), tool wear ratio (TWR), and material removal rate (MRR). Statistical models of these output responses were developed using a three-level full factorial design of experiments. The developed models were used for multiple-response optimization by the desirability function approach to obtain minimum Ra, Ry, and TWR and maximum MRR. The maximum desirability was found to be 88%. The optimized values of Ra, Ry, TWR, and MRR were 0.04 μm, 0.34 μm, 0.044, and 0.08 mg min⁻¹, respectively, for a feed rate of 4.79 μm s⁻¹, a capacitance of 0.1 nF, and a voltage of 80 V. The optimized machining parameters were used in verification experiments, where the responses were found to be very close to the predicted values.
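The desirability function approach maps each response onto a [0, 1] scale and combines the individual scores into a single composite objective via a geometric mean. A minimal Python sketch, using illustrative acceptance ranges (the actual bounds used in the paper are not given here):

```python
import numpy as np

def d_min(y, y_min, y_max):
    """Desirability for a smaller-is-better response (Derringer-Suich form)."""
    return np.clip((y_max - y) / (y_max - y_min), 0.0, 1.0)

def d_max(y, y_min, y_max):
    """Desirability for a larger-is-better response."""
    return np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0)

# Hypothetical response values and acceptance ranges, for illustration only.
d_Ra  = d_min(0.04, 0.02, 0.20)   # average roughness (um)
d_Ry  = d_min(0.34, 0.20, 2.00)   # peak-to-valley roughness (um)
d_TWR = d_min(0.044, 0.01, 0.50)  # tool wear ratio
d_MRR = d_max(0.08, 0.01, 0.10)   # material removal rate (mg/min)

# Composite desirability: geometric mean of the individual desirabilities.
D = (d_Ra * d_Ry * d_TWR * d_MRR) ** 0.25
print(f"composite desirability = {D:.2f}")
```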
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of the observed data. These distributions usually depend on additional hyperparameters that need to be tuned to achieve optimum predictive performance; this can be performed efficiently in an Empirical Bayes fashion by maximizing the marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales poorly with dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
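The one-time eigendecomposition trick can be illustrated in a simplified setting. A minimal NumPy sketch, under the assumption that only a signal variance and a noise variance are tuned while the base kernel K0 stays fixed (the paper's identities are more general):

```python
import numpy as np

def gp_nll_fast(theta, lam, alpha2, N):
    """Negative log marginal likelihood in O(N) per evaluation.

    Simplifying assumption for this sketch: K = sf2 * K0 + sn2 * I with a
    *fixed* base kernel K0.  lam: eigenvalues of K0; alpha2: (V^T y)**2,
    where K0 = V diag(lam) V^T.
    """
    sf2, sn2 = np.exp(theta)          # log-parameterization keeps both positive
    ev = sf2 * lam + sn2              # eigenvalues of the full kernel matrix
    quad = np.sum(alpha2 / ev)        # y^T K^{-1} y
    logdet = np.sum(np.log(ev))       # log |K|
    return 0.5 * (quad + logdet + N * np.log(2 * np.pi))

rng = np.random.default_rng(0)
N = 500
X = rng.uniform(0, 10, N)
K0 = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)   # RBF base kernel
y = rng.standard_normal(N)

# One-time O(N^3) overhead: eigendecomposition of the base kernel.
lam, V = np.linalg.eigh(K0)
alpha2 = (V.T @ y) ** 2

# Every optimizer iteration afterwards costs only O(N).
print(gp_nll_fast(np.log([1.0, 0.1]), lam, alpha2, N))
```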
Abstract:
Hydrous cerium oxide (HCO) was synthesized by intercalation of solutions of cerium(III) nitrate and sodium hydroxide and evaluated as an adsorbent for the removal of hexavalent chromium from aqueous solutions. Simple batch experiments and a 2^5 factorial experimental design were employed to screen the variables affecting Cr(VI) removal efficiency. The effects of the process variables (solution pH, initial Cr(VI) concentration, temperature, adsorbent dose, and ionic strength) were examined. Using the experimental results, a linear mathematical model representing the influence of the different variables and their interactions was obtained. Analysis of variance (ANOVA) demonstrated that Cr(VI) adsorption increases significantly with decreasing solution pH, initial concentration, and adsorbent dose, but decreases slightly with increasing temperature and ionic strength. The optimization study indicates a maximum removal of 99% at pH 2, 20 °C, a metal concentration of 1.923 mM, and a sorbent dose of 4 g/dm³. At these optimal conditions, Langmuir, Freundlich, and Redlich–Peterson isotherm models were fitted. The maximum adsorption capacity of Cr(VI) adsorbed by HCO, calculated from the Langmuir isotherm model, was 0.828 mmol/g. Desorption of chromium indicated that the HCO adsorbent can be regenerated using a 0.1 M NaOH solution (up to 85%). The adsorption interactions between the surface sites of HCO and the Cr(VI) ions were found to be a combined effect of both anion exchange and surface complexation with the formation of an inner-sphere complex.
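The Langmuir isotherm relates the equilibrium uptake qe to the equilibrium concentration Ce as qe = qmax·b·Ce / (1 + b·Ce). A minimal curve-fitting sketch with hypothetical data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, b):
    """Langmuir isotherm: qe = qmax * b * Ce / (1 + b * Ce)."""
    return qmax * b * Ce / (1 + b * Ce)

# Hypothetical equilibrium data (mmol/dm3, mmol/g), for illustration only.
Ce = np.array([0.05, 0.1, 0.3, 0.6, 1.0, 1.5])
qe = np.array([0.30, 0.45, 0.65, 0.74, 0.79, 0.81])

(qmax, b), _ = curve_fit(langmuir, Ce, qe, p0=(1.0, 1.0))
print(f"qmax = {qmax:.3f} mmol/g, b = {b:.3f} dm3/mmol")
```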
Abstract:
Light emitted from metal/oxide/metal tunnel junctions can originate from the slow-mode surface plasmon polariton supported in the oxide interface region. The effective radiative decay of this mode is constrained by competition with heavy intrinsic damping and by the need to scatter from very small-scale surface roughness; the latter requirement arises from the mode's low phase velocity and the usual momentum conservation condition in the scattering process. Computational analysis of conventional devices shows that the desirable goals of decreased intrinsic damping and increased phase velocity are influenced, in order of priority, by the thickness and dielectric function of the oxide layer, the type of metal chosen for each conducting electrode, and temperature. Realizable devices supporting an optimized slow-mode plasmon polariton are suggested. Essentially, these consist of thin metal electrodes separated by a dielectric layer that acts as a very thin (a few nm) electron tunneling barrier but a relatively thick (several tens of nm) optically lossless region. (C) 1995 American Institute of Physics.
Abstract:
In the production process of polyethylene terephthalate (PET) bottles, the initial temperature of the preforms plays a central role in the final thickness, intensity, and other structural properties of the bottles. The difference between the inside and outside temperature profiles can also have a significant impact on final product quality. The preforms are preheated by an infrared heating oven system, which is often an open-loop system and relies heavily on a trial-and-error approach to adjust the lamp power settings. In this paper, a radial basis function (RBF) neural network model, optimized by a two-stage selection (TSS) algorithm combined with particle swarm optimization (PSO), is developed to model the nonlinear relations between the lamp power settings and the output temperature profile of the PET bottles. An improved PSO method for lamp setting adjustment using the above model is then presented. Simulation results based on experimental data confirm the effectiveness of the modelling and optimization method.
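Particle swarm optimization searches the lamp-power space by moving a swarm of candidate settings toward their personal and global bests. A minimal sketch in which a hypothetical surrogate stands in for the trained RBF network (all coefficients and targets are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def temperature_error(powers):
    """Hypothetical surrogate standing in for the trained RBF network:
    mismatch between the predicted and target temperature profiles."""
    target = np.linspace(95.0, 105.0, powers.size)        # illustrative target (C)
    predicted = 20.0 + 0.9 * powers + 0.05 * np.roll(powers, 1)
    return np.sum((predicted - target) ** 2)

n_lamps, n_particles, iters = 8, 30, 200
x = rng.uniform(0, 120, (n_particles, n_lamps))           # lamp power settings (%)
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([temperature_error(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 120)
    f = np.array([temperature_error(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best lamp settings:", np.round(gbest, 1))
```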
Abstract:
In this work, the removal of arsenic from aqueous solutions onto thermally processed dolomite is investigated. The dolomite was thermally processed (charred) at temperatures of 600, 700, and 800 °C for 1, 2, 4, and 8 h. Isotherm experiments were carried out on these samples over a wide pH range. Complete arsenic removal was achieved over the pH range studied when using the dolomite charred at 800 °C. However, at this temperature, thermal degradation weakens the dolomite's structure due to the decomposition of the magnesium carbonate, leading to partial dissolution. For this reason, the dolomitic sorbent chosen for further investigation was the material charred at 700 °C for 8 h. Isotherm studies indicated that the Langmuir model described the As(V) adsorption on the selected charred dolomite better than the Freundlich model, whereas for As(III) adsorption the Freundlich model was more successful. The maximum adsorption capacities of charred dolomite for arsenite and arsenate ions are 1.846 and 2.157 mg/g, respectively. It was found that both the pseudo-first- and pseudo-second-order kinetic models are able to describe the experimental data (R² > 0.980). The data suggest that the charring process allows dissociation of the dolomite to calcium carbonate and magnesium oxide, which accelerates the precipitation of arsenic oxide and arsenic carbonate. (C) 2014 Elsevier B.V. All rights reserved.
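The pseudo-second-order kinetic model has the closed form qt = k2·qe²·t / (1 + k2·qe·t), which can be fitted directly to uptake-versus-time data. A minimal sketch with hypothetical data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order kinetics: qt = k2 * qe^2 * t / (1 + k2 * qe * t)."""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

# Hypothetical uptake data (min, mg/g), for illustration only.
t  = np.array([5, 10, 20, 40, 60, 120, 240])
qt = np.array([0.6, 0.95, 1.3, 1.6, 1.7, 1.8, 1.84])

(qe, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=(2.0, 0.01))
resid = qt - pseudo_second_order(t, qe, k2)
r2 = 1 - np.sum(resid**2) / np.sum((qt - qt.mean())**2)
print(f"qe = {qe:.3f} mg/g, k2 = {k2:.4f} g/(mg min), R^2 = {r2:.3f}")
```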
Abstract:
Energy in today's short-range wireless communication is mostly spent on the analog and digital hardware rather than on radiated power. Hence, purely information-theoretic considerations fail to achieve the lowest energy per information bit, and the optimization process must carefully consider the overall transceiver. In this paper, we propose to perform cross-layer optimization, based on an energy-aware rate adaptation scheme combined with a physical layer that is able to adjust its processing effort to the data rate and the channel conditions, to minimize the energy consumption per information bit. This energy-proportional behavior is enabled by extending the classical system modes with additional configuration parameters at the various layers. Fine-grained models of the power consumption of the hardware are developed to make the medium access control layer aware of the physical layer's capabilities. The joint application of the proposed energy-aware rate adaptation and the modifications to the physical layer of an IEEE 802.11n system improves energy efficiency (averaged over many noise and channel realizations) in all considered scenarios by up to 44%.
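The core of energy-aware rate adaptation is choosing the rate that minimizes joules per successfully delivered bit, not the rate that maximizes throughput. A minimal sketch; the power figures and the success-probability model are hypothetical placeholders, not measured values from the paper:

```python
import math

rates_mbps = [6.5, 13.0, 26.0, 52.0]      # candidate 802.11n-like rates
p_circuit_w = [0.28, 0.33, 0.42, 0.60]    # assumed hardware power per rate

def frame_success(rate_mbps, snr_db):
    """Toy success-probability model: faster rates need more SNR."""
    margin = snr_db - 3.0 * math.log2(rate_mbps)
    return 1.0 / (1.0 + math.exp(-margin))

def energy_per_bit(rate_mbps, p_w, snr_db):
    goodput = rate_mbps * 1e6 * frame_success(rate_mbps, snr_db)  # bit/s
    return p_w / goodput                                          # J/bit

snr_db = 18.0
best = min(zip(rates_mbps, p_circuit_w),
           key=lambda rp: energy_per_bit(*rp, snr_db))
print(f"best rate at {snr_db} dB: {best[0]} Mbit/s")
```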
Abstract:
An environment has been created for the optimisation of aerofoil profiles with the inclusion of small surface features. For Tollmien–Schlichting (TS) wave dominated flows, the paper examines the consequences of adding a depression on the aerodynamic optimisation of a natural laminar flow (NLF) aerofoil, and describes the geometry definition fidelity and optimisation algorithm employed in the development process. The variables that define the depression were fixed for this optimisation investigation; however, a preliminary study is presented demonstrating the sensitivity of the flow to the depression characteristics. Solutions to the optimisation problem are then presented using both gradient-based and genetic algorithm techniques. For accurate representation of the inclusion of small surface perturbations, it is concluded that a global optimisation method is required for this type of aerofoil optimisation task, due to the nature of the response surface generated. When dealing with surface features, changes in transition onset are likely to be non-linear, so a robust optimisation algorithm is critical; this suggests that, for this framework, gradient-based methods alone are not suitable.
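A genetic algorithm avoids being trapped by the local optima of such a multi-modal response surface. A minimal sketch in which a deliberately multi-modal cost stands in for the aerodynamic objective returned by a flow solver with transition prediction (everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def drag_proxy(x):
    """Hypothetical multi-modal cost over normalized profile variables."""
    return np.sum(x**2) + 0.3 * np.sum(np.cos(8 * np.pi * x))

pop, dim, gens = 40, 6, 100
X = rng.uniform(-1, 1, (pop, dim))

for _ in range(gens):
    f = np.array([drag_proxy(x) for x in X])
    # Tournament selection: pair random individuals, keep the fitter one.
    i, j = rng.integers(pop, size=(2, pop))
    parents = np.where((f[i] < f[j])[:, None], X[i], X[j])
    # One-point crossover followed by Gaussian mutation.
    cut = rng.integers(1, dim, size=pop // 2)
    for k, c in enumerate(cut):
        a, b = 2 * k, 2 * k + 1
        parents[a, c:], parents[b, c:] = parents[b, c:].copy(), parents[a, c:].copy()
    X = np.clip(parents + 0.05 * rng.standard_normal(parents.shape), -1, 1)

best = min(X, key=drag_proxy)
print("best variables:", np.round(best, 3))
```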
Abstract:
The worsening of process variations and the consequent increased spreads in circuit performance and power consumption hinder the satisfaction of the targeted budgets and lead to yield loss. Corner-based design and the adoption of design guardbands can limit the yield loss. However, in many cases such methods cannot capture the real effects, which may be far better than the predicted ones, leading to increasingly pessimistic designs. The situation is even more severe in memories, which consist of substantially different individual building blocks, further complicating accurate analysis of the impact of variations at the architecture level and leaving many potential issues uncovered and opportunities unexploited. In this paper, we develop a framework for capturing non-trivial statistical interactions among all the components of a memory/cache. The developed tool is able to find the optimum memory/cache configuration under various constraints, allowing designers to make the right choices early in the design cycle and consequently improve performance, energy, and especially yield. Our results indicate that considering the architectural interactions between the memory components allows the pessimistic access times predicted by existing techniques to be relaxed.
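The statistical interaction between blocks can be illustrated with a Monte Carlo sketch: the access time of a read is the sum of correlated component delays, so yield cannot be judged from per-block corners alone. All distributions and the 2 ns target below are hypothetical, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Component delays (ns) sharing a global process-variation factor.
global_var = rng.normal(0.0, 0.05, n)
decoder  = 0.40 + 0.6 * global_var + rng.normal(0, 0.02, n)
wordline = 0.30 + 0.4 * global_var + rng.normal(0, 0.02, n)
cell     = 0.55 + 0.8 * global_var + rng.normal(0, 0.04, n)
sense    = 0.45 + 0.5 * global_var + rng.normal(0, 0.03, n)

# Architecture-level access time: components interact through the sum,
# so the worst case is far rarer than stacking per-block corners suggests.
access = decoder + wordline + cell + sense
target_ns = 2.0
yield_frac = np.mean(access <= target_ns)
print(f"mean access {access.mean():.3f} ns, yield at {target_ns} ns: {yield_frac:.1%}")
```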
Abstract:
Camera traps are used to estimate densities or abundances using capture-recapture and, more recently, random encounter models (REMs). We deploy REMs to describe an invasive-native species replacement process, and to demonstrate their wider application beyond abundance estimation. The Irish hare Lepus timidus hibernicus is a high-priority endemic of conservation concern. It is threatened by an expanding population of non-native European hares L. europaeus, an invasive species of global importance. Camera traps were deployed in thirteen 1 km squares, wherein the ratio of invader to native densities was corroborated by night-driven line transect distance sampling throughout the study area of 1,652 km². Spatial patterns of invasive and native densities between the invader's core and peripheral ranges, and native allopatry, were comparable between methods. Native densities in the peripheral range were comparable to those in native allopatry using REM, or marginally depressed using distance sampling. Numbers of the invader were substantially higher than the native in the core range, irrespective of method, with a 5:1 invader-to-native ratio indicating species replacement. We also describe a post hoc optimization protocol for REM which will inform subsequent (re-)surveys, allowing survey effort (camera hours) to be reduced by up to 57% without compromising the width of the confidence intervals associated with density estimates. This approach will form the basis of a more cost-effective means of surveillance and monitoring for both the endemic and invasive species. The European hare undoubtedly represents a significant threat to the endemic Irish hare.
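REMs estimate density from trap rate without requiring individual recognition; in the form of Rowcliffe et al. (2008) the estimator is D = (y/t)·π / (v·r·(2 + θ)), where y/t is the detection rate, v the animal's day range, and r and θ the detection radius and angle. A minimal sketch with hypothetical survey values (not the study's data):

```python
import math

def rem_density(encounters, camera_days, day_range_km, radius_km, angle_rad):
    """Random encounter model (Rowcliffe et al. 2008):
    D = (y/t) * pi / (v * r * (2 + theta))."""
    trap_rate = encounters / camera_days                 # detections per day
    return trap_rate * math.pi / (day_range_km * radius_km * (2 + angle_rad))

# Hypothetical survey values, for illustration only.
d = rem_density(encounters=120, camera_days=800,
                day_range_km=1.2, radius_km=0.01,
                angle_rad=math.radians(40))
print(f"estimated density: {d:.1f} animals/km^2")
```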
Abstract:
The recent drive towards timely realization of multiple products has led most Manufacturing Enterprises (MEs) to develop more flexible assembly lines supported by better manufacturing design and planning. The aim of this work is to develop a methodology to support feasibility analyses of assembly tasks, in order to simulate either a manufacturing process or a single work-cell in which digital human models act. The methodology has been applied in a case study from the railway industry. Simulations were used to help standardize the methodology and to suggest new solutions for realizing ergonomic and efficient assembly processes in the railway industry.
Abstract:
The relationship between epidemiology, mathematical modelling, and computational tools makes it possible to build and test theories about the development and control of a disease. This thesis is motivated by the study of epidemiological models applied to infectious diseases from an Optimal Control perspective, with particular emphasis on Dengue. A tropical and subtropical mosquito-borne disease, Dengue affects around 100 million people per year and is considered by the World Health Organization a major public health concern. The mathematical models developed and tested in this work are based on ordinary differential equations that describe the dynamics underlying the disease, namely the interaction between humans and mosquitoes. An analytical study of these models is carried out with respect to their equilibrium points, their stability, and the basic reproduction number. The spread of Dengue can be attenuated through vector control measures, such as the use of specific insecticides and educational campaigns. Since the development of a potential vaccine has been a recent worldwide commitment, models based on the simulation of a hypothetical vaccination process in a population are proposed. Based on Optimal Control theory, the optimal strategies for the use of these controls are analysed, together with their repercussions on the reduction or eradication of the disease during an outbreak in the population, considering a bioeconomic approach. The formulated problems are solved numerically using direct and indirect methods. The former discretize the problem, reformulating it as a nonlinear optimization problem. Indirect methods use Pontryagin's Maximum Principle as a necessary condition to find the optimal curve for the respective control. Several numerical software packages are used in both strategies. Throughout this work, there was always a compromise between the realism of the epidemiological models and their mathematical tractability.
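A minimal sketch of the kind of host-vector ODE model described above: SIR humans coupled to SI mosquitoes, with an insecticide control u that raises mosquito mortality. All parameter values are hypothetical, not the thesis's calibrated values:

```python
import numpy as np
from scipy.integrate import solve_ivp

Nh, Nv = 1e5, 3e5                   # human and vector population sizes
beta_hv, beta_vh = 0.30, 0.25       # transmission rates (assumed)
gamma, mu_v, u = 0.10, 0.05, 0.02   # recovery, vector mortality, control effort

def dengue(t, y):
    Sh, Ih, Rh, Sv, Iv = y
    new_h = beta_hv * Sh * Iv / Nh  # mosquito -> human infections
    new_v = beta_vh * Sv * Ih / Nh  # human -> mosquito infections
    return [-new_h,
            new_h - gamma * Ih,
            gamma * Ih,
            mu_v * Nv - new_v - (mu_v + u) * Sv,   # births minus deaths (control u)
            new_v - (mu_v + u) * Iv]

y0 = [Nh - 10, 10, 0, Nv, 0]        # start with 10 infected humans
sol = solve_ivp(dengue, (0, 365), y0, dense_output=True)
print(f"peak infected humans: {sol.y[1].max():.0f}")
```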
Abstract:
In the stamping industry there has been growing interest in numerical simulations of sheet metal forming processes, including inverse engineering methods. This is mainly because the trial-and-error techniques widely used in the past are no longer economically competitive. The use of simulation codes is now current practice in industrial environments, since the results typically obtained with codes based on the Finite Element Method (FEM) are well accepted by the industrial and scientific communities. To obtain accurate stress and strain fields, an efficient FEM analysis requires correct input data, such as geometries, meshes, nonlinear constitutive laws, loads, friction laws, etc. Inverse problems can be considered to overcome these difficulties. In the work presented, the following inverse problems in computational mechanics are presented and analysed: (i) parameter identification problems, which concern the determination of input parameters to be used in constitutive models in numerical simulations, and (ii) problems of initial geometric definition of blanks and tools, in which the goal is to determine the initial shape of a blank or a tool so that a given geometry is obtained after a forming process. New optimization strategies are introduced and implemented, leading to more accurate constitutive model parameters. The aim of these strategies is to take advantage of the strengths of each algorithm and to improve the overall efficiency of classical optimization methods, which are based on single-stage processes. Deterministic algorithms, algorithms inspired by evolutionary processes, or combinations of the two are used in the proposed strategies. Cascade, parallel, and hybrid strategies are presented in detail, the hybrid strategies consisting of combinations of cascade and parallel strategies. Two distinct methods for evaluating the objective function in parameter identification processes are presented and analysed: a single-point analysis and a finite element analysis. The single-point evaluation characterizes an infinitesimal amount of material subjected to a given deformation history. In the finite element analysis, on the other hand, the constitutive model is implemented and evaluated at every integration point. Inverse problems of initial geometric definition of blanks and tools are then presented and described. For blank shape optimization, the definition of the initial shape of a blank for forming a crankcase element is taken as the case study. Within this scope, a study of the influence of the initial geometric definition of the blank on the optimization process is also carried out, considering a NURBS formulation for the definition of the upper face of the metal sheet, whose geometry is changed during the plastic forming process. For tool optimization, a two-stage forging process is presented. With the aim of obtaining a perfect cylinder after forging, two distinct methods are considered: in the first, the initial shape of the cylinder is optimized; in the other, the shape of the first-stage forming tool is optimized. Different methods are used to parameterize the free surface of the cylinder, and different parameterizations are also used to define the tool. The optimization strategies proposed in this work efficiently solve optimization problems for the metal forming industry.
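A minimal sketch of the parameter-identification inverse problem: fit hardening-law parameters by minimizing the gap between a simulated and an "experimental" stress-strain curve. The Swift law and all data below are assumed for illustration; a real run would call an FEM or single-point solver instead of the analytic law:

```python
import numpy as np
from scipy.optimize import least_squares

def swift_stress(params, strain):
    """Swift hardening law: sigma = K * (e0 + strain)^n."""
    K, e0, n = params
    return K * (e0 + strain) ** n

strain = np.linspace(0.0, 0.3, 30)
true_params = (520.0, 0.01, 0.22)              # hypothetical "material"
sigma_exp = swift_stress(true_params, strain)
sigma_exp += np.random.default_rng(4).normal(0, 2.0, strain.size)  # noise

# Least-squares identification from a deliberately poor starting guess.
res = least_squares(lambda p: swift_stress(p, strain) - sigma_exp,
                    x0=(400.0, 0.05, 0.3),
                    bounds=([100, 1e-4, 0.05], [1000, 0.2, 0.6]))
print("identified K, e0, n:", np.round(res.x, 3))
```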
Abstract:
Master's dissertation, Biological Engineering, Faculdade de Engenharia de Recursos Naturais, Universidade do Algarve, 2008
Abstract:
Master's dissertation, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015