30 results for Statistical mixture-design optimization


Relevance:

30.00%

Publisher:

Abstract:

Scheduling is a critical function present throughout many industries and applications. There is a great need for scheduling approaches that can be applied to a range of different scheduling problems, with significant impact on the performance of business organizations. A challenge is emerging in the design of scheduling support systems for manufacturing environments, where dynamic adaptation and optimization become increasingly important. In this scenario, self-optimization arises as the ability of an agent to monitor its state and performance and proactively tune itself in response to environmental stimuli.

Relevance:

30.00%

Publisher:

Abstract:

This research work focused on the study of gallinaceous feathers, a waste that may be valorised as a sorbent to remove the dye Dark Blue Astrazon 2RN (DBA), from Dystar. The study addressed the following aspects: optimization of the experimental conditions through factorial design methodology, kinetic studies in a continuous stirred tank adsorber (at pH 7 and 20 °C), equilibrium isotherms (at pH 5, 7 and 9, at 20 and 45 °C) and column studies (at 20 °C, at pH 5, 7 and 9). To evaluate the influence of the presence of other components on the sorption of the dyestuff, all experiments were performed both with the dyestuff in aqueous solution and in real textile effluent. The pseudo-first and pseudo-second order kinetic models were fitted to the experimental data, with the latter providing the best fit for the aqueous solution of the dyestuff. For the real effluent both models fit the experimental results, with no statistical difference between them. A Central Composite Design (CCD) was used to evaluate the effects of temperature (15-45 °C) and pH (5-9) on the sorption in aqueous solution. The influence of pH was more significant than that of temperature. The optimal conditions selected were 45 °C and pH 9. Both the Langmuir and Freundlich models fitted the equilibrium data. In the concentration range studied, the highest sorbent capacity was obtained under the optimal conditions in aqueous solution, corresponding to a maximum capacity of 47 ± 4 mg g⁻¹. The Yoon-Nelson, Thomas and Yan models fitted the column experimental data well. The longest breakthrough time for 50% removal, 170 min, was obtained at pH 9 in aqueous solution. The presence of the dyeing agents in the real wastewater decreased the sorption of the dyestuff mostly at pH 9, the optimal pH. The effect of pH is less pronounced in the real effluent than in aqueous solution. This work shows that feathers can be used as a sorbent in the treatment of textile wastewaters containing DBA.
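To illustrate the pseudo-second-order fit referred to above, here is a minimal sketch using the model's linearized form, t/q = 1/(k2·qe²) + t/qe. The time/uptake data are hypothetical, not the study's measurements:

```python
import numpy as np

# Pseudo-second-order model: q(t) = k2*qe^2*t / (1 + k2*qe*t).
# Linearized: t/q = 1/(k2*qe^2) + t/qe, so a straight-line fit of t/q
# against t yields qe (equilibrium capacity) and k2 (rate constant).
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 120.0, 180.0])  # time, min (hypothetical)
q = np.array([12.0, 20.5, 30.1, 38.2, 41.5, 45.0, 46.2])   # uptake, mg/g (hypothetical)

slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope           # equilibrium sorption capacity, mg/g
k2 = slope**2 / intercept  # rate constant, g/(mg.min)
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg.min)")
```

A pseudo-first-order fit could be compared the same way, keeping whichever model correlates better with the data, as done in the study.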

Relevance:

30.00%

Publisher:

Abstract:

Over the last two decades, the research and development of legged locomotion robots has grown steadily. Legged systems present major advantages compared with 'traditional' vehicles, because they allow locomotion in terrain inaccessible to wheeled and tracked vehicles. However, the robustness of legged robots, and especially their energy consumption, among other aspects, still lags behind mechanisms that use wheels and tracks. Therefore, in the present state of development, several aspects need to be improved and optimized. With these ideas in mind, this paper reviews the literature on the different methods adopted for the optimization of the structure and locomotion gaits of walking robots. Among the strategies often used for these tasks are the mimicking of biological animals, the use of evolutionary schemes to find optimal parameters and structures, the adoption of sound mechanical design rules, and the optimization of power-based indexes.

Relevance:

30.00%

Publisher:

Abstract:

The development of scaffolds that combine the delivery of drugs with the physical support provided by electrospun fibres holds great potential in the field of nerve regeneration. Here we propose the incorporation of ibuprofen, a well-known non-steroidal anti-inflammatory drug, into electrospun fibres of the statistical copolymer poly(trimethylene carbonate-co-ε-caprolactone) [P(TMC-CL)] to serve as a drug delivery system that enhances axonal regeneration in the context of a spinal cord lesion by limiting the inflammatory response. P(TMC-CL) fibres were electrospun from mixtures of dichloromethane (DCM) and dimethylformamide (DMF). The solvent mixture influenced fibre morphology as well as mean fibre diameter, which decreased as the DMF content in solution increased. Ibuprofen-loaded fibres were prepared from P(TMC-CL) solutions containing 5% ibuprofen (w/w of polymer). Increasing the drug content to 10% led to jet instability, resulting in a less homogeneous fibrous mesh. Under the optimized conditions, drug-loading efficiency was above 80%. Confocal Raman mapping showed no preferential distribution of ibuprofen in the P(TMC-CL) fibres. Under physiological conditions ibuprofen was released within 24 h. The release process was diffusion-dependent for fibres prepared from DCM solutions, in contrast to fibres prepared from DCM-DMF mixtures, where burst release occurred. The biological activity of the released drug was demonstrated using human-derived macrophages. The release of prostaglandin E2 into the cell culture medium was reduced when cells were incubated with ibuprofen-loaded P(TMC-CL) fibres, confirming the biological significance of the drug delivery strategy presented. Overall, this study constitutes an important contribution to the design of a P(TMC-CL)-based nerve conduit with anti-inflammatory properties.

Relevance:

30.00%

Publisher:

Abstract:

Joining of components with structural adhesives is currently one of the most widespread techniques for advanced structures (e.g., aerospace or aeronautical). Adhesive bonding does not involve drilling operations, and it distributes the load over a larger area than mechanical joints. However, peak stresses tend to develop near the overlap edges because of differential straining of the adherends and load asymmetry. As a result, premature failures can be expected, especially with brittle adhesives. Moreover, bonded joints are very sensitive to the surface treatment of the material, service temperature, humidity and ageing. To overcome these limitations, the combination of adhesive bonding with spot-welding is an option to be considered, adding advantages such as superior static strength and stiffness, higher peel and fatigue strength, and easier fabrication, as fixtures are not needed during adhesive curing. The experimental and numerical study presented here evaluates hybrid spot-welded/bonded single-lap joints in comparison with their purely spot-welded and purely bonded equivalents. A parametric study on the overlap length (LO) yielded different strength advantages, up to 58% over spot-welded joints and 24% over bonded joints. The Finite Element Method (FEM) and Cohesive Zone Models (CZM) for damage growth were also tested in Abaqus® to evaluate this technique for strength prediction, showing accurate estimations for all kinds of joints.

Relevance:

30.00%

Publisher:

Abstract:

In the last twenty years genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. Such problems are considerably more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998).
In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely: joint travelling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector travelled distance, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, this chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm's convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five goals and for other complementary experiments, respectively. Finally, section 8 draws the main conclusions.
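The Pareto-dominance test at the core of the multi-objective GAs discussed above can be sketched as follows; the objective vectors (two minimization criteria, e.g. travelled distance and energy) are purely illustrative:

```python
def pareto_front(points):
    """Return the non-dominated subset, assuming all objectives are minimized."""
    front = []
    for p in points:
        # p is dominated if some q is no worse in every objective
        # and strictly better in at least one.
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and
            any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (distance, energy) pairs for candidate trajectories.
solutions = [(1.0, 5.0), (2.0, 3.0), (4.0, 4.0), (3.0, 1.0)]
print(pareto_front(solutions))  # (4.0, 4.0) is dominated by (2.0, 3.0)
```

The surviving points form the Pareto front from which the decision maker picks one trade-off; methods such as NSGA add ranking and crowding mechanisms on top of this basic test.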

Relevance:

30.00%

Publisher:

Abstract:

Fractional calculus (FC) is currently being applied in many areas of science and technology. In fact, this mathematical concept helps researchers gain a deeper insight into several phenomena that integer-order models overlook. Genetic algorithms (GAs) are an important tool for solving optimization problems that occur in engineering. This methodology applies the concepts that describe biological evolution to obtain optimal solutions in many different applications. In this line of thought, in this work we use FC and GA concepts to implement the electrical fractional-order potential. The performance of the GA scheme, and the convergence of the resulting approximation, are analyzed. The results are analyzed for different numbers of charges and several fractional orders.

Relevance:

30.00%

Publisher:

Abstract:

The increasing use of Carbon-Fibre Reinforced Plastic (CFRP) laminates in high-responsibility applications raises the issue of their handling after damage. The availability of efficient repair methods is essential to restore the strength of the structure. The availability of accurate predictive tools for the behaviour of repairs is also essential to reduce the costs and time associated with extensive testing. This work reports a numerical study of the tensile behaviour of three-dimensional (3D) adhesively-bonded scarf repairs in CFRP structures, using a ductile adhesive. The Finite Element (FE) analysis was performed in ABAQUS® and Cohesive Zone Models (CZMs) were used to simulate damage in the adhesive layer. A parametric study was performed on two geometric parameters. The use of overlaminating plies covering the repaired region on the outer or on both repair surfaces was also tested as an attempt to increase repair efficiency. The results allowed the proposal of design principles for repairing CFRP structures.

Relevance:

30.00%

Publisher:

Abstract:

Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behaviour of soil-related processes and properties, as well as to generate new hypotheses for future experimentation. A good model and analysis of variations in soil properties, permitting us to draw sound conclusions and to estimate spatially correlated variables at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, depends on a competent data collection procedure and on capable laboratory analytical work. Following the standard soil sampling protocols available, soil samples should be collected according to key factors such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land colour, soil texture, land slope and solar exposure. Obtaining good-quality data from forest soils is predictably expensive, as it is labour-intensive and demands considerable manpower and equipment, both in field work and in laboratory analysis. Moreover, the sampling scheme to be used in a forest-field data collection procedure is not simple to design, as the sampling strategies chosen depend strongly on the soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the intended collection depth, if no soil at all is found, or if large trees bar the soil collection. Considering this, a proficient design of a soil data sampling campaign in a forest field is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted to study the spatial variation of some soil physical-chemical properties. 
Two different sampling protocols were considered for monitoring two types of forest soils located in NW Portugal: umbric regosol and lithosol. Two different tools for sample collection were also used: a manual auger and a shovel. Both scenarios were analyzed, and the results have allowed us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but a pre-defined grid assumption often fails when the variability of the soil property is not uniform in space. In this case, the sampling grid should be conveniently adapted from one part of the landscape to another, and this fact should be taken into consideration in the mathematical procedure.

Relevance:

30.00%

Publisher:

Abstract:

Demand response can play a very relevant role in the context of power systems with intensive use of distributed energy resources, of which renewable intermittent sources are a significant part. More active consumer participation can help improve system reliability and decrease or defer the required investments. Adequate use and management of demand response is even more important in competitive electricity markets. However, experience shows that it is difficult to make adequate use of demand response in this context, demonstrating the need for research work in this area. The most important difficulties seem to be caused by inadequate business models and by inadequate management of demand response programs. This paper contributes to developing methodologies and a computational infrastructure able to provide the involved players with adequate decision support on the design and use of demand response programs and contracts. The presented work uses DemSi, a demand response simulator developed by the authors to simulate demand response actions and programs, which includes realistic power system simulation. It includes an optimization module for the application of demand response programs and contracts using deterministic and metaheuristic approaches. The proposed methodology is an important improvement to the simulator, providing adequate tools for the adoption of demand response programs by the involved players. A machine learning method based on clustering and classification techniques, resulting in a rule base concerning the use of DR programs and contracts, is also used. A case study concerning the use of demand response in an incident situation is presented.

Relevance:

30.00%

Publisher:

Abstract:

This study aims to optimize the water quality monitoring of a polluted watercourse (Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological and ecotoxicological data (with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) obtained from the analysis of water samples collected monthly at seven monitoring sites during five campaigns (February, May, June, August, and September 2006). The results of some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to the classification of the Leça River water quality as "bad" or "very bad". PCA and CA identified monitoring sites with similar pollution patterns, giving site 1 (located in the upstream stretch of the river) a distinct character from all other sampling sites downstream. Ecotoxicity results corroborated this classification, revealing differences in space and time. The present study includes not only physical, chemical and bacteriological but also ecotoxicological parameters, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful for optimizing water quality monitoring networks, defining the minimum number of sites and their locations. Thus, these tools can support appropriate management decisions.
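The PCA step described above can be sketched with plain NumPy (standardize each variable, then decompose via SVD). The site-by-variable matrix below is invented for illustration and is not the Leça River dataset:

```python
import numpy as np

# Hypothetical data: rows = monitoring sites, columns = variables
# (e.g. BOD mg/L, conductivity uS/cm, coliforms CFU/100 mL).
X = np.array([
    [2.1,  410.0,  1200.0],   # site 1: upstream, least polluted
    [6.5,  980.0,  9800.0],
    [7.2, 1010.0, 11000.0],
    [5.9,  870.0,  8700.0],
    [6.8,  990.0, 10200.0],
    [7.0, 1005.0, 10800.0],
    [6.6,  960.0,  9900.0],
])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance explained per component
scores = Z @ Vt.T                          # site coordinates in PC space
print("explained variance:", np.round(explained, 3))
```

With strongly correlated variables, the first component captures most of the variance, and in the score plot the upstream site separates clearly from the downstream sites, mirroring the pattern reported above.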

Relevance:

30.00%

Publisher:

Abstract:

Coffee silverskin is a major roasting by-product that could be valued as a source of antioxidant compounds. The effect of the major variables (solvent polarity, temperature and extraction time) affecting the extraction yields of bioactive compounds and the antioxidant activity of silverskin extracts was evaluated. The composition of the extracts varied significantly with the extraction conditions used. A factorial experimental design showed that the use of a hydroalcoholic solvent (50%:50%) at 40 °C for 60 min is a sustainable option to maximize the extraction yield of bioactive compounds and the antioxidant capacity of the extracts. Using this set of conditions it was possible to obtain extracts containing total phenolics (302.5 ± 7.1 mg GAE/L), tannins (0.43 ± 0.06 mg TAE/L), and flavonoids (83.0 ± 1.4 mg ECE/L), exhibiting DPPH• scavenging activity (326.0 ± 5.7 mg TE/L) and ferric reducing antioxidant power (1791.9 ± 126.3 mg SFE/L). Compared with conditions "more effective" for some individual parameters, this set of conditions allowed a reduction in cost, time and energy.
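The factorial-design logic mentioned above, estimating each factor's main effect from a two-level design, can be sketched as follows. The coded design is a standard 2³ layout; the responses are invented for illustration, not the paper's data:

```python
import numpy as np

# Coded 2^3 full factorial: columns = solvent (ethanol fraction),
# temperature, time, each at levels -1/+1.
design = np.array([
    [-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
    [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1],
])
# Hypothetical responses: total phenolics, mg GAE/L.
y = np.array([180.0, 265.0, 205.0, 298.0, 190.0, 280.0, 215.0, 300.0])

# Main effect = mean response at the high level minus mean at the low level.
effects = {name: y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i, name in enumerate(["solvent", "temperature", "time"])}
print(effects)
```

Ranking the effects shows which variables dominate the response and therefore which settings to fix when trading a slightly lower yield for savings in time and energy.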

Relevance:

30.00%

Publisher:

Abstract:

Demand response programs and models have been developed and implemented to improve the performance of electricity markets, taking full advantage of smart grids. Studying and addressing consumers' flexibility and network operation scenarios makes it possible to design improved demand response models and programs. The methodology proposed in the present paper addresses the definition of demand response programs that consider demand shifting between periods, regarding the occurrence of multi-period demand response events. The optimization model focuses on minimizing the network and resource operation costs for a Virtual Power Player. Quantum Particle Swarm Optimization has been used to obtain the solutions of the optimization model, which is applied to a large set of operation scenarios. The implemented case study illustrates the use of the proposed methodology to support the decisions of the Virtual Power Player concerning the duration of each demand response event.
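As a rough sketch of the swarm-based search used above, here is a minimal classical particle swarm optimizer — not the quantum-behaved QPSO variant the paper applies — minimizing a hypothetical two-variable operation-cost surface:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, lo=0.0, hi=10.0, seed=1):
    """Minimal classical PSO (illustrative sketch, not the paper's QPSO)."""
    random.seed(seed)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # personal best positions
    pbest_cost = [cost(p) for p in pos]
    gbest = min(pbest, key=cost)[:]      # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                         # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < cost(gbest):
                    gbest = pos[i][:]
    return gbest, cost(gbest)

# Hypothetical cost surface with its minimum at (3, 7).
best, best_cost = pso(lambda x: (x[0] - 3)**2 + (x[1] - 7)**2, dim=2)
print(best, best_cost)
```

QPSO replaces the velocity update with a position sampling rule around an attractor, but the overall loop — evaluate a population, keep personal and global bests, iterate — is the same.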

Relevance:

30.00%

Publisher:

Abstract:

The unstable but tendentially rising price of fuels, together with environmental concerns increasingly rooted in society, has been drawing greater attention to the search for alternative fuels. On the other hand, several projections indicate a very sharp increase in global energy consumption in the short term, as a result of population growth and the level of industrialization of societies. In this context, biodiesel (fatty acid esters), obtained through the transesterification of triglycerides of vegetable or animal origin, emerges as the most viable "green" alternative for use in combustion equipment. The transesterification reaction is usually catalysed with homogeneous alkaline catalysts (NaOH or KOH). This type of process, currently the only one of industrial significance, presents some disadvantages which, besides increasing the cost of the final product, reduce its environmental benignity: the impossibility of reusing the catalyst, the increased number and complexity of the separation steps, and the production of effluents resulting from those steps. In order to minimize or eliminate these problems, several heterogeneous catalysts have been studied for this reaction. Although many show promising results, the great majority are not viable for industrial application, whether because of their own cost or because of the pre-treatments required for their use. Among these catalysts, calcium oxide is perhaps the one with the most promising results. The growing number of studies involving this catalyst, to the detriment of others, is in itself proof of the potential of CaO. 
This work aimed to achieve the following main objectives: • to evaluate the eligibility of calcium oxide as a catalyst for the transesterification of used cooking oils with methanol; • to evaluate its influence on the characteristics of the final products; • to evaluate the performance differences between calcium oxide activated in an inert atmosphere (N2) and in air as catalysts for the transesterification of used cooking oils with methanol; • to optimize the reaction conditions using the mathematical tools provided by factorial design, by varying four key factors of influence: temperature, time, methanol-to-oil ratio and mass of catalyst used. The CaO used was obtained from calcium carbonate calcined in a muffle furnace at 750 °C for 3 h. It was subsequently activated at 900 °C for 2 h in different atmospheres: nitrogen (CaO-N2) and air (CaO-Ar). Some properties of the catalysts thus prepared were evaluated: basic strength, concentration of active sites and specific surface area. A basic strength between 12 and 14 was obtained for both catalysts, active-site concentrations of 0.0698 mmol/g and 0.0629 mmol/g, and specific surface areas of 10 m2/g and 11 m2/g, for CaO-N2 and CaO-Ar respectively. The mixture of used oils employed in this work was transesterified by homogeneous catalysis in order to determine the limits for the FAME (Fatty Acid Methyl Esters) content that could be obtained; this was the parameter evaluated in each of the samples obtained by heterogeneous catalysis. The factorial designs carried out aimed to maximize the FAME content through the ideal relationship between reaction time, temperature, mass of catalyst and amount of methanol. The maximum FAME content obtained from this oil was found to be slightly above 95% (w/w). 
Three factorial designs were carried out with each of the CaO catalysts until the optimal conditions for the reaction were obtained. No significant influence of the ratio between the amount of methanol and the mass of oil was observed in the range of values studied, so this factor was fixed at 35 ml of methanol per 85 g of oil (approximate molar ratio of 8:1). The eligibility of CaO as a catalyst for the studied reaction was verified, with no significant differences observed between the performance of CaO-N2 and CaO-Ar. The optimal conditions for the reaction were identified as a temperature of 59 °C, a time of 3 h and a catalyst mass of 1.4% relative to the mass of oil. Under these conditions, products were obtained with a FAME content of 95.7% in the catalysis with CaO-N2 and 95.3% with CaO-Ar. Some authors of studies consulted during the development of the present work reported the leaching of calcium into the products as the main problem of using CaO. This was confirmed in the present work, and in an attempt to circumvent it, carbonation of the calcium was promoted by passing compressed air through the products, with subsequent filtration. After this treatment, no further changes in the products' properties (appearance of turbidity or precipitates) were observed; however, in the products obtained under the optimal conditions, the calcium concentration determined was 527 mg/kg in the product of the reaction catalysed with CaO-N2 and 475 mg/kg with CaO-Ar. Calcium oxide proved to be an excellent catalyst for the transesterification of the mixture of used cooking oils employed in the present work, with a performance at the level of that obtained by homogeneous basic catalysis. 
No significant performance differences were observed between CaO-N2 and CaO-Ar, it being possible to obtain, under the same reaction conditions, products with FAME contents above 95% using either of them as catalyst. The high level of leached calcium observed in the products stands as the main obstacle to the industrial application of calcium oxide as a catalyst for the transesterification of oils.

Relevance:

30.00%

Publisher:

Abstract:

8th International Workshop on Multiple Access Communications (MACOM2015), Helsinki, Finland.