947 results for Bio-inspired optimization techniques
Abstract:
Otto-von-Guericke-Universität Magdeburg, Fakultät für Verfahrens- und Systemtechnik, Dissertation, 2016
Abstract:
In recent decades, the possibility of generating plasma at atmospheric pressure has given rise to a new emerging field called plasma medicine; it deals with the application of cold atmospheric pressure plasmas (CAPs) or plasma-activated solutions on or in the human body for therapeutic effects. Thanks to a blend of synergistic biologically active agents and biocompatible temperatures, different CAP sources have been successfully employed in many biomedical applications such as dentistry, dermatology, wound healing, cancer treatment, and blood coagulation, among others. Although their effectiveness has been verified in the above-mentioned biomedical applications, the numerous CAP sources described by researchers worldwide over the years remain laboratory devices that are not optimized for specific applications. In this perspective, the aim of this dissertation was the development and optimization of techniques and design parameters for the engineering of CAP sources for different biomedical and plasma medicine applications, including cancer treatment, dentistry, and bioaerosol decontamination. In the first section, the discharge electrical parameters, the behavior of the plasma streamers, and the liquid- and gas-phase chemistry of a multiwire device for the treatment of liquids were characterized. Moreover, two different plasma-activated liquids were used for the treatment of Epithelial Ovarian Cancer cells and fibroblasts to assess their selectivity. In the second section, in accordance with the most important standard regulations for medical devices, the realization steps of an easy-to-handle Plasma Gun device, expected to be mounted on a tabletop unit for dental clinical applications, are reported. In the third section, in relation to the current COVID-19 pandemic, the first steps for the design, realization, and optimization of a dielectric barrier discharge source suitable for the treatment of different types of bioaerosol are reported.
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach substantially improves on the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. The framework is then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
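A minimal sketch of the primal decomposition idea behind such constraint-coupled schemes, on a toy resource-allocation problem; the local targets c, the budget b, the step size, and the closed-form local solutions are illustrative assumptions, and the allocation update is done with a centralized projection here for brevity, whereas the thesis carries it out in a distributed fashion over a graph.

# Primal decomposition sketch: each agent i solves min (x_i - c_i)^2 s.t. x_i <= y_i,
# and the allocations y_i are coupled by sum_i y_i = b.
import numpy as np

c = np.array([4.0, 1.0, 3.0])      # hypothetical local targets
b = 6.0                            # total shared resource
y = np.full_like(c, b / len(c))    # initial allocation, feasible for the coupling constraint
alpha = 0.1                        # subgradient step size

for k in range(200):
    x = np.minimum(c, y)                   # local subproblem solutions (closed form here)
    lam = np.maximum(0.0, 2.0 * (c - y))   # multipliers of the local constraints x_i <= y_i
    # Master update: subgradient step on the allocations, then projection
    # onto the coupling constraint sum_i y_i = b.
    y = y + alpha * lam
    y -= (y.sum() - b) / len(y)

print("allocations:", y, "local decisions:", np.minimum(c, y))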
Abstract:
In this thesis, a life cycle analysis (LCA) of a biofuel cell designed by a team from the University of Bologna was carried out. The purpose of this study is to investigate the possible environmental impacts of the production and use of the cell and a possible optimization for an industrial scale-up. To do so, the first part of the work was devoted to studying the existing literature on biomass and fuel cell treatments and then on LCA studies of them. The experimental part presents the work done to create the Life Cycle Inventory and the Life Cycle Impact Assessment. Several alternative scenarios were created to study process optimization, varying the reagents and the energy supply. To examine whether this technology can be competitive, the biofuel cell use scenarios were compared with traditional biomass treatment technologies. The result of this study is that this technology is promising from an environmental point of view, provided that the output nutrients can be recovered without excessive energy consumption and that the energy used to prepare the solution is minimized.
Abstract:
In the field of industrial automation, there is an increasing need for optimal control systems that have low tracking errors and low power and energy consumption. The motors considered here are mainly Permanent Magnet Synchronous Motors (PMSMs), controlled by three different types of controllers: a position controller, a speed controller, and a current controller. In this thesis, we therefore act on the gains of the first two controllers, using the TwinCAT 3 software to find the best set of parameters. To do this, starting with the default parameters recommended by TwinCAT, two main methods were used and then compared: the Ziegler-Nichols method, which is a tabular method, and advanced tuning, an auto-tuning software method of TwinCAT. In order to analyse which set of parameters was the best, several experiments were performed for each case, using the Motion Control Function Blocks. Moreover, some machines, such as large robotic arms, have vibration problems. To analyse them in detail, it was necessary to use the Bode Plot tool, which highlights the frequencies at which resonance and anti-resonance peaks occur. This tool also makes it easier to determine which filters to apply, and where, to improve control.
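As a point of reference, a minimal sketch of the classic Ziegler-Nichols closed-loop tuning table; the ultimate gain Ku and oscillation period Tu values below are hypothetical, and how TwinCAT applies the resulting gains is not reproduced here.

# Ziegler-Nichols closed-loop tuning: Ku and Tu are assumed to have been found
# experimentally, e.g. by raising the proportional gain until sustained oscillation.
def ziegler_nichols(Ku, Tu, controller="PID"):
    rules = {
        "P":   (0.50 * Ku, None,     None),
        "PI":  (0.45 * Ku, Tu / 1.2, None),
        "PID": (0.60 * Ku, Tu / 2.0, Tu / 8.0),
    }
    Kp, Ti, Td = rules[controller]
    return {"Kp": Kp, "Ti": Ti, "Td": Td}

# Hypothetical values for illustration only.
print(ziegler_nichols(Ku=12.0, Tu=0.05))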
Abstract:
In this study, the transmission-line modeling (TLM) approach applied to bio-thermal problems was improved by incorporating several novel computational techniques, including the application of graded meshes, which yielded computations roughly 9 times faster while using only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that takes thermal properties into account, and thus results in more realistic modeling of complex problems, is introduced. A new way of calculating an error parameter is also introduced. The calculated temperatures between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for modeling heat transfer in biological systems.
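For illustration, a minimal sketch of a geometrically graded 1-D node spacing of the kind a graded-mesh scheme uses to concentrate resolution near a region of interest; the length, node count, and grading ratio are arbitrary, and the TLM formulation itself is not reproduced here.

# Geometrically graded node positions on [0, length], spacing growing by 'ratio' per cell.
import numpy as np

def graded_mesh(length, n_nodes, ratio=1.2):
    widths = ratio ** np.arange(n_nodes - 1)   # geometric cell widths
    widths *= length / widths.sum()            # scale to the total length
    return np.concatenate(([0.0], np.cumsum(widths)))

nodes = graded_mesh(length=0.05, n_nodes=11, ratio=1.3)
print(np.round(nodes, 5))   # fine spacing near x = 0, coarse near x = 0.05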
Abstract:
Modal filters may be obtained by a properly designed weighted sum of the output signals of an array of sensors distributed on the host structure. Although several research groups have been interested in techniques for designing and implementing modal filters based on a given array of sensors, the effect of the array topology on the effectiveness of the modal filter has received much less attention. In particular, it is known that some parameters, such as the size, shape, and location of a sensor, are very important in determining the observability of a vibration mode. Hence, this paper presents a methodology for the topological optimization of an array of sensors in order to maximize the effectiveness of a set of selected modal filters. This is done using a genetic algorithm optimization technique to select, from an array of 36 piezoceramic sensors regularly distributed on an aluminum plate, the 12 sensors that maximize the filtering performance, over a given frequency range, of a set of modal filters, each one aiming to isolate one of the first vibration modes. The vectors of weighting coefficients for each modal filter are evaluated using a QR decomposition of the complex frequency response function matrix. Results show that the array topology is not very important at lower frequencies but greatly affects the filter effectiveness at higher frequencies. Therefore, it is possible to improve the effectiveness and frequency range of a set of modal filters by optimizing the topology of an array of sensors. Indeed, using 12 properly located piezoceramic sensors bonded on an aluminum plate, it is shown that the frequency range of a set of modal filters may be enlarged by 25-50%.
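A minimal sketch of how modal-filter weighting coefficients can be obtained from a QR-based least-squares fit of the sensor frequency response function (FRF) matrix to a target single-mode response; the FRF matrix H and target h_target below are random placeholders, not measured data.

# Least-squares weights via reduced QR of the complex FRF matrix (n_freq x n_sensors).
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_sensors = 200, 12
H = rng.standard_normal((n_freq, n_sensors)) + 1j * rng.standard_normal((n_freq, n_sensors))
h_target = rng.standard_normal(n_freq) + 1j * rng.standard_normal(n_freq)

Q, R = np.linalg.qr(H)                          # reduced QR of the FRF matrix
w = np.linalg.solve(R, Q.conj().T @ h_target)   # weighting coefficients of the modal filter
print("residual norm:", np.linalg.norm(H @ w - h_target))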
Abstract:
Compliant mechanisms can achieve a specified motion as a mechanism without relying on the use of joints and pins. They have broad application in precision mechanical devices and Micro-Electro-Mechanical Systems (MEMS), but may lose accuracy and produce undesirable displacements when subjected to temperature changes. These undesirable effects can be reduced by using sensors in combination with control techniques and/or by applying special design techniques that reduce them at the design stage, a process generally termed "design for precision". This paper describes a design for precision method based on a topology optimization method (TOM) for compliant mechanisms that includes thermal compensation features. The optimization problem emphasizes actuator accuracy and is formulated to yield optimal compliant mechanism configurations that maximize the desired output displacement when a force is applied, while minimizing undesirable thermal effects. To demonstrate the effectiveness of the method, two-dimensional compliant mechanisms are designed considering thermal compensation, and their performance is compared with compliant mechanism designs that do not consider thermal compensation.
Abstract:
Flow pumps are important tools in several engineering areas, such as bioengineering and thermal management solutions for electronic devices. Nowadays, many new flow pump principles are based on the use of piezoelectric actuators, which present advantages such as miniaturization potential and lower noise generation. In previous work, the authors presented a study of a novel pump configuration based on placing an oscillating bimorph piezoelectric actuator in water to generate flow. It was concluded that this oscillatory behavior (similar to fish swimming) yields vortex interaction, generating a flow rate through the action-and-reaction principle. Following this idea, the objective of this work is to explore this oscillatory principle by studying the interaction among the vortices generated by two bimorph piezoelectric actuators oscillating inside the same pump channel, which is similar to the interaction of the vortices generated by leading and trailing fish when they swim together in a group formation. It is shown that parallel and series configurations of bimorph piezoelectric actuators inside the same pump channel provide higher flow rates and pressures for liquid pumping than simple parallel and series arrangements of the corresponding single piezoelectric pumps, respectively. The scope of this work includes structural simulations of the bimorph piezoelectric actuators, fluid flow simulations, and prototype construction for result validation.
Abstract:
The ability to control both the minimum size of holes and the minimum size of structural members is an essential requirement in the topology optimization design process for manufacturing. This paper addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size, while a standard direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme combining both contrasting requirements simultaneously is discussed. Two topology optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration, and another in which a two-phase scheme is explored. In the first phase, the compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution is obtained that satisfies either the minimum member size or the minimum hole size. Examples demonstrate the various features of the projection-based techniques presented.
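For context, a minimal sketch of a smoothed Heaviside (direct) projection of filtered densities, a standard ingredient of projection-based topology optimization; the paper's specific direct and inverse projection operators are not reproduced here, and the beta/eta values are illustrative.

# Smoothed Heaviside projection: pushes intermediate densities toward 0/1.
import numpy as np

def heaviside_projection(rho_filtered, beta=8.0, eta=0.5):
    num = np.tanh(beta * eta) + np.tanh(beta * (rho_filtered - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

rho = np.linspace(0.0, 1.0, 11)          # example filtered densities
print(np.round(heaviside_projection(rho), 3))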
Abstract:
The performance of metaheuristics is highly dependent on their parameters, which need to be tuned. Parameter tuning may allow greater flexibility and robustness, but requires careful initialization. The process of defining which parameter setting should be used is not obvious: the values depend mainly on the problem, the instance to be solved, the search time available for solving the problem, and the required solution quality. This paper presents a learning module proposal for the autonomous parameterization of metaheuristics, integrated into a Multi-Agent System for the resolution of dynamic scheduling problems. The proposed learning module is inspired by the Autonomic Computing self-optimization concept, which states that systems must continuously and proactively improve their performance. The learning is implemented using Case-Based Reasoning, which uses previous similar data to solve new cases; it is assumed that similar cases have similar solutions. After a literature review of the topics involved, both the AutoDynAgents system and the self-optimization module are described. Finally, a computational study is presented in which the proposed module is evaluated, the obtained results are compared with previous ones, conclusions are drawn, and future work is outlined. This proposal is expected to be a significant contribution to the self-parameterization of metaheuristics and to the resolution of scheduling problems in dynamic environments.
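A minimal sketch of the Case-Based Reasoning retrieve-and-reuse step such a self-parameterization module relies on: retrieve the stored case most similar to the new scheduling instance and reuse its parameters. The case features and parameter names below are hypothetical.

# Nearest-neighbour retrieval over a tiny case base of (instance features, parameters).
import math

case_base = [
    {"features": (50, 0.8), "params": {"population": 100, "mutation": 0.05}},
    {"features": (200, 0.3), "params": {"population": 300, "mutation": 0.10}},
]

def retrieve(features):
    return min(case_base, key=lambda c: math.dist(c["features"], features))

new_instance = (180, 0.35)                 # e.g. (number of jobs, due-date tightness)
print(retrieve(new_instance)["params"])    # reused parameter setting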
Abstract:
Distributed Energy Resources (DER) scheduling in smart grids presents a new challenge to system operators. The increase in new resources, such as storage systems and demand response programs, results in additional computational effort for optimization problems. On the other hand, since natural resources such as wind and sun can only be precisely forecast a short time in advance, short-term scheduling is especially relevant and requires very good performance on large-dimension problems. Traditional techniques such as Mixed-Integer Non-Linear Programming (MINLP) do not cope well with large-scale problems. This type of problem can be appropriately addressed by metaheuristic approaches. This paper proposes a new methodology called Signaled Particle Swarm Optimization (SiPSO) to address the energy resources management problem in the scope of smart grids with intensive use of DER. The proposed methodology's performance is illustrated by a case study with 99 distributed generators, 208 loads, and 27 storage units. The results are compared with those obtained with other methodologies, namely MINLP, Genetic Algorithm, original Particle Swarm Optimization (PSO), Evolutionary PSO, and New PSO. SiPSO's performance is superior to that of the other tested PSO variants, demonstrating its adequacy for solving large-dimension problems that require a decision in a short period of time.
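For reference, a minimal sketch of the standard PSO velocity and position update on which variants such as SiPSO build; the objective, bounds, and coefficients are illustrative, and the signaling mechanism of SiPSO itself is not reproduced here.

# Standard PSO on a toy objective (sphere function, minimized).
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.sum(x**2, axis=1)

n, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_val = x.copy(), f(x)
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = x + v                                                   # position update
    val = f(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[pbest_val.argmin()]

print("best value:", pbest_val.min())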
Abstract:
The optimal power flow problem has been widely studied in order to improve power system operation and planning. For real power systems, the problem is formulated as a non-linear, large combinatorial problem. The first approaches used to solve this problem were based on mathematical methods that required huge computational effort. Lately, artificial intelligence techniques, such as metaheuristics based on biological processes, have been adopted. Metaheuristics require fewer computational resources, which is a clear advantage for addressing the problem in large power systems. This paper proposes a methodology to solve the optimal power flow in an economic dispatch context using a Simulated Annealing algorithm, inspired by the cooling process seen in metallurgy. The main contribution of the proposed method is the specific neighborhood generation according to the characteristics of the optimal power flow problem. The proposed methodology has been tested with the IEEE 6-bus and 30-bus networks. The obtained results are compared with other well-known methodologies presented in the literature, showing the effectiveness of the proposed method.
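A minimal sketch of a Simulated Annealing loop with geometric cooling; the paper's OPF-specific neighborhood generation is replaced here by a generic Gaussian perturbation, and the objective is a toy function.

# Simulated Annealing with a geometric cooling schedule on a toy 1-D cost.
import math, random

def f(x):                       # toy cost to minimize
    return (x - 3.0) ** 2

x, T, alpha = 10.0, 100.0, 0.95
best = x
for _ in range(500):
    cand = x + random.gauss(0.0, 1.0)          # neighborhood move (problem-specific in the paper)
    delta = f(cand) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand                               # accept improving or, probabilistically, worse moves
    best = min(best, x, key=f)
    T *= alpha                                 # geometric cooling

print("best solution:", round(best, 3), "cost:", round(f(best), 5))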
Abstract:
To maintain a power system within its operation limits, planning ahead requires competitive techniques to solve the optimal power flow (OPF), which is a non-linear, large combinatorial problem. The Ant Colony Search (ACS) optimization algorithm is inspired by the organized natural movement of real ants and has been successfully applied to different large combinatorial optimization problems. This paper presents an implementation of Ant Colony optimization to solve the OPF in an economic dispatch context. The proposed methodology has been developed to be used for maintenance and repair planning 24 to 48 hours in advance. The main advantage of this method is its low execution time, which allows the use of OPF when a large set of scenarios has to be analyzed. The paper includes a case study using the IEEE 30-bus network. The results are compared with other well-known methodologies presented in the literature.
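A minimal sketch of the ant-colony mechanics (probabilistic construction from pheromone trails, evaporation, and reinforcement of the best solution) on a toy discrete problem; the OPF encoding and heuristic information used in the paper are not reproduced, and the cost table is arbitrary.

# Ant-colony sketch: pick one option per decision variable, guided by pheromone.
import numpy as np

rng = np.random.default_rng(2)
costs = np.array([[4.0, 1.0, 3.0],      # cost of option j for decision i (toy data)
                  [2.0, 5.0, 0.5],
                  [1.5, 1.0, 2.0]])
tau = np.ones_like(costs)               # pheromone trails
rho, n_ants = 0.1, 10
best_cost, best_choice = float("inf"), None

for _ in range(50):
    probs = tau / tau.sum(axis=1, keepdims=True)
    for _ in range(n_ants):
        choice = [rng.choice(costs.shape[1], p=p) for p in probs]
        cost = sum(costs[i, j] for i, j in enumerate(choice))
        if cost < best_cost:
            best_cost, best_choice = cost, choice
    tau *= (1.0 - rho)                          # pheromone evaporation
    for i, j in enumerate(best_choice):         # reinforce the best solution found so far
        tau[i, j] += 1.0 / best_cost

print("best cost:", best_cost, "choices:", best_choice)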
Abstract:
The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is often not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods in which the calculation of derivatives, or the verification of their existence, is not necessary: direct search methods and derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers.
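A minimal sketch of the filter acceptance rule that such methods share: a trial point, summarized by its constraint violation h and objective value f, is accepted only if no stored pair dominates it. The numeric entries below are illustrative.

# Filter as a set of (infeasibility h, objective f) pairs with a dominance test.
def acceptable(point, filter_set):
    h, f = point
    return not any(h2 <= h and f2 <= f for h2, f2 in filter_set)

def add_to_filter(point, filter_set):
    # Insert the point and drop entries it dominates.
    h, f = point
    kept = [(h2, f2) for h2, f2 in filter_set if not (h <= h2 and f <= f2)]
    return kept + [point]

filt = [(0.5, 10.0), (0.1, 14.0)]          # hypothetical filter entries
trial = (0.3, 11.0)
if acceptable(trial, filt):
    filt = add_to_filter(trial, filt)
print(filt)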