923 results for Search Engine Optimization Methods
Abstract:
Solving systems of nonlinear equations is an important task, since such systems emerge mostly from the mathematical modelling of real problems that arise naturally in many branches of engineering and in the physical sciences. The problem can be naturally reformulated as a global optimization problem. In this paper, we show that a self-adaptive combination of a metaheuristic with a classical local search method is able to solve some difficult problems on which Newton-type methods fail.
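A minimal sketch of this reformulation, using an illustrative two-equation system (not one from the paper) and SciPy's differential evolution as a stand-in for the metaheuristic/local-search combination: the system F(x) = 0 is recast as the global minimization of ||F(x)||², and polish=True refines the best point with a classical local method (L-BFGS-B).

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative system F(x) = 0 (not from the paper):
#   x0^2 + x1^2 - 4 = 0
#   exp(x0) + x1 - 1 = 0
def residuals(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

# Reformulation as a global optimization problem: minimize ||F(x)||^2.
def merit(x):
    r = residuals(x)
    return float(r @ r)

# Metaheuristic (differential evolution) combined with classical local
# search: polish=True refines the best point with L-BFGS-B.
result = differential_evolution(merit, bounds=[(-5, 5), (-5, 5)],
                                polish=True, seed=0)
print(result.x, result.fun)  # a root of the system if the merit value is ~0
```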
Abstract:
Over the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. Such problems are much harder, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multiobjective) optimization, and it has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been growing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely: joint travelling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity, and the energy involved. These criteria are used to minimize the joint and end-effector travelled distance, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, this chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm's convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present, respectively, the results for the 3R redundant manipulator with five objectives and other complementary experiments. Finally, section 8 draws the main conclusions.
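The core of these multi-criteria GAs is Pareto dominance. A minimal sketch (illustrative, not the chapter's algorithm) of extracting the non-dominated set from a population scored on several minimization objectives:

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows of `scores`.

    `scores` is an (n_individuals, n_objectives) array where every
    objective is minimized. A point is dominated if another point is
    no worse in all objectives and strictly better in at least one.
    """
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominated = (np.all(scores <= scores[i], axis=1) &
                     np.any(scores < scores[i], axis=1))
        if np.any(dominated):
            keep[i] = False
    return np.flatnonzero(keep)

# Example: two objectives, e.g. joint travelling distance and energy.
pop_scores = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 1.0], [3.0, 4.0]])
print(pareto_front(pop_scores))  # [0 1 2]; [3, 4] is dominated by [2, 3]
```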
Abstract:
Meshless methods are used for their capability of producing excellent solutions without requiring a mesh, avoiding the mesh-related problems encountered in other numerical methods, such as finite elements. However, node placement is still an open question, especially in strong-form collocation meshless methods. The number of nodes used has a strong influence on matrix size and can therefore produce ill-conditioned matrices. In order to optimize node position and number, a direct multisearch technique for multiobjective optimization is used to optimize the node distribution in the global collocation method using radial basis functions. The optimization method is applied to the bending of isotropic simply supported plates. Using a uniformly distributed grid as the starting condition, results show that the method is capable of reducing the number of nodes in the grid without compromising the accuracy of the solution.
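As a minimal illustration of the underlying global collocation method with radial basis functions (a 1D Poisson problem rather than plate bending; the node distribution and shape parameter are exactly the quantities such an optimizer would tune):

```python
import numpy as np

# Solve -u''(x) = pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0, by global
# collocation with multiquadric RBFs phi(r) = sqrt(r^2 + c^2).
# Exact solution: u(x) = sin(pi x).
nodes = np.linspace(0.0, 1.0, 15)        # node distribution (to be optimized)
c = 0.3                                  # shape parameter (to be optimized)

r = nodes[:, None] - nodes[None, :]
phi = np.sqrt(r**2 + c**2)               # basis values at the nodes
phi_xx = c**2 / (r**2 + c**2)**1.5       # d2(phi)/dx2

# Collocation system: enforce the PDE at interior nodes, BCs at the ends.
A = -phi_xx
A[0, :] = phi[0, :]
A[-1, :] = phi[-1, :]
b = np.pi**2 * np.sin(np.pi * nodes)
b[0] = b[-1] = 0.0

coeffs = np.linalg.solve(A, b)
u = phi @ coeffs
print(np.max(np.abs(u - np.sin(np.pi * nodes))))  # small collocation error
```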
Abstract:
The bending of simply supported composite plates is analyzed using a direct collocation meshless numerical method. In order to optimize the node distribution, the Direct MultiSearch (DMS) method for multi-objective optimization is applied. In addition, the method optimizes the shape parameter of the radial basis functions. The optimization algorithm was able to find good solutions for a wide variety of node distributions.
Abstract:
In order to correctly assess biaxial fatigue material properties, one must experimentally test different load conditions and stress levels. With the rise of new in-plane biaxial fatigue testing machines that use smaller and more efficient electrical motors instead of conventional hydraulic machines, it is necessary to reduce the specimen size and to ensure that the specimen geometry is appropriate for the installed load capacity. At present there are no standard specimen geometries, and the indications in the literature on how to design an efficient test specimen are insufficient. The main goal of this paper is to present a methodology for obtaining an optimal cruciform specimen geometry, with thickness reduction in the gauge area, appropriate for fatigue crack initiation, as a function of the base material sheet thickness used to build the specimen. The geometry is optimized for maximum stress using several parameters, ensuring that in the gauge area the stress distributions in the loading directions are uniform and maximal under two limit phase-shift loading conditions (δ = 0° and δ = 180°). Fatigue damage will therefore always initiate at the center of the specimen, avoiding failure outside this region. Using the Renard series of preferred numbers for the base material sheet thickness as a reference, the remaining geometry parameters are optimized using a derivative-free methodology, the direct multisearch (DMS) method. The final optimal geometry as a function of the base material sheet thickness is proposed as a guideline for cruciform specimen design and as a possible contribution to a future standard on in-plane biaxial fatigue tests.
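For intuition, a minimal single-objective compass (coordinate) search of the kind underlying derivative-free direct search methods; DMS itself is multiobjective and is not reproduced here, and the objective below is a placeholder, not the paper's stress-uniformity criterion:

```python
import numpy as np

def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=10_000):
    """Derivative-free direct search: poll +/- each coordinate direction,
    move on improvement, otherwise halve the step size."""
    x, fx = np.asarray(x0, dtype=float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Placeholder objective standing in for a geometry-fitness measure.
f = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2
print(compass_search(f, [0.0, 0.0]))  # converges near (1, -2)
```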
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its adequacy for the mentioned problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about a one percent difference in the objective function cost value.
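A minimal PSO sketch with a crude velocity-limit adjustment, illustrative only; the actual ASMPSO mechanism and its self-parameterization are described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, bounds, n_particles=30, n_iter=200):
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    vmax = 0.5 * (hi - lo)                  # velocity limit, adjusted below
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()    # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        v = np.clip(v, -vmax, vmax)         # enforce velocity limits
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
        vmax *= 0.99                        # shrink limits as search focuses
    return g, pbest_f.min()

sphere = lambda p: float(np.sum(p**2))
print(pso(sphere, [(-10, 10)] * 3))         # minimum near the origin
```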
Abstract:
Optimization seeks the best possible value of an objective function; continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optima of the optimization problem, whereas local search techniques, which are more widely used, seek a locally minimal solution within a region of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering the solution sets of the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we seek a convenient way to combine the advantages of CCSP branch-and-prune with the local search of global optimization applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes output by the CCSP techniques. We mainly use the steepest descent technique with different characteristics, such as penalty calculation and step length, and we implement two main local search algorithms. We use "Procure", a constraint reasoning and global optimization framework, to implement our techniques, and we present our results over a set of benchmarks.
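A minimal sketch of the kind of local step described here: projected steepest descent with a quadratic penalty for a constraint, run inside one interval box. The functions and the box are illustrative, not from the thesis:

```python
import numpy as np

# Minimize f(x) subject to g(x) <= 0 inside a box, via steepest descent
# on the penalized objective F(x) = f(x) + mu * max(0, g(x))^2.
f      = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2      # objective
g      = lambda x: x[0] + x[1] - 2.0                      # constraint g <= 0
grad_f = lambda x: np.array([2*(x[0] - 2.0), 2*(x[1] - 1.0)])
grad_g = lambda x: np.array([1.0, 1.0])

def penalized_descent(x, box_lo, box_hi, mu=10.0, step=0.02, iters=1000):
    for _ in range(iters):
        viol = max(0.0, g(x))
        grad = grad_f(x) + 2.0 * mu * viol * grad_g(x)    # penalty gradient
        x = np.clip(x - step * grad, box_lo, box_hi)      # stay in the box
    return x

# One pruned box produced by branch-and-prune, e.g. [0, 3] x [0, 3].
x_star = penalized_descent(np.array([0.0, 0.0]), 0.0, 3.0)
# Near (1.5, 0.5), slightly constraint-violating due to the finite penalty.
print(x_star, f(x_star), g(x_star))
```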
Abstract:
PhD thesis in Bioengineering
Abstract:
Autonomous underwater vehicles (AUVs) represent a challenging control problem with complex, noisy dynamics. Nowadays, both the continuous scientific advances in underwater robotics and the increasing number and complexity of subsea missions call for the automation of submarine processes. This paper proposes a high-level control system for solving the action selection problem of an autonomous robot. The system is characterized by the use of reinforcement learning direct policy search methods (RLDPS) for learning the internal state/action mapping of some behaviors. We demonstrate its feasibility with simulated experiments using the model of our underwater robot URIS in a target-following task.
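A minimal sketch of direct policy search on a toy 1D target-following task (a proportional-gain policy improved by stochastic hill climbing; the actual RLDPS behaviours and the URIS model are in the paper):

```python
import numpy as np

# Toy 1D target following: the policy is a proportional gain `theta` on
# the tracking error; the return is -sum(|error|) over an episode.
def rollout(theta, rng, T=30):
    s, total = rng.uniform(-1.0, 1.0), 0.0
    for _ in range(T):
        s = s - 0.5 * theta * s + rng.normal(0.0, 0.01)  # vehicle response
        total -= abs(s)
    return total

def avg_return(theta, seed, n=50):
    rng = np.random.default_rng(seed)  # common random numbers for fairness
    return np.mean([rollout(theta, rng) for _ in range(n)])

# Direct policy search: stochastic hill climbing on the policy parameter.
theta, proposal = 0.0, np.random.default_rng(7)
for it in range(200):
    cand = theta + proposal.normal(0.0, 0.2)
    if avg_return(cand, it) > avg_return(theta, it):
        theta = cand
print(theta)  # settles near 2, the gain that cancels the error in one step
```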
Abstract:
The immobile location-allocation (LA) problem is a type of LA problem that consists in determining the service each facility should offer in order to optimize some criterion (such as the global demand), given the positions of the facilities and the customers. The problem is hard: it is combinatorial, with a search space that grows exponentially with the number of possible services and the number of facilities, and the space is non-convex with several sub-optima, so traditional methods cannot be applied directly. We therefore propose the use of clustering analysis to convert the initial problem into several smaller sub-problems. We present and analyze the suitability of some clustering methods to partition this LA problem, and then explore the use of metaheuristic techniques such as genetic algorithms, simulated annealing and cuckoo search to solve the sub-problems after the clustering analysis.
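A minimal sketch of the decomposition step: k-means clustering of facility positions into sub-problems, each then handed to a metaheuristic. The coordinates are hypothetical, not the paper's data:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Hypothetical facility positions; each cluster becomes an independent
# sub-problem to be solved by a metaheuristic (GA, SA, cuckoo search...).
rng = np.random.default_rng(3)
facilities = rng.uniform(0.0, 100.0, size=(60, 2))

centroids, labels = kmeans2(facilities, 4, minit='points')
subproblems = [facilities[labels == c] for c in range(4)]
print([len(sp) for sp in subproblems])  # facilities per sub-problem
```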
Abstract:
Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated through food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for the preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption Spectrometry (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For both the ETAAS and FAAS determinations, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples except the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.
Abstract:
The applicability, repeatability and capability of different analytical methods to discriminate oil samples with different degrees of oxidation were evaluated using oils collected from continuous frying processes in several Spanish companies. The objective of this work was to find methods complementary to the determination of the acid value for the routine quality control of the frying oils used in these companies. Optimizing the determination of the dielectric constant led to a clear improvement in its variability. Nevertheless, except for the ATB index, the remaining methods tested showed lower variability. Determination of the ATB index was discarded, since its sensitivity was insufficient to discriminate between oils with different degrees of oxidation. The different alteration parameters determined in the frying oils showed significant correlations between the acid value and several different oxidation parameters, such as the dielectric constant, the p-anisidine value, ultraviolet absorption and the triacylglycerol polymer content. The acid value only assesses hydrolytic alteration, so these parameters provide complementary information by assessing thermo-oxidative alteration.
Abstract:
In recent years, protein-ligand docking has become a powerful tool for drug development. Although several approaches suitable for high-throughput screening are available, there is a need for methods able to identify binding modes with high accuracy. This accuracy is essential to reliably compute the binding free energy of the ligand. Such methods are needed when the binding mode of lead compounds is not determined experimentally but is needed for structure-based lead optimization. We present here a new docking software, called EADock, that aims at this goal. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with a sophisticated management of diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure and, in contrast to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure, excluding the latter. This validation illustrates the efficiency of our sampling strategy, as correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and to 92% when all clusters present in the last generation are taken into account. Most failures could be explained by the presence of crystal contacts in the experimental structure. Finally, the ability of EADock to accurately predict binding modes in a real application was illustrated by the successful docking of the RGD cyclic pentapeptide on the αVβ3 integrin, starting far away from the binding pocket.
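The success criterion above relies on the RMSD between a docked pose and the crystal pose; a minimal sketch, assuming the two coordinate arrays are already matched atom-to-atom and expressed in the same frame:

```python
import numpy as np

def rmsd(pose, reference):
    """Root mean square deviation between matched (N x 3) coordinate sets."""
    diff = np.asarray(pose) - np.asarray(reference)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# A pose counts as a correct binding mode if rmsd(pose, crystal) < 2.0 (Å).
```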
Abstract:
Blowing and drifting snow is a major concern for transportation efficiency and road safety in regions where these phenomena are common. One common way to mitigate snow drift on roadways is to install plastic snow fences. Correct design of snow fences is critical for road safety and for keeping roads open during winter in the US Midwest and other regions affected by large snow events, as well as for keeping the costs related to snow accumulation and road repair to a minimum. Of critical importance for road safety is protection against snow drifting in regions with narrow rights of way, where standard fences cannot be deployed at the recommended distance from the road. Designing snow fences requires sound engineering judgment and a thorough evaluation of the potential for snow blowing and drifting at the construction site. The evaluation includes site-specific design parameters typically obtained with semi-empirical relations characterizing the local transport conditions. Among the critical parameters involved in fence design and in the assessment of post-construction efficiency is the quantification of snow accumulation at fence sites. The present study proposes a joint experimental and numerical approach to monitor snow deposits around snow fences, quantitatively estimate snow deposits in the field, assess the efficiency of snow fences and improve their design. Snow deposit profiles were mapped using GPS-based real-time kinematic (RTK) surveys conducted at the monitored field site during and after snow storms. The monitored site allowed testing different snow fence designs under close-to-identical conditions over four winter seasons. The study also discusses the detailed monitoring system and the analysis of weather forecasts and meteorological conditions at the monitored sites. A main goal of the present study was to assess the performance of lightweight plastic snow fences with a porosity lower than the typical 50% used in standard designs of such fences. The field data collected during the first winter were used to identify the best design for snow fences with a porosity of 50%. Flow fields obtained from numerical simulations showed that the fence design that worked best during the first winter induced the formation of an elongated area of small velocity magnitude close to the ground. This information was used to identify other candidates for the optimum design of fences with a lower porosity. Two of the designs with a fence porosity of 30% that were found to perform well in the numerical simulations were tested in the field during the second winter, along with the best-performing design for fences with a porosity of 50%. Field data showed that the length of the snow deposit away from the fence was reduced by about 30% for the two proposed lower-porosity (30%) fence designs compared with the best design identified for fences with a porosity of 50%. Moreover, one of the lower-porosity designs tested in the field showed no significant snow deposition within the bottom-gap region beneath the fence. Thus, a major outcome of this study is the recommendation to use plastic snow fences with a porosity of 30%. It is expected that this lower-porosity design will continue to work well for even more severe snow events or for successive snow events occurring during the same winter. The approach advocated in the present study allowed making general recommendations for optimizing the design of lower-porosity plastic snow fences.
This approach can be extended to improve the design of other types of snow fences, and some preliminary work on living snow fences is also discussed. Another major contribution of this study is to propose, develop protocols for, and test a novel technique based on close-range photogrammetry (CRP) to quantify the snow deposits trapped by snow fences. As image data can be acquired continuously, the time evolution of the volume of snow retained by a snow fence during a storm, or during a whole winter season, can in principle be obtained. Moreover, CRP is a non-intrusive method that eliminates the need to perform manual measurements during the storms, which are difficult and sometimes dangerous. At present there is considerable empiricism in the design of snow fences, owing to the lack of data on fence storage capacity, on how snow deposits change with fence design and snowstorm characteristics, and on the main parameters used by state DOTs to design snow fences at a given site. The availability of such information from CRP measurements should provide critical data for evaluating the performance of a given snow fence design tested by the IDOT. As part of the present study, the novel CRP method was tested at several sites. The present study also discusses some attempts and preliminary work to determine the snow relocation coefficient, one of the main variables that has to be estimated by IDOT engineers when using the standard snow fence design software (Snow Drift Profiler, Tabler, 2006). Our analysis showed that standard empirical formulas did not produce reasonable values when applied at the Iowa test sites monitored as part of the present study, and that simple methods to estimate this variable are not reliable. The present study makes recommendations for the development of a new methodology based on Large Scale Particle Image Velocimetry that can directly measure snow drift fluxes and the amount of snow relocated by the fence.