957 results for Optimization algorithms


Relevance: 70.00%

Abstract:

A servo-controlled automatic machine can perform tasks that involve the synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its combined electrical and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model for a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterization is needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model available, a second objective of this work is to perform an optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. Then, we present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and the solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
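The transmission-ratio selection can be illustrated with a deliberately simplified sketch (a rigid single-axis model with illustrative numbers, not the thesis' full loss model): for motor inertia J_m, load inertia J_L, load acceleration a and load torque T_L, the motor torque as a function of the reducer ratio n is T_m(n) = J_m·n·a + (J_L·a + T_L)/n, which a brute-force search over a continuous range of ratios can minimize.

```python
import math

# Hedged sketch, not the thesis model: rigid axis, constant load
# acceleration `a` and load torque `T_L`; all numbers are illustrative.
def motor_torque(n, J_m, J_L, a, T_L):
    # torque reflected to the motor shaft for transmission ratio n
    return J_m * n * a + (J_L * a + T_L) / n

def optimal_ratio_grid(J_m, J_L, a, T_L, lo=1.0, hi=200.0, steps=200000):
    """Brute-force search over a continuous range of transmission ratios."""
    best_n, best_T = lo, motor_torque(lo, J_m, J_L, a, T_L)
    for i in range(1, steps + 1):
        n = lo + (hi - lo) * i / steps
        T = motor_torque(n, J_m, J_L, a, T_L)
        if T < best_T:
            best_n, best_T = n, T
    return best_n, best_T

J_m, J_L, a, T_L = 1e-4, 0.5, 20.0, 4.0
n_opt, T_opt = optimal_ratio_grid(J_m, J_L, a, T_L)
# closed-form optimum of T_m(n) for comparison
n_closed = math.sqrt((J_L * a + T_L) / (J_m * a))
```

For this simple cost the grid search agrees with the closed-form minimizer sqrt((J_L·a + T_L)/(J_m·a)); the thesis problem adds losses and nonlinear constraints, which is why numerical optimization is needed there.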

Relevance: 60.00%

Abstract:

We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from more than 10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, results are shown that characterize the number of parameters necessary for effective structure recognition.
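The abstract does not give the optimization procedure; as a hedged stand-in, a perceptron-style update on synthetic feature vectors shows the core requirement, namely weights under which every native structure scores a lower energy than its misfolded decoys (all data here are illustrative, not the paper's method or feature set).

```python
import random

# Hedged sketch, not the paper's method: a perceptron-style search for
# force-field weights w such that the (lower-is-better) energy
# w . f(native) falls below w . f(decoy) for every native/decoy pair.
# Feature vectors are synthetic stand-ins for structural features.
def fit_weights(pairs, dim, epochs=200, lr=0.1):
    w = [0.0] * dim
    for _ in range(epochs):
        converged = True
        for native, decoy in pairs:
            # recognition margin: positive means the decoy scores worse
            margin = sum(wi * (d - n) for wi, n, d in zip(w, native, decoy))
            if margin <= 0:
                converged = False
                for i in range(dim):
                    w[i] += lr * (decoy[i] - native[i])
        if converged:
            break
    return w

random.seed(0)
dim = 8
w_true = [random.uniform(-1, 1) for _ in range(dim)]
while sum(wi * wi for wi in w_true) < 1.0:        # keep a usable norm
    w_true = [random.uniform(-1, 1) for _ in range(dim)]
pairs = []
while len(pairs) < 200:
    nat = [random.random() for _ in range(dim)]
    dec = [random.random() for _ in range(dim)]
    # keep only pairs separable by w_true with a comfortable margin
    if sum(wt * (d - n) for wt, n, d in zip(w_true, nat, dec)) > 0.5:
        pairs.append((nat, dec))
w = fit_weights(pairs, dim)
```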

Relevance: 60.00%

Abstract:

Final Master's Project for obtaining the degree of Master in Electronics and Telecommunications Engineering

Relevance: 60.00%

Abstract:

Kinematic redundancy occurs when a manipulator possesses more degrees of freedom than those required to execute a given task. Several kinematic techniques for redundant manipulators control the gripper through the pseudo-inverse of the Jacobian, but lead to a kind of chaotic inner motion with unpredictable arm configurations. Such algorithms are not easy to adapt to optimization schemes and, moreover, there are often multiple optimization objectives that can conflict with one another. Unlike single-objective optimization, where one attempts to find the best solution, in multi-objective optimization there is no single solution that is optimal with respect to all indices. Therefore, trajectory planning of redundant robots remains an important area of research, and more efficient optimization algorithms are needed. This paper presents a new technique to solve the inverse kinematics of redundant manipulators, using a multi-objective genetic algorithm. This scheme combines the closed-loop pseudo-inverse method with a multi-objective genetic algorithm to control the joint positions. Simulations are developed for manipulators with three or four rotational joints, considering the optimization of two objectives in workspaces with and without obstacles. The results reveal that it is possible to choose several solutions from the Pareto-optimal front according to the importance of each individual objective.
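The Pareto-front idea underlying the multi-objective scheme can be sketched independently of the genetic algorithm itself: given candidate solutions scored on two minimization objectives, the front consists of the non-dominated candidates (the scores below are illustrative, not the paper's simulations).

```python
# Illustrative sketch of the Pareto-front concept (not the paper's
# genetic algorithm): keep the candidates not dominated by any other.
def pareto_front(points):
    """points: list of (f1, f2) tuples, both objectives minimized."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front

# e.g. (joint displacement, obstacle proximity) scores for 5 candidates
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)
```

Here (3, 4) and (5, 5) are dominated by (2, 3); the remaining three candidates form the front, and the operator picks among them according to the relative importance of each objective.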

Relevance: 60.00%

Abstract:

The trajectory planning of redundant robots is an important area of research, and efficient optimization algorithms are needed. Pseudoinverse control is not repeatable, causing drift in joint space, which is undesirable for physical control. This paper presents a new technique that combines the closed-loop pseudoinverse method with genetic algorithms, leading to an optimization criterion for repeatable control of redundant manipulators and avoiding the joint-angle drift problem. Computer simulations based on redundant and hyper-redundant planar manipulators show that, when the end-effector traces a closed path in the workspace, the robot returns to its initial configuration. The solution is repeatable for a workspace with and without obstacles in the sense that, after executing several cycles, the initial and final states of the manipulator are very close.
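A minimal closed-loop pseudoinverse sketch for a 3R planar arm illustrates the baseline method the paper builds on; link lengths, gain and the traced circle are illustrative assumptions, and running it shows the end-effector closing the path while the joints, in general, do not return to their initial values (the drift the paper addresses).

```python
import math

# Minimal closed-loop pseudoinverse sketch for a 3R planar arm; link
# lengths, gain and the traced circle are illustrative assumptions.
L = [1.0, 1.0, 1.0]

def fk(q):
    """End-effector position of the planar arm."""
    s, x, y = 0.0, 0.0, 0.0
    for li, qi in zip(L, q):
        s += qi
        x += li * math.cos(s)
        y += li * math.sin(s)
    return x, y

def jacobian(q):
    """2x3 Jacobian: column j is the effect of joint j on (x, y)."""
    cums, s = [], 0.0
    for qi in q:
        s += qi
        cums.append(s)
    J = [[0.0] * 3, [0.0] * 3]
    for j in range(3):
        for i in range(j, 3):
            J[0][j] -= L[i] * math.sin(cums[i])
            J[1][j] += L[i] * math.cos(cums[i])
    return J

def clik_step(q, target, gain=1.0):
    """One closed-loop step: dq = J^+ * gain * (target - fk(q))."""
    x, y = fk(q)
    ex, ey = gain * (target[0] - x), gain * (target[1] - y)
    J = jacobian(q)
    # J^+ = J^T (J J^T)^-1 for the full-row-rank 2x3 Jacobian
    a = sum(v * v for v in J[0])
    b = sum(u * v for u, v in zip(J[0], J[1]))
    c = sum(v * v for v in J[1])
    det = a * c - b * b
    u = (c * ex - b * ey) / det
    v = (-b * ex + a * ey) / det
    return [q[k] + J[0][k] * u + J[1][k] * v for k in range(3)]

q = [0.3, 0.4, 0.5]
for t in range(201):                      # closed circular path
    ang = 2 * math.pi * t / 200
    target = (1.5 + 0.3 * math.cos(ang), 1.0 + 0.3 * math.sin(ang))
    for _ in range(5):                    # a few inner iterations
        q = clik_step(q, target)
```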

Relevance: 60.00%

Abstract:

Final Master's Project for obtaining the degree of Master in Mechanical Engineering

Relevance: 60.00%

Abstract:

The liberalization of energy markets and the intensive use of distributed generation have been changing the operating paradigm of electricity distribution networks. Maintaining the reliability of distribution networks under these new paradigms requires structural and functional changes. The Smart Grid concept allows distribution networks to adapt to this new context. In a Smart Grid, small and medium consumers are called to play an active role. This is achieved through the application of demand response programs and the existence of aggregator players. The use of demand response programs to obtain benefits for the network is currently being studied in the scientific community. However, studies that seek benefits for small and medium consumers are still needed. Achieving benefits for small and medium consumers is advantageous not only for the consumer but also for the electricity distribution network. The participation of small and medium consumers in demand response programs takes place largely through the reduction of energy consumption. To avoid the negative impacts that may result from these reductions, the work proposed here uses optimizations that rely on learning techniques based on artificial neural networks. To better frame the work within Smart Grids, a multi-agent system capable of simulating the main players of a Smart Grid is developed. The focus of this multi-agent system is the agent responsible for simulating the small and medium consumer. This agent must not only replicate a small and medium consumer but also allow the integration of real and virtual loads. As a means of interaction with the small and medium consumer, a mobile system was developed within the scope of this dissertation.
The result is a multi-agent system capable of simulating a Smart Grid and the execution of demand response programs, with the agent representing the small and medium consumer able to take actions and react so as to respond autonomously to the demand response programs launched on the network. The developed system enables: the study and analysis of the integration of small and medium consumers into Smart Grids through demand response programs; the comparison of multiple optimization algorithms; and the integration of learning methods. To demonstrate and validate the capabilities of the whole system, the dissertation includes case studies for the various aspects that can be explored with the developed system.
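The consumption-reduction decision can be illustrated with a simple sketch (a greedy heuristic over illustrative loads, not the dissertation's neural-network-based optimizer): shed the loads with the least discomfort per kW until the requested reduction is met.

```python
# Hedged illustration (a greedy heuristic, not the dissertation's
# neural-network-based optimizer); loads and scores are illustrative.
loads = [
    # (name, shed-able power in kW, discomfort per kW)
    ("water_heater", 1.5, 1.0),
    ("hvac",         2.0, 3.0),
    ("ev_charger",   3.0, 0.5),
    ("lighting",     0.4, 5.0),
]

def plan_reduction(loads, target_kw):
    """Shed least-discomfort loads first until the target is covered."""
    shed, total = [], 0.0
    for name, kw, discomfort in sorted(loads, key=lambda l: l[2]):
        if total >= target_kw:
            break
        shed.append(name)
        total += kw
    return shed, total

shed, total = plan_reduction(loads, 4.0)
```

For a 4 kW demand response event this plan sheds the EV charger and the water heater, leaving the high-discomfort HVAC and lighting untouched.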

Relevance: 60.00%

Abstract:

Integrated Master's dissertation in Civil Engineering

Relevance: 60.00%

Abstract:

Report on the scientific sojourn at the University of California at Berkeley from September 2007 to February 2008. Globalization, combined with the success of containerization, has brought about tremendous increases in the transportation of containers across the world. This leads to an increasing size of container ships, which places higher demands on seaport container terminals and their equipment. In this situation, the success of container terminals rests on a fast transhipment process with reduced costs. For these reasons it is necessary to optimize the terminal's processes. There are three main logistic processes in a seaport container terminal: loading and unloading of containerships, storage, and reception/delivery of containers from/to the hinterland. Moreover, there is an additional process that ensures the interconnection between the previous logistic activities: the internal transport subsystem. The aim of this paper is to optimize the internal transport cycle in a marine container terminal managed by straddle carriers, one of the most widely used container transfer technologies. Three sub-systems are analyzed in detail: landside transportation, storage of containers in the yard, and quayside transportation. The conflicts and decisions that arise from these three subsystems are analytically investigated, and optimization algorithms are proposed. Moreover, simulation has been applied to TCB (Barcelona Container Terminal) to test these algorithms and compare different straddle carrier operation strategies, such as single cycle versus double cycle, and different sizes of the handling equipment fleet. The simulation model is explained in detail, and the main decision-making algorithms from the model are presented and formulated.
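The single-cycle versus double-cycle comparison can be sketched with back-of-the-envelope trip counting (illustrative accounting, not the TCB simulation model): single cycling pays an empty return leg for every move, while double cycling pairs an unloading move with a loading move so the return leg also carries a container.

```python
# Back-of-the-envelope sketch (illustrative accounting, not the TCB
# simulation): each quay move is a trip between quay crane and yard.
def quay_trips(unloads, loads, double_cycle):
    """Number of straddle-carrier legs for a vessel's moves."""
    if not double_cycle:
        return 2 * (unloads + loads)      # loaded leg + empty return each
    paired = min(unloads, loads)          # an unload and a load share legs
    rest = unloads + loads - 2 * paired   # leftover single-cycle moves
    return 2 * paired + 2 * rest
```

With 100 unloads and 100 loads, double cycling halves the legs from 400 to 200; with unbalanced moves the saving shrinks toward zero, which is one reason the strategies are compared by simulation rather than by a fixed rule.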

Relevance: 60.00%

Abstract:

objective of minimizing total tardiness in an environment with sequence-dependent setups. The results obtained by applying the neighborhood-search procedures AED, ANED, Simulated Annealing, Genetic Algorithms, Tabu Search and GRASP to the stated problem are compared. The results suggest that Tabu Search is a viable solution technique that can provide good solutions when the total tardiness objective with sequence-dependent setup times is considered.
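A minimal tabu search for this class of problem (one machine, total tardiness, sequence-dependent setups; all data illustrative, not the paper's instances) can be sketched with a swap neighborhood, a fixed tabu tenure and an aspiration criterion.

```python
import itertools

# Illustrative instance: processing times, due dates and a
# sequence-dependent setup matrix for 5 jobs (not the paper's data).
proc = [4, 3, 6, 2, 5]
due  = [6, 9, 22, 4, 16]
setup = [[0, 2, 3, 1, 2],
         [2, 0, 1, 3, 2],
         [3, 1, 0, 2, 1],
         [1, 3, 2, 0, 3],
         [2, 2, 1, 3, 0]]

def total_tardiness(seq):
    t, tard, prev = 0, 0, None
    for j in seq:
        if prev is not None:
            t += setup[prev][j]       # setup depends on the previous job
        t += proc[j]
        tard += max(0, t - due[j])
        prev = j
    return tard

def tabu_search(seq, iters=200, tenure=7):
    best, best_cost = list(seq), total_tardiness(seq)
    cur, tabu = list(seq), {}
    for it in range(iters):
        cand = None
        for i, j in itertools.combinations(range(len(cur)), 2):
            nxt = list(cur)
            nxt[i], nxt[j] = nxt[j], nxt[i]
            cost = total_tardiness(nxt)
            move = (min(nxt[i], nxt[j]), max(nxt[i], nxt[j]))
            # forbidden while tabu, unless it beats the best (aspiration)
            if tabu.get(move, -1) >= it and cost >= best_cost:
                continue
            if cand is None or cost < cand[1]:
                cand = (nxt, cost, move)
        if cand is None:
            break
        cur, cost, move = cand
        tabu[move] = it + tenure
        if cost < best_cost:
            best, best_cost = list(cur), cost
    return best, best_cost

best, best_cost = tabu_search([0, 1, 2, 3, 4])
```

On an instance this small the result can be checked against full enumeration; tabu search matters when the number of jobs makes enumeration infeasible.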

Relevance: 60.00%

Abstract:

In this article, a methodology is used for the simultaneous determination of the effective diffusivity and the convective mass transfer coefficient in porous solids that can be considered as an infinite cylinder during drying. Two models are used for optimization and drying simulation: model 1 (constant volume and diffusivity, with an equilibrium boundary condition) and model 2 (constant volume and diffusivity, with a convective boundary condition). Optimization algorithms based on the inverse method were coupled to the analytical solutions, and these solutions can be fitted to experimental drying-kinetics data. The optimization methodology was applied to describe the drying kinetics of whole bananas, using experimental data available in the literature. The statistical indicators make it possible to affirm that the solution of the diffusion equation with a convective boundary condition generates better results than the one with the equilibrium boundary condition.
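The inverse method can be sketched in a deliberately simplified setting: an infinite slab with an equilibrium boundary condition (the article itself treats an infinite cylinder, whose solution involves Bessel functions), where the analytical moisture ratio is a series in the diffusivity D and a grid search minimizes the squared error against (here synthetic) drying data.

```python
import math

# Hedged sketch of the inverse method on a simplified geometry: infinite
# slab, equilibrium boundary condition; all numbers are illustrative.
L_half = 0.01   # slab half-thickness, m

def moisture_ratio(D, t, terms=50):
    """Analytical series solution for the slab mean moisture ratio."""
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += (8 / math.pi**2) / k**2 * math.exp(
            -k**2 * math.pi**2 * D * t / (4 * L_half**2))
    return s

def fit_diffusivity(times, data, candidates):
    """Inverse method: pick the D minimizing the squared error."""
    def sse(D):
        return sum((moisture_ratio(D, t) - m)**2 for t, m in zip(times, data))
    return min(candidates, key=sse)

# synthetic "experimental" drying curve generated with a known D
D_true = 5e-10
times = [600 * i for i in range(1, 21)]
data = [moisture_ratio(D_true, t) for t in times]
candidates = [1e-10 + 1e-11 * i for i in range(100)]
D_fit = fit_diffusivity(times, data, candidates)
```

Since the synthetic data are generated by the same model, the grid search recovers the true diffusivity; with real data the residual reflects both noise and model error, which is how the article compares the two boundary conditions.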

Relevance: 60.00%

Abstract:

To enrich parallel bilingual corpus data, it can be worthwhile to work with so-called comparable corpora. In this type of corpus, even if the documents in the target language are not exact translations of those in the source language, one can still find words or sentences in a translation relationship. The free encyclopedia Wikipédia constitutes a multilingual comparable corpus of several million documents. Our work consists in finding a general, endogenous method to extract as many parallel sentences as possible. We work with the French-English language pair, but our method, which uses no external bilingual resource, can be applied to any other language pair. It proceeds in two steps. The first detects the article pairs most likely to contain translations, using a neural network trained on a small data set of articles aligned at the sentence level. The second step selects the sentence pairs using another neural network, whose outputs are then reinterpreted by a combinatorial optimization algorithm and an extension heuristic. Adding the roughly 560,000 sentence pairs extracted from Wikipédia to the training corpus of a baseline statistical machine translation system improves the quality of the translations produced. We make the aligned data and the extracted corpus available to the scientific community.
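The final selection step can be illustrated with a simple stand-in for the combinatorial optimization (a greedy one-to-one matching over an illustrative score matrix; the paper itself uses a dedicated algorithm plus an extension heuristic).

```python
# Hedged stand-in for the pair-selection step: given network scores
# (rows: source sentences, columns: target sentences; values are
# illustrative), greedily keep the best one-to-one pairs above a
# threshold so no sentence is used twice.
def select_pairs(scores, threshold=0.5):
    cands = sorted(
        ((s, i, j) for i, row in enumerate(scores) for j, s in enumerate(row)),
        reverse=True)
    used_i, used_j, pairs = set(), set(), []
    for s, i, j in cands:
        if s < threshold:
            break                       # remaining scores are too weak
        if i in used_i or j in used_j:
            continue                    # keep the matching one-to-one
        pairs.append((i, j, s))
        used_i.add(i)
        used_j.add(j)
    return pairs

scores = [[0.9, 0.2, 0.1],
          [0.3, 0.8, 0.6],
          [0.1, 0.7, 0.4]]
pairs = select_pairs(scores)
```

On this toy matrix the greedy pass keeps (0, 0) and (1, 1) and rejects the rest, either because a sentence is already matched or because the score falls below the threshold.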

Relevance: 60.00%

Abstract:

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
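The memory argument for decomposition is easy to make concrete: the dense quadratic form stores one kernel value per pair of training points, so storage grows quadratically with the training set size (8-byte floats assumed here for illustration).

```python
# The dense quadratic form stores one kernel value per pair of training
# points; with 8-byte floats the memory is n^2 * 8 bytes.
def kernel_matrix_bytes(n_points, bytes_per_entry=8):
    return n_points * n_points * bytes_per_entry

gb_100k = kernel_matrix_bytes(100_000) / 1e9   # gigabytes for 10^5 points
```

At 100,000 points the kernel matrix alone needs 80 GB, which is why the decomposition approach solves a sequence of small sub-problems instead of loading the full problem into memory.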

Relevance: 60.00%

Abstract:

A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods, using a new conceptual categorization for classification methods that reflects the strategy for the definition of types. Methods using predefined types include manual and threshold-based classifications, while methods producing types derived from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. In order to allow direct comparisons between the methods, the circulation input data and the methods' configuration were harmonized to produce a subset of standard catalogs of the automated methods. The harmonization covers the data source, the climatic parameters used, the classification period, as well as the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including variation of class sizes, persistence, seasonal and inter-annual variability, as well as trends of the annual frequency time series. The methodological concept of the classifications is partly reflected by these properties of the resulting catalogs. It is shown that the types of subjective classifications show higher persistence, inter-annual variation and long-term trends than those of automated methods. Among the automated classifications, optimization methods show a tendency toward longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role in the properties of the resulting classification as the algorithm used for type definition and assignment.
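One of the catalog statistics mentioned above, persistence, can be computed as the mean length of runs of consecutive days assigned the same type (the toy catalog below is illustrative, not one of the COST 733 catalogs).

```python
# Persistence of a type catalog, measured as the mean run length of
# consecutive identical types; the catalog below is an illustrative toy.
def mean_persistence(catalog):
    runs, length = [], 1
    for prev, cur in zip(catalog, catalog[1:]):
        if cur == prev:
            length += 1          # the current run continues
        else:
            runs.append(length)  # the run ends, start a new one
            length = 1
    runs.append(length)
    return sum(runs) / len(runs)

catalog = ["W", "W", "W", "NE", "NE", "C", "W", "W"]
persistence = mean_persistence(catalog)
```

This toy sequence has runs of length 3, 2, 1 and 2, so its mean persistence is 2.0 days; applied to full catalogs, the same statistic separates the more persistent subjective classifications from the automated ones.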

Relevance: 60.00%

Abstract:

The bidimensional periodic structures called frequency selective surfaces have been well investigated because of their filtering properties. Similar to filters that work at the traditional radiofrequency bands, such structures can behave as band-stop or band-pass filters, depending on the elements of the array (patch or aperture, respectively), and can be used in a variety of applications, such as radomes, dichroic reflectors, waveguide filters, artificial magnetic conductors and microwave absorbers. To provide high-performance filtering properties at microwave bands, electromagnetic engineers have investigated various types of periodic structures: reconfigurable frequency selective screens, multilayered selective filters, as well as periodic arrays printed on anisotropic dielectric substrates and composed of fractal elements. In general, there is no closed-form solution leading directly from a given desired frequency response to a corresponding device; thus, the analysis of its scattering characteristics requires the application of rigorous full-wave techniques. Besides that, due to the computational complexity of using a full-wave simulator to evaluate the frequency selective surface scattering variables, many electromagnetic engineers still use a trial-and-error process until a given design criterion is achieved. As this procedure is very laborious and human-dependent, optimization techniques are required to design practical periodic structures with desired filter specifications. Some authors have employed neural networks and natural optimization algorithms, such as genetic algorithms and particle swarm optimization, for frequency selective surface design and optimization. The objective of this work is a rigorous study of the electromagnetic behavior of periodic structures, enabling the design of efficient devices applied to the microwave band. For this, artificial neural networks are used together with natural optimization techniques, allowing the accurate and efficient investigation of various types of frequency selective surfaces in a simple and fast manner, becoming a powerful tool for the design and optimization of such structures.
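Particle swarm optimization, one of the natural optimization algorithms named above, can be sketched minimally; here it minimizes a 2-D sphere function standing in for a frequency-selective-surface cost function, which in practice would come from a full-wave solver or a trained neural-network surrogate (all parameters are illustrative).

```python
import random

# Minimal PSO sketch; the sphere function is an illustrative stand-in
# for a real FSS design cost. Standard inertia/attraction parameters.
def sphere(x):
    return sum(xi * xi for xi in x)

def pso(cost, dim=2, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(42)
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

best, best_cost = pso(sphere)
```

Swapping `sphere` for an expensive electromagnetic cost function is exactly where a neural-network surrogate pays off: the swarm then queries the cheap surrogate instead of the full-wave simulator at every evaluation.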