839 results for Multi-robot systems
Abstract:
Non-linear functional representation of the aerodynamic response provides a convenient mathematical model for motion-induced unsteady transonic aerodynamic loads, one that accounts for both complex non-linearities and time-history effects. A recent development, based on functional approximation theory, has established a novel functional form, namely the multi-layer functional. For a large class of non-linear dynamic systems, such multi-layer functional representations can be realised via finite impulse response (FIR) neural networks. Identification of an appropriate FIR neural network model is facilitated by means of a supervised training process in which a limited sample of system input-output data sets is presented to the temporal neural network. The present work describes a procedure for the systematic identification of parameterised neural network models of motion-induced unsteady transonic aerodynamic loads. The training process is based on a conventional genetic algorithm to optimise the network architecture, combined with a simplified random search algorithm to update weight and bias values. Application of the scheme to representative transonic aerodynamic loads data for a two-dimensional airfoil executing finite-amplitude motion in transonic flow is used to demonstrate the feasibility of the approach. The approach is shown to furnish satisfactory generalisation to different motion histories over a range of Mach numbers in the transonic regime.
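As a rough illustration of the kind of model described above (not the authors' implementation), the sketch below builds a small tapped-delay-line ("FIR") neural network and trains it with a simplified random search on the weights; the motion and load signals, the memory depth and the layer sizes are all invented for the example.

```python
# Minimal sketch: a tapped-delay-line ("FIR") neural network mapping the last M
# samples of an airfoil motion history to the current unsteady load, trained by
# a simplified random search. Signals and network sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

M, H = 16, 8                        # memory depth (taps) and hidden units (assumed)
t = np.linspace(0.0, 10.0, 2000)
motion = 0.05 * np.sin(2.0 * np.pi * 0.8 * t)                   # surrogate pitch history
load = np.convolve(motion, np.exp(-np.arange(M) / 4.0), mode="same")
load += 0.3 * motion ** 3                                       # surrogate non-linear load

def windows(x, m):
    """Stack the last m samples of x for every time step (tapped delay line)."""
    return np.stack([x[i - m:i] for i in range(m, len(x))])

X, y = windows(motion, M), load[M:]

def forward(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)        # hidden layer acting on the delay line
    return h @ W2 + b2              # scalar load prediction

def mse(params):
    return float(np.mean((forward(params, X) - y) ** 2))

params = [rng.normal(0, 0.1, (M, H)), np.zeros(H), rng.normal(0, 0.1, H), 0.0]
best, step = mse(params), 0.05
for _ in range(3000):               # simplified random search: keep improving moves
    trial = [p + step * rng.normal(size=np.shape(p)) for p in params]
    e = mse(trial)
    if e < best:
        params, best = trial, e
print("training MSE:", best)
```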
Abstract:
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. A large number of robot measurement systems are now available commercially, yet there is a dearth of systems that are portable, accurate and low cost. In this work a measurement system that can fill this gap in local calibration is presented. The measurement system consists of a single CCD camera with a wide-angle lens mounted on the robot tool flange, and uses space resection models, accounting for radial distortion, to measure the end-effector pose relative to a world coordinate system. Scale factors and the image center are obtained with innovative techniques that make use of a multiview approach. The target plate consists of a grid of white dots printed on black photographic paper and mounted on the sides of a 90-degree angle plate. Results show that the achieved average accuracy varies from 0.2 mm to 0.4 mm at distances to the target from 600 mm to 1000 mm, respectively, with different camera orientations.
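A minimal sketch of the space-resection step is given below, assuming OpenCV is available; the dot-grid coordinates, intrinsics and distortion coefficients are illustrative, and the paper's multiview estimation of scale factors and image centre is not reproduced here.

```python
# Minimal sketch (not the paper's exact pipeline): estimate the camera pose
# relative to a dot-grid target by space resection, with radial distortion
# handled through the distortion coefficients passed to solvePnP.
import numpy as np
import cv2

# Known 3-D coordinates of the white dots on the target plate (world frame, mm).
grid = np.array([[x, y, 0.0] for y in range(0, 90, 30) for x in range(0, 90, 30)],
                dtype=np.float64)

# Assumed camera intrinsics and distortion (k1, k2, p1, p2); placeholders only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.03, 0.0, 0.0])

# Synthesize image observations from an assumed true pose so the example runs;
# in practice these would be the detected dot centroids.
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([[-40.0], [-40.0], [800.0]])
image_pts, _ = cv2.projectPoints(grid, true_rvec, true_tvec, K, dist)

ok, rvec, tvec = cv2.solvePnP(grid, image_pts, K, dist)     # space resection
R, _ = cv2.Rodrigues(rvec)
camera_pos_world = -R.T @ tvec                              # camera (tool-flange) position
print(ok, camera_pos_world.ravel())
```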
Abstract:
The assembly and maintenance of the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV) is highly challenging, since the tasks performed by the robot involve welding, material handling, and machine cutting from inside the VV. The VV is made of stainless steel, which has poor machinability and tends to work harden very rapidly, and all the machining operations need to be carried out from inside the ITER VV. A general industrial robot cannot be used due to its poor stiffness in the heavy-duty machining process, which causes many problems, such as poor surface quality, tool damage and low accuracy. Therefore, one of the most suitable options is a lightweight mobile robot that is able to move around inside the VV and perform different machining tasks by exchanging cutting tools. Reducing the mass of the robot manipulators offers many advantages: reduced material costs, reduced power consumption, the possibility of using smaller actuators, and a higher payload-to-robot weight ratio. Offsetting these advantages, a lighter robot is more flexible, which makes it more difficult to control. To achieve good machining surface quality, the tracking of the end effector must be accurate, and an accurate model of the more flexible robot must be constructed. This thesis studies the dynamics and control of a hydraulically driven 10-degree-of-freedom (DOF) redundant hybrid robot (a 4-DOF serial mechanism and a 6-DOF 6-UPS hexapod parallel mechanism) with flexible rods under the influence of machining forces. Firstly, the flexibility of the bodies is described using the floating frame of reference formulation (FFRF). A finite element model (FEM) provided the Craig-Bampton (CB) modes needed for the FFRF. A dynamic model of the system of six closed-loop mechanisms was assembled using the constrained Lagrange equations and the Lagrange multiplier method. Subsequently, the reaction forces between the parallel and serial parts were used to study the dynamics of the serial robot. A PID control based on position predictions was implemented independently to control the hydraulic cylinders of the robot. Secondly, in machining, to achieve greater end-effector trajectory tracking accuracy and surface quality, a robust control of the actuators of the flexible links has to be derived. This thesis investigates the intelligent control of the hydraulically driven parallel part of the robot based on the dynamic model, using two schemes: (1) a fuzzy-PID self-tuning controller that combines conventional PID control with fuzzy logic, and (2) an adaptive neuro-fuzzy inference system PID (ANFIS-PID) self-tuning of the PID gains; both are implemented independently to control each hydraulic cylinder of the parallel mechanism based on rod length predictions. The serial component of the hybrid robot can be analyzed using the equilibrium of reaction forces at the universal joint connections of the hexa-element. To achieve precise positional control of the end effector for maximum precision machining, the hydraulic cylinders should be controlled to hold the hexa-element. Thirdly, a finite element approach to multibody systems using the special Euclidean group SE(3) framework is presented for a parallel mechanism with flexible piston rods under the influence of machining forces. The flexibility of the bodies is described using the nonlinear interpolation method with an exponential map.
The equations of motion take the form of a differential algebraic equation on a Lie group, which is solved using a Lie group time integration scheme. The method relies on the local description of motions, so it provides a singularity-free formulation, and no parameterization of the nodal variables needs to be introduced. The flexible slider constraint is formulated on the Lie group and used for modeling a flexible rod sliding inside a cylinder. The dynamic model of the system of six closed-loop mechanisms was assembled using Hamilton's principle and the Lagrange multiplier method. A linearized hydraulic control system based on rod length predictions was implemented independently to control the hydraulic cylinders. The results of simulations demonstrating the behavior of the robot machine are presented for each case study. In conclusion, this thesis studies the dynamic analysis of a special hybrid (serial-parallel) robot for the above-mentioned ITER task and investigates different control algorithms that can significantly improve machining performance. These analyses and results provide valuable insight into the design and control of the parallel robot with flexible rods.
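As a small illustration of the simplest control element mentioned above (an independent PID loop on one hydraulic cylinder driven by a predicted rod length), the following sketch uses an assumed first-order actuator surrogate and invented gains; it is not the thesis' flexible multibody model or its fuzzy/ANFIS schemes.

```python
# Minimal sketch: one PID loop tracking a predicted rod length on a crude
# first-order surrogate of a hydraulic cylinder. Plant, gains and time step
# are assumptions made for illustration only.
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 1e-3
pid = PID(kp=400.0, ki=50.0, kd=5.0, dt=dt)                     # assumed gains

length, velocity = 0.50, 0.0                                    # rod length [m], velocity [m/s]
target = lambda t: 0.50 + 0.02 * np.sin(2 * np.pi * 0.5 * t)    # predicted rod length (stand-in)

for k in range(5000):
    t = k * dt
    u = pid.update(target(t), length)                           # valve command (surrogate units)
    velocity += dt * (-20.0 * velocity + 0.05 * u)              # assumed actuator dynamics
    length += dt * velocity

print("final tracking error [m]:", abs(target(t) - length))
```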
Abstract:
The main objective of the present study was to design an agricultural robot whose operation is based on electricity generated by a solar panel. To achieve proper operation of the robot according to the assumed working cycle, a detailed design of the main equipment was made. By analysing possible areas of implementation together with likely developments, an economic forecast was prepared. As a result, a conclusion was reached about the feasibility of such a device working in the agricultural sector, and probable topics for further study were identified.
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense of the interaction between the human and the manipulator, and ideal position control of the interaction between the manipulator and the task environment. The proposed method has the characteristics of a universal technique independent of the actual control algorithm, and it can be applied with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested online by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A Particle Swarm Optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems; the new approach aims to avoid dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method on one of the force sensors to which it has been applied.
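The directed random search engine described above (PSO hybridized with E. coli-style foraging) might look roughly like the following sketch, in which a toy cost function stands in for the real-time simulator and the chemotaxis step is reduced to a tumble-and-swim refinement of each particle; all parameters are assumptions.

```python
# Minimal sketch: particle swarm optimization in which each particle also
# performs an E. coli-style chemotaxis step (tumble to a random direction,
# then swim while the cost keeps improving). The cost function is a toy
# stand-in for the real-time simulator that evaluates candidate controllers.
import numpy as np

rng = np.random.default_rng(1)

def cost(gains):
    """Placeholder for the real-time simulation of the hydraulic servo (assumed)."""
    kp, ki = gains
    return (kp - 3.0) ** 2 + (ki - 0.7) ** 2 + 0.1 * np.sin(5 * kp) ** 2

n, dim, iters = 12, 2, 60
x = rng.uniform(0.0, 5.0, (n, dim))          # candidate [kp, ki] pairs
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

w, c1, c2, swim_step, swim_len = 0.7, 1.5, 1.5, 0.05, 4

for _ in range(iters):
    for i in range(n):
        # Standard PSO velocity/position update.
        r1, r2 = rng.random(dim), rng.random(dim)
        v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
        x[i] += v[i]

        # Chemotaxis: tumble, then swim while the move keeps paying off.
        direction = rng.normal(size=dim)
        direction /= np.linalg.norm(direction)
        for _ in range(swim_len):
            trial = x[i] + swim_step * direction
            if cost(trial) < cost(x[i]):
                x[i] = trial
            else:
                break

        f = cost(x[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = x[i].copy(), f
    gbest = pbest[np.argmin(pbest_f)].copy()

print("tuned gains:", gbest, "cost:", cost(gbest))
```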
Abstract:
Crystal properties, product quality and particle size are determined by the operating conditions in the crystallization process. Thus, in order to obtain desired end-products, the crystallization process should be effectively controlled based on reliable kinetic information, which can be provided by powerful analytical tools such as Raman spectrometry and thermal analysis. The present research work studied various crystallization processes such as reactive crystallization, precipitation with anti-solvent and evaporation crystallization. The goal of the work was to understand more comprehensively the fundamentals, phenomena and applications of crystallization, and to establish proper methods to control particle size distribution, especially for three-phase gas-liquid-solid crystallization systems. As part of the solid-liquid equilibrium studies in this work, prediction of KCl solubility in a MgCl2-KCl-H2O system was studied theoretically. Additionally, a solubility prediction model based on the Pitzer thermodynamic model was investigated using solubility measurements of potassium dihydrogen phosphate in the presence of non-electronic organic substances in aqueous solutions. The prediction model helps to extend literature data and offers an easy and economical way to choose a solvent for anti-solvent precipitation. Using experimental and modern analytical methods, precipitation kinetics and mass transfer in reactive crystallization of magnesium carbonate hydrates with magnesium hydroxide slurry and CO2 gas were systematically investigated. The obtained results gave deeper insight into gas-liquid-solid interactions and the mechanisms of this heterogeneous crystallization process. The research approach developed can provide theoretical guidance and act as a useful reference to promote the development of gas-liquid reactive crystallization. Gas-liquid mass transfer of absorption in the presence of solid particles in a stirred tank was investigated in order to gain understanding of how different-sized particles interact with gas bubbles. Based on the obtained volumetric mass transfer coefficient values, it was found that the influence of the presence of small particles on gas-liquid mass transfer cannot be ignored, since there are interactions between bubbles and particles. Raman spectrometry was successfully applied for liquid and solids analysis in semi-batch anti-solvent precipitation and evaporation crystallization. Real-time information such as supersaturation, formation of precipitates and identification of crystal polymorphs could be obtained by Raman spectrometry. The solubility prediction models, the monitoring methods for precipitation and the empirical model for absorption developed in this study, together with the methodologies used, give valuable information for industrial crystallization. Furthermore, Raman analysis was seen to be a potential control method for various crystallization processes.
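As one small, concrete example of the mass-transfer analysis mentioned above, the sketch below fits a volumetric mass transfer coefficient kLa to a synthetic dissolved-gas record using the standard dynamic model dC/dt = kLa (C* - C); the saturation concentration and noise level are assumed, not data from the study.

```python
# Minimal sketch: estimate kLa from a (synthetic) dissolved-gas concentration
# record by linearizing the dynamic absorption model
#   dC/dt = kLa (C* - C)  =>  ln((C* - C0)/(C* - C)) = kLa * t
# All numbers below are assumptions for illustration.
import numpy as np

kla_true, c_sat = 0.015, 1.2e-3              # 1/s, mol/L (assumed)
t = np.linspace(0.0, 200.0, 50)              # s
c = c_sat * (1.0 - np.exp(-kla_true * t))    # synthetic concentration record
c += np.random.default_rng(2).normal(0.0, 5e-6, t.size)

y = np.log((c_sat - c[0]) / (c_sat - c))     # linearized response
kla_fit = np.polyfit(t, y, 1)[0]             # slope = kLa
print(f"fitted kLa = {kla_fit:.4f} 1/s (true {kla_true} 1/s)")
```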
Abstract:
Many, if not all, aspects of our everyday lives are related to computers and control, and microprocessors and wireless communications are involved throughout. Embedded systems are an attractive field because they combine three key factors: small size, low power consumption and high computing capability. The aim of this thesis is to study how Linux communicates with the hardware, to answer the question of whether it is possible to use an operating system like Debian for embedded systems and, finally, to build a mechatronic real-time application. The thesis presents Linux and the Xenomai real-time patch, and analyzes the bootloader and communication with the hardware. The BeagleBone evaluation board is presented along with the application project, which consists of a robot cart with a driver circuit, a line sensor reading a black line, and two XBee antennas; the application makes use of Xenomai threads and the real-time kernel. According to the obtained results, Linux is able to operate as a real-time operating system. Future research in the area of embedded Linux is also discussed.
Abstract:
This thesis examines the role of space in the organization and dynamics of multi-species ecological communities. Two shortcomings can be identified in current theoretical studies of the spatial dimension of ecological communities: the scarcity of multi-species models that represent the spatial dimension explicitly, and the lack of attention paid to positive interactions, such as mutualism, despite the recognition of their ubiquity in ecological systems. This thesis explores this problem of community ecology using a theoretical approach inspired by complex systems theory and statistical mechanics. Under this approach, species communities are regarded as complex systems whose global properties emerge from the local interactions among the organisms that compose them and from the local interactions between these organisms and their environment. The first objective of this thesis is to develop an explicitly spatial, individual-based, multi-species metacommunity model built on a network of general interspecific interactions comprising exploitation, competition and mutualism. In this model, local communities are formed by a process of species assembly from a regional pool. Population growth is limited by a carrying capacity, and population dynamics evolve through simple mechanisms of reproduction and dispersal of individuals. These mechanisms depend on the biotic and abiotic conditions of the local communities, and their effect varies with species, time and space. Second, this thesis aims to determine the impact of increasing spatial connectivity on the spatiotemporal dynamics and on the structural and functional properties of this metacommunity. More precisely, we evaluate several community properties as a function of the species dispersal level: (i) the similarity in the composition of local communities and its spatial correlation patterns; (ii) local and regional biodiversity, and the local species-abundance distribution; (iii) biomass, productivity and dynamical stability at the local and regional scales; and (iv) the local structure of species interactions. These properties are examined under two spatial schemes: first a homogeneous environment, and then a heterogeneous environment in which the carrying capacity of local communities varies along a gradient. Overall, our results reveal that spatially distributed ecological communities are extremely sensitive to the modes and levels of organism dispersal. Their spatiotemporal dynamics and their structural and functional properties can undergo profound changes, in the form of significant transitions, following a small variation in the dispersal level. These changes also appear through the emergence, in the spatial distribution of populations, of spatiotemporal patterns typical of the phase transitions generally observed in physical systems. The metacommunity dynamics exhibit two regimes.
In the first regime, corresponding to low levels of species dispersal, assembly dynamics favour the emergence of stable communities of low diversity formed of abundant, strongly mutualistic species. The metacommunity has high regional diversity, since the local communities are weakly connected and their composition therefore remains distinct. In the second regime, corresponding to high dispersal levels, regional diversity decreases in favour of an increase in local diversity. Local communities are more productive, but their dynamical stability is reduced owing to substantial migration of individuals. This regime is also characterized by assemblages that include a greater diversity of interspecific interactions. These results suggest that increasing the dispersal level of organisms couples local communities together, which enhances local coexistence and favours the formation of richer and more complex ecological communities. Finally, our study suggests that mutualism is fundamental to the organization and maintenance of ecological communities. Mutualistic species dominate in habitats characterized by a restricted carrying capacity and act as ecological engineers by facilitating the establishment of competitors, predators and opportunists that benefit from their presence.
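A highly simplified, purely illustrative sketch of the dispersal mechanism is given below: a lattice of local communities with a fixed carrying capacity in which offspring occasionally come from a neighbouring patch. It omits the interaction network (exploitation, competition, mutualism) that is central to the thesis model and only shows how increasing dispersal couples local communities.

```python
# Toy neutral metacommunity on a lattice: at each step a random individual is
# replaced by the offspring of a parent drawn locally or, with probability d,
# from a neighbouring patch. Grid size, carrying capacity and species pool are
# assumptions; this is not the thesis model, only an illustration of dispersal.
import numpy as np

rng = np.random.default_rng(3)
L, K, S, steps = 8, 50, 20, 100_000      # grid size, carrying capacity, species pool
neigh = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def run(d):
    comm = rng.integers(0, S, (L, L, K))  # species identity of each individual
    for _ in range(steps):
        i, j, slot = rng.integers(L), rng.integers(L), rng.integers(K)
        if rng.random() < d:              # parent drawn from a random neighbour patch
            di, dj = neigh[rng.integers(4)]
            pi, pj = (i + di) % L, (j + dj) % L
        else:                             # parent drawn from the same patch
            pi, pj = i, j
        comm[i, j, slot] = comm[pi, pj, rng.integers(K)]
    alpha = np.mean([len(np.unique(comm[i, j])) for i in range(L) for j in range(L)])
    gamma = len(np.unique(comm))
    return alpha, gamma

for d in (0.01, 0.1, 0.5):
    a, g = run(d)
    print(f"dispersal={d:4.2f}  mean local richness={a:5.1f}  regional richness={g}")
```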
Abstract:
Ethical debates on architecture have traditionally addressed three recurring themes: the beauty, the solidity and the utility of the architectural work. More recently, new knowledge from the fields of project management and sustainable development has made important contributions to the understanding of project governance. However, the process of delivering architectural projects is shaped by the characteristics of the building industry, an industry that operates through temporary teams formed of highly specialized organizations. A systemic analysis of case studies makes it possible to identify the complexity of the teams involved in architectural projects. In this article we examine three characteristics of the building industry: (i) the organizational complexity of the client, (ii) the influence of stakeholders, and (iii) the varying degrees of proximity between the architect and the users. Identifying the various organizational configurations highlights the effects of these characteristics on the formal and informal relationships between the architect and the clients, as well as among all the stakeholders. The architect is compelled to work on a project that increasingly becomes an object of negotiation among the various stakeholders. Faced with this challenge, the architect must take into account the complexity of the relationships among all the actors within the project's social system and create scenarios suited to participation, negotiation and exchange among them.
Abstract:
Software systems have become increasingly widespread and important in our society, so there is a constant need for high-quality software. One of the most widely used techniques for improving software quality is refactoring, which improves the structure of a program while preserving its external behaviour. When applied properly, refactoring promises to improve the understandability, maintainability and extensibility of software while improving programmer productivity. In general, refactoring can be applied at the specification, design or code level. This thesis addresses the automation of a refactoring recommendation process at the code level, applied in two main steps: 1) detecting the code fragments that should be improved (e.g., design defects), and 2) identifying the refactoring solutions to apply. For the first step, we exploit regularities that can be found in examples of design defects, using a genetic algorithm to automatically generate detection rules from those examples. For the second step, we introduce an approach based on heuristic search. The process consists of finding the optimal sequence of refactoring operations that improves software quality by minimizing the number of defects while prioritizing the most critical instances. In addition, we explore other objectives to optimize: the number of changes required to apply the refactoring solution, semantics preservation, and consistency with the change history. Reducing the number of changes keeps the solution as close as possible to the initial design, semantics preservation ensures that the restructured program remains semantically coherent, and the change history is used to suggest new refactorings in similar contexts. Furthermore, we introduce a multi-objective approach to improve software quality attributes (flexibility, maintainability, etc.) and correct "bad" design practices (design defects) while introducing "good" design practices (design patterns).
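As a rough sketch of the first step (generating detection rules from defect examples with a genetic algorithm), the fragment below evolves threshold rules over a few invented code metrics and scores them by F1 on a small labelled set; the metrics, examples and GA settings are all illustrative, not the thesis' actual setup.

```python
# Minimal sketch: a genetic algorithm evolving detection rules (conjunctions of
# metric thresholds) from labelled examples of design defects. All metrics,
# example classes and GA parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)

# Each class: (lines_of_code, number_of_methods, coupling); label 1 = design defect.
X = np.array([[1200, 45, 30], [900, 38, 25], [150, 8, 4], [300, 12, 6],
              [1500, 60, 40], [200, 10, 3], [100, 5, 2], [800, 30, 20]])
y = np.array([1, 1, 0, 0, 1, 0, 0, 1])

def predict(rule, X):
    """A rule is a vector of thresholds: defect iff every metric exceeds its threshold."""
    return np.all(X > rule, axis=1).astype(int)

def f1(rule):
    p = predict(rule, X)
    tp = np.sum((p == 1) & (y == 1))
    fp = np.sum((p == 1) & (y == 0))
    fn = np.sum((p == 0) & (y == 1))
    return 0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn)

pop = rng.uniform([0, 0, 0], [2000, 80, 50], size=(30, 3))     # random threshold rules
for gen in range(100):
    fit = np.array([f1(r) for r in pop])
    parents = pop[np.argsort(fit)[-10:]]                       # keep the 10 best rules
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, 3)
        child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
        child += rng.normal(0, [50, 3, 2])                     # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=f1)
print("best rule thresholds (LOC, methods, coupling):", np.round(best, 1), "F1:", f1(best))
```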
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading due to multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, given as convex functions, for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation that minimizes the FER or BER of a given coded OFDM system and channel response under a constant total transmission power constraint is obtained.
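The convex power-optimization step might be sketched as follows, using a simple exponential per-subcarrier error bound as a stand-in for the paper's FER/BER bounds and illustrative channel gains; SciPy's SLSQP solver handles the total-power constraint.

```python
# Minimal sketch of convex power allocation over OFDM subcarriers. The bound
# below (sum of exponentials) is a generic convex surrogate, not the paper's
# derived FER/BER expressions; channel gains and the power budget are assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
N = 16                                   # number of OFDM subcarriers (assumed)
g = rng.exponential(1.0, N)              # per-subcarrier channel power gains
P_total = float(N)                       # total transmit power budget

def bound(p):
    """Convex surrogate for the error bound: sum_k exp(-g_k * p_k / 2)."""
    return np.sum(np.exp(-g * p / 2.0))

res = minimize(bound,
               x0=np.full(N, P_total / N),                      # start from uniform power
               method="SLSQP",
               bounds=[(0.0, None)] * N,
               constraints=[{"type": "eq",
                             "fun": lambda p: np.sum(p) - P_total}])

print("optimized bound:", res.fun,
      " uniform-power bound:", bound(np.full(N, P_total / N)))
```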
Abstract:
A new localization approach to increase the navigational capabilities and object manipulation of autonomous mobile robots, based on an encoded infrared sheet-of-light beacon system that provides position errors smaller than 0.02 m, is presented in this paper. To achieve this minimal position error, a resolution enhancement technique has been developed that utilises built-in odometric/optical-flow sensor information. The system respects strong low-cost constraints by using an innovative assembly for the digitally encoded infrared transmitter. For better guidance of mobile robot vehicles, an online traffic signalling capability is also incorporated. Other added features are its low computational complexity and online localization capability, all without any estimation uncertainty. The constructional details, experimental results and computational methodologies of the system are also described.
Abstract:
A Multi-Objective Antenna Placement Genetic Algorithm (MO-APGA) has been proposed for the synthesis of matched antenna arrays on complex platforms. The total number of antennas required, their positions on the platform, the locations of loads, the loading circuit parameters, the decoupling and matching network topology, the matching network parameters and the feed network parameters are optimized simultaneously. The optimization goal was to provide a given minimum gain, a specific gain discrimination between the main and back lobes, and broadband performance. The algorithm is developed based on the non-dominated sorting genetic algorithm (NSGA-II) and the Minimum Spanning Tree (MST) technique for producing diverse solutions when the number of objectives is increased beyond two. The proposed method is validated through the design of a wideband airborne SAR.
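The core of NSGA-II, on which MO-APGA builds, is fast non-dominated sorting; the sketch below implements that step for an arbitrary objective matrix, with random placeholder values standing in for objectives such as gain, back-lobe discrimination and bandwidth.

```python
# Minimal sketch of fast non-dominated sorting (the ranking step of NSGA-II).
# Objective values are random placeholders; minimization is assumed.
import numpy as np

def dominates(a, b):
    """True if solution a is no worse than b in every objective and strictly better in one."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(F):
    """Return the list of Pareto fronts (as index lists) for objective matrix F."""
    n = len(F)
    S = [[] for _ in range(n)]          # indices dominated by i
    counts = np.zeros(n, dtype=int)     # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(F[i], F[j]):
                S[i].append(j)
            elif dominates(F[j], F[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

F = np.random.default_rng(6).random((12, 3))   # 12 candidate designs, 3 objectives
for rank, front in enumerate(non_dominated_sort(F)):
    print("front", rank, "->", front)
```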
Abstract:
This report gives a detailed discussion of the system, algorithms, and techniques that we applied to solve the Web Service Challenges (WSC) of 2006 and 2007. These international contests focus on semantic web service composition. In each challenge, a repository of web services is given. The input and output parameters of the services in the repository are annotated with semantic concepts. A query to a semantic composition engine contains a set of available input concepts and a set of wanted output concepts. In order to employ an offered service for a requested role, the concepts of the input parameters of the offered operations must be more general than requested (contravariance). In contrast, the concepts of the output parameters of the offered service must be more specific than requested (covariance). The engine should respond to a query by providing a valid composition as fast as possible. We discuss three different methods for web service composition: an uninformed search in the form of an IDDFS algorithm, a greedy informed search based on heuristic functions, and a multi-objective genetic algorithm.
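A minimal sketch of the uninformed IDDFS composer is shown below, with concept subsumption simplified to exact matching (the real engines also handle more-general input and more-specific output concepts); the toy repository and query are invented.

```python
# Minimal sketch: iterative-deepening DFS over service compositions. A service
# is applicable once all its input concepts are known; the goal is reached when
# the wanted concepts are all known. Subsumption is reduced to exact matching.
from typing import Dict, FrozenSet, List, Optional, Set, Tuple

# service name -> (required input concepts, provided output concepts); toy data
REPO: Dict[str, Tuple[Set[str], Set[str]]] = {
    "GeoCoder":   ({"Address"}, {"Coordinates"}),
    "WeatherSvc": ({"Coordinates"}, {"Forecast"}),
    "RouteSvc":   ({"Coordinates", "Coordinates2"}, {"Route"}),
}

def depth_limited(known: FrozenSet[str], wanted: Set[str], limit: int,
                  plan: List[str]) -> Optional[List[str]]:
    if wanted <= known:
        return plan
    if limit == 0:
        return None
    for name, (inputs, outputs) in REPO.items():
        if name not in plan and inputs <= known and not outputs <= known:
            result = depth_limited(known | outputs, wanted, limit - 1, plan + [name])
            if result is not None:
                return result
    return None

def iddfs_compose(available: Set[str], wanted: Set[str]) -> Optional[List[str]]:
    """Iteratively deepen the depth limit until a valid composition is found."""
    for limit in range(len(REPO) + 1):
        result = depth_limited(frozenset(available), wanted, limit, [])
        if result is not None:
            return result
    return None

print(iddfs_compose({"Address"}, {"Forecast"}))   # -> ['GeoCoder', 'WeatherSvc']
```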
Abstract:
Conceptual Information Systems provide a multi-dimensional, conceptually structured view of data stored in relational databases. By restricting the expressiveness of the retrieval language, they allow the visualization of sets of related queries in conceptual hierarchies, hence supporting the search for something of which one has no precise description, but only a vague idea. Information Retrieval is considered as the process of finding specific objects (documents etc.) out of a large set of objects which fit some description. In some data analysis and knowledge discovery applications, the dual task is of interest: the analyst needs to determine, for a subset of objects, a description of this subset. In this paper we discuss how Conceptual Information Systems can be extended to support this second task as well.
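In formal-concept-analysis terms, the dual task amounts to applying the derivation operator to a set of objects to obtain their common attributes; the toy context below is invented and the functions only illustrate the idea, not the authors' system.

```python
# Minimal sketch: derivation operators on a small formal context. intent() maps
# a subset of objects to their shared attributes (the dual, descriptive task);
# extent() is the usual retrieval direction. The context is invented.
from typing import Dict, Set

# object -> attributes it has (a small formal context)
context: Dict[str, Set[str]] = {
    "doc1": {"database", "retrieval", "conceptual"},
    "doc2": {"database", "visualization"},
    "doc3": {"database", "retrieval"},
    "doc4": {"analysis", "visualization"},
}

def intent(objects: Set[str]) -> Set[str]:
    """Attributes common to all given objects (their shared description)."""
    attr_sets = [context[o] for o in objects]
    return set.intersection(*attr_sets) if attr_sets else set()

def extent(attributes: Set[str]) -> Set[str]:
    """Objects that have all the given attributes (the usual retrieval direction)."""
    return {o for o, attrs in context.items() if attributes <= attrs}

subset = {"doc1", "doc3"}
description = intent(subset)
print("description of", subset, "->", description)            # {'database', 'retrieval'}
print("objects matching that description ->", extent(description))
```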