965 results for Optimization techniques


Relevance: 60.00%

Abstract:

This paper addresses the problem of parameterizing Biologically Inspired Optimization Techniques (BIT), an issue of particular importance in the design of BIT for real-world situations subject to external perturbations. A learning module is proposed that enables a Multi-Agent Scheduling System to automatically select a meta-heuristic and its parameterization for the optimization process. Case-based Reasoning is used for the learning process, allowing the system to learn from experience in the resolution of similar problems. Analysis of the obtained results demonstrates the advantages of this approach.
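As an illustrative sketch of the case-based reasoning step, the selector below retrieves the stored case whose problem features are nearest to the new problem and reuses its meta-heuristic and parameterization. The feature vector, case format, and parameter names are assumptions for illustration, not the paper's implementation.

import math

# Hypothetical case base: problem features -> meta-heuristic and parameters.
case_base = [
    {"features": [50, 0.8, 3], "metaheuristic": "ACO", "params": {"ants": 40}},
    {"features": [200, 0.2, 7], "metaheuristic": "PSO", "params": {"swarm": 60}},
]

def retrieve(new_features):
    """Nearest-neighbour retrieval over problem features."""
    best = min(case_base, key=lambda case: math.dist(case["features"], new_features))
    return best["metaheuristic"], best["params"]

print(retrieve([180, 0.3, 6]))  # -> ('PSO', {'swarm': 60})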

Relevance: 60.00%

Abstract:

Several projects in the recent past have aimed at promoting Wireless Sensor Networks as an infrastructure technology in which several independent users can submit applications that execute concurrently across the network. Multiple concurrent applications cause significant energy-usage overhead on sensor nodes that cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions together by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
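A minimal sketch of the second technique, packet concatenation: pending packets from independent applications are queued and flushed as a single radio transmission at harmonized release points. The queue layout and the radio_send callback are illustrative assumptions; the real Rate-Harmonized Scheduling analysis determines when flush_radio runs.

pending = []  # packets from all applications awaiting transmission

def enqueue(app_id, payload):
    pending.append((app_id, payload))

def flush_radio(radio_send):
    """Called once per harmonized period: one radio wake-up, many packets."""
    if pending:
        frame = b"".join(payload for _, payload in pending)
        radio_send(frame)  # hypothetical radio driver call
        pending.clear()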

Relevance: 60.00%

Abstract:

Non-technical loss is neither a problem with a trivial solution nor one of merely regional character, and its minimization guarantees investments in product quality and maintenance of power systems, introduced by the competitive environment that followed the period of privatization on the national scene. In this paper, we show how to improve the training phase of a neural network-based classifier using a recently proposed meta-heuristic technique called Charged System Search, which is based on the interactions between electrically charged particles. The experiments were carried out in the context of non-technical loss in power distribution systems, on a dataset obtained from a Brazilian electrical power company, and demonstrated the robustness of the proposed technique against several other nature-inspired optimization techniques for training neural networks. It thus becomes possible to improve some applications on Smart Grids.
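A heavily simplified sketch of one Charged System Search step, in which each candidate weight vector is a charged particle whose charge grows with its fitness, and better particles attract the rest. The constants and particle representation are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def css_step(positions, velocities, fitness, lr=0.5, damping=0.9):
    """positions, velocities: (n_particles, n_weights); fitness: lower is better."""
    f = np.asarray(fitness, dtype=float)
    worst, best = f.max(), f.min()
    charge = (worst - f) / (worst - best + 1e-12)  # best particle gets charge 1
    accel = np.zeros_like(positions)
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i != j:
                r = positions[j] - positions[i]
                dist = np.linalg.norm(r) + 1e-12
                accel[i] += charge[j] * r / dist**2  # Coulomb-like attraction
    velocities = damping * velocities + lr * accel
    return positions + velocities, velocities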

Relevance: 60.00%

Abstract:

The management of resources (equipment, work crews, and so on) should be taken into account when designing any feasible plan for the service network design problem. However, research addressing both resource management and service network design remains limited. This thesis aims to fill that gap by examining service network design problems that take resource management into account. To this end, the thesis comprises three studies on network design. The first study considers the capacitated multi-commodity fixed cost network design problem with design-balance constraints (DBCMND). The multi-commodity structure with arc capacities of the DBCMND, together with its design-balance constraints, makes it appear as a subproblem in many problems related to service network design, hence the interest in studying the DBCMND in the context of this thesis. We propose a new approach for solving this problem that combines tabu search, path relinking, and a procedure that intensifies the search in a particular region of the solution space. First, tabu search identifies good feasible solutions. Path relinking is then used to increase the number of feasible solutions. The solutions found by these two meta-heuristics identify a subset of arcs that are likely to have an open or closed status in an optimal solution. The status of these arcs is then fixed according to the value that predominates in the solutions found previously. Finally, we use the power of a mixed-integer programming solver to intensify the search on the problem restricted by the fixed open/closed status of certain arcs. Tests show that this approach is able to find good solutions to large instances within reasonable times. This research is published in the Journal of Heuristics. The second study introduces resource management into service network design by explicitly taking into account the finite number of vehicles used at each terminal for the transportation of commodities. A solution approach is proposed that combines slope scaling, column generation, and heuristics based on a cycle-based formulation. Column generation solves a linear relaxation of the network design problem, generating columns that are then used by the slope-scaling procedure. Slope scaling solves a linear approximation of the network design problem, so a heuristic is used to convert the solutions obtained by slope scaling into feasible solutions for the original problem. The algorithm ends with a perturbation procedure that improves the feasible solutions. Tests show that the proposed algorithm is able to find good solutions to the service network design problem with a fixed number of resources at each terminal. The results of this research will be published in Transportation Science. The third study broadens our consideration of resource management by taking into account the purchase or rental of new resources as well as the repositioning of existing ones.
We make the following assumptions: one unit of resource is required to operate a service, each resource must return to its home terminal, there is a fixed number of resources at each terminal, and the length of a resource's circuit is limited. We consider the following alternatives in resource management: 1) repositioning resources between terminals to account for changes in demand, 2) purchasing and/or renting new resources and distributing them to different terminals, 3) outsourcing certain services. We present an integrated formulation combining resource-management decisions with service network design decisions. We also present a matheuristic solution method combining slope scaling and column generation. We discuss the performance of this solution method and analyze the impact of different resource-management decisions in the context of service network design. This study will be presented at the XII International Symposium on Locational Decisions, held in conjunction with the XXI Meeting of the EURO Working Group on Locational Analysis, Naples/Capri (Italy), 2014. In summary, three different studies are considered in this thesis. The first concerns a new solution method for the capacitated multi-commodity fixed cost network design problem with design-balance constraints. We propose a matheuristic comprising tabu search, path relinking, and exact optimization. In the second study, we present a new service network design model that takes into account a finite number of resources at each terminal. We propose an advanced matheuristic based on the cycle-based formulation, comprising slope scaling, column generation, heuristics, and exact optimization. Finally, we study resource allocation in service network design by introducing formulations that model the repositioning, acquisition, and rental of resources, and the outsourcing of certain services. In this respect, a slope-scaling solution framework developed from a cycle-based formulation is proposed; it incorporates column generation and a heuristic. The methods proposed in these three studies have shown their ability to find good solutions.
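To make the arc-fixing idea from the first study concrete, here is a minimal Python sketch of the consensus step, assuming elite solutions are represented as arc-to-status dictionaries. The data structures and the hand-off to the restricted MIP are illustrative, not the thesis implementation.

def consensus_fixing(elite_solutions, threshold=0.9):
    """elite_solutions: list of dicts mapping arc -> 0/1 (closed/open)."""
    fixed = {}
    for arc in elite_solutions[0]:
        share_open = sum(sol[arc] for sol in elite_solutions) / len(elite_solutions)
        if share_open >= threshold:
            fixed[arc] = 1  # open in (nearly) every elite solution
        elif share_open <= 1 - threshold:
            fixed[arc] = 0  # closed in (nearly) every elite solution
    return fixed  # the remaining arcs stay free in the restricted MIP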

Relevance: 60.00%

Abstract:

Deep learning is a rapidly growing research area in machine learning that has achieved impressive results in tasks ranging from image classification to speech recognition and language modeling. Recurrent neural networks, a subclass of deep architectures, are particularly promising. Recurrent networks can capture the temporal structure in data. They potentially have the capacity to learn correlations between events that are far apart in time and to store information indefinitely in their internal memory. In this work, we first attempt to understand why depth is useful. Like other works in the literature, our results show that deep models can be more efficient than shallow models at representing certain families of functions. Unlike those works, we carry out our theoretical analysis on deep feedforward networks with piecewise-linear activation functions, since this type of model is currently the state of the art in various classification tasks. The second part of this thesis concerns the learning process. We analyze several recently proposed optimization techniques, such as Hessian-free optimization, natural gradient descent, and Krylov subspace descent. We propose the theoretical framework of generalized trust-region methods and show that several of these recently developed algorithms can be viewed from this perspective. We argue that some members of this family of approaches may be better suited than others to non-convex optimization. The last part of this document focuses on recurrent neural networks. We first study the concept of memory and attempt to answer the following questions: Can recurrent networks exhibit unbounded memory? Can this behavior be learned? We show that this is possible if hints are provided during training. We then explore two problems specific to training recurrent networks, namely the vanishing and the exploding gradient. Our analysis ends with a solution to the exploding-gradient problem that involves bounding the norm of the gradient. We also propose a regularization term designed specifically to reduce the vanishing-gradient problem. On a synthetic dataset, we show empirically that these mechanisms can allow recurrent networks to learn autonomously to store information for an indefinite period of time. Finally, we explore the notion of depth in recurrent neural networks. Compared with feedforward networks, the definition of depth in recurrent networks is often ambiguous. We propose different ways of adding depth to recurrent networks and evaluate these proposals empirically.
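The exploding-gradient remedy described above amounts to rescaling the gradient whenever its norm exceeds a threshold. A minimal sketch (the array representation and threshold value are illustrative):

import numpy as np

def clip_gradient_norm(grads, max_norm=1.0):
    """Rescale a list of gradient arrays so their global L2 norm
    does not exceed max_norm; smaller gradients pass unchanged."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads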

Relevance: 60.00%

Abstract:

The liver is a vital organ with an exceptional capacity for regeneration and a crucial role in the functioning of the organism. Assessment of liver volume is an important tool that can be used as a biological marker of the severity of hepatic diseases. Liver volumetry is indicated before major hepatectomies, portal vein embolization, and transplantation. The most widespread method, based on computed tomography (CT) and magnetic resonance imaging (MRI) examinations, consists of delineating the liver contour on several consecutive slices, a process called "segmentation". We present the design and validation strategy for a semi-automated segmentation method developed at our institution. Our method is a model-based approach using variational shape interpolation together with Laplacian mesh optimization. The method was designed to be compatible with both CT and MRI. We evaluated the repeatability, reliability, and efficiency of our semi-automated segmentation method in two retrospectively designed cross-sectional studies. The results of our validation studies suggest that the segmentation method offers reliability and repeatability comparable to manual segmentation. Moreover, this method significantly reduces interaction time, making it suitable for routine clinical practice. Future studies could incorporate volumetry to determine volume-based biological markers of hepatic disease, such as the presence of steatosis or iron, or the measurement of fibrosis per unit of volume.
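Laplacian mesh optimization, in its simplest "umbrella" form, pulls each vertex toward the centroid of its neighbours. The sketch below shows that basic operation only and is not the segmentation method itself; the vertex array and neighbour map are assumptions for illustration.

import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """vertices: (n, 3) array; neighbors: dict mapping vertex index
    to a list of adjacent vertex indices."""
    v = vertices.copy()
    for _ in range(iterations):
        new_v = v.copy()
        for i, nbrs in neighbors.items():
            centroid = v[nbrs].mean(axis=0)
            new_v[i] = v[i] + lam * (centroid - v[i])  # umbrella operator
        v = new_v
    return v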

Relevance: 60.00%

Abstract:

Research on transition-metal nanoalloy clusters composed of a few atoms is fascinating because of their unusual properties, which arise from the interplay among structure, chemical order, and magnetism. Such nanoalloy clusters can be used to construct nanometer devices for technological applications by manipulating their remarkable magnetic, chemical, and optical properties. Determining the nanoscopic features exhibited by magnetic alloy clusters requires a systematic global and local exploration of their potential-energy surface in order to identify all the relevant energetically low-lying magnetic isomers. In this thesis the sampling of the potential-energy surface has been performed by employing state-of-the-art spin-polarized density-functional theory in combination with graph theory and basin-hopping global optimization techniques. This combination is vital for a quantitative analysis of the quantum-mechanical energetics. The first approach, i.e., spin-polarized density-functional theory together with the graph theory method, is applied to study Fe$_m$Rh$_n$ and Co$_m$Pd$_n$ clusters having $N = m+n \leq 8$ atoms. We carried out a thorough and systematic sampling of the potential-energy surface by taking into account all possible initial cluster topologies, all different distributions of the two kinds of atoms within the cluster, the entire concentration range between the pure limits, and different initial magnetic configurations such as ferro- and antiferromagnetic coupling. The remarkable magnetic properties shown by FeRh and CoPd nanoclusters are attributed to the extremely reduced coordination number together with the charge transfer from 3$d$ to 4$d$ elements. The second approach, i.e., spin-polarized density-functional theory together with the basin-hopping method, is applied to study the small Fe$_6$, Fe$_3$Rh$_3$ and Rh$_6$ and the larger Fe$_{13}$, Fe$_6$Rh$_7$ and Rh$_{13}$ clusters as illustrative benchmark systems. This method is able to identify the true ground-state structures of Fe$_6$ and Fe$_3$Rh$_3$, which were not obtained using the first approach. However, both approaches predict a similar cluster for the ground state of Rh$_6$. Moreover, the computational time taken by this approach is found to be significantly lower than that of the first approach. The ground-state structure of the Fe$_{13}$ cluster is found to be icosahedral, whereas the Rh$_{13}$ and Fe$_6$Rh$_7$ isomers relax into cage-like and layered-like structures, respectively. All the clusters display a remarkable variety of structural and magnetic behaviors. It is observed that isomers having similar shapes with small distortions with respect to each other can exhibit quite different magnetic moments. This has been interpreted as a probable artifact of the spin-rotational symmetry breaking introduced by the spin-polarized GGA. Combining spin-polarized density-functional theory with other global optimization techniques, such as the minima-hopping method, could be the next step in this direction. Such a combination is expected to be an ideal sampling approach with the advantage of efficiently avoiding the search over irrelevant regions of the potential-energy surface.
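Basin hopping alternates random perturbations with local relaxations, keeping track of the lowest minimum found. A toy sketch using SciPy, with a simple rugged test function standing in for the cluster potential-energy surface (the thesis couples the sampler to spin-polarized DFT energetics instead):

import numpy as np
from scipy.optimize import basinhopping

def toy_energy(x):
    """Stand-in for a potential-energy surface: many local minima in 1D."""
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

result = basinhopping(
    toy_energy,
    x0=[1.0],                                 # initial 'geometry' (toy)
    niter=200,                                # number of hopping steps
    minimizer_kwargs={"method": "L-BFGS-B"},  # local relaxation at each hop
)
print(result.x, result.fun)  # lowest minimum found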

Relevance: 60.00%

Abstract:

The traditional task of a central bank is to preserve price stability and, in doing so, not to impair the real economy more than necessary. To meet this challenge, it is of great relevance whether inflation is driven only by inflation expectations and the current output gap or whether it is, in addition, influenced by past inflation. In the former case, as described by the New Keynesian Phillips curve, the central bank can immediately and simultaneously achieve price stability and equilibrium output, the so-called ‘divine coincidence’ (Blanchard and Galí 2007). In the latter case, the achievement of price stability is costly in terms of output and will be pursued over several periods. Similarly, it is important to distinguish this latter case, which describes ‘intrinsic’ inflation persistence, from that of ‘extrinsic’ inflation persistence, where the sluggishness of inflation is not a ‘structural’ feature of the economy but merely ‘inherited’ from the sluggishness of the other driving forces, inflation expectations and output. ‘Extrinsic’ inflation persistence is usually considered the less challenging case, as policy-makers are supposed to fight the persistence in the driving forces, especially to reduce the stickiness of inflation expectations through a credible monetary policy, in order to re-establish the ‘divine coincidence’. The scope of this dissertation is to contribute to the vast literature and ongoing discussion on inflation persistence. Chapter 1 describes the policy consequences of inflation persistence and summarizes the empirical and theoretical literature. Chapter 2 compares two models of staggered price setting, one with a fixed two-period duration and the other with a stochastic duration of prices. I show that in an economy with a timeless optimizing central bank, the model with two-period alternating price setting (for most parameter values) leads to more persistent inflation than the model with stochastic price duration. This result amends earlier work by Kiley (2002), who found that the model with stochastic price duration generates more persistent inflation in response to an exogenous monetary shock. Chapter 3 extends the two-period alternating price-setting model to the case of 3- and 4-period price durations. This results in a more complex Phillips curve with a negative impact of past inflation on current inflation. As simulations show, this multi-period Phillips curve generates too low a degree of autocorrelation and too-early turning points of inflation, and is outperformed by a simple Hybrid Phillips curve. Chapter 4 starts from the critique by Driscoll and Holden (2003) of the relative real-wage model of Fuhrer and Moore (1995). While taking seriously the critique that Fuhrer and Moore’s model collapses to a much simpler one without intrinsic inflation persistence if one takes their arguments literally, I extend the model by a term for inequality aversion. This model extension is not only in line with experimental evidence but results in a Hybrid Phillips curve with inflation persistence that is observationally equivalent to that presented by Fuhrer and Moore (1995). In chapter 5, I present a model that makes it possible to study the relationship between fairness attitudes and time preference (impatience). In the model, two individuals take decisions in two subsequent periods. In period 1, both individuals are endowed with resources and are able to donate a share of their resources to the other individual.
In period 2, the two individuals may join in a common production after having bargained over the split of its output. The size of the production output depends on the relative share of resources at the end of period 1, as the human capital of the individuals, which is built by means of their resources, cannot fully be substituted for one another. Therefore, it might be rational for a well-endowed individual in period 1 to act in a seemingly ‘fair’ manner and to donate its own resources to its poorer counterpart. This decision also depends on the individuals’ impatience, which is induced by the small but positive probability that production is not possible in period 2. As a general result, the individuals in the model economy are more likely to behave in a ‘fair’ manner, i.e., to donate resources to the other individual, the lower their own impatience and the higher the productivity of the other individual. As the (seemingly) ‘fair’ behavior is modelled as an endogenous outcome and is related to the aspect of time preference, the presented framework might help to further integrate behavioral economics and macroeconomics.
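For reference, the distinction drawn above is between the purely forward-looking New Keynesian Phillips curve and a hybrid variant whose backward-looking term produces intrinsic persistence; in standard textbook notation (a generic sketch, not the dissertation's exact specifications):

\pi_t = \beta \, E_t[\pi_{t+1}] + \kappa \, x_t
\qquad \text{(New Keynesian: no intrinsic persistence)}

\pi_t = \gamma_f \, E_t[\pi_{t+1}] + \gamma_b \, \pi_{t-1} + \kappa \, x_t
\qquad \text{(Hybrid: the } \gamma_b \pi_{t-1} \text{ term adds intrinsic persistence)}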

Relevance: 60.00%

Abstract:

Quantum technology, exploiting entanglement and the wave nature of matter, relies on the ability to accurately control quantum systems. Quantum control is often compromised by the interaction of the system with its environment, since this causes loss of amplitude and phase. However, when the dynamics of the open quantum system is non-Markovian, amplitude and phase flow not only from the system into the environment but also back. Interaction with the environment is then not necessarily detrimental. We show that the back-flow of amplitude and phase can be exploited to carry out quantum control tasks that could not be realized if the system were isolated. The control is facilitated by a few strongly coupled, sufficiently isolated environmental modes. Our paradigmatic example considers a weakly anharmonic ladder with resonant amplitude control only, restricting realizable operations to SO(N). The coupling to the environment, when harnessed with optimization techniques, allows for full SU(N) controllability.

Relevance: 60.00%

Abstract:

This thesis describes Optimist, an optimizing compiler for the Concurrent Smalltalk language developed by the Concurrent VLSI Architecture Group. Optimist compiles Concurrent Smalltalk to the assembly language of the Message-Driven Processor (MDP). The compiler includes numerous optimization techniques, including dead code elimination, dataflow analysis, constant folding, move elimination, concurrency analysis, duplicate code merging, tail forwarding, and the use of register variables, as well as various MDP-specific optimizations in the code generator. The MDP presents some unique challenges and opportunities for compilation. Due to the MDP's small memory size, it is critical that the size of the generated code be as small as possible. The MDP is an inherently concurrent processor with efficient mechanisms for sending and receiving messages; the compiler takes advantage of these mechanisms. The MDP's tagged architecture allows very efficient support of object-oriented languages such as Concurrent Smalltalk. The initial goals for the MDP were for it to execute about twenty instructions per method and to contain 4096 words of memory. This compiler shows that these goals are too optimistic -- most methods are longer, in terms of both code size and running time. Thus, the memory size of the MDP should be increased.
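As an illustration of one of the listed techniques, constant folding replaces operations on compile-time constants with their results. A toy Python sketch using the standard ast module (unrelated to the actual Optimist implementation, which emits MDP assembly):

import ast

def fold_constants(expr_src):
    """Evaluate purely constant binary operations at compile time."""
    tree = ast.parse(expr_src, mode="eval")

    class Folder(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)  # fold children first
            if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
                wrapper = ast.Expression(body=node)
                ast.fix_missing_locations(wrapper)
                value = eval(compile(wrapper, "<fold>", "eval"))
                return ast.copy_location(ast.Constant(value=value), node)
            return node

    return ast.unparse(Folder().visit(tree))

print(fold_constants("x + 2 * 3 + (4 - 1)"))  # -> 'x + 6 + 3'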

Relevance: 60.00%

Abstract:

Dynamic optimization methods have become increasingly important in economics in recent years. Among the dynamic optimization techniques employed, optimal control has emerged as the most powerful tool for theoretical economic analysis. However, there is a need to advance further and take into account that many dynamic economic processes depend, in addition, on some parameter other than time. One can think of relaxing the assumption of a representative (homogeneous) agent in macro- and micro-economic applications by allowing for heterogeneity among the agents. For instance, the optimal adaptation and diffusion of a new technology over time may depend on the age of the person who adopted it. Therefore, economic models must take account of heterogeneity conditions within the dynamic framework. This thesis intends to accomplish two goals. The first goal is to analyze and revise existing environmental policies that focus on defining the optimal management of natural resources over time, by taking account of the heterogeneity of environmental conditions. Thus, the thesis makes a policy-oriented contribution in the field of environmental policy by defining the changes necessary to transform an environmental policy based on the assumption of homogeneity into one that takes account of heterogeneity. The newly defined environmental policy will be more efficient and likely also politically more acceptable, since it is tailored more specifically to the heterogeneous environmental conditions. In addition to its policy-oriented contribution, this thesis aims to make a methodological contribution by applying a new optimization technique for solving problems where the control variables depend on two or more arguments --- the so-called two-stage solution approach --- and by applying a numerical method --- the Escalator Boxcar Train Method --- for solving distributed optimal control problems, i.e., problems where the state variables, in addition to the control variables, depend on two or more arguments. Chapter 2 presents a theoretical framework to determine the optimal resource allocation over time for the production of a good by heterogeneous producers who generate a stock externality, and derives government policies to modify the behavior of competitive producers in order to achieve optimality. Chapter 3 illustrates the method in a more specific context, integrating the aspects of quality and time and presenting a theoretical model that makes it possible to determine the socially optimal outcome over time and space for the problem of waterlogging in irrigated agricultural production. Chapter 4 of this thesis concentrates on forestry resources and analyses the optimal selective-logging regime of a size-distributed forest.
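Schematically, the distributed optimal control problems described add a heterogeneity argument a (age, quality, location, and so on) to both state and control; a generic textbook-style formulation (not the thesis's specific models) reads:

\max_{u(t,a)} \int_0^T \int_A e^{-\rho t} \, F\bigl(t, a, x(t,a), u(t,a)\bigr) \, da \, dt
\quad \text{s.t.} \quad \frac{\partial x(t,a)}{\partial t} = f\bigl(t, a, x(t,a), u(t,a)\bigr), \qquad x(0,a) = x_0(a).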

Relevance: 60.00%

Abstract:

We have designed a highly parallel implementation of a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second.

I. Introduction. Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
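In software terms, the mutation operator realized by the array is just an independent bit-flip per gene. A minimal sketch (population size, chromosome length, and mutation rate are illustrative; the paper's contribution is performing this in hardware at high throughput):

import random

def mutate(chromosome, rate=0.01):
    """Bit-flip mutation: each gene flips independently with probability rate."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(5)]
population = [mutate(c, rate=0.05) for c in population]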

Relevance: 60.00%

Abstract:

Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value-function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is what we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
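The query logic of a yes-no Bloom filter is simple: accept only what the yes-filter recognises and the no-filter does not. A minimal sketch (filter sizes, the hash construction, and the choice of which false positives to store in the no-filter, which the paper selects via the ILP/ADP optimization, are all illustrative here):

import hashlib

class BloomFilter:
    def __init__(self, size, num_hashes):
        self.size, self.num_hashes = size, num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    """Yes-filter answers membership; no-filter vetoes its known false positives."""
    def __init__(self, yes_filter, no_filter):
        self.yes, self.no = yes_filter, no_filter

    def __contains__(self, item):
        return item in self.yes and item not in self.no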

Relevance: 60.00%

Abstract:

The main objective of this thesis work is to develop a communication link between Runrev Revolution (IDE) and JADE (Multi-Agent System) through socket programming over the TCP/IP layer. These two independent platforms are connected using socket programming techniques. Socket programming is a newly emerging way of bridging these two platforms, and the work done in this thesis is considered a prototype. A graphical simulation model was developed by salixphere (a company in Hedemora) to simulate logistic problems using Runrev Revolution (IDE). The simulation software is called “BIOSIM”. The logistic problems are complex, and conventional optimization techniques are unlikely to be very successful. “BIOSIM” can demonstrate the graphical representation of logistic problems depending upon the problem domain. As this simulation model is developed in the Revolution programming language (Transcript), which is a dynamically typed, English-like language, it is quite slow compared to other high-level programming languages. The objective of this thesis work is therefore to add intelligent behaviour to the graphical objects and to develop the communication link between Runrev Revolution (IDE) and JADE (Multi-Agent System) using TCP/IP layers. The tests show intelligent behaviour in the graphical objects and successful communication between Runrev Revolution (IDE) and JADE (Multi-Agent System).
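The link itself is an ordinary TCP connection. A minimal sketch of the client side in Python (the thesis uses Revolution's socket commands on one end and a Java server socket inside a JADE agent on the other; the host, port, and message format below are illustrative assumptions):

import socket

HOST, PORT = "localhost", 5555  # assumed address of the JADE-side listener

# Open a socket, send a command intended for an agent, and read the reply.
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(b"move-object:crate-17\n")
    reply = sock.recv(1024).decode()
    print("agent replied:", reply)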