854 results for Distributed system architecture


Relevance:

90.00%

Publisher:

Abstract:

With the aim of using an IP network to implement closed control loops, this work studies the operation of a distributed dynamic control system and compares it with the operation of a conventional local control system. In general, the decision to design a distributed control architecture is based on simplicity, cost reduction and reliability; the use of an IP network is therefore an important differentiator. The purpose of a control network is not to transmit digital data but sampled analog data; hence, the usual computer-network metrics, such as data volume and transfer rate, become secondary in a control network. Techniques are proposed to handle delayed packets and to recover the performance of the control system operating over the IP network. The key to this method is to estimate the content of delayed packets from the dynamic model of the system, keeping the system at an adequate level of performance. The system considered is the control of an anthropomorphic manipulator with two arms and a stereo vision head, totalling 18 joints. The results obtained show that a large part of the system's performance can be recovered.
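As a rough illustration of the estimation idea above (the plant matrices, gain and dimensions below are hypothetical stand-ins, not the thesis's manipulator model), a control step can fall back on a model-based prediction whenever a sensor packet arrives late:

```python
import numpy as np

# Hypothetical discrete-time model of one joint: x = [position, velocity].
A = np.array([[1.0, 0.01],
              [0.0, 1.0]])              # state transition over one sample period
B = np.array([[0.0], [0.01]])           # input matrix
K = np.array([[12.0, 3.5]])             # state-feedback gain (illustrative)

x_est = np.zeros((2, 1))                # last measured or estimated state

def control_step(packet, u_prev):
    """Compute the control input; estimate the state when the packet is late."""
    global x_est
    if packet is not None:              # sample arrived in time: use it directly
        x_est = packet
    else:                               # sample delayed: predict it from the model
        x_est = A @ x_est + B @ u_prev
    return -K @ x_est                   # state feedback on measurement or estimate

u = np.zeros((1, 1))
for k in range(3):
    sample = None if k == 1 else np.array([[0.1], [0.0]])  # one delayed packet
    u = control_step(sample, u)
```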

Relevance:

90.00%

Publisher:

Abstract:

This article presents the implementation of a distributed virtual reality system, built by integrating the services offered by the CORBA platform (Common Object Request Broker Architecture) with WorldToolkit, Sense8's environment for developing real-time 3D graphics applications. The application developed to validate this integration is a virtual city, with emphasis on its traffic routes, vehicles (movable objects) and buildings (immovable objects). In this virtual world, several users can interact, each one controlling his or her own car. Since the modelling of the application took into account the criteria and principles of Transport Engineering, the aim is to use it in the planning, design and construction of traffic routes for vehicles. The system was structured according to the client/server approach, using multicast communication among the participating nodes. The chosen CORBA implementation was IONA's Orbix. The performance results obtained are presented and discussed at the end.
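A minimal sketch of that multicast pattern using standard UDP multicast sockets (the group address, port and message format are illustrative assumptions, not the system's actual CORBA-based protocol):

```python
import json
import socket
import struct

GROUP, PORT = "239.0.0.1", 5000          # hypothetical multicast group for updates

def open_multicast_socket():
    """Join the multicast group so every node receives all car-state updates."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def broadcast_car_state(sock, car_id, x, y, heading):
    """Send this user's car pose to all participants in a single datagram."""
    msg = json.dumps({"car": car_id, "x": x, "y": y, "h": heading}).encode()
    sock.sendto(msg, (GROUP, PORT))
```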

Relevance:

90.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

90.00%

Publisher:

Abstract:

The optical resonances of metallic nanoparticles placed at nanometer distances from a metal plane were investigated. At certain wavelengths, these “sphere-on-plane” systems become resonant with the incident electromagnetic field, and huge field enhancements are predicted, localized in the small gaps created between the nanoparticle and the plane. An experimental architecture to fabricate sphere-on-plane systems was successfully achieved in which, in addition to the commonly used alkanethiols, polyphenylene dendrimers were used as molecular spacers to separate the metallic nanoparticles from the metal planes. They allow for a defined nanoparticle-plane separation, and some are functionalized with a chromophore core, which is thereby positioned exactly in the gap. The metal planes used in the system architecture consisted of evaporated thin films of gold or silver. Evaporated gold or silver films have a smooth interface with their substrate and a rougher top surface. To investigate the influence of surface roughness on the optical response of such a film, two gold films, one with a smooth and one with a rough exposed side, were prepared to be as similar as possible. Surface plasmons were excited in the Kretschmann configuration both on the rough and on the smooth side. Each individual measurement could be well modeled by a single gold film, but to describe both sides consistently the film has to be modeled as two layers with significantly different optical constants. The smooth side, although polycrystalline, had an optical response very similar to that of a monocrystalline surface, while for the rough side the standard response of evaporated gold is retrieved. For investigations of thin non-absorbing dielectric films, though, this heterogeneity introduces only a negligible error. To determine the resonant wavelength of the sphere-on-plane systems, a strategy was developed based on multi-wavelength surface plasmon spectroscopy experiments in the Kretschmann configuration. The resonant behavior of the system led to characteristic changes in the surface plasmon dispersion. A quantitative analysis was performed by calculating the polarisability per unit area, α/A, treating the sphere-on-plane systems as an effective layer. This approach completely avoids the ambiguity in the determination of thickness and optical response of thin films in surface plasmon spectroscopy. Equal area densities of polarisable units yielded identical response irrespective of the thickness of the layer they are distributed in. The parameter range in which the evaluation of surface plasmon data in terms of α/A is applicable was determined for a typical experimental situation. This analysis was shown to yield reasonable quantitative agreement with a simple theoretical model of the sphere-on-plane resonators and to reproduce the results of standard extinction experiments, while offering higher information content and a significantly increased signal-to-noise ratio. To acquire a better quantitative understanding of the dependence of the resonance wavelength on the geometry of the sphere-on-plane systems, different systems were fabricated in which the gold nanoparticle size, the type of spacer and the ambient medium were varied, and the resonance wavelength of each system was determined. The gold nanoparticle radius was varied in the range from 10 nm to 80 nm. It could be shown that the polyphenylene dendrimers can be used as molecular spacers to fabricate systems which support gap resonances. The resonance wavelength of the systems could be tuned in the optical region between 550 nm and 800 nm. Based on a simple analytical model, a quantitative analysis was developed to relate the systems' geometry to the resonance wavelength; surprisingly good agreement of this simple model with the experiment, without any adjustable parameters, was found. The key feature ascribed to sphere-on-plane systems is a very large electromagnetic field localized in volumes in the nanometer range. Experiments towards a quantitative understanding of the field enhancements taking place in the gap of the sphere-on-plane systems were performed by monitoring the increase in fluorescence of a metal-supported monolayer of a dye-loaded dendrimer upon decoration of the surface with nanoparticles. The metal used (gold or silver), the mean colloid size and the surface roughness were varied. Large silver crystallites on evaporated silver surfaces led to the most pronounced fluorescence enhancements, on the order of 10⁴. They constitute a very promising sample architecture for the study of field enhancements.
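As a sketch of the effective-layer idea above, assuming the standard thin-film picture (the thesis's exact normalisation may differ, so the proportionality is deliberately left open), the areal polarisability collapses thickness and optical contrast into a single quantity:

```latex
% Effective-layer sketch: a film of thickness d and permittivity \varepsilon_f
% in a host medium \varepsilon_h acts, in surface plasmon spectroscopy, like
% an areal polarisability density
\[
  \frac{\alpha}{A} \;\propto\; d\,\bigl(\varepsilon_f - \varepsilon_h\bigr),
\]
% so any (d, \varepsilon_f) pair with the same product is indistinguishable:
% equal area densities of polarisable units give identical response.
```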

Relevance:

90.00%

Publisher:

Abstract:

The development of High-Integrity Real-Time Systems incurs high human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom up across it. We first characterize time composability without making assumptions on the system architecture or on how software is deployed to it. We then focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute premiere in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
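A toy sketch of the limited-preemption idea mentioned above (the task model, priorities and region lengths are hypothetical): a newly released higher-priority task does not preempt immediately but waits for the running task's next preemption point, which bounds blocking:

```python
import heapq

def run(tasks):
    """Simulate limited-preemptive fixed-priority scheduling on one core.

    tasks: list of (release_time, priority, region_lengths); a lower priority
    number means more urgent; each region is a non-preemptive chunk of work.
    Returns the completion time of the whole task set."""
    t, ready, running = 0.0, [], None
    pending = sorted(tasks)                          # ordered by release time
    while pending or ready or running:
        while pending and pending[0][0] <= t:        # admit released tasks
            _, prio, regions = pending.pop(0)
            heapq.heappush(ready, (prio, list(regions)))
        if running is None and ready:
            running = heapq.heappop(ready)           # highest priority wins
        if running is None:
            t = pending[0][0]                        # idle until next release
            continue
        prio, regions = running
        t += regions.pop(0)                          # a region runs to completion;
        if regions:                                  # preemption only at its end
            heapq.heappush(ready, (prio, regions))   # re-contend at the boundary
        running = None
    return t

# The low-priority task keeps the core until its first region ends, even
# though a higher-priority task is released at t=1.
print(run([(0, 2, [3, 3]), (1, 1, [2])]))            # -> 8.0
```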

Relevance:

90.00%

Publisher:

Abstract:

This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA board and the SIRC communication framework. The model was developed by studying a small set of variables that together determine a system's throughput. Its importance lies in assisting system designers in deciding whether or not to commit to designing a reconfigurable distributed system, based on estimated performance and hardware costs. Because custom hardware design and distributed system design are both time-consuming and costly, it is important for designers to make decisions regarding system feasibility early in the development cycle. Against experimental data, the model presented in this paper shows a close fit, with less than 10% error on average. The model is limited to a certain range of problems, but it can still be used within those limitations, and it provides a foundation for further work on modeling reconfigurable distributed systems.
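A hypothetical sketch in the spirit of such an analytical model (the variable set and functional form here are invented for illustration; the thesis's actual model is in the original work): throughput is bounded both by per-board compute and by the shared communication link:

```python
def estimated_throughput(n_boards, job_bytes, link_bps, job_cycles, clock_hz):
    """Estimate jobs/second for n FPGA boards fed over one shared link."""
    t_comm = job_bytes * 8 / link_bps     # seconds to ship one job's data
    t_comp = job_cycles / clock_hz        # seconds to process one job on a board
    per_board = 1.0 / (t_comm + t_comp)   # serialized comm + compute per board
    link_cap = 1.0 / t_comm               # the link cannot feed jobs any faster
    return min(n_boards * per_board, link_cap)

# Example: 8 boards, 1 MiB jobs over gigabit Ethernet, 10M cycles at 100 MHz.
print(f"{estimated_throughput(8, 2**20, 1e9, 10e6, 100e6):.1f} jobs/s")
```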

Relevance:

90.00%

Publisher:

Abstract:

This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform using a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system that was constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions based on the results of the performance analysis and offers suggestions for future work.
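A purely hypothetical usage sketch (all class and method names are invented for illustration, not the thesis's API) of how an application might hand work to a core manager that tracks which hardware cores live on which boards:

```python
class CoreManager:
    """Toy stand-in for a hardware core manager mapping cores to boards."""

    def __init__(self, boards):
        self.boards = boards                         # e.g. {"fpga0": ["fir"]}

    def configure(self, board, bitstream):
        print(f"loading {bitstream} onto {board}")   # stands in for real config

    def dispatch(self, core, payload):
        board = next((b for b, cores in self.boards.items() if core in cores),
                     None)
        if board is None:
            raise ValueError(f"no board provides core {core!r}")
        print(f"running {core} on {board} with {len(payload)} bytes")

mgr = CoreManager({"fpga0": ["fir"], "fpga1": ["fft"]})
mgr.configure("fpga1", "fft.bit")
mgr.dispatch("fft", b"\x00" * 4096)
```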

Relevance:

90.00%

Publisher:

Abstract:

This paper presents the capabilities of a Space-Based Space Surveillance (SBSS) demonstration mission for Space Surveillance and Tracking (SST) based on a micro-satellite platform. The results have been produced in the frame of ESA's "Assessment Study for Space Based Space Surveillance Demonstration Mission (Phase A)" performed by the Airbus DS consortium. Space Surveillance and Tracking is part of Space Situational Awareness (SSA) and covers the detection, tracking and cataloguing of space debris and satellites. Derived SST services comprise a catalogue of these man-made objects, collision warnings, detection and characterisation of in-orbit fragmentations, sub-catalogue debris characterisation, etc. The assessment of SBSS in an SST system architecture has shown that both an operational SBSS system and even a well-designed space-based demonstrator can provide substantial performance in terms of surveillance and tracking of beyond-LEO objects. In particular, the early deployment of a demonstrator, made possible by using standard equipment, could boost initial operating capability and create a self-maintained object catalogue. Unlike classical technology demonstration missions, the primary goal is the demonstration and optimisation of the functional elements in a complex end-to-end chain (mission planning, observation strategies, data acquisition, processing and fusion, etc.) until the final products can be offered to the users. The presented SBSS system concept takes the ESA SST System Requirements (derived within the ESA SSA Preparatory Programme) into account and aims at fulfilling some of the SST core requirements in a stand-alone manner. The evaluation of the concept has shown that a corresponding solution can be implemented with low technological effort and risk. The paper presents details of the system concept, candidate micro-satellite platforms, the observation strategy and the results of performance simulations for GEO coverage and cataloguing accuracy.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents the capabilities of a Space-Based Space Surveillance (SBSS) demonstration mission for Space Surveillance and Tracking (SST) based on a micro-satellite platform. The results have been produced in the frame of ESA's "Assessment Study for Space Based Space Surveillance Demonstration Mission" performed by the Airbus Defence and Space consortium. The assessment of SBSS in an SST system architecture has shown that both an operational SBSS system and even a well-designed space-based demonstrator can provide substantial performance in terms of surveillance and tracking of beyond-LEO objects. In particular, the early deployment of a demonstrator, made possible by using standard equipment, could boost initial operating capability and create a self-maintained object catalogue. Furthermore, unique statistical information about small-size (mm-range) LEO debris can be collected in situ. Unlike classical technology demonstration missions, the primary goal is the demonstration and optimisation of the functional elements in a complex end-to-end chain (mission planning, observation strategies, data acquisition, processing, etc.) until the final products can be offered to the users, with low technological effort and risk. The SBSS system concept takes the ESA SST System Requirements into account and aims at fulfilling SST core requirements in a stand-alone manner. Additionally, requirements for the detection and characterisation of small-sized LEO debris are considered. The paper presents details of the system concept, candidate micro-satellite platforms, the instrument design and the operational modes. Note that the detailed results of performance simulations for space debris coverage and cataloguing accuracy are presented in a separate paper, “Capability of a Space-based Space Surveillance System to Detect and Track Objects in GEO, MEO and LEO Orbits” by J. Silha (AIUB) et al., IAC-14, A6, 1.1x25640.

Relevance:

90.00%

Publisher:

Abstract:

Cloud computing is one of the most relevant computing paradigms available today. Its adoption has increased in recent years owing to large investment and research from both enterprises and academic institutions. Among the services cloud providers usually offer, Infrastructure as a Service has gained momentum for solving HPC problems in a more dynamic way, without the need for expensive investments. The integration of a large number of providers is a major goal, as it enables improving the quality of the selected resources in terms of pricing, speed, redundancy, etc. In this paper, we propose a system architecture, based on semantic solutions, to build an interoperable scheduler for federated clouds that works with several IaaS (Infrastructure as a Service) providers in a uniform way. Based on this architecture we implement a proof-of-concept prototype and test it with two different cloud solutions to provide some experimental results on the viability of our approach.
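A minimal sketch of what uniform provider selection could look like (the provider records, fields and weights are illustrative assumptions, not the paper's semantic model or scheduler):

```python
providers = [  # hypothetical offers, already normalized to a common vocabulary
    {"name": "cloudA", "price_per_hour": 0.12, "cpu_score": 95, "region": "eu"},
    {"name": "cloudB", "price_per_hour": 0.09, "cpu_score": 70, "region": "us"},
]

def select_resource(offers, min_cpu, w_price=0.7, w_speed=0.3):
    """Pick the offer meeting the constraint with the best price/speed blend."""
    feasible = [o for o in offers if o["cpu_score"] >= min_cpu]
    if not feasible:
        return None
    return min(feasible,
               key=lambda o: w_price * o["price_per_hour"]
                             - w_speed * o["cpu_score"] / 100)

print(select_resource(providers, min_cpu=80))   # -> the cloudA offer
```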

Relevance:

90.00%

Publisher:

Abstract:

CIAO is an advanced programming environment supporting logic and constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available supporting the ISO Prolog standard, several constraint domains, functional and higher-order programming, concurrent and distributed programming, internet programming, and more. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile time or at run time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving the properties stated in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results already obtained with the system in the areas of program analysis and transformation.
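A loose Python analogue of run-time property checking (CIAO's assertion language is Prolog-based and far richer; this sketch only mirrors the idea of declared properties being verified when a call happens):

```python
def check(pre=None, post=None):
    """Attach declared call/success properties to a function, checked at run time."""
    def wrap(f):
        def checked(*args):
            if pre is not None:
                assert pre(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            if post is not None:
                assert post(result), f"postcondition of {f.__name__} violated"
            return result
        return checked
    return wrap

@check(pre=lambda xs: all(isinstance(x, int) for x in xs),   # a "type" property
       post=lambda n: n >= 0)                                # a success property
def count_positives(xs):
    return sum(1 for x in xs if x > 0)

print(count_positives([1, -2, 3]))   # both declared properties hold -> 2
```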

Relevance:

90.00%

Publisher:

Abstract:

Natural Computing has emerged as an alternative to classical computing for problems that classical models cannot solve efficiently in time polynomial in the size of the input. The discipline either uses nature itself as the computing substrate or simulates natural behaviour to obtain better solutions than those found by classical computation. Within Natural Computing, and as a cell-level representation, Membrane Computing arises. The first abstraction of the membranes found in cells yields Transition P systems. These systems, which could be implemented in biological or electronic media, are the subject of this thesis.

Existing implementations are surveyed first, focusing on distributed implementations, since these are the ones that can exploit the systems' intrinsic parallelism and non-determinism. A study of the current state of the stages involved in the system's evolution concludes that distributions balancing the two stages (rule application and communication) give the best results. Defining such distributions requires fully defining the system and every element that influences its transitions. Building on, and together with, the work of other researchers, variations to the proxies and distribution architectures are introduced so that the dynamic behaviour of Transition P systems is completely defined. Starting from the static information, the initial configuration, of the P system, membranes can be distributed across the processors of a cluster so that the P system evolves in as little time as possible. These distributions must take into account the architecture, that is, the connection pattern, of the cluster's processors. Since 4 architectures exist, the distribution process depends on the chosen architecture, and therefore, although with significant similarities, the distribution algorithms must be implemented 4 times. Although the proposers of these architectures studied the optimal time of each one, no distributions existed for them, so all 4 are evaluated in this thesis to verify that, in practice, the theoretical studies hold.

No deterministic algorithm yields a distribution that satisfies the architecture's requirements for an arbitrary P system. Given the complexity of the problem, metaheuristics from Natural Computing are therefore proposed. Genetic Algorithms are proposed first: a distribution can always be constructed, and, on the premise that individuals improve with evolution, the distributions improve as the algorithm evolves, yielding times close to the theoretical optimum. For the architectures that preserve the tree topology of the P system, new representations and new crossover and mutation operators had to be developed. A more detailed study of membranes and inter-processor communication showed that the aggregate times used for the distribution can be improved and individualized per membrane; running the same algorithms again produced new distributions with better times. Likewise, Particle Swarm Optimization and Grammatical Evolution with grammar rewriting (a variant of Grammatical Evolution introduced in this thesis) were applied to the same task, producing further kinds of distributions and enabling a comparison of the architectures. Finally, because the rule-application and communication times are estimates, and because the membrane tree topology can change non-deterministically as the P system evolves, the system must be monitored and, when necessary, membranes must be redistributed among processors to keep evolution times reasonable. The thesis explains how, when and where these modifications and redistributions should be carried out, and how the recalculation can be performed.
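A hedged sketch of the Genetic Algorithm approach described above (the fitness function, costs and the flat encoding of the membrane structure are simplifications invented for illustration, not the thesis's operators or model): an individual assigns each membrane to a processor, and fitness balances the slowest processor's rule-application load against communication:

```python
import random

N_MEMBRANES, N_PROCS = 12, 4
APPLY = [random.uniform(1, 5) for _ in range(N_MEMBRANES)]  # per-membrane cost

def fitness(assign):
    """Lower is better: slowest processor's load plus a crude communication term."""
    load = [0.0] * N_PROCS
    for m, p in enumerate(assign):
        load[p] += APPLY[m]
    # stand-in for parent-child traffic: penalize splitting neighbouring membranes
    comm = sum(1 for m in range(1, N_MEMBRANES) if assign[m] != assign[m - 1])
    return max(load) + 0.5 * comm

def evolve(pop=30, gens=200):
    P = [[random.randrange(N_PROCS) for _ in range(N_MEMBRANES)]
         for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        P = P[:pop // 2]                                # keep the fitter half
        while len(P) < pop:
            a, b = random.sample(P[:10], 2)             # parents among the best
            cut = random.randrange(N_MEMBRANES)
            child = a[:cut] + b[cut:]                   # one-point crossover
            if random.random() < 0.2:                   # occasional mutation
                child[random.randrange(N_MEMBRANES)] = random.randrange(N_PROCS)
            P.append(child)
    return min(P, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```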

Relevance:

90.00%

Publisher:

Abstract:

This paper describes a novel deployment of an intelligent user-centered HVAC (Heating, Ventilation and Air Conditioning) control system. The main objective of this system is to optimize user comfort and to reduce energy consumption in office buildings. Existing commercial HVAC control systems work in a fixed, predetermined way. The novelty of the proposed system is that it adapts dynamically to the user and to the building environment. For this purpose, the system architecture has been designed under the paradigm of Ambient Intelligence. A prototype of the proposed system has been tested in a real-world environment.
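A toy sketch of the user-adaptive idea (the update rule and every parameter below are hypothetical, not the paper's control logic): nudge the setpoint toward the user's repeated manual corrections and relax it when the room is empty:

```python
def next_setpoint(setpoint, user_adjust, occupied, rate=0.3, eco_offset=3.0):
    """Blend a learned comfort preference with an energy-saving setback."""
    setpoint += rate * user_adjust          # learn from manual corrections
    return setpoint if occupied else setpoint - eco_offset

sp = 22.0
sp = next_setpoint(sp, user_adjust=+1.0, occupied=True)   # user felt cold
print(sp)                                                 # 22.3
```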

Relevance:

90.00%

Publisher:

Abstract:

We are living through an age of Internetification. Nowadays, Internet connections are a utility whose presence one can simply assume. The web has become a place where users generate content, and the information generated surpasses the notion with which the World Wide Web emerged because, in most cases, this content has been designed to be consumed by humans, not by machines. This implies a change of mindset in the way we design systems: they must support a computational and storage load that grows without an apparent limit. At the same time, higher education is in a state of crisis: the high costs of high-quality education threaten the academic world. Through technology, productivity and quality can be increased, and those costs reduced, in a field that has remained largely unchanged since the Renaissance. In CloudRoom, a MOOC platform has been designed with an architecture that follows the latest conventions in Cloud Computing, involving REST services, NoSQL databases, and the latest W3C recommendations on web development and Linked Data. Its construction used agile Software Engineering methods, Human-Computer Interaction techniques, and state-of-the-art technologies such as Neo4j, Redis, Node.js, AngularJS, Bootstrap, HTML5, CSS3 and Amazon Web Services. The result is a comprehensive piece of Informatics Engineering, combining virtually all of the fundamental areas of knowledge in Computer Science. In summary, the foundations of a robust, maintainable distributed system have been devised: a system with social and semantic capabilities, which runs on multiple devices and scales to millions of users.

Relevance:

90.00%

Publisher:

Abstract:

This document contains a detailed description of the design and implementation of a multi-agent application that controls the traffic lights in a city, together with a system for simulating traffic and testing. The goal of this thesis is to design and build a simplified, intelligent and distributed solution to the traffic problem of big cities, following good practices so as to allow future refinement of the model of the real world. Traffic congestion in big cities remains an unsolved problem. Not only the increasing number of cars causes traffic jams, but also the way the traffic is organized. Usually, intersections with traffic lights are replaced by roundabouts or interchanges to increase the number of cars that can cross the intersection in a given time, but there are still places where the infrastructure cannot be changed and traffic-light signals are the only way to control the car flows. In real life, traffic lights either follow a predefined switching plan or receive instructions from a centralized system telling them when and how to change. But what if the traffic lights could cooperate and decide on their own when and how to change? Using this problem, the purpose of the thesis is to explore different agent-based software engineering approaches to designing and building a non-conventional distributed system. From the software engineering point of view, the goal of the thesis is to apply the knowledge and skills acquired during the various courses of the master's programme in Software Engineering to solving a practical and complex problem such as city traffic.
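An illustrative sketch only (the thesis's agent design, protocols and simulator are richer than this): each traffic-light agent lengthens its green phase when its own queue grows, and warns its neighbours so they can anticipate the released platoon:

```python
class LightAgent:
    """One intersection's traffic-light controller acting on local knowledge."""

    def __init__(self, name):
        self.name, self.queue, self.green, self.inbox = name, 0, 10, []

    def notify(self, neighbour):
        """Cooperate by sharing load instead of obeying a central controller."""
        neighbour.inbox.append(self.queue)

    def decide(self):
        """Adapt green time to local demand plus neighbours' warnings."""
        incoming = sum(self.inbox)
        self.inbox.clear()
        self.green = max(5, min(60, 10 + 2 * self.queue + incoming))
        return self.green

a, b = LightAgent("A"), LightAgent("B")
a.queue = 12                   # a platoon is waiting at A
a.notify(b)                    # warn the downstream intersection
print(a.decide(), b.decide())  # A lengthens its green; B anticipates (34 22)
```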