873 results for multi-agent systems
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite-size effects, the influence of population size on the performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
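The abstract does not spell out the agents' engagement rule; algorithms of this family are typically built on a response-threshold model, and a minimal Python sketch of that generic model (the parameter names and update rules are illustrative assumptions, not the paper's own) might look like this:

```python
import random

def response_probability(stimulus, threshold, n=2):
    """Classic response-threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus**n / (stimulus**n + threshold**n)

def step(agents, demands, alpha=0.5, delta=1.0):
    """One time step of a threshold-based allocation (illustrative only).

    agents  : list of dicts with per-task 'thresholds'
    demands : dict task -> current stimulus level
    """
    performed = {task: 0 for task in demands}
    for agent in agents:
        # Each agent considers tasks in random order and engages in at most one.
        for task in random.sample(list(demands), len(demands)):
            if random.random() < response_probability(demands[task],
                                                      agent["thresholds"][task]):
                performed[task] += 1
                break
    # Stimuli grow with unmet demand and shrink when work is done.
    for task in demands:
        demands[task] = max(0.0, demands[task] + delta - alpha * performed[task])
    return performed

# Tiny usage example with two task types and ten agents.
tasks = {"A": 5.0, "B": 5.0}
agents = [{"thresholds": {t: random.uniform(1, 10) for t in tasks}} for _ in range(10)]
for _ in range(3):
    print(step(agents, tasks))
```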
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the available mail at the cities or inter-agent communication. In order to process a different mail type than the previous one, an agent must undergo a change-over during which it remains inactive. We propose a threshold-based algorithm in order to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents, but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
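The exact preference-update rule is not given in the abstract; a purely illustrative sketch of a preference-weighted city choice with reinforcement of successful visits (all names and constants are assumptions) is:

```python
import random

def choose_city(agent, mail_available):
    """Pick a city with probability proportional to the agent's learned preference.
    Purely illustrative; the paper's exact rule is not given in the abstract."""
    cities = list(mail_available)
    weights = [agent["preference"][c] for c in cities]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for city, w in zip(cities, weights):
        acc += w
        if r <= acc:
            return city
    return cities[-1]

def update_preference(agent, city, collected, learn=0.1, forget=0.01):
    """Reinforce the visited city when collection succeeds, slowly forget others."""
    for c in agent["preference"]:
        agent["preference"][c] *= (1.0 - forget)
    agent["preference"][city] += learn * collected

agent = {"preference": {"c1": 1.0, "c2": 1.0, "c3": 1.0}}
mail = {"c1": 4, "c2": 0, "c3": 7}
city = choose_city(agent, mail)
update_preference(agent, city, mail[city])
print(city, agent["preference"])
```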
Abstract:
Within the Distributed eLearning Center (DeLC) project we are developing a system for distance and eLearning that offers fixed and mobile access to electronic content and services. Mobile access is based on an InfoStation architecture, which provides Bluetooth and WiFi connectivity. On top of the InfoStation network we are developing multi-agent middleware that provides users with context-aware, adaptive and personalized access to the mobile services. For more convenient testing and optimization of the middleware, a simulation environment called CA3 SiEnv is being created.
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is also currently one of the primary sources of pollution. These concerns are becoming more important in data centers. As more computational power is required to serve hundreds of millions of users, bigger data centers are becoming necessary, which results in higher electrical energy consumption. Of all the energy used in data centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and be recharged less frequently. Saving energy in computer systems is a challenging problem, because improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
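The thesis's actual mechanisms are only named in the abstract; as a rough illustration of the spin-down trade-off it mentions for secondary storage, here is a toy estimate of an idle-timeout spin-down policy with entirely hypothetical power figures:

```python
def disk_energy(idle_gaps, timeout, p_idle=5.0, p_standby=1.0, spin_up_energy=60.0):
    """Estimate energy (J) over a sequence of idle gaps (s) under an
    idle-timeout spin-down policy. All power figures are hypothetical."""
    energy = 0.0
    for gap in idle_gaps:
        if gap <= timeout:
            energy += gap * p_idle                    # disk stays spinning
        else:
            energy += timeout * p_idle                # wait out the timeout
            energy += (gap - timeout) * p_standby     # spun down
            energy += spin_up_energy                  # latency/energy cost of the next access
    return energy

gaps = [2, 40, 5, 120, 300, 8]                        # seconds of idleness between requests
for t in (10, 30, 60):
    print("timeout", t, "->", round(disk_energy(gaps, t), 1), "J")
```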
Abstract:
In their study - From Clerk and Cashier to Guest Service Agent - Nancy J. Allin, Director of Quality Assurance and Training, and Kelly Halpine, Assistant Director of Quality Assurance and Training, The Waldorf-Astoria, New York, state at the outset: “The Waldorf-Astoria has taken the positions of registration clerk and cashier and combined them to provide excellent guest service and efficient systems operation.” The authors tell how and why the combination works. That thesis statement defines the article and puts it squarely in the service category. Allin and Halpine use their positions at the Waldorf-Astoria in New York City to frame their observations: “The allocation of staff hours has been a challenge to many front office managers who try their hardest to schedule for the norm but provide excellent, efficient service throughout the peaks,” they write. “…the decision [to combine the positions of registration clerk and cashier] was driven by a desire to improve guest service where its impact is most obvious, at the front desk. Cross-trained employees speed the check-in and check-out process by performing both functions, as the traffic at the desk dictates,” the authors say. Making such a move has resulted in positive benefits for both the guests and the hotel. “Benefits to the hotel, in addition to those brought to bear by increased guest satisfaction, include greater flexibility in weekly scheduling and in granting vacations while maintaining adequate staffing at the desk,” say Allin and Halpine. “Another expected outcome, net payroll savings, should also be realized as a consequence of the ability to schedule more efficiently.” The authors point to communication as the key to designing a successful combination such as this with the least amount of service disruption, and they bullet-point what that communication should entail. Issues of seniority, wage and salary rates, organizational charting, filing, scheduling, possible probationary periods, position titles, and physical layouts are all discussed. “It is critical that each of the management issues be addressed and resolved before any training is begun,” Allin and Halpine suggest. “Unresolved issues project confusion and lack of conviction to line employees and the result is frustration and a lack of commitment to the combination process.” Allin and Halpine insist: “Once begun, training must be ongoing and consistent.” On the practical side, the authors note that authorizing overtime is helpful in accomplishing training. “Training must address the fact that employees will be faced with guest situations which are new to them, for example: an employee previously functioning as a cashier will be faced with walking guests. Specific exercises should be included to address these needs,” say the authors.
Abstract:
With the increasing prevalence and capabilities of autonomous systems as part of complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of the introduction of automation on the optimal assignment of human personnel. The US Navy has implemented optimal staffing techniques before, in the 1990s and 2000s, with a "minimal staffing" approach. The results were poor, leading to the degradation of Naval preparedness. Clearly, another approach to determining optimal staffing is necessary. To this end, the goal of this research is to develop human performance models for use in determining optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here, along with an aircraft failure model. The main focus of this work is on the maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric for carrier deck performance. With PMASCS I explore the effects of two variables on the total launch time of 22 aircraft: 1) the skill level of maintenance operators and 2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft. An optimal manning level of 3 maintenance crews is found under all conditions, the point beyond which additional maintenance crews do not reduce the total launch time. An additional discussion is included about how these results change if the operations are relieved of the bottleneck of installing the holdback bar at launch time.
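PMASCS itself is not described in enough detail here to reproduce; the toy sketch below only illustrates the kind of sweep over maintenance-crew counts reported in the abstract, with invented service times and failure probabilities:

```python
import random, heapq

def launch_time(n_crews, n_aircraft=22, p_fail=0.3, repair=(5, 15), prep=4, seed=0):
    """Toy discrete-event sketch: each aircraft becomes ready on a staggered
    schedule and, with probability p_fail, needs one of n_crews for a repair
    before launch. All times and probabilities are hypothetical, not PMASCS
    parameters."""
    rng = random.Random(seed)
    crew_free = [0.0] * n_crews           # time at which each crew becomes free
    heapq.heapify(crew_free)
    finish = 0.0
    for i in range(n_aircraft):
        ready = i * prep                   # staggered preparation
        if rng.random() < p_fail:
            start = max(ready, heapq.heappop(crew_free))
            done = start + rng.uniform(*repair)
            heapq.heappush(crew_free, done)
            ready = done
        finish = max(finish, ready)
    return finish

# Sweep the number of maintenance crews and watch the total launch time flatten out.
for crews in range(1, 6):
    print(crews, "crews ->", round(launch_time(crews), 1), "min")
```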
Abstract:
The multiuser selection scheduling concept has recently been proposed in the literature in order to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that could potentially be added to the data transmission. In this work, the authors propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme. This is aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, they derive closed-form expressions for the outage probabilities and the average system rate of delay-sensitive and delay-tolerant systems, respectively, and compare them with full feedback multiuser diversity schemes. The discrete rate region is analytically presented, where the maximum average system rate can be obtained by properly choosing the number of partial devices. They jointly optimise the number of partial devices and the per-device power saving in order to maximise the average system rate under the power requirement. Finally, the authors demonstrate that the proposed scheme, which leverages the saved feedback power to add to the data transmission, can outperform full feedback multiuser diversity in non-identical Rayleigh fading of the devices' channels.
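The paper's closed-form analysis is not reproduced here; as a rough illustration of combining reduced feedback with proportional fair scheduling, one might sketch a threshold-based feedback stage followed by the PF metric. The threshold rule and all names are assumptions, not the authors' exact scheme:

```python
import random

def pf_schedule_partial_feedback(rates, avg_throughput, threshold):
    """Illustrative sketch: each device feeds back only if its instantaneous
    rate exceeds a threshold (saving feedback power); the scheduler then applies
    the proportional fair metric r_i / T_i among the devices that reported."""
    reported = [i for i, r in enumerate(rates) if r >= threshold]
    if not reported:                     # no device fed back in this slot
        return None
    return max(reported, key=lambda i: rates[i] / avg_throughput[i])

n = 8
rates = [random.expovariate(1.0 / (i + 1)) for i in range(n)]   # non-identical fading
avg_tp = [1.0] * n
print("scheduled device:", pf_schedule_partial_feedback(rates, avg_tp, threshold=1.5))
```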
Abstract:
In the past years, we have observed a significant number of new robotic systems in science, industry, and everyday life. To reduce the complexity of these systems, industry constructs robots that are designated for the execution of a specific task such as vacuum cleaning, autonomous driving, observation, or transportation operations. As a result, such robotic systems need to combine their capabilities to accomplish complex tasks that exceed the abilities of individual robots. However, to achieve emergent cooperative behavior, multi-robot systems require a decision process that copes with the communication challenges of the application domain. This work investigates a distributed multi-robot decision process which addresses unreliable and transient communication. The process is composed of five steps, which we embed into the ALICA multi-agent coordination language, guided by the PROViDE negotiation middleware. The first step encompasses the specification of the decision problem, which is an integral part of the ALICA implementation. In our decision process, we describe multi-robot problems by continuous nonlinear constraint satisfaction problems. The second step addresses the calculation of solution proposals for this problem specification. Here, we propose an efficient solution algorithm that integrates incomplete local search and interval propagation techniques into a satisfiability solver, which forms a satisfiability modulo theories (SMT) solver. In the third decision step, the PROViDE middleware replicates the solution proposals among the robots. This replication process is parameterized with a distribution method, which determines the consistency properties of the proposals. In the fourth step, we investigate conflict resolution: an acceptance method ensures that each robot supports one of the replicated proposals. As we integrate the conflict resolution into the replication process, a sound selection of the distribution and acceptance methods leads to an eventual convergence of the robot proposals. In order to avoid the execution of conflicting proposals, the last step comprises a decision method, which selects a proposal for implementation in case the conflict resolution fails. The evaluation of our work shows that the use of the constraint satisfaction solver's incomplete solution techniques outperforms other state-of-the-art approaches in runtime for many typical robotic problems. We further show, through experimental setups and practical application in the RoboCup environment, that our decision process is suitable for making quick decisions in the presence of packet loss and delay. Moreover, PROViDE requires less memory and bandwidth than other state-of-the-art middleware approaches.
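The distribution and acceptance methods are only named in the abstract; a minimal, purely illustrative sketch of the acceptance idea in steps three and four (each robot deterministically supporting the best replicated proposal, so that supports converge once replication does) could look like this. The utility function and tie-break are assumptions, not PROViDE's actual methods:

```python
def accept(local_proposals, utility):
    """Illustrative acceptance rule: among the proposals a robot has received,
    support the one with the highest utility, breaking ties by proposal id.
    Robots with identical proposal sets therefore pick the same proposal."""
    return max(local_proposals,
               key=lambda pid: (utility(local_proposals[pid]), pid))

# Two robots with (eventually) consistent proposal sets converge on the same choice.
proposals = {"p1": {"x": 0.2}, "p2": {"x": 0.9}}
u = lambda p: -abs(p["x"] - 0.5)          # hypothetical utility: closeness to 0.5
print(accept(proposals, u), accept(dict(proposals), u))
```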
Abstract:
Emotions are considered central to our lives, having a great impact on decision-making, actions, memory, attention, and so on. There is therefore great interest in simulating them in computational environments, so that everyday human situations can be studied in controlled settings. Although theoretical models of how emotions work exist, they are by themselves insufficient for an accurate simulation in computational settings. Based on one of these models, the OCC model, this dissertation proposes the simulation of emotions in multi-agent environments through the creation of a Bayesian network capable of translating stimuli generated in the environment into emotions. The use of Bayesian networks combined with the structure of the OCC model seeks to add unpredictability to the model, as well as to provide it with a computational structure. Applying the proposed model to a multi-agent system makes it possible to study the influence of emotions on the agents' actions and behaviour, enabling a comparison between the results obtained from a classical multi-agent simulation and from a multi-agent simulation containing emotions. In order to validate and evaluate its operation, a study of the application of the Bayesian network of emotions to an example multi-agent model is presented, observing the variations that the emotions cause in the agents' behaviour.
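The dissertation's Bayesian network is not specified in the abstract; a toy sketch of the underlying idea, mapping an appraised stimulus to a distribution over OCC-style emotions through a single conditional probability table with invented numbers, might be:

```python
# P(emotion | desirability of the event, who caused it).
# Structure and numbers are illustrative, not the dissertation's network;
# the full OCC model conditions on more appraisal variables than these two.
CPT = {
    ("desirable",   "self"):  {"joy": 0.7, "pride": 0.25, "distress": 0.05},
    ("desirable",   "other"): {"joy": 0.5, "gratitude": 0.45, "distress": 0.05},
    ("undesirable", "self"):  {"distress": 0.6, "shame": 0.35, "joy": 0.05},
    ("undesirable", "other"): {"distress": 0.5, "anger": 0.45, "joy": 0.05},
}

def emotion_distribution(desirability, agency):
    """Return P(emotion) for one observed stimulus (a single CPT lookup)."""
    return CPT[(desirability, agency)]

print(emotion_distribution("undesirable", "other"))
```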
Abstract:
The major function of this model is to access the UCI Wisconsin Breast Cancer dataset [1] and classify the data items into two categories, normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm that was developed specifically for anomaly detection and has been successfully applied to intrusion detection in computer security. It is believed that agent-based modelling is an ideal approach for implementing AIS, as intelligent agents can be natural representations of the immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment called AnyLogic, in which the immune entities of the DCA are represented by intelligent agents. If this model can be successfully implemented, it will make it possible to implement more complicated and adaptive AIS models in the agent-based simulation environment.
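The DCA variant implemented in AnyLogic is not detailed here; a drastically simplified Python sketch in the spirit of the deterministic DCA, with assumed signal weights and thresholds, conveys the per-cell logic:

```python
class DendriticCell:
    """Drastically simplified dendritic cell, in the spirit of the deterministic
    DCA. Weights and thresholds are illustrative assumptions."""
    def __init__(self, migration_threshold):
        self.threshold = migration_threshold
        self.csm = 0.0      # cumulative migration signal
        self.k = 0.0        # context value: > 0 suggests anomalous, <= 0 normal
        self.antigens = []  # ids of data items sampled by this cell

    def sample(self, antigen_id, danger, safe):
        """Accumulate one input item's signals; return True when the cell migrates."""
        self.antigens.append(antigen_id)
        self.csm += danger + safe
        self.k += danger - 2.0 * safe
        return self.csm >= self.threshold

# Feed a short stream of (item id, danger signal, safe signal) tuples to one cell.
cell = DendriticCell(migration_threshold=10.0)
stream = [("item1", 3.0, 0.5), ("item2", 4.0, 0.2), ("item3", 5.0, 0.1)]
for aid, danger, safe in stream:
    if cell.sample(aid, danger, safe):
        label = "anomalous" if cell.k > 0 else "normal"
        print(cell.antigens, "->", label)
```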
Abstract:
Multi-phase electrical drives are potential candidates for employment in innovative electric vehicle powertrains, in response to the request for high efficiency and reliability in this type of application. In addition to multi-phase technology, multilevel technology has also been developed in recent decades. These two technologies are somewhat complementary, since both allow increasing the power rating of the system without increasing the current and voltage ratings of the single power switches of the inverter. In this thesis, several topics concerning the inverter, the motor and the fault diagnosis of an electric vehicle powertrain are addressed. In particular, the attention is focused on multi-phase and multilevel technologies and their potential advantages with respect to traditional technologies. First of all, the mathematical models of two multi-phase machines, a five-phase induction machine and an asymmetrical six-phase permanent magnet synchronous machine, are developed using the Vector Space Decomposition approach. Then, a new modulation technique for multi-phase multilevel T-type inverters is developed, which solves the voltage balancing problem of the DC-link capacitors and ensures flexible management of the capacitor voltages. The technique is based on the proper selection of the zero-sequence component of the modulating signals. Subsequently, a diagnostic technique for detecting the state of health of the rotor magnets in a six-phase permanent magnet synchronous machine is established. The technique is based on analysing the electromotive force induced in the stator windings by the rotor magnets. Furthermore, an innovative algorithm able to extend the linear modulation region for five-phase inverters, taking advantage of the multiple degrees of freedom available in multi-phase systems, is presented. Finally, the mathematical model of an eighteen-phase squirrel cage induction motor is defined. This activity aims to develop a motor drive able to change the number of poles of the machine during operation.
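The thesis's capacitor-balancing selection of the zero-sequence component is not described in the abstract; for orientation only, the generic textbook min-max common-mode injection applied to a five-phase reference set looks like this (this is not the proposed technique):

```python
import math

def zero_sequence_injection(references):
    """Generic min-max common-mode injection: shift all modulating signals by
    -(max + min)/2 so the set stays centred in the carrier range. This is the
    standard technique, not the capacitor-balancing selection of the thesis."""
    offset = -(max(references) + min(references)) / 2.0
    return [m + offset for m in references]

# Five-phase sinusoidal references at one time instant (amplitude chosen
# slightly above 1, i.e. over-modulated without any injection).
theta = 0.7
refs = [1.05 * math.cos(theta - 2 * math.pi * k / 5) for k in range(5)]
print([round(m, 3) for m in zero_sequence_injection(refs)])
```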
Abstract:
The integration of distributed and ubiquitous intelligence has emerged over the last years as the mainspring of transformative advancements in mobile radio networks. As we approach the era of “mobile for intelligence”, next-generation wireless networks are poised to undergo significant and profound changes. Notably, the overarching challenge that lies ahead is the development and implementation of integrated communication and learning mechanisms that will enable the realization of autonomous mobile radio networks. The ultimate pursuit of eliminating the human-in-the-loop constitutes an ambitious challenge, necessitating a meticulous delineation of the fundamental characteristics that artificial intelligence (AI) should possess to effectively achieve this objective. This challenge represents a paradigm shift in the design, deployment, and operation of wireless networks, where conventional, static configurations give way to dynamic, adaptive, and AI-native systems capable of self-optimization, self-sustainment, and learning. This thesis aims to provide a comprehensive exploration of the fundamental principles and practical approaches required to create autonomous mobile radio networks that seamlessly integrate communication and learning components. The first chapter of this thesis introduces the notion of Predictive Quality of Service (PQoS) and adaptive optimization and expands upon the challenge of achieving adaptable, reliable, and robust network performance in dynamic and ever-changing environments. The subsequent chapter delves into the revolutionary role of generative AI in shaping next-generation autonomous networks. This chapter emphasizes achieving trustworthy uncertainty-aware generation processes with the use of approximate Bayesian methods and aims to show how generative AI can improve generalization while reducing data communication costs. Finally, the thesis turns to the topic of distributed learning over wireless networks. Distributed learning and its variants, including multi-agent reinforcement learning and federated learning, have the potential to meet the scalability demands of modern data-driven applications, enabling efficient and collaborative model training across dynamic scenarios while ensuring data privacy and reducing communication overhead.
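As a concrete example of one of the distributed learning schemes mentioned, here is a minimal federated averaging aggregation step; it is a plain-Python sketch, not tied to the thesis's experiments:

```python
def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average each client's model parameters,
    weighted by its local dataset size. Parameter vectors are plain lists."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[j] * s for w, s in zip(client_weights, client_sizes)) / total
            for j in range(n_params)]

# Three clients with different amounts of local data contribute one round.
weights = [[0.2, 1.0], [0.4, 0.8], [0.1, 1.2]]
sizes = [100, 300, 50]
print(fedavg(weights, sizes))
```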
Abstract:
The aim of this research is to develop a design method that integrates the contributions of the different disciplines of architecture, engineering and fabrication within a project, using as a case study a tectonic system of planar timber elements for the construction of shell surfaces to be used as temporary pavilions. The goal is pursued through the use of an agent-based system that acts as a mediator between the various objectives to be considered, in this case between aesthetic parameters, tied to the chosen geometry, and fabrication parameters. This system is applied to the study of a shell structure, which thanks to its natural stiffness integrates form and structural capacity, by means of a planar tessellation of the surface itself. The system studied is based on the circle relaxation algorithm, extended with behaviours that take into account the curvature of the surface in question and with other behaviours chosen specifically to facilitate the tessellation process via tangent plane intersection. The choice of planar elements is aimed at easier fabrication and assembly, envisaging the use of CNC machines for fabrication and an entirely dry assembly that requires no scaffolding. The proposed result is therefore a pavilion made of re-assemblable planar timber elements, with particular attention to the ease and speed of their assembly, useful for possible temporary and/or emergency structures.
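The thesis's agent behaviours act on curved surfaces; as a flat-plane illustration only, the core circle-relaxation repulsion step it builds on can be sketched as follows (all constants are arbitrary):

```python
import math, random

def relax(centres, radii, step=0.5, iterations=100):
    """Minimal 2D circle relaxation: overlapping circles push each other apart.
    The thesis works on curved surfaces with additional behaviours; this
    flat-plane sketch only illustrates the basic repulsion step."""
    for _ in range(iterations):
        for i in range(len(centres)):
            for j in range(i + 1, len(centres)):
                dx = centres[j][0] - centres[i][0]
                dy = centres[j][1] - centres[i][1]
                d = math.hypot(dx, dy) or 1e-9
                overlap = radii[i] + radii[j] - d
                if overlap > 0:
                    push = step * overlap / d          # normalised repulsion
                    centres[i][0] -= dx * push; centres[i][1] -= dy * push
                    centres[j][0] += dx * push; centres[j][1] += dy * push
    return centres

random.seed(1)
pts = [[random.random(), random.random()] for _ in range(12)]
print(relax(pts, radii=[0.15] * 12)[:3])
```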
Abstract:
Squeezed light is of interest as an example of a non-classical state of the electromagnetic field and because of its applications both in technology and in fundamental quantum physics. This review concentrates on one aspect of squeezed light, namely its application in atomic spectroscopy. The general properties, detection and application of squeezed light are first reviewed. The basic features of the main theoretical methods (master equations, quantum Langevin equations, coupled systems) used to treat squeezed light spectroscopy are then outlined. The physics of squeezed light interactions with atomic systems is dealt with first for the simpler case of two-level atoms and then for the more complex situation of multi-level atoms and multi-atom systems. Finally, the specific applications of squeezed light spectroscopy are reviewed.