873 results for Multi-Agent Model
Abstract:
While the use of distributed intelligence has been incrementally spreading in the design of a great number of intelligent systems, the field of Artificial Intelligence in Real Time Strategy games has remained a mostly centralized environment. Although turn-based games have attained world-class AIs, the fast-paced nature of RTS games has proven to be a significant obstacle to the quality of their AIs. Chapter 1 introduces RTS games, describing their characteristics, mechanics and elements. Chapter 2 introduces Multi-Agent Systems and the use of the Beliefs-Desires-Intentions abstraction, analysing the possibilities offered by self-computing properties. Chapter 3 analyses the current state of AI development in RTS games, highlighting the gaming industry's struggle to produce valuable AIs: the focus on improving the multiplayer experience has gravely impacted the quality of the AIs, leaving them with serious flaws that impair their ability to challenge and entertain players. Chapter 4 explores different aspects of AI development for RTS, evaluating the potential strengths and weaknesses of an agent-based approach and analysing which aspects can benefit the most compared with centralized AIs. Chapter 5 describes a generic agent-based framework for RTS games in which every game entity becomes an agent, each with its own knowledge and set of goals. Different aspects of the game, such as economy, exploration and warfare, are also analysed, and agent-based solutions are outlined; the possible exploitation of self-computing properties to efficiently organize the agents' activity is then inspected. Chapter 6 presents the design and implementation of an AI for an existing open-source game in beta development: 0 A.D., a historical RTS game of ancient warfare featuring a modern graphics engine and evolved mechanics. The entities of the conceptual framework are implemented in a new agent-based platform seamlessly nested inside the existing game engine, called ABot, described at length in Chapters 7, 8 and 9. Chapters 10 and 11 cover the design and realization of a new agent-based language for defining behavioural modules for the agents in ABot, paving the way for a wider spectrum of contributors. Chapter 12 concludes the work by analysing the outcome of tests meant to evaluate strategies, realism and raw performance; conclusions and future work are drawn in Chapter 13.
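The framework described above makes every game entity an agent with its own knowledge and goals, following the Beliefs-Desires-Intentions abstraction. A minimal sketch of that perceive-deliberate-act loop is shown below; all class and field names are illustrative assumptions, not the actual ABot API.

```python
# Minimal sketch of a BDI-style game-entity agent; all names are
# illustrative assumptions, not the actual ABot API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)     # the agent's local knowledge
    desires: list = field(default_factory=list)     # candidate goals
    intentions: list = field(default_factory=list)  # goals committed to

    def perceive(self, percept: dict) -> None:
        # Update local knowledge from what this entity can observe.
        self.beliefs.update(percept)

    def deliberate(self) -> None:
        # Commit to the desires that are achievable under current beliefs.
        self.intentions = [d for d in self.desires
                           if d.get("requires", set()) <= set(self.beliefs)]

    def act(self):
        # Execute the first committed intention, if any.
        return self.intentions[0]["action"] if self.intentions else None

# Example: a worker unit that gathers food once it learns where food is.
worker = Agent(desires=[{"action": "gather", "requires": {"food_location"}}])
worker.perceive({"food_location": (12, 34)})
worker.deliberate()
assert worker.act() == "gather"
```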
Abstract:
The proposed research aims to define and test a method for an articulated, systematic reading of rural territory that, besides broadening knowledge of the territory, supports landscape and urban planning processes and the implementation of agricultural and rural development policies. A thorough review of the state of the art on the evolution of the urbanization process and its consequences in Italy and Europe, together with the framework of local territorial policies on the specific theme of rural and periurban space and a detailed analysis of the main territorial-analysis methodologies in the literature, made it possible to determine the concept underlying the research. A multi-criteria, multi-level methodology for reading rural territory was developed and tested in a GIS environment; it relies on clustering algorithms (such as the IsoCluster algorithm) and maximum likelihood classification, focusing on periurban agricultural areas. The method describes the territory by reading several of its components, such as the agro-environmental and socio-economic ones, and synthesizes them through an interpretative key developed for the purpose, the Agro-environmental Footprint (AEF), which aims to quantify the potential impact of rural areas on the urban system. In particular, the goal of this tool is to identify, within extra-urban territory, areas with homogeneous characteristics through a reading of the territory at different scales (from the territorial to the farm scale), so as to classify it and thus delineate the areas that can be classified as "periurban agricultural". The thesis presents the overall architecture of the methodology and describes its constituent analysis levels, followed by its testing and validation through a representative case study located in the Po Valley (Italy).
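The methodology pairs unsupervised clustering (IsoCluster) with maximum likelihood classification. The sketch below illustrates that two-step pattern on tabular indicator data, using k-means as a stand-in for the ISODATA-style IsoCluster tool; the data, class count, and variable names are assumptions, not the thesis implementation.

```python
# Two-step pattern: unsupervised clustering followed by maximum-likelihood
# classification. K-means stands in for the ISODATA-style IsoCluster tool;
# data and names here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # e.g. per-cell agro-environmental indicators

# Step 1: cluster the indicator space into candidate territorial classes.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: maximum-likelihood classification -- fit a Gaussian per cluster
# and assign each cell to the class with the highest log-likelihood.
def gaussian_loglik(X, mu, cov):
    d = X - mu
    inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
    return -0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet)

params = [(X[labels == k].mean(axis=0), np.cov(X[labels == k].T))
          for k in range(3)]
loglik = np.column_stack([gaussian_loglik(X, mu, cov) for mu, cov in params])
ml_class = loglik.argmax(axis=1)  # final per-cell class assignment
```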
Abstract:
This thesis presents several techniques designed to drive a swarm of robots through an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both study the interactions between entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to Artificial Intelligence, the second to Distributed Systems. Each theory, from its own point of view, exploits the emergent behaviour arising from the entities' interactive work in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization algorithm (PSO), whose features are exploited as a navigation system. Graph Theory is applied through Consensus and the agreement protocol, with the aim of keeping the units in a desired, controlled formation. This approach preserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
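As a minimal sketch of the PSO navigation idea described above, the following is a generic textbook PSO rather than the thesis implementation; the cost function, gains, and swarm size are assumptions.

```python
# Minimal textbook Particle Swarm Optimization used as a navigation rule:
# the swarm converges on the point minimizing distance to a goal area.
# Parameters and the cost function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
goal = np.array([8.0, -3.0])
cost = lambda p: np.linalg.norm(p - goal)   # distance to the goal area

n, w, c1, c2 = 20, 0.7, 1.5, 1.5            # swarm size and PSO gains
pos = rng.uniform(-10, 10, size=(n, 2))     # robot positions
vel = np.zeros_like(pos)
pbest = pos.copy()                          # personal bests
gbest = min(pos, key=cost)                  # shared (communicated) best

for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    better = np.array([cost(p) < cost(b) for p, b in zip(pos, pbest)])
    pbest[better] = pos[better]
    gbest = min([gbest, *pos], key=cost)

print(np.round(gbest, 2))  # converges near [8, -3]
```

The shared `gbest` is where the communication topology enters: each unit's discoveries only help the group if they propagate through the network, which is what motivates combining PSO with a consensus protocol.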
Abstract:
Aerosol particles influence climate by scattering and absorbing radiation and by acting as nuclei for cloud droplets and ice crystals. Moreover, aerosols strongly affect air pollution and public health. Gas-particle interactions are important processes because they influence the physical and chemical properties of aerosols, such as toxicity, reactivity, hygroscopicity and optical properties. Owing to a lack of experimental data and universal model formalisms, however, the mechanisms and kinetics of gas uptake and of the chemical transformation of organic aerosol particles remain poorly characterized. Both the chemical transformation and the adverse health effects of toxic and allergenic aerosol particles, such as soot, polycyclic aromatic hydrocarbons (PAHs) and proteins, are so far not well understood.
Kinetic flux models for aerosol surface and particle bulk chemistry were developed on the basis of the Pöschl-Rudich-Ammann formalism for gas-particle interactions. First, the kinetic double-layer surface model K2-SURF was developed, which describes the degradation of PAHs on aerosol particles in the presence of ozone, nitrogen dioxide, water vapour, and hydroxyl and nitrate radicals. Competitive adsorption and chemical transformation of the surface lead to a strongly non-linear dependence of ozone uptake on gas composition. Under atmospheric conditions, the chemical lifetime of PAHs ranges from a few minutes on soot, through several hours on organic and inorganic solids, up to days on liquid particles.
Subsequently, the kinetic multi-layer model KM-SUB was developed to describe the chemical transformation of organic aerosol particles. KM-SUB can explicitly resolve transport processes and chemical reactions at the surface and in the bulk of aerosol particles. Unlike earlier models, it requires no simplifying assumptions about steady-state conditions and radial mixing. In combination with literature data and new experimental results, KM-SUB was used to elucidate the effects of interfacial and bulk transport processes on the ozonolysis and nitration of protein macromolecules, oleic acid, and related organic compounds. The kinetic models developed in this study are intended to serve as a basis for a detailed mechanism of aerosol chemistry and for deriving simplified yet realistic parameterizations for large-scale global atmospheric and climate models.
The experiments and model calculations carried out in this study provide evidence for the formation of long-lived reactive oxygen intermediates (ROIs) in the heterogeneous reaction of ozone with aerosol particles. The chemical lifetime of these intermediates exceeds 100 s, much longer than the surface residence time of molecular O3 (~10^-9 s). The ROIs explain apparent discrepancies between earlier quantum-mechanical calculations and kinetic experiments. They play a key role in the chemical transformation and in the adverse health effects of toxic and allergenic fine-particulate components such as soot, PAHs and proteins. ROIs are presumably also involved in the decomposition of ozone on mineral dust and in the formation and growth of secondary organic aerosols. Moreover, ROIs link atmospheric and biospheric multiphase processes (chemical and biological ageing).
Organic compounds can occur as amorphous solids or in a semi-solid state, which influences the rate of heterogeneous reactions and multiphase processes in aerosols. Flow-tube experiments show that ozone uptake and the oxidative ageing of amorphous proteins are kinetically limited by bulk diffusion. Reactive gas uptake increases markedly with relative humidity, which can be explained by a decrease in viscosity caused by a phase transition of the amorphous organic matrix from a glassy to a semi-solid state (moisture-induced phase transition). The chemical lifetime of reactive compounds in organic particles can increase from seconds to days, because the diffusion rate in the semi-solid phase can drop by orders of magnitude at low temperature or low humidity. The results of this study show how semi-solid phases can influence the effects of organic aerosols on air quality, health and climate.
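The diffusion limitation described above can be illustrated with a drastically simplified surface/bulk kinetics sketch: gas adsorbs at the surface, transfers into the bulk at a rate that stands in for bulk diffusivity, and reacts there. The rate constants below are made-up illustrations, far simpler than K2-SURF or KM-SUB.

```python
# Minimal surface/bulk kinetics sketch: gas uptake followed by diffusion-
# limited transfer into the particle bulk and a bulk reaction. All rate
# constants are made-up illustrations, far simpler than K2-SURF/KM-SUB.
import numpy as np
from scipy.integrate import solve_ivp

k_ads, k_des = 1e-2, 1e-1   # adsorption / desorption of O3 at the surface
k_sb = 1e-3                 # surface-to-bulk transfer (stands in for diffusion)
k_rx = 1e-2                 # reaction with organics in the bulk

def rhs(t, y):
    s, b = y                # surface and bulk O3 (arbitrary units)
    ds = k_ads - k_des * s - k_sb * s
    db = k_sb * s - k_rx * b
    return [ds, db]

sol = solve_ivp(rhs, (0, 1e4), [0.0, 0.0])
s_ss, b_ss = sol.y[0, -1], sol.y[1, -1]
# Lowering k_sb (e.g. a glassy, low-humidity matrix) starves the bulk
# reaction: b_ss scales with k_sb, mimicking the diffusion limitation.
print(f"steady state: surface={s_ss:.3f}, bulk={b_ss:.3f}")
```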
Abstract:
Corruption has, over the last two decades, been considered one of the biggest problems within the international community, harming not only a particular state or society but the whole world. The law-and-economics discussion of corruption is mainly conducted in terms of Public Choice theory and the principal-agent model. Building on this approach, strong international initiatives by the UN, the OECD and the Council of Europe have provided various measures and tools to support and guide countries in their fight against corruption. These anti-corruption policies created a repression-prevention-transparency model for combating corruption. Applying this model, countries around the world have adopted anti-corruption strategies as part of their legal rules. Nevertheless, recent research on the effects of this move shows unimpressive results. Critics argue that "one size does not fit all", because the institutional settings of countries around the world vary. Transitional, post-socialist countries are among those that still experience corruption problems even though they follow the dominant anti-corruption trends. This group comprises countries moving from a centrally planned to an open market economy. The socialist past left traces on the institutional setting and on individuals' mentality and interrelations, particularly in the domain of public administration. Taking the idiosyncrasies of these countries into account, this thesis suggests that corruption combat in the public administration of post-socialist countries should be improved by replacing the dominant repression-prevention-transparency scheme with a new one: structure-conduct-performance. The implementation of this model rests on three regulatory pyramids: the anti-corruption, disciplinary anti-corruption and criminal anti-corruption pyramids. This approach asks public administration itself to engage in combating corruption, leaving the criminal justice system as the ultimate weapon, used only for the most harmful misdeeds.
Abstract:
The ever-growing number of applications involving sensor networks, cooperating robots and vehicle formations has made the coordination of multi-agent systems (MAS) one of the most studied problems in control theory. Numerous, often profoundly different, approaches to the problem exist. The strategy studied in this thesis is based on Consensus Theory, which is distributed and completely leader-less in nature; moreover, the information exchanged between agents is kept to a minimum. The first three chapters introduce and analyse the interaction laws (consensus protocols) that coordinate a network of dynamical systems. Chapter 4 applies the theory to the problem of circular loitering of several flying robots around a moving target. To this end, a Matlab/Simulink simulation was developed that generates reference trajectories with configurable radius and centre, starting from arbitrary initial agent positions. This simulation was used at the "Center for Research on Complex Automated Systems" (CASY-DEI, University of Bologna) to implement loitering on a network of "CrazyFlie" quadrotors. The results and the laboratory setup are reported in Chapter 5. Future work will focus on local algorithms that allow agents to avoid collisions during transients: the collision-avoidance control must be completely independent of the consensus control, so as not to distort the consensus protocol itself.
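As a minimal sketch of the consensus protocols the first chapters analyse, the following is the standard discrete-time averaging law over a communication graph; the graph, states, and gain are illustrative assumptions.

```python
# Minimal discrete-time consensus: each agent repeatedly averages with its
# neighbours on a communication graph. Graph and gain are illustrative.
import numpy as np

# Ring of 4 agents: adjacency matrix and graph Laplacian L = D - A.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([3.0, -1.0, 7.0, 2.0])   # initial agent states
eps = 0.2                             # step size, below 1/max_degree

for _ in range(100):
    x = x - eps * (L @ x)             # x_i += eps * sum_j a_ij (x_j - x_i)

print(np.round(x, 4))  # all agents converge to the average, 2.75
```

The same agreement dynamics, applied to relative positions instead of scalar states, is what keeps the units in a desired formation while the navigation layer moves the group.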
Abstract:
Today's material flow systems for mass customization or dynamic production are usually realized with manual transportation systems. However, new concepts in material flow and device control, such as function-oriented modularization and intelligent multi-agent systems, make it possible to employ changeable, automated material flow systems in dynamic production structures. Such systems need the ability to react to unplanned and unexpected events autonomously.
Abstract:
Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach to adapt a healthy brain atlas to MR images of tumor patients. In order to establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling is employed in combination with registration algorithms. In a first step, the tumor is grown in the atlas based on a new multi-scale, multi-physics model including growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and the patient image is established using nonrigid registration. The method offers opportunities in atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
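A common building block of image-based growth models that operate on the voxel grid is a reaction-diffusion (Fisher-Kolmogorov) update for tumor cell density. The sketch below is that generic building block under assumed parameters, not the paper's multi-scale, multi-physics model.

```python
# Generic Fisher-Kolmogorov reaction-diffusion step on a 2-D voxel grid:
# a common building block of image-based tumor growth models. This is an
# illustrative simplification, not the paper's multi-physics method.
import numpy as np

def grow(c, D=0.1, rho=0.05, dt=0.1, steps=500):
    """c: tumor cell density in [0, 1] on a voxel grid."""
    for _ in range(steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c)
        c = c + dt * (D * lap + rho * c * (1 - c))  # diffusion + logistic growth
    return np.clip(c, 0, 1)

c0 = np.zeros((64, 64))
c0[32, 32] = 1.0         # seed the tumor at one voxel
density = grow(c0)       # an expanding front spreads around the seed
```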
Abstract:
Introduction Commercial treatment planning systems employ a variety of dose calculation algorithms to plan and predict the dose distributions a patient receives during external beam radiation therapy. Traditionally, the Radiological Physics Center has relied on measurements to assure that institutions participating in National Cancer Institute sponsored clinical trials administer radiation in doses that are clinically comparable to those of other participating institutions. To complement the effort of the RPC, an independent dose calculation tool needs to be developed that will enable a generic method to determine patient dose distributions in three dimensions and to perform retrospective analysis of radiation delivered to patients enrolled in past clinical trials. Methods A multi-source model representing output for Varian 6 MV and 10 MV photon beams was developed and evaluated. The Monte Carlo algorithm known as the Dose Planning Method (DPM) was used to perform the dose calculations. The dose calculations were compared to measurements made in a water phantom and in anthropomorphic phantoms. Intensity-modulated radiation therapy and stereotactic body radiation therapy techniques were used with the anthropomorphic phantoms. Finally, past patient treatment plans were selected, recalculated using DPM, and contrasted against a commercial dose calculation algorithm. Results The multi-source model was validated for the Varian 6 MV and 10 MV photon beams. The benchmark evaluations demonstrated the ability of the model to accurately calculate dose for both source models. The patient calculations showed that the model was reproducible in determining dose under conditions similar to those of the benchmark tests. Conclusions A dose calculation tool that relies on a multi-source model and uses the DPM code to calculate dose was developed, validated, and benchmarked for the Varian 6 MV and 10 MV photon beams. Several patient dose distributions were contrasted against a commercial algorithm to provide a proof of principle for an application in monitoring clinical trial activity.
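Monte Carlo dose engines such as DPM track individual particle histories. As a toy illustration of the sampling at their core, the sketch below simulates photon attenuation in a water slab; the attenuation coefficient and geometry are rough assumptions, and a real engine additionally transports secondaries and scores energy deposition.

```python
# Toy Monte Carlo photon attenuation in a 1-D water slab: sample exponential
# free paths and score where photons first interact. Illustrative only --
# DPM additionally transports secondary particles and scores dose.
import numpy as np

rng = np.random.default_rng(42)
mu = 0.049       # cm^-1, rough attenuation coefficient of water for a 6 MV beam
depth = 30.0     # cm slab thickness
n = 1_000_000    # photon histories

path = rng.exponential(scale=1.0 / mu, size=n)  # sampled free path lengths
interacted = path < depth                       # photons stopping in the slab

print(f"mean depth of first interaction: {path[interacted].mean():.1f} cm")
print(f"transmitted fraction ~ {1 - interacted.mean():.3f} "
      f"(analytic: {np.exp(-mu * depth):.3f})")
```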
Abstract:
INTRODUCTION Anemia and renal impairment are important co-morbidities among patients with coronary artery disease undergoing Percutaneous Coronary Intervention (PCI). Disease progression to eventual death can be understood as the combined effect of baseline characteristics and intermediate outcomes. METHODS Using data from a prospective cohort study, we investigated clinical pathways reflecting the transitions from PCI through intermediate ischemic or hemorrhagic events to all-cause mortality in a multi-state analysis as a function of anemia (hemoglobin concentration <120 g/l and <130 g/l for women and men, respectively) and renal impairment (creatinine clearance <60 ml/min) at baseline. RESULTS Among 6029 patients undergoing PCI, anemia and renal impairment were observed in isolation or in combination in 990 (16.4%), 384 (6.4%), and 309 (5.1%) patients, respectively. The most frequent transition was from PCI to death (6.7%, 95% CI 6.1-7.3), followed by ischemic events (4.8%, 95% CI 4.3-5.4) and bleeding (3.4%, 95% CI 3.0-3.9). Among patients with both anemia and renal impairment, the risk of death was increased 4-fold compared with the reference group (HR 3.9, 95% CI 2.9-5.4) and roughly doubled compared with patients with either anemia (HR 1.7, 95% CI 1.3-2.2) or renal impairment (HR 2.1, 95% CI 1.5-2.9) alone. Hazard ratios indicated an increased risk of bleeding in all three groups compared with patients with neither anemia nor renal impairment. CONCLUSIONS Applying a multi-state model, we found evidence for a gradient of risk for the composite of bleeding, ischemic events, or death as a function of hemoglobin value and estimated glomerular filtration rate at baseline.
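A multi-state analysis models transitions between clinical states with separate hazards rather than a single time-to-event endpoint. The sketch below simulates that structure as a time-homogeneous Markov chain with made-up transition probabilities; it is not the study's fitted model.

```python
# Minimal multi-state sketch: post-PCI -> {ischemic event, bleeding} -> death,
# simulated as a time-homogeneous Markov chain. Transition probabilities are
# made-up illustrations, not the study's fitted hazards.
import numpy as np

states = ["post-PCI", "ischemic", "bleeding", "death"]
# Rows: current state; columns: assumed yearly transition probabilities.
P = np.array([[0.88, 0.05, 0.03, 0.04],   # from post-PCI
              [0.00, 0.80, 0.05, 0.15],   # from ischemic event
              [0.00, 0.05, 0.80, 0.15],   # from bleeding
              [0.00, 0.00, 0.00, 1.00]])  # death is absorbing

rng = np.random.default_rng(7)
n, years = 10_000, 5
state = np.zeros(n, dtype=int)            # everyone starts post-PCI
for _ in range(years):
    state = np.array([rng.choice(4, p=P[s]) for s in state])

for k, name in enumerate(states):
    print(f"{name:>9}: {(state == k).mean():.1%} at {years} years")
```

In the study itself, each arrow of this diagram carries its own covariate-dependent hazard, which is how baseline anemia and renal impairment can act on every transition simultaneously.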
Abstract:
Trabecular bone score (TBS) rests on the textural analysis of DXA to reflect the decay in trabecular structure characterising osteoporosis. Yet its discriminative power in fracture studies remains unexplained, as prior biomechanical tests found no correlation with vertebral strength. To verify this result, possibly due to an unrealistic set-up, and to cover a wide range of loading scenarios, data from three previous biomechanical studies using different experimental settings were used. They involved the compressive failure of 62 human lumbar vertebrae loaded 1) via intervertebral discs to mimic the in vivo situation ("full vertebra"), 2) via the classical endplate embedding ("vertebral body") or 3) via a ball joint to induce anterior wedge failure ("vertebral section"). HR-pQCT scans acquired prior to testing were used to simulate anterior-posterior DXA, from which areal bone mineral density (aBMD) and the initial slope of the variogram (ISV), the early definition of TBS, were evaluated. Finally, the relation of aBMD and ISV to failure load (Fexp) and apparent failure stress (σexp) was assessed, and their relative contribution to a multi-linear model was quantified via ANOVA. We found that, unlike aBMD, ISV did not significantly correlate with Fexp and σexp, except in the "vertebral body" case (r2 = 0.396, p = 0.028). Aside from the "vertebral section" set-up, where it explained only 6.4% of σexp (p = 0.037), it brought no significant improvement over aBMD. These results indicate that ISV, a replica of TBS, is a poor surrogate for vertebral strength no matter the testing set-up, which supports the prior observations and raises a fortiori the question of the deterministic factors underlying the statistical relationship between TBS and vertebral fracture risk.
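ISV, the early definition of TBS, is the initial slope of the grey-level variogram of the simulated DXA projection. The sketch below computes a standard variogram estimator and its log-log slope at small lags on a 2-D image; the exact TBS normalization is proprietary and not reproduced here.

```python
# Initial slope of the variogram (ISV) of a 2-D projection image: mean
# squared grey-level difference at small pixel offsets, with the slope taken
# near the origin on a log-log scale. A standard estimator; the exact TBS
# normalization is proprietary and assumed away.
import numpy as np

def variogram(img, max_lag=5):
    lags, gamma = np.arange(1, max_lag + 1), []
    for h in lags:
        dx = img[:, h:] - img[:, :-h]      # horizontal differences at lag h
        dy = img[h:, :] - img[:-h, :]      # vertical differences at lag h
        gamma.append(0.5 * np.mean(np.concatenate([dx.ravel()**2,
                                                   dy.ravel()**2])))
    return lags, np.array(gamma)

def initial_slope(img):
    lags, gamma = variogram(img)
    return np.polyfit(np.log(lags), np.log(gamma), 1)[0]

rng = np.random.default_rng(3)
smooth = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)  # correlated texture
noisy = rng.normal(size=(64, 64))                       # uncorrelated texture
# The correlated texture yields a clearly positive slope; pure noise ~ 0.
print(initial_slope(smooth), initial_slope(noisy))
```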
Abstract:
Starting off from the usual language of modal logic for multi-agent systems dealing with the agents' knowledge/belief and common knowledge/belief, we define so-called epistemic Kripke structures for intuitionistic (common) knowledge/belief. Then we introduce corresponding deductive systems and show that they are sound and complete with respect to these semantics.
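For reference, these are the standard textbook Kripke clauses for multi-agent knowledge in the classical setting; the paper's intuitionistic structures refine them (roughly, by additionally ordering the worlds, as in intuitionistic Kripke semantics).

```latex
% Standard Kripke semantics for multi-agent knowledge (classical case).
% M = (W, R_1, ..., R_n, V), with R_i the accessibility relation of agent i.
\begin{align*}
M, w &\models K_i \varphi &&\iff \forall v\,(w \mathrel{R_i} v \Rightarrow M, v \models \varphi)\\
M, w &\models E \varphi   &&\iff M, w \models K_1 \varphi \wedge \dots \wedge K_n \varphi\\
M, w &\models C \varphi   &&\iff M, w \models E^{k} \varphi \text{ for all } k \geq 1
\end{align*}
```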
Abstract:
BACKGROUND Pleomorphic rhabdomyosarcoma (RMS) is a rare sub-type of RMS. Optimal treatment remains undefined. PATIENTS AND METHODS Between 1995 and 2014, 45 patients were diagnosed and treated in three tertiary sarcoma centers (United Kingdom, Switzerland and Germany). Treatment characteristics and outcomes were analyzed. RESULTS The median age at diagnosis was 71.5 years (range=28.4-92.8 years). Median survival for those with localised (n=32, 71.1%) and metastatic disease (n=13, 28.9%) was 12.8 months (95% confidence interval=8.2-34.4) and 7.1 months (95% confidence interval=3.8-11.3), respectively. The relapse rate was 53.8% (4 local and 10 distant relapses). In total, 14 (31.1%) patients received first-line palliative chemotherapy, including multi-agent paediatric chemotherapy schedules (n=3), ifosfamide-doxorubicin (n=4) and single-agent doxorubicin (n=7). Response to chemotherapy was poor (one partial remission with vincristine-actinomycin D-cyclophosphamide and six cases of stable disease). Median progression-free survival was 2.3 (range=1.2-7.3) months. CONCLUSION Pleomorphic RMS is an aggressive neoplasm mainly affecting older patients, associated with a high relapse rate, a poor and short-lived response to standard chemotherapy, and an overall poor prognosis for both localised and metastatic disease.
Abstract:
The Canadian unemployment insurance program is designed to reflect the varying risk of joblessness across regions. Regions considered low-risk subsidize higher-risk ones. A region's risk is typically proxied by its relative unemployment rate. We use a dynamic, heterogeneous-agent model calibrated to Canada to analyze voters' preferences between a uniformly generous unemployment insurance scheme and the current system with asymmetric generosity. We find that Canada's unusual unemployment insurance system is surprisingly close to what voters would choose, despite the possibilities of moral hazard and self-insurance through asset build-up.
Abstract:
The salvage of historic shipwrecks involves a debate between profit-oriented salvagers, who wish to maximize profit, and archeologists, who wish to maximize historical value. We use a principal-agent model to derive the optimal reward scheme for salvagers, including a minimum duty of care in conducting the salvage operation. A review of U.S. and international law suggests that, while there is an emerging recognition of the need to devote greater care to salvaging those wrecks that are located, current doctrines provide inadequate incentives to locate historic wrecks in the first place.