28 results for Distributed artificial intelligence - multiagent systems
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. Such a trend is also visible in coordination models and languages, whenever a coordination infrastructure needs to cope with managing interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor for defining a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions according to local criteria, so that desired and fruitful global coordination properties of the system emerge. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of such a conceptual framework and in the consequent development of self-organising coordination services. According to this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and reactions to events occurring in a tuple space. In addition, the key role played by simulation and formal verification has been investigated, analysing how automatic verification techniques like probabilistic model checking can be exploited to formally prove the emergence of desired behaviours in coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
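To give a concrete flavour of the local-rule-to-global-property idea, here is a minimal Python sketch (not the thesis's ReSpecT code; the topology, tuple contents, and move probability are all hypothetical): each tuple space repeatedly hands a fraction of its tuples to randomly chosen neighbours, and a stable global spread of tuples over the network emerges from purely local, stochastic decisions.

    import random
    from collections import defaultdict

    # Hypothetical three-node line topology; the approach works on arbitrary networks.
    neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

    def diffusion_step(spaces, p_move=0.2):
        """One local step: each tuple moves to a random neighbour with probability p_move."""
        nxt = defaultdict(list)
        for node, tuples in spaces.items():
            for t in tuples:
                dest = random.choice(neighbours[node]) if random.random() < p_move else node
                nxt[dest].append(t)
        return nxt

    spaces = {"A": ["task"] * 100, "B": [], "C": []}   # all tuples injected at one node
    for _ in range(200):
        spaces = diffusion_step(spaces)
    print({n: len(spaces[n]) for n in neighbours})     # tuples spread in proportion to node degree

In ReSpecT, rules of this kind are instead expressed declaratively, as logic-based reactions to events occurring in the tuple spaces.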
Abstract:
Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through a finite number of states only. Designing a real-time control for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial intelligence methods are proposed to investigate the on-line computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
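As a toy illustration of casting ISA as supervised learning (a synthetic linear "robot" with invented dimensions; the thesis compares five AI methods on the real MPR kinematics), one can sample random actuator-state patterns, compute the resulting poses with a forward map, and train a learner to invert it, measuring the generalization error on held-out poses:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.multioutput import MultiOutputClassifier
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_act = 6                                        # six three-state actuators (assumed)
    states = rng.integers(0, 3, size=(5000, n_act))  # discrete actuator commands
    fwd = rng.normal(size=(n_act, 3))                # toy linear forward static map
    poses = states @ fwd + 0.01 * rng.normal(size=(5000, 3))

    X_tr, X_te, y_tr, y_te = train_test_split(poses, states, random_state=0)
    isa = MultiOutputClassifier(
        MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0))
    isa.fit(X_tr, y_tr)                              # one three-way classifier per actuator

    # Generalization: fraction of unseen poses whose full actuator pattern is recovered
    exact = (isa.predict(X_te) == y_te).all(axis=1).mean()
    print(f"exact-match accuracy on held-out poses: {exact:.2f}")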
Abstract:
This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark-ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, controlling knock and the exhaust gas temperature at the turbine inlet. The described work falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate, offline, the spark advance and the air-to-fuel ratio so as to run the engine under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software-in-the-Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented on rapid control prototyping hardware to manage combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter and that the resulting performance can match that achievable with standard in-cylinder pressure sensors.
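As an illustration of the kind of processing involved, the following sketch computes MAPO (Maximum Amplitude of Pressure Oscillations), a classic knock index, on a synthetic pressure trace; the band edges and sampling rate are typical values rather than the ones calibrated in the thesis, and the thesis's real-time correction algorithm for the piezoelectric-washer signal is more elaborate than this.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def mapo(pressure, fs, band=(5e3, 20e3)):
        """Band-pass the in-cylinder pressure trace and return the peak
        oscillation amplitude (the knock index)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        return np.max(np.abs(filtfilt(b, a, pressure)))

    fs = 100_000                                   # 100 kHz sampling (assumed)
    t = np.arange(0, 0.02, 1 / fs)
    # Synthetic trace: slow compression component plus a 7 kHz knock oscillation
    trace = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 7e3 * t)
    print(f"knock index (MAPO): {mapo(trace, fs):.3f}")   # ~0.05 for this trace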
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home, and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, the perspective shifts from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
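To make the starting point concrete, here is a minimal actor sketch in plain Python (illustrative only; simpAL is a distinct language whose agent-oriented abstractions go well beyond this): each actor owns private state and a mailbox served by a single thread, so state is never shared and asynchronous messages are the only means of interaction.

    import queue
    import threading
    import time

    class Actor:
        """Private state plus a mailbox, processed sequentially by one thread."""
        def __init__(self):
            self.mailbox = queue.Queue()
            threading.Thread(target=self._loop, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)            # asynchronous, non-blocking send

        def _loop(self):
            while True:
                self.receive(self.mailbox.get())

        def receive(self, msg):
            raise NotImplementedError

    class Counter(Actor):
        def __init__(self):
            self.count = 0                   # touched only by the actor's thread: no locks
            super().__init__()

        def receive(self, msg):
            if msg == "inc":
                self.count += 1

    c = Counter()
    for _ in range(10):
        c.send("inc")
    time.sleep(0.1)                          # let the mailbox drain in this toy example
    print(c.count)                           # -> 10

Agent-oriented programming, as the thesis argues, layers explicitly goal- and task-oriented concepts on top of this reactive scheme, so that behaviour is selected with respect to what the program is meant to achieve rather than hard-wired per message.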
Abstract:
This thesis is composed of a collection of works written in the period 2019-2022, whose aim is to find methodologies of Artificial Intelligence (AI) and Machine Learning to detect and classify patterns and rules in argumentative and legal texts. We define our approach as "hybrid", since we aimed at designing hybrid combinations of symbolic and sub-symbolic AI, involving both "top-down" structured knowledge and "bottom-up" data-driven knowledge. A first group of works is dedicated to the classification of argumentative patterns. Following the Waltonian model of argument and the related theory of Argumentation Schemes, these works focused on the detection of argumentative support and opposition, showing that argumentative evidence can be classified at fine-grained levels without resorting to highly engineered features. To show this, our methods involved not only traditional approaches such as TF-IDF, but also novel methods based on Tree Kernel algorithms. After the encouraging results of this first phase, we explored some emerging methodologies promoted by actors like Google, which have deeply changed NLP since 2018-19, namely Transfer Learning and language models. These new methodologies markedly improved our previous results, yielding our best-performing NLP tools. Using Transfer Learning, we also performed a Sequence Labelling task to recognize the exact span of argumentative components (i.e., claims and premises), thus connecting portions of natural language to portions of arguments (i.e., to the logical-inferential dimension). The last part of our work was dedicated to the employment of Transfer Learning methods for the detection of rules and deontic modalities. In this case, we explored a hybrid approach that combines structured knowledge coming from two LegalXML formats (i.e., Akoma Ntoso and LegalRuleML) with sub-symbolic knowledge coming from pre-trained (and then fine-tuned) neural architectures.
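As a flavour of the baseline approach, the sketch below trains a TF-IDF classifier to separate supporting from opposing argumentative evidence (the sentences and labels are invented stand-ins; the Tree Kernel and Transformer-based models used in the thesis are not shown):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = [
        "This study confirms the author's claim about emissions.",
        "However, the data contradict the stated premise.",
        "The experiment supports the proposed conclusion.",
        "This objection undermines the main argument.",
    ]
    labels = ["support", "oppose", "support", "oppose"]   # hypothetical annotations

    # Word and bigram TF-IDF features feeding a linear SVM
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(texts, labels)
    print(clf.predict(["New evidence backs the claim."]))

The Transfer Learning phase replaces the TF-IDF features with a pre-trained language model fine-tuned on the same labels and, for span detection, with token-level (Sequence Labelling) heads.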
Abstract:
Hematological cancers are a heterogeneous family of diseases that can be divided into leukemias, lymphomas, and myelomas, often called "liquid tumors". Since they cannot be surgically removed, chemotherapy represents the mainstay of their treatment. However, it still faces several challenges, such as drug resistance and low response rates, and the need for new anticancer agents is compelling. The drug discovery process is long, costly, and prone to high failure rates. With the rapid expansion of biological and chemical "big data", computational techniques such as machine learning tools have been increasingly employed to speed up and economize the whole process. Machine learning algorithms can create complex models that determine the biological activity of compounds against several targets, based on their chemical properties. These models are defined as multi-target Quantitative Structure-Activity Relationship (mt-QSAR) models and can be used to virtually screen small and large chemical libraries for the identification of new molecules with anticancer activity. The aim of my Ph.D. project was to employ machine learning techniques to build an mt-QSAR classification model for the prediction of cytotoxic drugs simultaneously active against 43 hematological cancer cell lines. For this purpose, I first constructed a large and diversified dataset of molecules extracted from the ChEMBL database. Then, I compared the performance of different ML classification algorithms, until Random Forest was identified as the one returning the best predictions. Finally, I used different approaches to maximize the performance of the model, which achieved an accuracy of 88% by correctly classifying 93% of inactive molecules and 72% of active molecules in a validation set. The model was further applied to the virtual screening of a small dataset of molecules tested in our laboratory, where it showed 100% accuracy, correctly classifying all molecules; this result is confirmed by our previous in vitro experiments.
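The core of such a pipeline can be sketched in a few lines (hypothetical SMILES strings and toy activity labels; the actual model was trained on a large ChEMBL-derived dataset covering 43 cell lines):

    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
    activity = [0, 1, 1, 0]                    # 1 = active (invented labels)

    def featurize(smi):
        """Encode a molecule as a Morgan fingerprint bit vector."""
        mol = Chem.MolFromSmiles(smi)
        return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048))

    X = np.stack([featurize(s) for s in smiles])
    model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, activity)

    # Virtual screening: rank new molecules by predicted probability of activity
    candidate = featurize("CCOC(=O)C").reshape(1, -1)
    print(model.predict_proba(candidate))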
Abstract:
This research investigates the use of Artificial Intelligence (AI) systems for profiling and decision-making, and the consequences this poses for the rights and freedoms of individuals. In particular, the research considers that automated decision-making systems (ADMs) are opaque, can be biased, and rely on correlation-based logic. For these reasons, ADMs do not take decisions as human beings do. Against this background, the risks for the rights of individuals, combined with the demand for transparency of algorithms, have created a debate on the need for a new 'right to explanation'. Assuming that, except in cases provided for by law, a decision made by a human does not give rise to a right to explanation, the question has been raised as to whether, if the decision is made by an algorithm, it is necessary to configure a right to explanation for the decision subject. Therefore, the research addresses a right to explanation of automated decision-making, examining the relation between today's technology and the legal concepts of explanation, reasoning, and transparency. In particular, it focuses on the existence and scope of the right to explanation, considering the legal and technical issues surrounding the use of ADMs. The research analyses the use of AI and the problems arising from it from a legal perspective, studying the EU legal framework, especially in the data protection field. In this context, part of the research focuses on transparency requirements under the GDPR (namely, Articles 13–15, 22, as well as Recital 71). The research aims to outline an interpretative framework for such a right and to make recommendations about its development, providing guidelines for an adequate explanation of automated decisions. Hence, the thesis analyses what an explanation might consist of, and the benefits of explainable AI, examined from legal and technical perspectives.
Abstract:
Distributed argumentation technology is a computational approach incorporating argumentation reasoning mechanisms within multi-agent systems. To provide formal foundations for distributed argumentation technology, in this thesis we conduct a principle-based analysis of structured argumentation as well as of abstract multi-agent and abstract bipolar argumentation. The results of the principle-based approach to these theories provide an overview of, and a guideline for, further applications of the theories. Moreover, in this thesis we explore distributed argumentation technology using distributed ledgers. We envision an Intelligent Human-input-based Blockchain Oracle (IHiBO), an artificial intelligence tool for storing argumentation reasoning. We propose a decentralized and secure architecture for conducting decision-making, addressing key concerns of trust, transparency, and immutability. We model fund management with agent argumentation in IHiBO and analyze its compliance with European fund-management legal frameworks. We illustrate how bipolar argumentation balances pros and cons of legal reasoning in a divorce case, and how the strength of arguments in natural language can be represented in structured arguments. Finally, we discuss how distributed argumentation technology can be used to advance risk management, regulatory compliance of distributed ledgers for financial securities, and dialogue techniques.
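As a taste of the abstract-argumentation machinery underlying this work, the sketch below computes the grounded extension of a framework by iterating the characteristic function to its least fixed point (a textbook construction; the thesis's structured, bipolar, and multi-agent settings build on such basics):

    def grounded_extension(args, attacks):
        """args: iterable of argument names; attacks: set of (attacker, target) pairs.
        Returns the least fixed point of F(E) = {a | E defends a}."""
        attackers = {a: {x for (x, t) in attacks if t == a} for a in args}

        def defended_by(E):
            # a is defended if every attacker of a is itself attacked by some member of E
            return {a for a in args if all(attackers[b] & E for b in attackers[a])}

        E = set()
        while True:
            nxt = defended_by(E)
            if nxt == E:
                return E
            E = nxt

    # a attacks b, b attacks c: a is unattacked and reinstates c
    print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}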
Abstract:
This thesis presents several techniques designed to drive a swarm of robots in an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS): the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm like Consensus.
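A minimal sketch of how the two ingredients can be combined (the exact update used in the thesis is not reproduced here; the coefficients, complete-graph topology, and distance-to-goal cost are all assumptions): a standard PSO velocity rule plus a consensus term pulling each robot toward the average of its graph neighbours, which damps PSO's randomness and keeps the group together.

    import numpy as np

    rng = np.random.default_rng(0)
    n, dim = 5, 2
    goal = np.array([10.0, 10.0])
    adjacency = np.ones((n, n)) - np.eye(n)          # complete graph (assumed topology)

    def cost(p):                                     # toy objective: distance to the goal
        return np.linalg.norm(p - goal, axis=-1)

    pos = rng.uniform(0, 1, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()

    for _ in range(200):
        gbest = pbest[np.argmin(cost(pbest))]
        r1, r2 = rng.uniform(size=(2, n, dim))
        # Consensus term: displacement toward the mean of graph neighbours
        consensus = adjacency @ pos / adjacency.sum(1, keepdims=True) - pos
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos) + 0.5 * consensus
        pos = pos + vel
        improved = cost(pos) < cost(pbest)
        pbest[improved] = pos[improved]

    print(np.round(pos.mean(axis=0), 2))             # swarm centroid should end near the goal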
Abstract:
Aedes albopictus is a vector able to transmit several arboviruses. Due to its high impact on human health, it is important to develop an efficient control strategy for this pest. Nowadays, control based on chemical insecticides is limited by the number of available active principles and by the occurrence of resistance. A valuable alternative to conventional control strategies is the sterile insect technique (SIT), which relies on releasing sterile males of the target insect: mating between wild females and sterile males results in no viable offspring. A crucial aspect of SIT is the production of a large number of sterile males with a low presence of females, which can bite and transmit viruses. The present thesis aimed to find, implement, and study the most reliable mechanical sex sorter and protocol to improve male productivity and reduce female contamination. In addition, I evaluated different variables and sorting protocols to enable female recovery for breeding purposes. Furthermore, I studied the creation of a hyper-protandric strain potentially able to produce only males. I also assessed the integration of artificial intelligence with an optical unit to identify sexes at the adult stage. All these applications helped to realise a mass production model in Italy with a potential weekly production of 1 million males. Moreover, I studied and pursued aerial sterile-male release in an urban environment. This technology could allow the release of males over a wide area, overcoming environmental and urban obstacles; however, the development and application of drone technologies in a metropolitan area close to airports, such as the Bologna area, must meet specific requirements. Lastly, during a Short Term Scientific Mission (AIM-COST Action) at Réunion Island, Indian Ocean, France, I studied the Boosted SIT application: coating sterile males with Pyriproxyfen may help spread the insecticide into the larval breeding sites.
Abstract:
In recent years, IoT technology has radically transformed many crucial industrial and service sectors, such as healthcare. The multi-faceted heterogeneity of the devices and of the collected information provides important opportunities to develop innovative systems and services. However, the ubiquitous presence of data silos and the poor semantic interoperability in the IoT landscape constitute a significant obstacle to this goal. Moreover, deriving actionable knowledge from the collected data requires IoT information sources to be analysed using appropriate artificial intelligence techniques, such as automated reasoning. In this thesis work, Semantic Web technologies have been investigated as an approach to address both the data integration and the reasoning aspects of modern IoT systems. In particular, the contributions presented in this thesis are the following: (1) the IoT Fitness Ontology, an OWL ontology developed to overcome the issue of data silos and enable semantic interoperability in the IoT fitness domain; (2) a Linked Open Data web portal for collecting and sharing IoT health datasets with the research community; (3) a novel methodology for embedding knowledge in rule-defined IoT smart-home scenarios; and (4) a knowledge-based IoT home automation system that supports seamless integration of heterogeneous devices and data sources.
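The flavour of the approach can be sketched with rdflib (the namespace, class, and property names below are hypothetical stand-ins, not the actual IRIs of the IoT Fitness Ontology): heterogeneous device readings become RDF triples that any SPARQL-capable consumer can query uniformly.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    FIT = Namespace("http://example.org/fitness#")   # hypothetical namespace
    g = Graph()
    g.bind("fit", FIT)

    # A heart-rate observation published by a wearable device
    g.add((FIT.watch1, RDF.type, FIT.HeartRateSensor))
    g.add((FIT.obs1, RDF.type, FIT.Observation))
    g.add((FIT.obs1, FIT.madeBy, FIT.watch1))
    g.add((FIT.obs1, FIT.heartRate, Literal(142, datatype=XSD.integer)))

    # SPARQL: find high-heart-rate observations regardless of which device produced them
    q = """
    PREFIX fit: <http://example.org/fitness#>
    SELECT ?obs ?hr WHERE {
        ?obs a fit:Observation ; fit:heartRate ?hr .
        FILTER(?hr > 120)
    }
    """
    for row in g.query(q):
        print(row.obs, row.hr)

Once the data are in this shared vocabulary, rule-based reasoning (contribution 3) can be layered on top of the same graph.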
Abstract:
This thesis investigates the legal, ethical, technical, and psychological issues of general data processing and artificial intelligence practices, and the explainability of AI systems. It consists of two main parts. In the first part, we provide a comprehensive overview of the big data processing ecosystem and of the main challenges we face today, and we evaluate the GDPR's data privacy framework in the European Union. The Trustworthy AI Framework proposed by the EU's High-Level Expert Group on AI (AI HLEG) is examined in detail, and the ethical principles for the foundation and realization of Trustworthy AI are analyzed along with the assessment list prepared by the AI HLEG. We then list the main big data challenges identified by European researchers and institutions and provide a literature review of the technical and organizational measures to address them. A quantitative analysis is conducted on the identified big data challenges and the measures to address them, leading to practical recommendations for better data processing and AI practices in the EU. In the second part, we concentrate on the explainability of AI systems. We clarify the terminology and list the goals of explainability, identify the reasons for the explainability-accuracy trade-off, and discuss how it can be addressed. We conduct a comparative cognitive analysis between human reasoning and machine-generated explanations, with the aim of understanding how explainable AI can contribute to human reasoning. We then focus on the technical and legal responses to the explainability problem: the GDPR's right-to-explanation framework and safeguards are analyzed in depth, together with their contribution to the realization of Trustworthy AI. Finally, we analyze the explanation techniques applicable at different stages of machine learning and propose several recommendations, in chronological order, for developing GDPR-compliant and Trustworthy XAI systems.
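As one concrete example of the techniques surveyed (illustrative only, on synthetic data), permutation importance is a post-hoc, model-agnostic explanation method that scores each input feature by how much predictive performance degrades when that feature is shuffled:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # How much does held-out accuracy drop when each feature is shuffled?
    result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: {imp:.3f}")

Feature-level scores of this kind are one ingredient of the explanations that a GDPR-compliant XAI system could surface to a decision subject.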
Abstract:
The integration of distributed and ubiquitous intelligence has emerged over the last years as the mainspring of transformative advancements in mobile radio networks. As we approach the era of "mobile for intelligence", next-generation wireless networks are poised to undergo significant and profound changes. Notably, the overarching challenge ahead is the development and implementation of integrated communication and learning mechanisms that enable the realization of autonomous mobile radio networks. The ultimate pursuit of eliminating the human-in-the-loop is an ambitious challenge, necessitating a meticulous delineation of the fundamental characteristics that artificial intelligence (AI) should possess to achieve this objective effectively. This challenge represents a paradigm shift in the design, deployment, and operation of wireless networks, where conventional, static configurations give way to dynamic, adaptive, AI-native systems capable of self-optimization, self-sustainment, and learning. This thesis aims to provide a comprehensive exploration of the fundamental principles and practical approaches required to create autonomous mobile radio networks that seamlessly integrate communication and learning components. The first chapter introduces the notion of Predictive Quality of Service (PQoS) and adaptive optimization, and expands upon the challenge of achieving adaptable, reliable, and robust network performance in dynamic and ever-changing environments. The subsequent chapter delves into the revolutionary role of generative AI in shaping next-generation autonomous networks; it emphasizes trustworthy, uncertainty-aware generation processes based on approximate Bayesian methods, and shows how generative AI can improve generalization while reducing data communication costs. Finally, the thesis addresses distributed learning over wireless networks. Distributed learning and its variants, including multi-agent reinforcement learning and federated learning, have the potential to meet the scalability demands of modern data-driven applications, enabling efficient and collaborative model training across dynamic scenarios while ensuring data privacy and reducing communication overhead.
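The federated learning idea mentioned above can be sketched in a few lines of numpy (a toy linear-regression FedAvg with invented data and hyperparameters; the thesis addresses far richer, wireless-aware settings): devices train locally and only model weights are averaged, so raw data never leaves the device.

    import numpy as np

    def local_sgd(w, X, y, lr=0.1, epochs=5):
        """A few local steps of linear-regression gradient descent on one device's data."""
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])
    devices = []
    for _ in range(10):                               # ten devices, each with private data
        X = rng.normal(size=(50, 2))
        devices.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

    w_global = np.zeros(2)
    for _ in range(20):                               # each round: broadcast, train locally, average
        local_models = [local_sgd(w_global.copy(), X, y) for X, y in devices]
        w_global = np.mean(local_models, axis=0)      # FedAvg: average weights, not raw data

    print(np.round(w_global, 3))                      # close to w_true without sharing any data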
Abstract:
In highly urbanized coastal lowlands, effective site characterization is crucial for assessing seismic risk. It requires a comprehensive stratigraphic analysis of the shallow subsurface, coupled with the precise assessment of the geophysical properties of buried deposits. In this context, late Quaternary paleovalley systems, shallowly buried fluvial incisions formed during the Late Pleistocene sea-level fall and filled during the Holocene sea-level rise, are crucial for understanding seismic amplification due to their soft sediment infill and sharp lithologic contrasts. In this research, we conducted high-resolution stratigraphic analyses of two regions, the Pescara and Manfredonia areas along the Adriatic coastline of Italy, to delineate the geometries and facies architecture of two paleovalley systems. Furthermore, we carried out geophysical investigations to characterize the study areas and perform seismic response analyses. We tested the microtremor-based horizontal-to-vertical spectral ratio as a mapping tool to reconstruct the buried paleovalley geometries. We evaluated the relationship between geological and geophysical data and identified the stratigraphic surfaces responsible for the observed resonances. To perform seismic response analysis of the Pescara paleovalley system, we integrated the stratigraphic framework with microtremor and shear wave velocity measurements. The seismic response analysis highlights strong seismic amplifications in frequency ranges that can interact with a wide variety of building types. Additionally, we explored the applicability of artificial intelligence in performing facies analysis from borehole images. We used a robust dataset of high-resolution digital images from continuous sediment cores of Holocene age to outline a novel, deep-learning-based approach for performing automatic semantic segmentation directly on core images, leveraging the power of convolutional neural networks. We propose an automated model to rapidly characterize sediment cores, reproducing the sedimentologist's interpretation, and providing guidance for stratigraphic correlation and subsurface reconstructions.
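The microtremor technique mentioned above can be illustrated with a toy computation of the horizontal-to-vertical spectral ratio (HVSR) on synthetic ambient noise (field processing adds windowing, smoothing, and record-rejection criteria beyond this sketch; the sampling rate and resonance frequency are invented):

    import numpy as np
    from scipy.signal import welch

    fs = 100.0                                       # Hz, assumed sampling rate
    rng = np.random.default_rng(0)
    t = np.arange(0, 600, 1 / fs)                    # ten minutes of ambient noise

    # Synthetic records: a 1.2 Hz site resonance amplified on the horizontal components
    resonance = np.sin(2 * np.pi * 1.2 * t)
    north = rng.normal(size=t.size) + 3 * resonance
    east = rng.normal(size=t.size) + 3 * resonance
    vertical = rng.normal(size=t.size)

    f, Pn = welch(north, fs, nperseg=4096)
    _, Pe = welch(east, fs, nperseg=4096)
    _, Pv = welch(vertical, fs, nperseg=4096)

    hvsr = np.sqrt((Pn + Pe) / 2 / Pv)               # quadratic-mean horizontal over vertical
    band = (f > 0.2) & (f < 20)                      # frequency band of engineering interest
    f0 = f[band][np.argmax(hvsr[band])]
    print(f"resonance frequency f0 = {f0:.2f} Hz")   # recovers ~1.2 Hz

A peak in the HVSR curve marks the fundamental resonance frequency of the soft sediment infill, which is how the ratio serves as a mapping tool for the buried paleovalley geometries.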
Abstract:
Current trends in software development are pushing the need to face a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, in such a way that the resulting dynamics exhibit some degree of order, i.e. in terms of evolution of the system and desired equilibrium. Autonomous agents and Multiagent Systems are argued in the literature to be one of the most immediate approaches for describing such challenges. Indeed, agent research seems to converge towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, which are assumed to be autonomous entities pursuing a series of design objectives, Multiagent Systems introduce further notions as first-class entities: institutional/organizational entities, in charge of normative regulation, interaction, and teamwork management, as well as environmental entities, provided as resources to further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems offers a set of diverse approaches, each specifically facing at least one of the aspects mentioned above, this work proposes a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, this work pursues the reconciliation of environments with sociality, of social interaction with environment-based interaction, and of environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex and situated organizations in a coherent programming approach. Rooted in the Agents and Artifacts (A&A) meta-model, recently introduced in the context of both agent-oriented software engineering and programming, the thesis promotes the notion of Embodied Organizations, characterized by computational infrastructures attaining a seamless integration between agents, organizations, and environmental entities.
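A plain-Python analogue of the artifact abstraction at the heart of the A&A meta-model may help fix ideas (illustrative only; real infrastructures in this line of work, such as CArtAgO, provide artifacts as first-class, distributed constructs): an artifact exposes operations that agents can invoke and observable properties whose changes are pushed to the agents observing it.

    class Artifact:
        """An environmental entity with operations and observable properties."""
        def __init__(self):
            self._props = {}
            self._observers = []               # agents currently observing this artifact

        def observe(self, agent):
            self._observers.append(agent)

        def _signal(self, prop, value):
            self._props[prop] = value
            for agent in self._observers:      # property changes are pushed to observers
                agent.notify(self, prop, value)

    class Whiteboard(Artifact):
        def post(self, msg):                   # an operation agents can invoke
            self._signal("last_message", msg)

    class Agent:
        def __init__(self, name):
            self.name = name

        def notify(self, artifact, prop, value):
            print(f"{self.name} perceives {prop} = {value!r}")

    board = Whiteboard()
    board.observe(Agent("alice"))
    board.post("meeting at 10")                # -> alice perceives last_message = 'meeting at 10'

In an Embodied Organization, organizational functionalities themselves are encapsulated in artifacts of this kind, so agents, organizations, and environment share one coherent programming model.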