25 results for Multi-Agent Systems
Abstract:
Multi-phase electrical drives are promising candidates for employment in innovative electric vehicle powertrains, in response to the demand for high efficiency and reliability in this type of application. Alongside multi-phase technology, multilevel technology has also been developed over the last decades. The two technologies are somewhat complementary, since both allow increasing the power rating of the system without increasing the current and voltage ratings of the individual power switches of the inverter. In this thesis, several topics concerning the inverter, the motor and the fault diagnosis of an electric vehicle powertrain are addressed. In particular, the attention is focused on multi-phase and multilevel technologies and their potential advantages over traditional technologies. First of all, the mathematical models of two multi-phase machines, a five-phase induction machine and an asymmetrical six-phase permanent magnet synchronous machine, are developed using the Vector Space Decomposition approach. Then, a new modulation technique for multi-phase multilevel T-type inverters is developed, which solves the voltage balancing problem of the DC-link capacitors while ensuring flexible management of the capacitor voltages. The technique is based on the proper selection of the zero-sequence component of the modulating signals. Subsequently, a diagnostic technique for detecting the state of health of the rotor magnets in a six-phase permanent magnet synchronous machine is established. The technique is based on analysing the electromotive force induced in the stator windings by the rotor magnets. Furthermore, an innovative algorithm is presented that extends the linear modulation region of five-phase inverters by taking advantage of the multiple degrees of freedom available in multi-phase systems. Finally, the mathematical model of an eighteen-phase squirrel cage induction motor is defined.
This activity aims to develop a motor drive capable of changing the number of poles of the machine during operation.
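The zero-sequence-based techniques mentioned above rest on a well-known degree of freedom of multi-phase modulators: adding the same common-mode offset to all phase references leaves the line-to-line voltages unchanged. As a minimal illustration (the classic min-max injection baseline for a five-phase leg, not the thesis's capacitor-balancing algorithm), in Python:

```python
import numpy as np

def five_phase_references(m, theta):
    """Sinusoidal modulating signals for a five-phase inverter.

    m     : modulation index (peak of the normalized phase reference)
    theta : electrical angle in radians
    """
    k = np.arange(5)
    return m * np.cos(theta - 2 * np.pi * k / 5)

def minmax_zero_sequence(refs):
    """Classic min-max injection: shift all phase references by a common
    offset that centres them in the carrier range. The common-mode shift
    cancels out of every line-to-line voltage."""
    return -(refs.max() + refs.min()) / 2.0

refs = five_phase_references(m=1.0, theta=0.3)
injected = refs + minmax_zero_sequence(refs)

# The injection leaves every line-to-line reference untouched:
assert np.allclose(np.diff(refs), np.diff(injected))
```

The thesis instead selects the zero-sequence component to steer the DC-link capacitor voltages of the T-type inverter, a constraint this toy baseline does not model.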
Abstract:
The integration of distributed and ubiquitous intelligence has emerged over the last years as the mainspring of transformative advancements in mobile radio networks. As we approach the era of “mobile for intelligence”, next-generation wireless networks are poised to undergo significant and profound changes. Notably, the overarching challenge that lies ahead is the development and implementation of integrated communication and learning mechanisms that will enable the realization of autonomous mobile radio networks. The ultimate pursuit of eliminating the human in the loop constitutes an ambitious challenge, necessitating a meticulous delineation of the fundamental characteristics that artificial intelligence (AI) should possess to achieve this objective effectively. This challenge represents a paradigm shift in the design, deployment, and operation of wireless networks, where conventional, static configurations give way to dynamic, adaptive, and AI-native systems capable of self-optimization, self-sustainment, and learning. This thesis aims to provide a comprehensive exploration of the fundamental principles and practical approaches required to create autonomous mobile radio networks that seamlessly integrate communication and learning components. The first chapter introduces the notion of Predictive Quality of Service (PQoS) and adaptive optimization, and expands upon the challenge of achieving adaptable, reliable, and robust network performance in dynamic and ever-changing environments. The subsequent chapter delves into the revolutionary role of generative AI in shaping next-generation autonomous networks. This chapter emphasizes achieving trustworthy, uncertainty-aware generation processes through approximate Bayesian methods, and aims to show how generative AI can improve generalization while reducing data communication costs. Finally, the thesis embarks on the topic of distributed learning over wireless networks.
Distributed learning and its variants, including multi-agent reinforcement learning and federated learning, have the potential to meet the scalability demands of modern data-driven applications, enabling efficient and collaborative model training across dynamic scenarios while ensuring data privacy and reducing communication overhead.
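The federated learning idea mentioned above can be sketched with plain federated averaging (FedAvg) on a toy linear model. This is an illustrative stand-in, not the thesis's algorithms; the local learner and all names below are assumptions:

```python
import numpy as np

def local_update(weights, data, targets, lr=0.1, epochs=5):
    """One client's local training: least-squares gradient descent
    on a linear model, a stand-in for an arbitrary local learner."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - targets) / len(targets)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients):
    """One FedAvg round: clients train locally on private data, and the
    server aggregates the models weighted by local dataset size. Only
    model parameters, never raw data, cross the network."""
    updates, sizes = [], []
    for data, targets in clients:
        updates.append(local_update(global_w, data, targets))
        sizes.append(len(targets))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))   # each client holds private samples

w = np.zeros(2)
for _ in range(30):
    w = federated_averaging(w, clients)
# After a few rounds the global model approaches the shared optimum.
```

The aggregation step is a single weighted average, which is why communication cost scales with model size rather than with the amount of training data.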
Abstract:
Current trends in software development point to the need to handle a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, in such a way that the resulting dynamics exhibit some degree of order, i.e., in terms of system evolution and desired equilibrium. Autonomous agents and Multiagent Systems are argued in the literature to be among the most immediate approaches for tackling this kind of challenge. Indeed, agent research seems to be converging towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, assumed to be autonomous entities pursuing a series of design objectives, Multiagent Systems introduce new notions as first-class entities, aimed above all at modelling institutional/organizational entities, in charge of normative regulation, interaction and teamwork management, as well as environmental entities, acting as resources that further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems offers a set of diverse approaches, each specifically addressing at most one of the aspects mentioned above, this work proposes a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, the work pursues the reconciliation of environments with sociality, social interaction with environment-based interaction, and environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex and situated organizations into a coherent programming approach.
Rooted in the Agents and Artifacts (A&A) meta-model, recently introduced in the contexts of both agent-oriented software engineering and programming, the thesis promotes the notion of Embodied Organizations, characterized by computational infrastructures attaining a seamless integration among agents, organizations and environmental entities.
Abstract:
Nano(bio)science and nano(bio)technology attract tremendous and growing interest in both academia and industry. They are undergoing rapid development on many fronts, such as genomics, proteomics, systems biology, and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor in the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high-molar-mass proteins and protein complexes, sub-cellular units, viruses, and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separating particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (rh). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation proceeds without surface interaction of the analyte with packing or gel media, and there is no stationary phase able to induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes is made possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, coupling FlFFF with multi-angle light scattering (MALS) detection allows for absolute molecular weight and size determination, and coupling with mass spectrometry has brought FlFFF into the field of proteomics. The potential of FlFFF couplings with multi-detection systems is discussed in the first section of this dissertation.
The second and third sections are dedicated to new methods developed for the analysis and characterization of different samples of interest in the fields of diagnostics, pharmaceutics, and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates. In particular, it focuses on FlFFF methods developed to give new insights into: a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The results discussed indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.
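Sizing in FlFFF ultimately rests on the Stokes-Einstein relation, which converts the diffusion coefficient inferred from retention into the hydrodynamic radius rh quoted above. A minimal sketch (the numerical inputs are illustrative textbook values, not data from this work):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(D, T=298.15, eta=8.9e-4):
    """Stokes-Einstein relation: r_h = k_B * T / (6 * pi * eta * D).

    D   : translational diffusion coefficient (m^2/s)
    T   : absolute temperature (K)
    eta : dynamic viscosity of the carrier (Pa*s); water at 25 C by default
    """
    return K_B * T / (6 * math.pi * eta * D)

# A protein-sized analyte with D ~ 1e-10 m^2/s falls in the low-nm range,
# i.e. well inside the 1 nm - 1 um fractionation window quoted above.
r = hydrodynamic_radius(1.0e-10)
print(f"r_h = {r * 1e9:.2f} nm")
```

Slower-diffusing (larger) species thus elute later, which is what lets an empty channel separate analytes without a stationary phase.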
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been thoroughly investigated so far; this is partly explained by the complexity of integrated allocation and scheduling, which sets tough challenges for ``pure'' combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we address Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict-detection method. The proposed approaches achieve good results on practically sized problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
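To fix ideas on the problem structure described above (precedence-connected activities competing for finite-capacity resources), here is a greedy list-scheduling sketch. It is only an illustrative heuristic baseline, not the thesis's exact hybrid CP/OR methods, and the task data are hypothetical:

```python
def list_schedule(durations, precedences, n_resources):
    """Greedy list scheduling: repeatedly start a ready task on the
    earliest-free resource, respecting precedence constraints.

    durations   : {task: duration}
    precedences : list of (before, after) pairs
    n_resources : number of identical unit-capacity resources
    """
    preds = {t: set() for t in durations}
    for a, b in precedences:
        preds[b].add(a)
    start, finish = {}, {}
    free_at = [0] * n_resources          # next free time of each resource
    while len(finish) < len(durations):
        # tasks whose predecessors have all been scheduled
        ready = [t for t in durations
                 if t not in finish and preds[t] <= finish.keys()]
        t = min(ready)                   # deterministic tie-break
        r = min(range(n_resources), key=lambda i: free_at[i])
        est = max([free_at[r]] + [finish[p] for p in preds[t]])
        start[t], finish[t] = est, est + durations[t]
        free_at[r] = finish[t]
    return start, finish

durations = {"a": 2, "b": 3, "c": 2, "d": 1}
start, finish = list_schedule(durations, [("a", "c"), ("b", "d")], n_resources=2)
makespan = max(finish.values())
```

An exact approach would instead search over all allocation/ordering decisions (with cuts or decomposition pruning the space), whereas this heuristic commits greedily and may miss the optimum.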
Abstract:
The present thesis focuses on the study of innovative Si-based materials for third-generation photovoltaics. In particular, silicon oxynitride (SiOxNy) thin films and Silicon-Rich Carbide (SRC)/Si multilayers have been characterized in view of their application in photovoltaics. SiOxNy is a promising material for applications in thin-film solar cells as well as in wafer-based silicon solar cells, such as silicon heterojunction solar cells. However, many issues relevant to the material properties have not been studied yet, such as the role of the deposition conditions and precursor gas concentrations in the optical and electronic properties of the films, and the composition and structure of the nanocrystals. The results presented in the thesis aim to clarify the effects of annealing and oxygen incorporation on the properties of nc-SiOxNy films in view of photovoltaic applications. Silicon nanocrystals (Si NCs) embedded in a dielectric matrix have been proposed as absorbers in all-Si multi-junction solar cells due to the quantum confinement capability of Si NCs, which allows a better match to the solar spectrum thanks to the size-induced tunability of the band gap. Despite the efficient solar radiation absorption capability of this structure, its charge collection and transport properties have yet to be fully demonstrated. The results presented in the thesis aim at understanding the transport mechanisms at the macroscopic and microscopic scales. Experimental results on SiOxNy thin films and SRC/Si multilayers have been obtained at the macroscopic and microscopic levels using different characterization techniques, such as Atomic Force Microscopy, Reflection and Transmission measurements, High-Resolution Transmission Electron Microscopy, Energy-Dispersive X-ray Spectroscopy and Fourier Transform Infrared Spectroscopy.
The deep knowledge and improved understanding of the basic physical properties of these rather complex, multi-phase and multi-component systems, composed of nanocrystals and amorphous phases, will contribute to improving the efficiency of Si-based solar cells.
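The size-induced band gap tunability mentioned above is often summarized, in the simplest effective-mass picture, by a gap that widens roughly as the inverse square of the nanocrystal diameter. A toy sketch (the confinement parameter used here is purely illustrative, not a value measured or fitted in this work):

```python
def confined_gap(d_nm, e_bulk=1.12, a=3.0):
    """Effective-mass picture of quantum confinement:
    E_g(d) ~ E_bulk + A / d^2, so the gap widens as the
    nanocrystal diameter d shrinks.

    e_bulk : bulk Si band gap at room temperature (eV)
    a      : confinement parameter (eV*nm^2), illustrative only
    """
    return e_bulk + a / d_nm ** 2

# Smaller Si nanocrystals -> wider gap -> absorption tunable across
# the solar spectrum, the premise of all-Si multi-junction cells.
gaps = {d: confined_gap(d) for d in (2.0, 3.0, 5.0)}
```

This monotone size dependence is what lets stacked layers of differently sized NCs target different bands of the solar spectrum.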
Abstract:
Comparative studies on constitutional design for divided societies indicate that there is no magic formula for the challenges that these societies pose, as many factors influence constitutional design. In the literature on asymmetric federalism, the introduction of constitutional asymmetries is considered a flexible instrument of ethnic conflict resolution, as it provides a mixture of the two main theoretical approaches to constitutional design for divided societies (i.e., integration and accommodation). Indeed, constitutional asymmetries are a complex and multifaceted phenomenon, as their degree of intensity can vary across constitutional systems, and there are both legal and extra-legal factors that may explain such differences. This thesis argues that constitutional asymmetries provide a flexible model of constitutional design and aims to explore the legal factors that are most likely to explain the different degrees of constitutional asymmetry in divided multi-tiered systems. To this end, the research adopts a qualitative methodology, i.e., Qualitative Comparative Analysis (QCA), which allows an understanding of whether a condition or combination of conditions (i.e., the legal factors) determines the outcome (i.e., a high, medium, or low degree of constitutional asymmetry, or constitutional symmetry). The QCA is conducted on 16 divided multi-tiered systems, and for each case the degree of constitutional asymmetry was analyzed by employing standardized indexes of subnational autonomy, allowing for a more precise measure of constitutional asymmetry than previously provided in the literature. Overall, the research confirms the complex nature of constitutional asymmetries, as the degrees of asymmetry vary substantially not only across systems but also within cases among the dimensions of subnational autonomy.
The outcome of the Qualitative Comparative Analysis also confirms a pattern of complex causality, since the different degrees of constitutional asymmetry always depend on several legal factors that, in combination, produce a low, medium, or high degree of constitutional asymmetry or, conversely, constitutional symmetry.
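The QCA procedure described above can be caricatured as building a truth table that groups cases by their configuration of binary conditions and checks whether each configuration maps consistently to one outcome. A toy crisp-set sketch (the conditions and cases below are hypothetical, not the thesis's 16-system dataset):

```python
from itertools import groupby

def truth_table(cases):
    """Crisp-set QCA sketch: group cases by their configuration of
    binary conditions and record the outcome each configuration
    produces, flagging contradictory configurations.

    cases : list of (conditions_tuple, outcome) pairs
    """
    key = lambda c: c[0]
    rows = {}
    for config, grp in groupby(sorted(cases, key=key), key=key):
        outcomes = {outcome for _, outcome in grp}
        rows[config] = outcomes.pop() if len(outcomes) == 1 else "contradictory"
    return rows

# Hypothetical binary conditions per system, e.g.
# (federal constitution, entrenched subnational autonomy):
cases = [((1, 1), "high"), ((1, 0), "medium"),
         ((0, 0), "symmetry"), ((1, 1), "high")]
table = truth_table(cases)
```

Real QCA then minimises such a table into the simplest combinations of conditions sufficient for each outcome; this sketch stops at the consistency check.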
Abstract:
Amid the trend of rising health expenditure in developed economies, changing healthcare delivery models is an important point of action for service regulators seeking to contain this trend. Such change is mostly induced by either financial incentives or regulatory tools issued by the regulators and targeting service providers and patients. This creates a tripartite interaction between service regulators, professionals, and patients that manifests a multi-principal agent relationship, in which professionals are agents to two principals: regulators and patients. This thesis is concerned with this multi-principal agent relationship in healthcare and investigates the determinants of (non-)compliance with regulatory tools in light of this tripartite relationship. In addition, the thesis provides insights into the different institutional, economic, and regulatory settings that govern the multi-principal agent relationship in healthcare in different countries. Furthermore, the thesis provides, and empirically tests, a conceptual framework of the possible determinants of (non-)compliance by physicians with regulatory tools issued by the regulator. The main findings of the thesis are as follows. First, in a multi-principal agent setting, the use of financial incentives to align the objectives of professionals and the regulator is important, but not the only solution. This finding is based on the heterogeneity of the financial incentives provided to professionals in different health markets, which does not permit a one-size-fits-all model of financial incentives to influence clinical decisions. Second, soft-law tools such as clinical practice guidelines (CPGs) are important tools to mitigate the problems of the multi-principal agent setting in health markets, as they reduce information asymmetries while preserving the autonomy of professionals. Third, CPGs are complex and heterogeneous, and so are the determinants of (non-)compliance with them.
Fourth, CPGs work, but only under certain conditions: factors such as intra-professional competition between service providers or practitioners may lead to non-compliance when CPGs are likely to reduce the professional's utility. Finally, different degrees of soft-law mandate have different effects on providers' compliance. Generally, the stronger the mandate, the stronger the compliance; however, even with a strong mandate, drivers such as intra-professional competition and the co-management of patients by different professionals affect (non-)compliance.
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy can be very useful for studying this complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. On these premises, the thesis reviews the different approaches already developed for modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment/multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. It is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation.
The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data, with spatial and temporal resolution, acquired from freely available online sources.
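Gillespie's direct method, on which the MS-BioNET simulation engine is based, can be sketched in a few lines. The following is a generic textbook SSA applied to a toy birth-death model, not the optimised many-species/many-channels engine itself:

```python
import random

def gillespie_direct(x0, reactions, t_end, seed=0):
    """Gillespie's direct method: draw an exponential waiting time from
    the total propensity, then pick the next reaction with probability
    proportional to its own propensity.

    x0        : initial copy numbers, {species: count}
    reactions : list of (propensity, change) pairs; propensity maps the
                state to a rate, change maps species to integer deltas
    """
    rng = random.Random(seed)
    x, t = dict(x0), 0.0
    trajectory = [(t, dict(x))]
    while True:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0:
            break                         # no reaction can fire
        t += rng.expovariate(a0)          # exponential waiting time
        if t > t_end:
            break
        r, acc = rng.random() * a0, 0.0   # roulette-wheel selection
        for p, (_, change) in zip(props, reactions):
            acc += p
            if r < acc:
                for species, delta in change.items():
                    x[species] += delta
                break
        trajectory.append((t, dict(x)))
    return trajectory

# Birth-death toy model: 0 -> A at rate k1, A -> 0 at rate k2*A,
# whose stationary mean copy number is k1/k2.
k1, k2 = 10.0, 1.0
traj = gillespie_direct(
    {"A": 0},
    [(lambda x: k1, {"A": +1}),
     (lambda x: k2 * x["A"], {"A": -1})],
    t_end=50.0,
)
```

Molecule transfer between compartments fits the same scheme: an inter-compartment event is just one more reaction whose change decrements a species in one compartment and increments it in another.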
Abstract:
In rural and isolated areas without cellular coverage, Satellite Communication (SatCom) is the best candidate to complement terrestrial coverage. However, the main challenge for future generations of wireless networks will be to meet the growing demand for new services while dealing with the scarcity of frequency spectrum. As a result, it is critical to investigate more efficient methods of utilizing the limited bandwidth, and resource sharing is likely the only choice. The research community's focus has recently shifted towards the interference management and exploitation paradigm to meet the increasing data traffic demands. In the Downlink (DL) and Feedspace (FS), LEO satellites with an on-board antenna array can serve numerous User Terminals (UTs) (VSATs or handhelds) on the ground under FFR schemes by using cutting-edge digital beamforming techniques. In this setup, the adoption of an effective user scheduling approach is critical, given the unusually high density of User Terminals on the ground compared to the number of available on-board satellite antennas. In this context, one possibility is to exploit clustering algorithms for scheduling in LEO MU-MIMO systems, in which several users within the same group are simultaneously served by the satellite via Space Division Multiplexing (SDM), and the different user groups are then served in different time slots via Time Division Multiplexing (TDM). This thesis addresses this problem by formulating user scheduling as an optimization problem and discusses several algorithms to solve it. In particular, focusing on the FS and the user service link (i.e., DL) of a single MB-LEO satellite operating below 6 GHz, the user scheduling problem in Frequency Division Duplex (FDD) mode is addressed. The proposed state-of-the-art scheduling approaches are based on graph theory.
The proposed solution offers high performance in terms of per-user capacity, sum-rate capacity, SINR, and spectral efficiency.
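The clustering idea described above, grouping weakly interfering users for SDM and serving the groups over successive TDM slots, can be caricatured with a greedy correlation-threshold grouping. This is a toy illustration under assumed names and thresholds, not the thesis's graph-theoretic algorithms:

```python
import numpy as np

def greedy_user_groups(H, corr_max=0.5, group_size=4):
    """Toy MU-MIMO grouping: two users may share a time slot (SDM) only
    if their channel vectors are weakly correlated; users left over are
    deferred to later slots (TDM).

    H : (n_users, n_antennas) complex channel matrix
    """
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    corr = np.abs(Hn @ Hn.conj().T)       # pairwise channel correlation
    unscheduled = list(range(H.shape[0]))
    groups = []
    while unscheduled:
        group = [unscheduled.pop(0)]
        for u in list(unscheduled):
            if len(group) == group_size:
                break
            if all(corr[u, v] < corr_max for v in group):
                group.append(u)
                unscheduled.remove(u)
        groups.append(group)
    return groups

rng = np.random.default_rng(1)
H = rng.normal(size=(12, 8)) + 1j * rng.normal(size=(12, 8))
groups = greedy_user_groups(H)
# Every user lands in exactly one slot, and co-scheduled users have
# pairwise channel correlation below the threshold.
```

A graph formulation would view users as vertices with edges weighted by correlation and seek groups via clique, coloring, or partitioning criteria; the greedy pass above is only the simplest stand-in for that idea.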