23 results for Multi-Criteria Optimization
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Population well-being, sustainable resource management, poverty and environmental degradation are closely connected concepts in a world where 20% of the population consumes more than 75% of natural resources. Since the 1992 Earth Summit in Rio de Janeiro, the strong link between environmental protection and poverty reduction has been affirmed, and the importance of a healthy ecosystem for leading a dignified life has been recognised, especially in the poor rural areas of Africa, Asia and Latin America. Nature, above all for rural populations, is a precious everyday good, an essential means of subsistence and a primary source of income. Alongside this observation is the awareness that in recent decades natural ecosystems have been degrading at an alarming rate, unprecedented in the history of the human species: we consume resources faster than the Earth can regenerate them and "metabolise" our waste. Poverty is growing at the same time: 1.2 billion people currently live on less than one dollar a day, while about half of the world's population survives on less than two dollars a day (UN). The connection between poverty and the environment does not depend only on the scarcity of resources, which makes living conditions harder, but also on how those natural resources are managed. Indeed, in many countries or places where resources are not scarce, the poorest people have no access to them for political, economic and social reasons. Moreover, if the ecological footprint is compared with a recognised measure of "human development", the United Nations Human Development Index (HDI) (see Chapter 2), the comparison clearly shows that what we generally accept as "high development" is very far from the universally accepted concept of sustainable development, since the so-called "developed" countries are those with the largest ecological footprint. If "development" puts pressure on the ecosystems on whose health human well-being directly depends, then the concept of "development" must be rethought, because its consequence is not the well-being of the planet and its populations but environmental degradation and growing social inequality. On one side, then, there is "Western society", which promotes technological advancement and industrialisation for the sake of economic growth, squeezing an ever more tired and exhausted ecosystem in order to obtain benefits for only a small share of the world's population, one that follows a consumerist lifestyle, degrading the environment and submerging it in waste; on the other side there are rural farming families, the "moradores" of the favelas and of the outskirts of the great metropolises of the Global South, the landless, the migrants of the shantytowns, the "waste pickers" of the outskirts of Bombay who survive by scavenging refuse, the refugees of wars fought over the control of resources, the environmentally displaced and the eco-refugees, who live below the poverty line without access to the primary resources needed for survival.
Sustainable environmental management, generating income through the direct valorisation of the ecosystem, and access to natural resources are among the most effective tools for improving people's living conditions; they can also ensure the distribution of wealth and help build a fairer society, since ecosystem goods and services act as commons for the communities. Sound management of the environment and of resources is therefore extremely important in the fight against poverty, and here the role and responsibility of environmental engineers is crucial. Starting from an analysis of the problem of natural resource management and its close link with poverty, and revisiting the traditional concept of "development" in the light of new schools of thought, the research presented here proposes solutions and technologies for the sustainable management of natural resources aimed at the well-being of the poorest populations and of ecosystems, and also proposes an evaluation method for choosing the alternatives, solutions or technologies best suited to the intervention context. After an analysis of the "state of the Planet" (Chapter 1) and of resources, at both global and regional level, the second Chapter examines the concepts of poverty, of Developing Country, of "sustainable development" and the new schools of thought, from Degrowth theory to the concept of Human Development. From an awareness of real human needs, from the analysis of the state of the environment and of poverty and its different faces in the various countries, and from the acknowledged failure of growth economics (more visible today than ever), one can understand that the solution for defeating poverty and environmental degradation and for achieving human development is not consumerism, production, technology transfer or industrialisation, but "small is beautiful" (F. Schumacher, 1982): simple lifestyles, the protection of ecosystems and, on the technological level, "appropriate technologies". The following Chapters (Chapter 4 and Chapter 5) are devoted precisely to Appropriate Technologies. These are simple, low-cost, low-environmental-impact technologies that can easily be managed by communities and that allow the poorest populations to gain access to natural resources. Thanks to their characteristics (low cost, environmental sustainability, the involvement of all stakeholders), they are the technologies best suited to protecting the natural commons, that is, resources and the environment, fostering the participation of local communities and valuing traditional knowledge, thereby contributing to the affirmation of human rights and the safeguarding of the environment. The Appropriate Technologies examined concern water supply and water purification, including: fog harvesting; simple well-drilling methods; treadle and hand pumps for water supply; rainwater harvesting; spring catchment; and simple point-of-use water treatment methods (ceramic filter, sand filter, cloth filter, solar disinfection and solar distillation).
The fifth Chapter presents Appropriate Technologies for waste management in developing countries, describing: solutions for waste collection; solutions for waste disposal; and simple technologies for recycling solid waste. The sixth Chapter deals with International Cooperation, Decentralised Cooperation and Human Development projects. In the context of Cooperation, development projects are those whose objectives are the fight against poverty and the improvement of the living conditions of the beneficiary communities involved in the project. Within cooperation and human development projects, environmental interventions play an important role since, as already noted, poverty and the well-being of populations depend on the health of the ecosystems in which they live: fostering environmental protection, guaranteeing access to drinking water, the correct management of waste and wastewater, and a clean energy supply are all necessary to allow every individual, especially those living in so-called "developing" conditions, to lead a healthy and productive life. It is therefore important, in technical and environmental human development interventions, to choose decentralised solutions based on Appropriate Technologies, in order to help valorise the environment and protect community health. Chapters 7 and 8 examine methods for evaluating human development interventions. Another fundamental part of the engineer's role is in fact the use of a sound evaluation method for selecting among possible projects, one that takes into account all aspects (social, environmental and economic impacts) and that fits disadvantaged contexts such as those considered in this work; in other words, a method that allows a specific evaluation of human development projects and the identification of the technological and environmental project or intervention most appropriate to each specific context. After a review of the various evaluation tools, it was decided to develop a model for the evaluation of environmental interventions in Decentralised Cooperation projects based on Multi-Criteria Analysis and the Analytic Hierarchy Process. The object of this research was therefore the development of a methodology which, with the mathematical and methodological support of Multi-Criteria Analysis, makes it possible to evaluate the appropriateness and sustainability of environmental Human Development interventions carried out through Appropriate Technologies within International Cooperation and Decentralised Cooperation projects. Chapter 9 presents the methodology, the calculation model and the criteria on which the evaluation is based. The following chapters (Chapter 10 and Chapter 11) are devoted to testing the methodology on different case studies: an environmental project on waste management in the Saharawi refugee camps (Algeria); the "Programa 1 milhão de Cisternas, P1MC"; and the "Programa Uma Terra e Duas Águas, P1+2", in the Brazilian semi-arid region.
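As a rough illustration of the kind of calculation an AHP-based evaluation model performs, the following Python sketch (not taken from the thesis; the criteria and pairwise judgements are purely hypothetical) derives criterion weights from a pairwise comparison matrix and checks Saaty's consistency ratio.

```python
# Minimal AHP sketch: derive criterion weights from a pairwise comparison
# matrix (hypothetical criteria for an appropriate-technology intervention).
import numpy as np

criteria = ["environmental impact", "cost", "community manageability", "health benefit"]

# A[i, j] expresses how much criterion i is preferred over criterion j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 2.0, 1.0],
    [1/3, 1.0, 1/2, 1/4],
    [1/2, 2.0, 1.0, 1/2],
    [1.0, 4.0, 2.0, 1.0],
])

# Principal eigenvector -> priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (Saaty's random index RI for n = 4 is 0.90).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90

for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {cr:.3f}  (acceptable if < 0.1)")
```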
Abstract:
DI Diesel engines are widely used in both industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system focused on reducing soot and NOx simultaneously while maintaining reasonable fuel economy. In recent years, genetic algorithms and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of calculations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. To make the optimization process less time-consuming, CFD simulations can be used more conveniently to generate a training set for the learning process of an artificial neural network which, once correctly trained, can forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
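A minimal sketch of the "virtual optimization" idea follows; it is not the thesis code, and the analytical functions standing in for the CFD outputs (soot and NOx trends versus two scaled design parameters) are invented for illustration. A neural-network surrogate is trained on a few simulator samples and then queried inside a cheap multi-objective evolutionary loop.

```python
# Surrogate-assisted multi-objective search: a neural network replaces the expensive
# CFD runs inside the evolutionary loop. All objective functions are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def cfd_like(x):
    """Stand-in for expensive CFD outputs: [soot, NOx] vs. two design parameters
    (e.g. injection timing and EGR rate, both scaled to [0, 1])."""
    timing, egr = x[..., 0], x[..., 1]
    soot = (1 - egr) ** 2 + 0.3 * timing          # hypothetical trends
    nox = egr ** 2 + 0.3 * (1 - timing)
    return np.stack([soot, nox], axis=-1)

# 1) Training set from the "simulator".
X_train = rng.random((200, 2))
Y_train = cfd_like(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X_train, Y_train)

def pareto_mask(F):
    """True for points not dominated by any other point (minimization)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        if (np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)).any():
            mask[i] = False
    return mask

# 2) Cheap evolutionary search on the surrogate only.
pop = rng.random((60, 2))
for gen in range(50):
    parents = pop[pareto_mask(surrogate.predict(pop))]
    children = parents[rng.integers(len(parents), size=60)] + rng.normal(0, 0.05, (60, 2))
    pop = np.clip(children, 0.0, 1.0)

front = pop[pareto_mask(surrogate.predict(pop))]
print(f"{len(front)} non-dominated designs found on the surrogate")
```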
Abstract:
The design optimization of industrial products has always been an essential activity to improve product quality while reducing time-to-market and production costs. Although cost management is very complex and spans all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Tolerance-cost optimization therefore becomes the main practice for an effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches, by establishing a connection between product tolerances and the associated manufacturing costs. However, despite the growing interest in this topic, the profitable industrial application of these techniques is hampered by their complexity: the definition of a systematic framework is the key element for improving design optimization and enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. The present doctoral research aims to define and develop an integrated methodology for product/process design optimization that better exploits the new capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization has been proposed to allow the integration of DfT and DtC approaches and their direct application to the design of automotive components. Several case studies have been considered, with the final application of the integrated framework to a high-performance V12 engine assembly, achieving both functional targets and cost reduction. From a scientific point of view, the proposed methodology improves the tolerance-cost optimization of industrial components. The integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs. The case studies proved the suitability of the methodology for application in the industrial field and identified further areas for improvement and refinement.
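To make the tolerance-cost link concrete, here is a toy allocation sketch under assumptions that are not from the thesis: a reciprocal cost-tolerance model for three features and a root-sum-square stack-up requirement, solved with SciPy.

```python
# Toy tolerance-cost allocation: minimize a reciprocal cost-tolerance model
# subject to an RSS stack-up requirement (illustrative numbers only).
import numpy as np
from scipy.optimize import minimize

# Hypothetical cost model C_i(t_i) = a_i + b_i / t_i for three assembly features.
a = np.array([1.0, 1.5, 0.8])        # fixed cost terms
b = np.array([0.020, 0.035, 0.010])  # cost sensitivity to tightening the tolerance
T_assembly = 0.10                    # allowed assembly-level variation (RSS)

def cost(t):
    return np.sum(a + b / t)

constraints = [{"type": "ineq",
                "fun": lambda t: T_assembly - np.sqrt(np.sum(t ** 2))}]  # RSS <= T
bounds = [(0.005, 0.08)] * 3          # manufacturable tolerance range per feature

res = minimize(cost, x0=np.full(3, 0.03), bounds=bounds, constraints=constraints)
print("allocated tolerances:", np.round(res.x, 4))
print("total cost:", round(res.fun, 3))
```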
Abstract:
In the present work, multi-objective optimization by genetic algorithms is investigated and applied to heat transfer problems. Firstly, the work compares different reproduction processes employed by genetic algorithms, and two new promising processes are suggested. Secondly, two heat transfer problems are studied from a multi-objective point of view: wavy fins and the corrugated-wall channel. Both cases had already been studied with a single-objective optimizer; this work therefore extends the previous studies into a more comprehensive one.
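For reference, the sketch below shows one standard real-coded reproduction operator, simulated binary crossover (SBX); it is not one of the new processes proposed in the thesis, and the parent vectors are invented.

```python
# Simulated Binary Crossover (SBX), a common reproduction operator in real-coded
# multi-objective genetic algorithms.
import numpy as np

def sbx(parent1, parent2, eta=15.0, rng=None):
    """Return two children; eta controls how close the children stay to the parents."""
    rng = rng or np.random.default_rng()
    u = rng.random(parent1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1 + beta) * parent1 + (1 - beta) * parent2)
    c2 = 0.5 * ((1 - beta) * parent1 + (1 + beta) * parent2)
    return c1, c2

p1 = np.array([0.2, 0.8, 0.5])   # e.g. geometric parameters of a wavy fin, scaled to [0, 1]
p2 = np.array([0.7, 0.1, 0.4])
print(sbx(p1, p2))
```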
Abstract:
Sports biomechanics describes human movement from a performance-enhancement and an injury-reduction perspective. In this respect, the purpose of sports scientists is to support coaches and physicians with reliable information about athletes' technique. The lack of methods allowing both in-field athlete evaluation and accurate joint force estimates represents, to date, the main limitation to this purpose. The investigations illustrated in the present thesis aimed at contributing to the development of the above-mentioned methods. Two complementary approaches were adopted: a Low Resolution Approach, related to performance assessment, in which wearable inertial measurement units are exploited during different phases of sprint running, and a High Resolution Approach, related to joint kinetics estimation for injury prevention, in which subject-specific, non-rigid constraints for knee joint kinematic modelling, to be used in multi-body optimization techniques, are defined. Results obtained with the Low Resolution Approach indicated that, thanks to their portability and low cost, inertial measurement systems are a valid alternative to laboratory-based instrumentation for the in-field performance evaluation of sprint running. Using acceleration and angular velocity data, the following quantities were estimated: trunk inclination and angular velocity, instantaneous horizontal velocity and displacement of a point approximating the centre of mass, and stride and support phase durations. As regards the High Resolution Approach, results indicated that the lengths of the anterior cruciate and lateral collateral ligaments decreased, while that of the deep bundle of the medial collateral ligament increased significantly during flexion. Variations in the lengths of the posterior cruciate and the superficial bundle of the medial collateral ligament were concealed by the experimental indeterminacy. A mathematical model was provided that allows the estimation of subject-specific ligament lengths as a function of knee flexion and that can be integrated into a multi-body optimization procedure.
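A minimal sketch of one Low Resolution Approach step is given below, under assumptions not stated in the abstract: trunk inclination is obtained by numerically integrating the sagittal-plane angular velocity of a trunk-mounted inertial unit; the signal here is synthetic.

```python
# Estimating trunk inclination during sprint running by integrating the sagittal-plane
# angular velocity from a trunk IMU. The signal below is synthetic; real data would
# come from the wearable unit.
import numpy as np

fs = 200.0                                  # sampling frequency [Hz]
t = np.arange(0, 4.0, 1.0 / fs)             # 4 s of the acceleration phase
omega = np.deg2rad(20.0) * np.exp(-t)       # synthetic pitch angular velocity [rad/s]

theta0 = np.deg2rad(45.0)                   # assumed initial forward lean at the start
theta = theta0 + np.cumsum(omega) / fs      # simple numerical integration

print(f"trunk inclination: {np.rad2deg(theta[0]):.1f} deg -> {np.rad2deg(theta[-1]):.1f} deg")
```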
Abstract:
The proposed research aims to define and test a method for an articulated and systematic reading of rural territory which, besides widening knowledge of the territory, supports landscape and urban planning processes and the implementation of agricultural and rural development policies. An in-depth review of the state of the art on the evolution of urbanisation and its consequences in Italy and Europe, and of the framework of local territorial policies concerning rural and peri-urban space, together with a detailed analysis of the main territorial analysis methodologies found in the literature, made it possible to define the concept underlying the research. A multi-criteria, multi-level methodology for reading rural territory was developed and tested in a GIS environment; it relies on clustering algorithms (such as the IsoCluster algorithm) and maximum likelihood classification, focusing on peri-urban agricultural areas. The method describes the territory through several of its components, both agro-environmental and socio-economic, and synthesises them by means of an interpretative key developed for this purpose, the Agro-environmental Footprint (AEF), which is intended to quantify the potential impact of rural areas on the urban system. The specific goal of this tool is to identify, within the extra-urban territory, areas that are homogeneous in their characteristics, through a reading of the territory at different scales (from the territorial scale to the farm scale), in order to classify it and thus define the areas that can be classified as "peri-urban agricultural". The thesis presents the overall architecture of the methodology and describes the analysis levels of which it is composed, followed by its testing and validation on a representative case study located in the Po Plain (Italy).
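The following sketch illustrates the general idea of the unsupervised classification step (stacked territorial raster layers grouped into homogeneous classes); it uses k-means on synthetic layers as a stand-in for the IsoCluster / maximum-likelihood workflow described in the abstract, and the layer names are hypothetical.

```python
# Clustering stacked agro-environmental raster layers into homogeneous territorial
# classes (k-means used here for illustration; layers are synthetic).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
rows, cols = 100, 100

# Hypothetical normalized layers: crop intensity, distance from the urban edge, farm density.
layers = np.stack([rng.random((rows, cols)) for _ in range(3)], axis=-1)

X = layers.reshape(-1, 3)                    # one row per pixel
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
class_map = labels.reshape(rows, cols)       # territorial classes, e.g. candidate peri-urban areas

for c in range(4):
    print(f"class {c}: {np.mean(class_map == c) * 100:.1f}% of the territory")
```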
Abstract:
Coastal flooding poses serious threats to coastal areas around the world, causing billions of dollars in damage to property and infrastructure and threatening the lives of millions of people. Disaster management and risk assessment therefore aim at detecting vulnerabilities and capacities in order to reduce coastal flood disaster risk. In particular, non-specialized researchers, emergency management personnel and land use planners require an accurate, inexpensive method to determine and map the risk associated with storm surge events and with the long-term sea level rise driven by climate change. This study contributes to the spatial evaluation and mapping of social, economic and environmental vulnerability and risk at sub-national scale through the development of appropriate tools and methods successfully embedded in a Web-GIS Decision Support System. A new set of raster-based models was studied and developed so as to be easily implemented in the Web-GIS framework, with the purpose of quickly assessing and mapping flood hazard characteristics, damage and vulnerability in a multi-criteria approach. The Web-GIS DSS is developed using open source software and programming languages, and its main peculiarity is that it can be used by coastal managers and land use planners without requiring a strong scientific background in hydraulic engineering. The effectiveness of the system for coastal risk assessment is evaluated through its application to a real case study.
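A minimal raster overlay of the kind such a system can run is sketched below; the layers, weights and depth normalization are purely illustrative assumptions, not the thesis models.

```python
# Raster-based risk overlay sketch: risk = hazard x vulnerability, where vulnerability
# is a weighted multi-criteria combination of normalized layers (all synthetic here).
import numpy as np

rng = np.random.default_rng(2)
shape = (200, 200)

flood_depth = rng.random(shape) * 3.0                 # hazard proxy [m] from a storm-surge scenario
hazard = np.clip(flood_depth / 2.0, 0.0, 1.0)         # simple depth-based normalization

social = rng.random(shape)                            # e.g. population density (normalized)
economic = rng.random(shape)                          # e.g. asset value (normalized)
environmental = rng.random(shape)                     # e.g. protected habitats (normalized)

weights = {"social": 0.4, "economic": 0.4, "environmental": 0.2}   # assumed criteria weights
vulnerability = (weights["social"] * social
                 + weights["economic"] * economic
                 + weights["environmental"] * environmental)

risk = hazard * vulnerability                         # per-pixel relative risk index
print("cells in the highest-risk band:", int(np.sum(risk > 0.8 * risk.max())))
```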
Abstract:
Against a backdrop of a rapidly increasing worldwide population and growing energy demand, the development of renewable energy technologies has become of primary importance in the effort to reduce greenhouse gas emissions. However, it is often technically and economically infeasible to transport discontinuous renewable electricity over long distances to the shore. Another shortcoming of non-programmable renewable power is its integration into the onshore grid without affecting the dispatching process. On the other hand, the offshore oil & gas industry is striving to reduce the overall carbon footprint of onsite power generation and to limit the large expenses associated with carrying electricity from remote offshore facilities. Furthermore, the increased complexity of offshore hydrocarbon operations and their expansion towards challenging areas call for greater attention to safety and environmental protection issues related to major accident hazards. Innovative hybrid energy systems, such as Power-to-Gas (P2G), Power-to-Liquid (P2L) and Gas-to-Power (G2P) options implemented at offshore locations, would offer the opportunity to overcome challenges of both the renewable and the oil & gas sectors. This study aims at the development of systematic methodologies, based on appropriate sustainability and safety performance indicators, supporting the choice among P2G, P2L and G2P hybrid energy options for offshore green projects in the early design phases. An in-depth analysis of the different offshore hybrid strategies was performed. Literature reviews were carried out on existing methods proposing metrics to assess the sustainability of hybrid energy systems, the inherent safety of process routes at the conceptual design stage, and the environmental protection of installations from accidental oil and chemical spills. To fill the gaps, a suite of specific decision-making methodologies was developed, based on representative multi-criteria indicators addressing the technical, economic, environmental and societal aspects of the alternative options. A set of five case studies was defined, covering different offshore scenarios of concern, to provide an assessment of the effectiveness and value of the developed tools.
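The sketch below illustrates, in the simplest possible form, how multi-criteria indicators can be normalized and aggregated to screen the three option families; the indicator values and weights are invented and do not reproduce the thesis methodology.

```python
# Illustrative multi-criteria screening of offshore hybrid energy options using a
# normalized weighted sum of technical, economic, environmental and safety indicators.
import numpy as np

options = ["P2G", "P2L", "G2P"]
# Rows: options; columns: [technical readiness, levelized cost, CO2 avoided, safety performance].
# Cost is a "lower is better" criterion, the others are "higher is better".
raw = np.array([
    [0.8, 120.0, 0.7, 0.75],
    [0.6, 150.0, 0.8, 0.70],
    [0.9,  90.0, 0.5, 0.80],
])
benefit = np.array([True, False, True, True])
weights = np.array([0.25, 0.30, 0.25, 0.20])   # assumed criteria weights, sum to 1

# Min-max normalization, flipping cost-type criteria so that higher is always better.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights
for name, s in sorted(zip(options, scores), key=lambda p: -p[1]):
    print(f"{name}: {s:.3f}")
```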
Abstract:
The exploitation of hydrocarbon reservoirs by the oil and gas industry represents one of the most relevant and concerning anthropic stressors in various marine areas worldwide, and the presence of extractive structures can have severe consequences for the marine environment. Environmental monitoring surveys are carried out to follow the effects and impacts of offshore energy facilities. Macrobenthic communities inhabiting the soft bottom are a key component of these surveys, given their great responsiveness to natural and anthropic changes. A comprehensive collection of monitoring data from four Italian seas was used to investigate the distributional patterns of macrozoobenthos assemblages, confirming high spatial variability in relation to the environmental variables analyzed. Since these datasets can be a powerful tool for industrial and scientific research, the steps and standardized procedures needed to obtain robust and comparable high-quality data were investigated and outlined. In recent years, the decommissioning of old platforms has become a growing topic in this sector, involving many actors in the various decision-making processes. A Multi-Criteria Decision Analysis, specific to the Adriatic Sea, was developed to investigate the environmental and socio-economic impacts of decommissioning a gas platform and to select the best decommissioning scenario. Among the scenarios studied, total removal turned out to be the most impactful, affecting all the faunal components considered in the study. Currently, European nations are rapidly expanding energy production from offshore wind farms. A comparative study of the methodologies used in five North Sea countries was carried out to identify the best approaches for monitoring the effects of wind farms on benthic communities. In the foreseeable future, collaboration between industry, the scientific community and national and international policy makers will be needed to gain knowledge of the effects of these industrial activities on the ecological status of the ecosystems.
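As one concrete way a Multi-Criteria Decision Analysis can rank decommissioning scenarios, the sketch below uses a TOPSIS-style closeness-to-ideal ranking; the scenarios, criteria, scores and weights are invented and are not the thesis data or method.

```python
# Illustrative TOPSIS-style ranking of decommissioning scenarios (all values invented).
import numpy as np

scenarios = ["total removal", "partial removal", "leave in place as artificial reef"]
# Columns: [benthic impact, cost, fishery benefit, navigation safety]; impact and cost
# are "lower is better", the other two "higher is better".
D = np.array([
    [0.9, 0.9, 0.2, 0.9],
    [0.5, 0.6, 0.5, 0.7],
    [0.2, 0.2, 0.8, 0.4],
], dtype=float)
benefit = np.array([False, False, True, True])
w = np.array([0.35, 0.25, 0.25, 0.15])       # assumed criteria weights

R = D / np.linalg.norm(D, axis=0)            # vector-normalized decision matrix
V = R * w                                    # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)     # 1 = closest to the ideal solution

for name, c in sorted(zip(scenarios, closeness), key=lambda p: -p[1]):
    print(f"{name}: {c:.3f}")
```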
Abstract:
Riding the wave of recent groundbreaking achievements, artificial intelligence (AI) is currently the buzzword on everybody's lips, and, by allowing algorithms to learn from historical data, Machine Learning (ML) has emerged as its pinnacle. The multitude of algorithms, each with unique strengths and weaknesses, highlights the absence of a universal solution and poses a challenging optimization problem. In response, automated machine learning (AutoML) navigates vast search spaces under tight time constraints. By lowering entry barriers, AutoML promises the democratization of AI, yet it still faces several challenges. In data-centric AI, the discipline of systematically engineering the data used to build an AI system, the challenge of configuring data pipelines is rather simple. We devise a methodology for building effective data pre-processing pipelines in supervised learning, as well as a data-centric AutoML solution for unsupervised learning. In human-centric AI, many current AutoML tools were built not around the user but around algorithmic ideas, raising ethical and social bias concerns. We contribute by deploying AutoML tools that aim at complementing, instead of replacing, human intelligence. In particular, we provide solutions for single-objective and multi-objective optimization and showcase the challenges and potential of novel interfaces featuring large language models. Finally, there are application areas that rely on numerical simulators, often related to earth observation; these tend to be particularly high-impact and address important challenges such as climate change and crop life cycles. We commit to coupling these physical simulators with (Auto)ML solutions towards a physics-aware AI. Specifically, in precision farming, we design a smart irrigation platform that allows real-time monitoring of soil moisture, predicts future moisture values, and estimates water demand to schedule irrigation.
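A tiny sketch of the AutoML idea follows: random search over pre-processing and model choices, each candidate pipeline scored by cross-validation. It uses scikit-learn on a built-in toy dataset and is only a stand-in for the far richer search spaces and budgets discussed in the abstract.

```python
# Minimal AutoML-style random search over data pre-processing and model choices.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
random.seed(0)

scalers = [StandardScaler(), MinMaxScaler(), "passthrough"]
models = [LogisticRegression(max_iter=5000), RandomForestClassifier(n_estimators=100)]

best = (None, 0.0)
for trial in range(10):                       # the time budget, expressed as a trial budget
    pipe = Pipeline([
        ("impute", SimpleImputer(strategy=random.choice(["mean", "median"]))),
        ("scale", random.choice(scalers)),
        ("model", random.choice(models)),
    ])
    score = cross_val_score(pipe, X, y, cv=3).mean()
    if score > best[1]:
        best = (pipe, score)

print(f"best cross-validated accuracy: {best[1]:.3f}")
print(best[0])
```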
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial advance with respect to the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous, unreliable networks. This framework was then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach developed for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
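To illustrate the primal decomposition idea for constraint-coupled problems, the toy sketch below (not the thesis algorithm; problem data are invented) lets each agent solve a trivial local problem given its resource allocation, while a constant-step subgradient update built from the local multipliers refines the allocation so that the coupling budget is respected.

```python
# Toy primal decomposition: N agents maximize p_i * x_i with x_i in [0, u_i], coupled by
# sum_i a_i * x_i <= b. A master allocation y (sum(y) = b) is refined with projected
# subgradient steps using the local constraint multipliers.
import numpy as np

p = np.array([4.0, 2.0, 6.0])       # local utilities (hypothetical)
a = np.array([1.0, 1.0, 2.0])       # local resource usage per unit of decision
u = np.array([1.0, 1.0, 1.0])       # local bounds
b = 3.5                             # shared resource budget

y = np.full(3, b / 3)               # initial allocation, sum(y) = b
alpha = 0.05                        # constant step size -> approximate convergence

for k in range(300):
    # Local step: each agent solves max p_i*x_i s.t. a_i*x_i <= y_i, 0 <= x_i <= u_i.
    x = np.minimum(u, np.maximum(y, 0.0) / a)
    # Multiplier of the local resource constraint (p_i/a_i when the allocation is binding).
    lam = np.where(y / a < u, p / a, 0.0)
    # Master step: shift allocation towards agents with above-average multipliers,
    # then re-project onto the hyperplane sum(y) = b (guards against numerical drift).
    y = y + alpha * (lam - lam.mean())
    y += (b - y.sum()) / len(y)

x = np.minimum(u, np.maximum(y, 0.0) / a)
print("allocation:", np.round(y, 3), " decisions:", np.round(x, 3))
print("coupling usage:", round(float(np.sum(a * x)), 3), "<= b =", b)
```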
Abstract:
Water distribution network optimization is a challenging problem due to the size and complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to overcome the discrete nature of the variables and the nonlinearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity or linearity of the problem functions because they are linked to an external hydraulic simulator that solves the mass continuity and energy conservation equations of the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature: starting from an initial random set of solutions, called a population, it evolves them towards a front of solutions that minimize, separately and simultaneously, all the objectives. This can be very useful in practical problems where multiple and conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analyzed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve the results by modifying the mathematical definition of the objective functions and the survival criterion, by inserting good solutions created by a Cellular Automaton, and by using rules created by a classifier algorithm (C4.5). This part was tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even if steering the search in this way constrains the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results. Subsequently, thanks to CINECA's support, a version of NSGA-II was implemented in the C language and parallelized: the results on global parallelization show the speed-up obtained, while the results on island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling were carried out. In this case, good results were found for a small network, while the solutions of a large problem were affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
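The core NSGA-II ingredient, sorting candidate solutions into non-dominated fronts, is sketched below with a toy (cost, pressure deficit) objective pair; in the thesis these objectives would come from the external hydraulic simulator, and the formulas here are invented for illustration.

```python
# Non-dominated sorting, the defining step of NSGA-II, applied to toy candidate
# pipe-diameter vectors with invented cost and pressure-deficit objectives.
import numpy as np

def non_dominated_fronts(F):
    """F: (n, m) array of objective values to minimize. Returns a list of index arrays."""
    remaining = np.arange(len(F))
    fronts = []
    while remaining.size:
        G = F[remaining]
        nondominated = []
        for i in range(len(G)):
            dominated = np.any(np.all(G <= G[i], axis=1) & np.any(G < G[i], axis=1))
            if not dominated:
                nondominated.append(i)
        fronts.append(remaining[nondominated])
        remaining = np.delete(remaining, nondominated)
    return fronts

rng = np.random.default_rng(3)
diameters = rng.choice([100, 150, 200, 250, 300], size=(30, 8))   # candidate networks [mm]
cost = diameters.sum(axis=1) * 0.01                               # toy cost objective
deficit = 1e6 / (diameters ** 2).sum(axis=1)                      # toy pressure-deficit objective

fronts = non_dominated_fronts(np.column_stack([cost, deficit]))
print("Pareto-optimal candidates:", fronts[0])
```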
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra with the spectra obtained from our ground-based observations. Because of the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan"), in order to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction to aperture photometry, light curve production and analysis.
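The central operation of such an aperture-photometry stage is sketched below on a synthetic frame (Gaussian star plus flat sky); the aperture and annulus radii are arbitrary choices for illustration and the code is not the pipeline itself.

```python
# Aperture photometry sketch: sum the counts inside a circular aperture around the star
# and subtract the median sky estimated in a surrounding annulus (synthetic frame).
import numpy as np

rng = np.random.default_rng(4)
size, x0, y0 = 64, 31.5, 31.5                    # frame size and (known) star centroid
yy, xx = np.mgrid[0:size, 0:size]
r = np.hypot(xx - x0, yy - y0)

frame = rng.normal(100.0, 5.0, (size, size))     # sky background + noise
frame += 5000.0 * np.exp(-0.5 * (r / 2.0) ** 2)  # synthetic stellar profile

aperture = r <= 6.0                              # aperture radius in pixels
annulus = (r >= 10.0) & (r <= 15.0)              # sky annulus

sky_per_pixel = np.median(frame[annulus])
flux = frame[aperture].sum() - sky_per_pixel * aperture.sum()
mag_instr = -2.5 * np.log10(flux)                # instrumental magnitude for the light curve

print(f"net flux: {flux:.1f} counts, instrumental magnitude: {mag_instr:.3f}")
```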
Abstract:
This thesis presents different techniques designed to drive a swarm of robots in an a priori unknown environment, in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories study the interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence is applied, through the Particle Swarm Optimization (PSO) algorithm, by exploiting its features as a navigation system. Graph Theory is applied by exploiting Consensus and the agreement protocol, with the aim of keeping the units in a desired and controlled formation. This approach was followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
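A combined sketch of the two ingredients is given below: a PSO-style velocity update drives the agents towards a goal area while a consensus term pulls each agent towards the swarm average, keeping the formation compact. The environment, goal, gains and complete-graph topology are assumptions for illustration, not the thesis controllers.

```python
# PSO navigation combined with a consensus (agreement) term for formation keeping.
import numpy as np

rng = np.random.default_rng(5)
n_agents, goal = 6, np.array([10.0, 10.0])

pos = rng.random((n_agents, 2)) * 2.0            # starting area near the origin
vel = np.zeros((n_agents, 2))
best_pos = pos.copy()                            # personal bests (cost = distance to goal)

w, c1, c2, k_cons, dt = 0.7, 1.2, 1.2, 0.3, 0.1  # assumed PSO and consensus gains

def cost(p):
    return np.linalg.norm(p - goal, axis=-1)

for step in range(400):
    # PSO velocity update: inertia + cognitive + social terms.
    gbest = best_pos[np.argmin(cost(best_pos))]
    r1, r2 = rng.random((n_agents, 1)), rng.random((n_agents, 1))
    vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (gbest - pos)
    # Consensus term: each agent is pulled towards the swarm average (complete graph),
    # which keeps the group in a compact, controlled formation.
    vel += k_cons * (pos.mean(axis=0) - pos)
    pos = pos + dt * vel
    improved = cost(pos) < cost(best_pos)
    best_pos[improved] = pos[improved]

print("mean distance to goal:", round(float(cost(pos).mean()), 3))
print("swarm spread:", round(float(np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()), 3))
```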