22 results for Multiple Objective Optimization
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
DI Diesel engines are widely used in both industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system focused on the simultaneous reduction of soot and NOx while maintaining reasonable fuel economy. In recent years, genetic algorithms and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of calculations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. To make the optimization process less time-consuming, CFD simulations can more conveniently be used to generate a training set for the learning process of an artificial neural network which, once correctly trained, can forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
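A minimal sketch of the surrogate-based ("virtual") optimization loop described above: a small neural network is trained on samples standing in for CFD results and is then used as the fitness evaluator inside a plain genetic loop. The regressor, the toy two-output objective and the weighted-sum scalarization are illustrative assumptions, not the methodology of the thesis.

# Minimal sketch of GA + neural-network surrogate ("virtual") optimization.
# The training data would come from CFD runs; here it is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical design parameters (e.g. injection timing, EGR rate), scaled to [0, 1].
X_train = rng.random((200, 2))
# Placeholder "engine outputs" (e.g. soot, NOx) standing in for CFD results.
y_train = np.column_stack([
    (X_train[:, 0] - 0.3) ** 2 + 0.1 * X_train[:, 1],
    (X_train[:, 1] - 0.7) ** 2 + 0.1 * X_train[:, 0],
])

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

def fitness(pop):
    """Weighted-sum scalarization of the two predicted objectives (a simplification)."""
    return surrogate.predict(pop) @ np.array([0.5, 0.5])

pop = rng.random((40, 2))
for _ in range(50):                      # plain GA: tournament selection + mutation
    f = fitness(pop)
    idx = rng.integers(0, len(pop), (len(pop), 2))
    winners = np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])
    pop = np.clip(pop[winners] + rng.normal(0, 0.05, pop.shape), 0, 1)

best = pop[np.argmin(fitness(pop))]
print("best design (surrogate optimum):", best)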
Abstract:
In the present work, multi-objective optimization by genetic algorithms is investigated and applied to heat transfer problems. Firstly, the work compares different reproduction processes employed by genetic algorithms, and two new promising processes are proposed. Secondly, two heat transfer problems are studied from the multi-objective point of view: the wavy fins and the corrugated wall channel. Both cases had already been studied with a single-objective optimizer; this work therefore extends the previous works into a more comprehensive study.
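The abstract does not specify which reproduction processes are compared; purely as an illustration of what real-coded GA reproduction operators look like, the sketch below implements two widely used crossover operators (BLX-alpha and simulated binary crossover) with assumed parameter values.

# Two common real-coded GA reproduction (crossover) operators, for illustration only.
import numpy as np

rng = np.random.default_rng(1)

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-alpha: the child is sampled uniformly in an interval enlarged around the parents."""
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

def simulated_binary_crossover(p1, p2, eta=15.0):
    """SBX: the spread factor beta follows a polynomial distribution controlled by eta."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    return 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)

p1, p2 = np.array([0.2, 0.8]), np.array([0.6, 0.4])
print("BLX-alpha child:", blend_crossover(p1, p2))
print("SBX child:      ", simulated_binary_crossover(p1, p2))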
Abstract:
Riding the wave of recent groundbreaking achievements, artificial intelligence (AI) is currently the buzzword on everybody's lips and, by allowing algorithms to learn from historical data, Machine Learning (ML) has emerged as its pinnacle. The multitude of algorithms, each with unique strengths and weaknesses, highlights the absence of a universal solution and poses a challenging optimization problem. In response, automated machine learning (AutoML) navigates vast search spaces within minimal time constraints. By lowering entry barriers, AutoML has emerged as a promise of the democratization of AI, yet it still faces some challenges. In data-centric AI, the discipline of systematically engineering the data used to build an AI system, the challenge of configuring data pipelines is rather simple. We devise a methodology for building effective data pre-processing pipelines in supervised learning as well as a data-centric AutoML solution for unsupervised learning. In human-centric AI, many current AutoML tools were not built around the user but rather around algorithmic ideas, raising ethical and social bias concerns. We contribute by deploying AutoML tools that aim at complementing, instead of replacing, human intelligence. In particular, we provide solutions for single-objective and multi-objective optimization and showcase the challenges and potential of novel interfaces featuring large language models. Finally, there are application areas that rely on numerical simulators, often related to earth observation; these tend to be particularly high-impact and address important challenges such as climate change and crop life cycles. We commit to coupling these physical simulators with (Auto)ML solutions towards a physics-aware AI. Specifically, in precision farming, we design a smart irrigation platform that allows real-time monitoring of soil moisture, predicts future moisture values, and estimates water demand to schedule irrigation.
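As a toy illustration of searching over data pre-processing pipeline configurations, the sketch below runs a small random search over scikit-learn pipelines and keeps the best cross-validated configuration; the search space, dataset and budget are assumptions and do not reproduce the AutoML solutions developed in the thesis.

# Toy data-centric AutoML: random search over pre-processing pipeline configurations.
import random
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_iris(return_X_y=True)
random.seed(0)

search_space = {
    "scaler": [StandardScaler(), MinMaxScaler(), "passthrough"],
    "n_components": [2, 3, 4],
}

best_score, best_cfg = -1.0, None
for _ in range(20):                                   # budget of 20 random trials
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    pipe = Pipeline([
        ("scaler", cfg["scaler"]),
        ("reducer", PCA(n_components=cfg["n_components"])),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    score = cross_val_score(pipe, X, y, cv=5).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg

print("best pipeline configuration:", best_cfg, "accuracy:", round(best_score, 3))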
Abstract:
Water distribution network optimization is a challenging problem due to the dimension and complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to overcome the discrete nature of the variables and the non-linearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity and linearity of the problem functions because they are linked to an external hydraulic simulator that solves the equations of mass continuity and energy conservation of the network. In this work, an NSGA-II (Non-dominated Sorting Genetic Algorithm) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature. Starting from an initial random set of solutions, called the population, it evolves them towards a front of solutions that minimize all the objectives separately and simultaneously. This can be very useful in practical problems where multiple and conflicting goals are common. Usually, one of the main drawbacks of these algorithms is that they are time-consuming: being a stochastic search, many solutions must be analysed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve results by modifying the mathematical definition of the objective functions and the survival criterion, inserting good solutions created by a Cellular Automaton and using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even if orienting the search can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results. Subsequently, thanks to the help of CINECA, a version of NSGA-II has been implemented in C and parallelized: results on the global parallelization show the speed-up, while results on the island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling have been carried out. In this case, good results are found for a small network, while the solutions of a big problem are affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
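The ranking of solutions into non-dominated fronts is the core of NSGA-II; the snippet below is a minimal, generic extraction of the first Pareto front for minimization problems, not the Exeter MATLAB code or the parallel C implementation mentioned above, and the example objective values are made up.

# Minimal Pareto-front extraction for minimization, the building block of NSGA-II ranking.
import numpy as np

def pareto_front(objectives):
    """Return indices of the non-dominated rows of an (n_solutions, n_objectives) array."""
    n = len(objectives)
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        # Solution i is dominated if some j is no worse in every objective
        # and strictly better in at least one.
        no_worse = np.all(objectives <= objectives[i], axis=1)
        strictly_better = np.any(objectives < objectives[i], axis=1)
        dominated[i] = np.any(no_worse & strictly_better)
    return np.where(~dominated)[0]

# Example: cost vs. pressure-deficit values for six candidate network designs (made up).
objs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 1.0], [2.5, 2.5], [4.0, 4.0], [1.5, 4.0]])
print("non-dominated designs:", pareto_front(objs))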
Abstract:
The main objective of this research is to demonstrate that the Clean Development Mechanism (CDM), an instrument created under a global international treaty, can achieve multiple objectives beyond those for which it was established. As such, while already being a powerful tool in the global fight against climate change, the CDM can also be successful if applied to sectors not contemplated before. In particular, this research aimed at demonstrating that a wider utilization of the CDM in the tourism sector can represent an innovative way to foster sustainable tourism and generate additional benefits. The CDM was created by Article 12 of the Kyoto Protocol of the United Nations Framework Convention on Climate Change (UNFCCC) and represents an innovative tool to reduce greenhouse gas emissions through the implementation of mitigation activities in developing countries which generate certified emission reductions (CERs), each of them equivalent to one ton of CO2 not emitted into the atmosphere. These credits can be used for compliance purposes by industrialized countries in achieving their reduction targets. The logical path of this research begins with an analysis of the scientific evidence of climate change and its impacts on different economic sectors, including tourism, and continues with a focus on the linkages between climate and the tourism sector. It then analyses the international responses to the issue of climate change and the specific activities in the international arena addressing climate change and the tourism sector. The concluding part of the work presents the objectives and achievements of the CDM and its links to the tourism sector by considering case studies of existing projects which demonstrate that the underlying question can be answered positively. New opportunities for the tourism sector are available.
Abstract:
The objective of the Ph.D. thesis is to lay the basis of an all-embracing link analysis procedure that may form a general reference scheme for the future state of the art of RF/microwave link design: it is basically meant as a circuit-level simulation of an entire radio link, with – generally multiple – transmitting and receiving antennas examined by EM analysis. In this way the influence of mutual couplings on the frequency-dependent near-field and far-field performance of each element is fully accounted for. The set of transmitters is treated as a unique nonlinear system loaded by the multiport antenna, and is analyzed by nonlinear circuit techniques. In order to establish the connection between transmitters and receivers, the far fields incident onto the receivers are evaluated by EM analysis and are combined by extending an available Ray Tracing technique to the link study. EM theory is used to describe the receiving array as a linear active multiport network. Link performance in terms of bit error rate (BER) is eventually verified a posteriori by a fast system-level algorithm. In order to validate the proposed approach, four heterogeneous application contexts are considered. A complete MIMO link design in a realistic propagation scenario constitutes the reference case study. The second regards the design, optimization and testing of various typologies of rectennas for power generation from common RF sources. The remaining two concern the design and implementation of two typologies of radio identification tags, at X-band and V-band respectively. In all cases, an exhaustive nonlinear/electromagnetic co-simulation and co-design is demonstrated to be essential for accurate system performance prediction.
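The a-posteriori BER verification mentioned above can be illustrated, in a much simplified form, by a Monte Carlo estimate for a Gray-mapped QPSK link over an AWGN channel; the modulation, channel and parameters are generic assumptions and do not reproduce the fast system-level algorithm of the thesis.

# Generic Monte Carlo BER estimate for Gray-mapped QPSK over AWGN (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def qpsk_ber(ebn0_db, n_bits=200_000):
    bits = rng.integers(0, 2, n_bits)
    # Gray-mapped QPSK: one bit on I, one bit on Q, unit symbol energy.
    symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    noise_std = np.sqrt(1 / (4 * ebn0))          # Es = 1 = 2*Eb, noise N0/2 per dimension
    rx = symbols + noise_std * (rng.standard_normal(symbols.shape)
                                + 1j * rng.standard_normal(symbols.shape))
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (rx.real > 0).astype(int)
    bits_hat[1::2] = (rx.imag > 0).astype(int)
    return np.mean(bits_hat != bits)

for snr in (0, 4, 8):
    print(f"Eb/N0 = {snr} dB -> BER approx {qpsk_ber(snr):.4f}")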
Abstract:
The objective of this study is to provide empirical evidence on how ownership structure and owners' identity affect performance in the banking industry, using a panel of Indonesian banks over the period 2000–2009. Firstly, we analysed the impact of the presence of multiple blockholders on bank ownership structure and performance. Building on multiple agency and principal-principal theories, we investigated whether the presence of, and the dispersion of shares across, blockholders with different identities (i.e. central and regional government, families, foreign banks and financial institutions) affected bank performance in terms of profitability and efficiency. We found that the number of blockholders has a negative effect on banks' performance, while blockholders' concentration has a positive effect. Moreover, we observed that the dispersion of ownership across different types of blockholders has a negative effect on banks' performance. We interpret these results as evidence that, when heterogeneous blockholders are present, the disadvantage arising from conflicts of interest between blockholders seems to outweigh the advantage of the additional monitoring provided by additional blockholders. Secondly, we conducted a joint analysis of the static, selection and dynamic effects of different types of ownership on banks' performance. We found that regional banks and foreign banks have higher profitability and efficiency than domestic private banks. In the short run, foreign acquisitions and domestic M&As reduce the level of overhead costs, while in the long run they increase the Net Interest Margin (NIM). Further, we analysed the NIM determinants to assess the impact of ownership on bank business orientation. Our findings lend support to our prediction that the NIM determinants differ according to the type of bank ownership. We also observed that banks that experienced changes in ownership, such as foreign-acquired banks, show different interest margin determinants with respect to domestic or foreign banks that did not experience ownership rearrangements.
Abstract:
The principal aim of this study was to investigate biological predictors of response and resistance to multiple myeloma treatment. Two hypotheses had been proposed as responsible for responsiveness: SNPs in DNA repair and Folate pathway genes, and P-gp dependent efflux. As a first objective, a panel of SNPs in DNA repair and Folate pathway genes was analyzed. This was a retrospective study in a group of 454 previously untreated MM patients enrolled in a randomized phase III open-label study. The results show that some SNPs in the Folate pathway are correlated with response to MM treatment. The MTR genotype was associated with a favorable response in the overall population of MM patients; however, this relation disappeared after adjustment for treatment response. When poor responders include very good partial response, partial response and stable/progressive disease, the MTHFR rs1801131 genotype was associated with poor response to therapy. This relation – unlike that for MTR – was still significant after adjustment for treatment response. Identification of this genetic variant in MM patients could be used as an independent prognostic factor for therapeutic outcome in clinical practice. For the second objective, the basic disposition characteristics of bortezomib were investigated. We demonstrated that bortezomib is a P-gp substrate in a bi-directional transport study. We obtained apparent permeability values that, together with solubility values, can have crucial implications for a better understanding of bortezomib pharmacokinetics with respect to the importance of membrane transporters. Subsequently, in view of the importance of P-gp for bortezomib responsiveness, a panel of SNPs in the ABCB1 gene – coding for P-gp – was analyzed. In particular, we analyzed five SNPs; none of them, however, correlated with treatment responsiveness. However, we found a significant association between ABCB1 variants and cytogenetic abnormalities. In particular, deletion of chromosome 17 and the t(4;14) translocation were present in patients harboring the rs60023214 and rs2038502 variants respectively.
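Genotype-response associations of the kind reported above are commonly assessed with contingency-table tests; the sketch below applies Fisher's exact test to made-up counts and is only a generic illustration, not the statistical analysis performed in the study.

# Generic genotype-vs-response association test on a 2x2 contingency table (toy counts).
from scipy.stats import fisher_exact

#                 responder   poor responder
table = [[60, 40],            # carriers of the variant (hypothetical counts)
         [45, 70]]            # non-carriers (hypothetical counts)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")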
Abstract:
RAF is a bio-energetic descriptive model that integrates with the MAD model to support Integrated Farm Management. The RAF model aims to enhance the economic, social and environmental sustainability of farm production in terms of energy by converting energy crops and animal manure into biogas and digestate (bio-fertilizer) through anaerobic digestion technologies, growing and breeding practices. The user defines the farm structure in terms of current crops, livestock and market prices, and the RAF model investigates the possibility of establishing an on-farm biogas system (different anaerobic digestion technologies are proposed for different farm scales in terms of energy requirements) subject to budget and sustainability constraints, so as to reduce the dependence on fossil fuels. The objective function of RAF (Z) optimizes the total net income of the farm (maximizing income and minimizing costs) over the whole period considered by the analysis. The main result of this study concerns the possibility of enhancing the exploitation of the available Italian potential for biogas production from on-farm energy crops and livestock manure feedstock by using the developed mathematical model RAF, integrated with MAD, to provide a reliable reconciliation between farm size, farm structure and on-farm biogas technologies, supporting the selection, application and operation of the appropriate biogas technology at any farm under Italian conditions.
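A heavily reduced sketch of the kind of optimization RAF performs: maximize farm net income over two activity levels subject to budget and land constraints, solved here as a linear program with entirely hypothetical coefficients.

# Toy linear program in the spirit of the RAF objective Z: maximize net income
# subject to budget and land constraints (all numbers are hypothetical).
from scipy.optimize import linprog

# Decision variables: hectares of energy crop, livestock units feeding the digester.
net_income_per_unit = [900.0, 350.0]        # EUR per ha, EUR per livestock unit
c = [-v for v in net_income_per_unit]       # linprog minimizes, so negate

A_ub = [[1200.0, 500.0],                    # capital cost coefficients (EUR per unit)
        [1.0,    0.0]]                      # land use (ha per unit)
b_ub = [150_000.0,                          # available budget (EUR)
        80.0]                               # available land (ha)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal activity levels:", res.x, "max net income Z =", -res.fun)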
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at detailed spatial resolution, which is very computationally intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for the calibration of the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and of the SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs, with dynamic address remapping capability, is built and verified on real hardware.
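The per-bank refresh adaptation can be sketched as scaling the refresh interval with bank temperature; the rule of thumb that the safe refresh period roughly halves for every 10 °C increase, as well as the reference values below, are assumptions for illustration and are not taken from the thesis.

# Illustrative per-bank refresh-period adaptation based on bank temperature.
# Assumption: the safe refresh period roughly halves every 10 degC above a
# reference temperature; all numbers are placeholders.
BASE_REFRESH_MS = 64.0      # nominal refresh period at or below the reference temperature
T_REF_C = 45.0              # reference temperature (hypothetical)
HALVING_STEP_C = 10.0       # assumed halving step

def refresh_period_ms(bank_temp_c: float) -> float:
    if bank_temp_c <= T_REF_C:
        return BASE_REFRESH_MS
    return BASE_REFRESH_MS / 2 ** ((bank_temp_c - T_REF_C) / HALVING_STEP_C)

# Cooler banks are refreshed less often than hotter ones, saving refresh power.
for temp in (40.0, 55.0, 70.0, 85.0):
    print(f"bank at {temp:4.1f} degC -> refresh every {refresh_period_ms(temp):5.1f} ms")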
Abstract:
During the last few decades, an unprecedented technological growth has been at the center of the embedded systems design landscape, with Moore's Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers are in fact facing the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they face the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and overcoming where possible, some of the challenges introduced by the many-core design paradigm.
Abstract:
This thesis aims at building and discussing applications of mathematical models focused on Energy problems, on both the thermal and electrical side. The objective is to show how mathematical programming techniques developed within Operational Research can give useful answers in the Energy Sector, how they can provide tools to support the decision-making processes of companies operating in Energy production and distribution, and how they can be successfully used to run simulations and sensitivity analyses to better understand the state of the art and the convenience of a particular technology by comparing it with the available alternatives. The first part discusses the fundamental mathematical background, followed by a comprehensive literature review on mathematical modelling in the Energy Sector. The second part presents mathematical models for District Heating strategic network design and incremental network design. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues, minimizing infrastructure and operational costs and taking into account the main technical requirements of the real-world application. Results on real and randomly generated benchmark networks are discussed, with particular attention to instances characterized by large network dimensions. The third part is devoted to the development of linear programming models for optimal battery operation in off-grid solar power schemes, with consideration of battery degradation. The key contribution of this work is the inclusion of battery degradation costs in the optimisation models. As the available data relating degradation costs to the nature of charge/discharge cycles are limited, we concentrate on investigating the sensitivity of operational patterns to the degradation cost structure. The objective is to investigate the combination of battery costs and performance at which such systems become economic. We also investigate how the system design should change when battery degradation is taken into account.
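A much reduced, hypothetical instance of such a linear program is sketched below: hourly charge/discharge decisions serve a load from solar plus battery while a throughput-proportional cost stands in for battery degradation; it is not the formulation developed in the thesis.

# Reduced toy LP for off-grid battery scheduling with a throughput-based degradation cost.
import numpy as np
from scipy.optimize import linprog

solar = np.array([3.0, 4.0, 0.0, 0.0])   # kWh available from PV in each hour (made up)
load = np.array([1.0, 1.0, 2.0, 2.0])    # kWh demanded in each hour (made up)
T = len(solar)
cap, p_max, soc0 = 5.0, 3.0, 0.0         # battery capacity, power limit, initial state
deg_cost = 0.05                          # EUR per kWh of charge/discharge throughput

# Variable vector x = [charge_0..3, discharge_0..3, soc_0..3]; efficiency losses neglected.
c = np.concatenate([np.full(T, deg_cost), np.full(T, deg_cost), np.zeros(T)])

# State-of-charge dynamics: soc_t - soc_{t-1} - charge_t + discharge_t = 0 (soc_{-1} = soc0).
A_eq = np.zeros((T, 3 * T))
b_eq = np.zeros(T)
for t in range(T):
    A_eq[t, t] = -1.0            # charge_t
    A_eq[t, T + t] = 1.0         # discharge_t
    A_eq[t, 2 * T + t] = 1.0     # soc_t
    if t > 0:
        A_eq[t, 2 * T + t - 1] = -1.0
b_eq[0] = soc0

# Load balance: charge_t - discharge_t <= solar_t - load_t (surplus PV may be spilled).
A_ub = np.zeros((T, 3 * T))
b_ub = solar - load
for t in range(T):
    A_ub[t, t] = 1.0
    A_ub[t, T + t] = -1.0

bounds = ([(0, min(p_max, s)) for s in solar] +     # charge only from available PV
          [(0, p_max)] * T +                        # discharge power limit
          [(0, cap)] * T)                           # state-of-charge limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("charge   :", res.x[:T])
print("discharge:", res.x[T:2 * T])
print("degradation cost:", round(res.fun, 3), "EUR")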
Abstract:
A servo-controlled automatic machine can perform tasks that involve the synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its combined electrical and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model of a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model at our disposal, the second objective of this work is to perform the optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. Then, we present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and the solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of the optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
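The transmission-ratio selection can be illustrated with the classical exercise of minimizing the RMS motor torque over a prescribed load cycle; the load profile, the inertias and the neglect of transmission losses in the sketch below are illustrative assumptions, not the thesis's full electromechanical model.

# Classical sizing exercise: choose the transmission ratio that minimizes the RMS
# motor torque for a prescribed load cycle (losses and gearbox inertia neglected).
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0.0, 1.0, 500)                      # one machine cycle, s
alpha_load = 40.0 * np.sin(2 * np.pi * t)           # load acceleration, rad/s^2 (made up)
tau_load = 5.0 + 3.0 * np.sin(2 * np.pi * t + 0.5)  # load torque, Nm (made up)
J_m, J_load = 2.0e-4, 0.05                          # motor and load inertias, kg m^2

def rms_motor_torque(n):
    """Motor torque with ratio n (motor side turns n times faster than the load side)."""
    tau_m = J_m * n * alpha_load + (J_load * alpha_load + tau_load) / n
    return np.sqrt(np.mean(tau_m ** 2))

res = minimize_scalar(rms_motor_torque, bounds=(1.0, 200.0), method="bounded")
print(f"optimal ratio n = {res.x:.1f}, RMS motor torque = {res.fun:.2f} Nm")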
Abstract:
The main objective of this PhD thesis is to optimize a specific multifunctional maritime structure for harbour protection and energy production, named Overtopping Breakwater for Energy Conversion (OBREC), developed by the team of the University of Campania. This device is provided with a sloping plate followed by a single reservoir, which is linked to the machine room (where the energy conversion occurs) by means of a pipe passing through the crown wall, which carries a parapet on top. The potential energy of the overtopping waves, collected inside the reservoir located above the still water level, is thus converted by means of low-head turbines. In order to improve the understanding of the wave-structure interactions with OBREC, several methodologies have been used and combined: i. analysis of recent experimental campaigns on wave overtopping discharges and pressures at the crown wall on small-scale OBREC cross sections, carried out in other laboratories by the team of the University of Campania; ii. new experiments on cross sections similar to the OBREC device, planned and carried out in the hydraulic laboratory at the University of Bologna in the framework of this PhD work; iii. numerical modelling with a one-phase incompressible fluid model, IH-2VOF, developed by the University of Cantabria, and with a two-phase incompressible fluid model, OpenFOAM, both available from the literature; iv. numerical modelling with a new two-phase compressible fluid model developed in the OpenFOAM environment within this PhD work; v. analysis of the data gained from the monitoring of the OBREC prototype installation.
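To fix orders of magnitude for the conversion principle described above, the hydraulic power made available by the collected water can be estimated as P = rho * g * q * h, with q the overtopping discharge per unit crest width and h the head over the turbine; the figures below are purely illustrative.

# Back-of-the-envelope hydraulic power of an overtopping wave energy device:
# P = rho * g * q * h (per metre of crest width), all figures illustrative.
RHO_SEAWATER = 1025.0      # kg/m^3
G = 9.81                   # m/s^2

def hydraulic_power_w_per_m(q_overtopping: float, head: float) -> float:
    """q_overtopping in m^3/s per metre of crest, head in metres above the turbine."""
    return RHO_SEAWATER * G * q_overtopping * head

# Example: 20 l/s per metre of crest collected 1.5 m above the still water level.
print(f"{hydraulic_power_w_per_m(0.020, 1.5):.0f} W per metre of crest")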