876 results for compiler optimization
Abstract:
In a large number of problems, the high dimensionality of the search space, the vast number of variables and the economic constraints limit the ability of classical techniques to reach the optimum of a function, known or unknown. In this thesis we investigate the possibility of combining approaches from advanced statistics and optimization algorithms so as to better explore the combinatorial search space and to increase the performance of the approaches. To this purpose we propose two methods: (i) Model Based Ant Colony Design and (ii) Naïve Bayes Ant Colony Optimization. We test the performance of the two proposed solutions in a simulation study and we apply the novel techniques to an application in the field of Enzyme Engineering and Design.
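As a point of reference, the sketch below shows a generic Ant Colony Optimization loop on a toy travelling salesman instance. It is not the Model Based Ant Colony Design or Naïve Bayes Ant Colony Optimization methods proposed in the thesis, and the distance matrix and parameters (alpha, beta, evaporation) are illustrative assumptions, but it shows the pheromone-guided construction that such methods build on.

```python
# Minimal Ant Colony Optimization sketch for a toy TSP instance.
# Illustrative only: the instance and parameters are assumptions, not the
# thesis's Model Based Ant Colony Design or Naive Bayes ACO variants.
import random

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]
alpha, beta, evaporation, n_ants, n_iters = 1.0, 2.0, 0.5, 10, 100

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, float("inf")
for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):
        tour = [random.randrange(n)]
        while len(tour) < n:
            cur = tour[-1]
            cand = [j for j in range(n) if j not in tour]
            # Probabilistic choice biased by pheromone and inverse distance.
            weights = [(pheromone[cur][j] ** alpha) * ((1.0 / dist[cur][j]) ** beta)
                       for j in cand]
            tour.append(random.choices(cand, weights=weights)[0])
        tours.append(tour)
    # Evaporate, then deposit pheromone proportional to tour quality.
    pheromone = [[(1 - evaporation) * p for p in row] for row in pheromone]
    for tour in tours:
        length = tour_length(tour)
        if length < best_len:
            best_tour, best_len = tour, length
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            pheromone[a][b] += 1.0 / length
            pheromone[b][a] += 1.0 / length

print(best_tour, best_len)
```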
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large scale combinatorial optimization problems is a topic which has aroused the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is increasingly urgent. Metaheuristic techniques have been developed in the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows any type of problem to be easily modelled and solved within a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the absolute generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts into CP-based frameworks.
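A minimal sketch of the SNS idea, under simplifying assumptions: a random slice of the incumbent solution is freed and re-optimized exhaustively (standing in here for the CP-based tree search over the sub-neighborhood), on a toy asymmetric TSP. Instance data, slice size and iteration count are illustrative, not taken from the thesis.

```python
# Hedged sketch of the Sliced Neighborhood Search (SNS) idea on a toy ATSP:
# fix most of the incumbent tour, free a small random "slice" of positions,
# and exhaustively re-optimize the freed part (a stand-in for CP tree search).
import itertools, random

dist = [[0, 5, 9, 4, 7],
        [3, 0, 8, 2, 6],
        [9, 7, 0, 5, 1],
        [4, 2, 6, 0, 8],
        [7, 6, 1, 8, 0]]
n = len(dist)

def cost(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

incumbent = list(range(n))              # initial solution
slice_size, iterations = 3, 50          # assumed neighborhood parameters

for _ in range(iterations):
    free_pos = random.sample(range(n), slice_size)     # the "slice" to free
    free_cities = [incumbent[p] for p in free_pos]
    best = incumbent
    for perm in itertools.permutations(free_cities):   # exhaustive sub-search
        cand = incumbent[:]
        for p, c in zip(free_pos, perm):
            cand[p] = c
        if cost(cand) < cost(best):
            best = cand
    incumbent = best                                    # accept improvements

print(incumbent, cost(incumbent))
```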
Abstract:
The research activities described in the present thesis have been oriented to the design and development of components and technological processes aimed at optimizing the performance of plasma sources in advanced material treatments. Consumable components for high definition plasma arc cutting (PAC) torches were studied and developed. Experimental activities focused in particular on modifications of the emissive insert with respect to the standard electrode configuration, which comprises a press-fit hafnium insert in a copper body holder, in order to improve its durability. Based on a thorough analysis of both the scientific and patent literature, different solutions were proposed and tested. First, the behaviour of Hf cathodes operating at high current levels (250 A) in an oxidizing atmosphere was experimentally investigated, optimizing the initial shape of the electrode emissive surface with respect to the expected service life. Moreover, the microstructural modifications of the Hf insert in PAC electrodes were experimentally investigated during the first cycles, in order to understand the phenomena occurring on and under the Hf emissive surface and involved in the electrode erosion process. Thereafter, the research activity focused on producing, characterizing and testing prototypes of composite inserts, combining powders of high thermal conductivity (Cu, Ag) and high thermionic emissivity (Hf, Zr) materials. The complexity of the thermal plasma torch environment required an integrated approach also involving physical modelling. Accordingly, a detailed line-by-line method was developed to compute the net emission coefficient of Ar plasmas at temperatures ranging from 3000 K to 25000 K and pressures ranging from 50 kPa to 200 kPa, for optically thin and partially self-absorbed plasmas. Finally, prototype electrodes were studied and realized for a newly developed plasma source, based on the plasma needle concept and devoted to the generation of atmospheric pressure non-thermal plasmas for biomedical applications.
Abstract:
Recent developments in the theory of plasma-based collisionally excited x-ray lasers (XRL) have shown an optimization potential based on the dependence of the absorption region of the pumping laser on its angle of incidence on the plasma. For the experimental proof of this idea, a number of diagnostic schemes were developed, tested, qualified and applied. A high-resolution imaging system, yielding the keV emission profile perpendicular to the target surface, provided the positions of the hottest plasma regions, which are of interest for the benchmarking of plasma simulation codes. The implementation of a highly efficient spectrometer for the plasma emission made it possible to gain information about the abundance of the ionization states necessary for laser action in the plasma. The intensity distribution and deflection angle of the pump laser beam could be imaged for single XRL shots, giving access to its refraction process within the plasma. During a European collaboration campaign at the Lund Laser Center, Sweden, the optimization of the pumping laser incidence angle resulted in a reduction of the required pumping energy for a Ni-like Mo XRL, which enabled operation at a repetition rate of 10 Hz. Using the experience gained there, the XRL performance at the PHELIX facility (GSI Darmstadt) was significantly improved with respect to the achievable repetition rate and to wavelengths below 20 nm, and important information for the development towards multi-100 eV plasma XRLs was acquired. Due to the setup improvements achieved during the work for this thesis, the PHELIX XRL system has now reached a degree of reproducibility and versatility which is sufficient for demanding applications such as XRL spectroscopy of heavy ions. In addition, a European research campaign aiming towards plasma XRLs approaching the water window (wavelengths below 5 nm) was initiated.
Abstract:
Photovoltaic (PV) solar panels generally produce electricity in the 6% to 16% efficiency range, the rest being dissipated as thermal losses. To recover this amount, hybrid photovoltaic thermal (PVT) systems have been devised: devices that simultaneously convert solar energy into electricity and heat. It is thus interesting to study the PVT system globally from different points of view, in order to evaluate the advantages and disadvantages of this technology and its possible uses. In particular, Chapter II presents the numerical optimization of the PVT absorber by means of a genetic algorithm, analyzing different internal channel profiles in order to find the right compromise between performance and technical and economic feasibility. In Chapter III, thanks to a mobile structure built at the university lab, the electrical and thermal output power of PVT panels has been compared experimentally with that of separate photovoltaic and solar thermal production. By collecting experimental data under different seasonal conditions (ambient temperature, irradiation, wind...), the aim of this mobile structure has been to evaluate the average thermal and electrical efficiency gains and losses obtained throughout the year with respect to separate production. In Chapter IV, new equation-based PVT and solar thermal models in steady-state conditions have been developed with the Dymola software, which uses the Modelica language. This permits, in a simpler way than previous system modelling software, the modelling and evaluation of different PVT panel design concepts before prototyping and measuring them. Chapter V instead concerns the definition of the PVT boundary conditions within an HVAC system. This was done through yearly simulations with the Polysun software, in order to assess the best solar-assisted integrated configuration by means of the F_save (solar energy saving) factor. Finally, Chapter VI presents the conclusions and perspectives of this PhD work.
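The sketch below is a minimal genetic algorithm of the kind used for the absorber optimization described above; the fitness function is a placeholder (in the thesis a candidate channel profile would be scored by a thermo-hydraulic model), and all parameter names and values are assumptions.

```python
# Minimal genetic-algorithm sketch for optimizing a vector of channel-profile
# parameters. The fitness below is an illustrative placeholder, not the
# thesis's performance/feasibility model.
import random

n_genes, pop_size, generations, mut_rate = 4, 20, 100, 0.1
target = [0.3, 0.7, 0.5, 0.2]          # assumed "optimal" profile parameters

def fitness(profile):
    # Placeholder score: closeness to an assumed optimum (higher is better).
    return -sum((p - t) ** 2 for p, t in zip(profile, target))

pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
for _ in range(generations):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: pop_size // 2]          # elitist selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, n_genes)  # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < mut_rate:      # random-reset mutation
            child[random.randrange(n_genes)] = random.random()
        children.append(child)
    pop = parents + children

print(max(pop, key=fitness))
```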
Abstract:
RAF is a bio-energetic descriptive model that integrates with the MAD model to support Integrated Farm Management. The RAF model aims to enhance the economic, social and environmental sustainability of farm production in terms of energy by converting energy crops and animal manure into biogas and digestate (bio-fertilizer) through anaerobic digestion technologies, growing and breeding practices. The user defines the farm structure in terms of the present crops, livestock and market prices, and the RAF model investigates the possibility of establishing an on-farm biogas system (different anaerobic digestion technologies are proposed for different farm scales in terms of energy requirements) according to budget and sustainability constraints, in order to reduce the dependence on fossil fuels. The objective function of RAF (Z) optimizes the total net income of the farm (maximizing income and minimizing costs) over the whole period considered by the analysis. The main results of this study refer to the possibility of enhancing the exploitation of the available Italian potential for biogas production from on-farm energy crops and livestock manure feedstock by using the developed mathematical model RAF, integrated with MAD, to provide a reliable match between farm size, farm structure and on-farm biogas system technologies, supporting the selection, application and operation of an appropriate biogas technology at any farm under Italian conditions.
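As a loose illustration of the kind of objective RAF describes (maximize farm net income under budget and resource constraints), the sketch below sets up a tiny linear program with scipy; the decision variables, coefficients and constraints are purely hypothetical and are not the RAF/MAD formulation.

```python
# Illustrative farm income maximization under land and budget constraints.
# All numbers and variables are assumptions for demonstration only.
from scipy.optimize import linprog

# Decision variables: x0 = hectares of energy crop, x1 = biogas plant size (kW).
# linprog minimizes, so the income coefficients are negated to maximize income.
income_per_unit = [-1200.0, -350.0]       # EUR per hectare, EUR per kW

A_ub = [[1.0, 0.0],        # land:   hectares of energy crop <= 50
        [800.0, 3000.0]]   # budget: establishment costs     <= 200000 EUR
b_ub = [50.0, 200000.0]

res = linprog(c=income_per_unit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)     # optimal plan and the corresponding net income
```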
Abstract:
DI Diesel engines are widely used both for industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system, focused on the simultaneous reduction of soot and NOx while maintaining a reasonable fuel economy. In recent years, genetic algorithms and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of calculations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. In order to make the optimization process less time-consuming, CFD simulations can be more conveniently used to generate a training set for the learning process of an artificial neural network which, once correctly trained, can be used to forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
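A hedged sketch of the virtual-optimization idea follows: a neural-network surrogate is fitted on a small synthetic data set standing in for CFD runs, and a plain genetic algorithm then searches the surrogate instead of the expensive simulations. The data, the single objective and all parameters are illustrative; the thesis addresses multiple objectives (soot, NOx, fuel economy).

```python
# Surrogate-assisted ("virtual") optimization sketch: ANN trained on assumed
# samples, then a GA minimizes the surrogate prediction.
import random
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(60, 3))     # e.g. normalized injection timing, EGR, swirl (assumed)
y_train = ((X_train - 0.5) ** 2).sum(axis=1)      # placeholder for a CFD-derived emission index

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

def predicted_cost(x):
    return float(surrogate.predict(np.array(x).reshape(1, -1))[0])

# Plain GA searching the surrogate instead of running CFD.
pop = [list(rng.uniform(0.0, 1.0, 3)) for _ in range(30)]
for _ in range(50):
    pop.sort(key=predicted_cost)
    parents = pop[:15]
    children = []
    while len(children) < 15:
        a, b = random.sample(parents, 2)
        child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # blend crossover
        if random.random() < 0.2:
            child[random.randrange(3)] = random.random()  # mutation
        children.append(child)
    pop = parents + children

print(min(pop, key=predicted_cost))
```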
Abstract:
This study focuses on the radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First the state of the art of nanoparticle production through conventional and plasma routes is summarized; then results on the characterization of the plasma source and on the investigation of the nanoparticle synthesis phenomenon are presented, aiming at highlighting the fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy probe measurements to validate the temperature field predicted by the model and used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both into the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed: by employing models describing particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized Si solid precursor in a laboratory-scale ICP system is investigated. Finally, a discussion on the role of thermo-fluid dynamic fields in nanoparticle formation is presented, as well as a study on the effect of the reaction chamber geometry on the produced nanoparticle characteristics and process yield.
Abstract:
This thesis presents different techniques designed to drive a swarm of robots in an a priori unknown environment, in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS). The first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO and to control part of its random behaviour with a distributed control algorithm such as Consensus.
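The sketch below combines a standard PSO update with a simple consensus step over a ring topology, in the spirit of the approach described above: PSO drives the swarm towards the target area, while the agreement step keeps the units close together. Goal position, gains and topology are illustrative assumptions, not the thesis's controller.

```python
# PSO navigation plus a consensus (agreement) step on a ring topology.
# All parameters are illustrative assumptions.
import random

goal = (8.0, 6.0)
n_robots, steps = 6, 200
w, c1, c2, consensus_gain = 0.7, 1.5, 1.5, 0.1

pos = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(n_robots)]
vel = [[0.0, 0.0] for _ in range(n_robots)]
pbest = [p[:] for p in pos]

def cost(p):                      # squared distance to the target area
    return (p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2

for _ in range(steps):
    gbest = min(pbest, key=cost)
    for i in range(n_robots):
        for d in range(2):        # PSO velocity and position update
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    # Consensus step over a ring topology: move toward the neighbours' average.
    new_pos = []
    for i in range(n_robots):
        left, right = pos[(i - 1) % n_robots], pos[(i + 1) % n_robots]
        new_pos.append([pos[i][d] + consensus_gain *
                        ((left[d] + right[d]) / 2 - pos[i][d]) for d in range(2)])
    pos = new_pos

print([(round(x, 2), round(y, 2)) for x, y in pos])
```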
Abstract:
This dissertation investigates the possibility of optimizing the use of energy on board a ship for the transport of chemical and petrochemical products. The software developed for this study can be adapted to any type of ship. The spreadsheet provides a methodology for estimating energy benefits and improvements, with an accuracy directly proportional to the data available on the configuration of the energy system and on the devices installed on board. The study is based on different phases which simplify the work; the introduction lists the data needed to carry out an accurate analysis and presents the methodology adopted. First, an explanation is given of the plant layout, its characteristics and the main devices installed on board. The main loads (mechanical, electrical and thermal) are then treated separately. Next, a selection of the main operating phases of the ship is made: this approach was followed in order to better understand the distribution of the power demand on board the ship and its exploitation. Subsequently, a check on the sizing of the electrical system is carried out: this helps to understand whether the power estimated by the designers matches that actually required on the ship. Mechanical, electrical and thermal load curves as a function of time are then obtained for all the operating phases considered: using Visual Basic for Applications (VBA), load profiles are created which can be handled in the subsequent optimization phase. The optimization is the heart of this study; the power profiles obtained in the previous phase are processed so as to obtain a system able to supply power to the ship in the best possible way from an energy point of view. The ship's energy system is modelled and optimized while keeping the status quo of the on-board devices, for which the "Load following", "two shifts" and "minimal" configurations are considered. A further investigation concerns the installation on board of a thermal energy storage system, so as to improve the exploitation of the available energy. Finally, in the conclusion, the actual consumption of the ship is compared with the results obtained with and without the introduction of the thermal storage system. With the "minimal" configuration it is possible to save about 1.49% of the total energy consumed during a year of operation; this saving is completely free, since it can be achieved by following a few simple rules in the management of energy on board. The introduction of a thermal storage system increases the total saving to 4.67% with a tank able to store 110000 kWh of thermal energy; in this case, however, the installation cost of the tank must be borne. Economic and environmental aspects are then discussed in order to explain and clarify the advantages that can be obtained from the application of this study, in terms of money and of the reduction of emissions into the atmosphere.
Abstract:
The aim of the thesis is to design and verify a doubler for the Airbus A350XWB cargo door surround. The software used for the design is Catia, while the software packages used for the doubler verification are Patran and Nastran.
Abstract:
Traditionally, the study of internal combustion engine operation has focused on steady-state performance. However, the daily driving schedule of automotive engines is inherently related to unsteady conditions, and various operating conditions experienced by (diesel) engines can be classified as transient. Besides the variation of the engine operating point, in terms of engine speed and torque, the warm-up phase can also be considered a transient condition. Chapter 2 deals with this thermal transient condition; more precisely, the main issue is the performance of a Selective Catalytic Reduction (SCR) system during the cold start and warm-up phases of the engine. The aim of this part of the work is to investigate and identify optimal exhaust line heating strategies that provide a fast activation of the catalytic reactions in the SCR. Chapters 3 and 4 focus on the dynamic behavior of the engine under typical driving conditions. The common approach to dynamic optimization involves the solution of a single optimal-control problem. However, this approach requires the availability of models that are valid throughout the whole engine operating range and actuator ranges, and the result of the optimization is meaningful only if the model is very accurate. Chapter 3 proposes a methodology to circumvent these demanding requirements: an iteration between transient measurements, used to refine a purpose-built model, and a dynamic optimization which is constrained to the model validity region. All numerical methods required to implement this procedure are also presented. Chapter 4 proposes an approach to derive a transient feedforward control system in an automated way. It relies on optimal control theory to solve a dynamic optimization problem for fast transients. From the optimal solutions, the relevant information is extracted and stored in maps spanned by the engine speed and the torque gradient.
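As an illustration of the last step, the sketch below stores hypothetical feedforward values in a map spanned by engine speed and torque gradient and reads them back by interpolation; the grid points and values are assumptions, not results from the thesis.

```python
# Feedforward map sketch: optimal actuator corrections computed offline are
# stored on a (speed, torque-gradient) grid and interpolated at run time.
# All grid points and values are illustrative assumptions.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

speed_grid = np.array([1000.0, 2000.0, 3000.0, 4000.0])        # rpm
torque_grad_grid = np.array([-50.0, 0.0, 50.0, 100.0])          # Nm/s
# Placeholder feedforward values, one per (speed, torque gradient) pair.
ff_values = np.array([[0.0, 0.1, 0.3, 0.5],
                      [0.0, 0.2, 0.4, 0.6],
                      [0.1, 0.2, 0.5, 0.7],
                      [0.1, 0.3, 0.6, 0.8]])

ff_map = RegularGridInterpolator((speed_grid, torque_grad_grid), ff_values)

def feedforward(speed_rpm, torque_gradient):
    """Return the interpolated feedforward correction for the current operating point."""
    return float(ff_map([[speed_rpm, torque_gradient]])[0])

print(feedforward(2500.0, 30.0))
```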
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of the chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a detailed spatial resolution, which is very computationally intensive; consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for the calibration of the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and of the SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows; it accounts for temperature non-uniformities and self-heating while performing the analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models in detail the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs, with dynamic address remapping capability, is built and verified on real hardware.
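The per-bank refresh adaptation can be pictured with the small sketch below: each bank's refresh period is chosen from its estimated temperature, following the common convention of refreshing twice as often above 85 °C. The thresholds, fallback policy and bank temperatures are assumptions, not the thesis's calibrated model.

```python
# Illustrative per-bank refresh-period selection driven by temperature:
# cooler banks in the 3D stack are refreshed less often than hotter ones.
# Thresholds and temperatures below are assumptions for demonstration.

BASE_REFRESH_MS = 64.0          # nominal retention-driven refresh period

def refresh_period_ms(bank_temp_c):
    """Pick a refresh period for one DRAM bank from its estimated temperature."""
    if bank_temp_c <= 85.0:
        return BASE_REFRESH_MS          # nominal temperature range
    elif bank_temp_c <= 95.0:
        return BASE_REFRESH_MS / 2      # extended range: refresh twice as often
    else:
        return BASE_REFRESH_MS / 4      # conservative fallback for hot spots

bank_temperatures = [72.0, 81.0, 88.0, 96.0]    # example per-bank estimates (C)
for bank, temp in enumerate(bank_temperatures):
    print(f"bank {bank}: {temp:.0f} C -> refresh every {refresh_period_ms(temp):.0f} ms")
```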