32 results for Local optimization algorithms

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique to estimate natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft-tissue artefacts that limit the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is still required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed their translation into clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with preliminary in-silico studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolution, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to increase the convergence basin of the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translations and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
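
The pose-estimation step these methods share can be pictured as a generic 2D/3D registration: minimize a dissimilarity between the projected model silhouette and the extracted image contour over the 6-DoF pose. The following minimal Python sketch uses a plain multi-start local search to illustrate how wider exploration of the search domain mitigates a narrow convergence basin; it is not the thesis's memetic algorithm, and `project`, `contour` and the bounds are placeholder assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def pose_cost(pose, project, contour):
        # Dissimilarity between the projected model silhouette and the
        # extracted 2D contour; `project` is a user-supplied renderer
        # mapping a 6-DoF pose to an image-space silhouette array.
        return np.mean((project(pose) - contour) ** 2)

    def estimate_pose(project, contour, bounds, n_starts=20, seed=None):
        # Multi-start local optimization: sample starting poses across the
        # 6-DoF search box so at least one falls in the convergence basin.
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds).T      # bounds: six (min, max) pairs
        best = None
        for _ in range(n_starts):
            x0 = rng.uniform(lo, hi)
            res = minimize(pose_cost, x0, args=(project, contour),
                           method="Nelder-Mead")
            if best is None or res.fun < best.fun:
                best = res
        return best.x                      # estimated pose: x, y, z, rx, ry, rz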

Relevance: 90.00%

Abstract:

In a large number of problems, the high dimensionality of the search space, the vast number of variables and economic constraints limit the ability of classical techniques to reach the optimum of a function, known or unknown. In this thesis we investigate the possibility of combining approaches from advanced statistics and optimization algorithms so as to better explore the combinatorial search space and to increase the performance of the approaches. To this purpose we propose two methods: (i) Model Based Ant Colony Design and (ii) Naïve Bayes Ant Colony Optimization. We test the performance of the two proposed solutions in a simulation study, and we apply the novel techniques to an application in the field of Enzyme Engineering and Design.
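
Both proposed methods build on the ant colony metaphor, in which a pheromone model is sampled to construct candidate solutions and then updated towards the best ones. As a point of reference only, here is a minimal textbook-style sketch for binary strings; the thesis's model-based and Naïve Bayes variants replace the independent per-bit pheromone below with a learned statistical model.

    import numpy as np

    def aco_binary(fitness, n_vars, n_ants=30, n_iter=100, rho=0.1, seed=None):
        # Pheromone tau[i] = probability of setting bit i to 1.
        rng = np.random.default_rng(seed)
        tau = np.full(n_vars, 0.5)
        best_x, best_f = None, -np.inf
        for _ in range(n_iter):
            ants = rng.random((n_ants, n_vars)) < tau     # sample solutions
            scores = np.array([fitness(a) for a in ants])
            i = int(scores.argmax())
            if scores[i] > best_f:
                best_x, best_f = ants[i].copy(), scores[i]
            tau = (1 - rho) * tau + rho * ants[i]         # evaporate + deposit
            tau = tau.clip(0.05, 0.95)                    # keep exploring
        return best_x, best_f

    # toy usage: maximize a fitness peaked at exactly ten active bits
    x, f = aco_binary(lambda x: -(x.sum() - 10) ** 2, n_vars=40, seed=0)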

Relevance: 90.00%

Abstract:

A servo-controlled automatic machine can perform tasks that involve the synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its combined electric and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model for a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model at our disposal, the second objective of this work is to perform an optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found, with the optimization process carried out and repeated for each candidate motor. We then present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and its solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
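
The per-motor step of such a selection can be illustrated with a classical sizing criterion: for a given motion profile, find the transmission ratio that minimizes the RMS torque the motor must deliver. This sketch is a simplified stand-in for the thesis's loss-aware model; the motion profile, catalog inertias and efficiency below are hypothetical.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def rms_motor_torque(n, Jm, JL, acc, T_load, eta=0.95):
        # Torque the motor must deliver through a reducer of ratio n:
        # its own rotor inertia Jm accelerated at n * acc, plus the load
        # inertia and torque reflected through the ratio and efficiency.
        Tm = Jm * n * acc + (JL * acc + T_load) / (n * eta)
        return np.sqrt(np.mean(Tm ** 2))

    # hypothetical load-side motion profile and catalog of rotor inertias
    acc = 50.0 * np.sin(np.linspace(0, 2 * np.pi, 500))   # rad/s^2
    T_load = np.full_like(acc, 2.0)                       # N*m
    catalog = {"motorA": 1.2e-4, "motorB": 3.5e-4}        # kg*m^2
    for name, Jm in catalog.items():
        res = minimize_scalar(rms_motor_torque, bounds=(1.0, 200.0),
                              args=(Jm, 0.05, acc, T_load), method="bounded")
        print(name, "optimal ratio:", round(res.x, 1),
              "RMS torque:", round(res.fun, 3))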

Relevance: 90.00%

Abstract:

This Ph.D. project focuses on the modelling of the soil-water dynamics inside an instrumented embankment section along the Secchia River (Cavezzo, MO) in the period from 2017 to 2018, and on the quantification of the performance of the direct and inverse simulations. The commercial code Hydrus2D by PC-Progress has been chosen to run the direct simulations. Different soil-hydraulic models have been adopted and compared. The parameters of the different hydraulic models are calibrated using a local optimization method based on the Levenberg-Marquardt algorithm implemented in the Hydrus package. The calibration program is carried out using different datasets of observation points, different weighting distributions, different combinations of optimized parameters and different initial parameter sets. The final goal is an in-depth study of the potentialities and limits of inverse analysis when applied to a complex geotechnical problem such as the case study. The second part of the research focuses on the effects of plant roots and of the soil-vegetation-atmosphere interaction on the spatial and temporal distribution of pore water pressure in the soil. The investigated soil belongs to the West Charlestown Bypass embankment, Newcastle, Australia, which has shown shallow instabilities in past years; the use of long-stem planting is intended to stabilize the slope. The chosen plant species is Melaleuca styphelioides, native to eastern Australia. The research activity included the design and realization of a specific large-scale apparatus for laboratory experiments. Local suction measurements at certain depth intervals and radial distances from the root bulb are recorded within the vegetated soil mass under controlled boundary conditions. The experiments are then reproduced numerically using the commercial code Hydrus 2D. Laboratory data are used to calibrate the root water uptake (RWU) parameters and the parameters of the hydraulic model.
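
Outside of Hydrus, the same Levenberg-Marquardt calibration loop can be reproduced with standard tools: fit the parameters of a retention model to observations by weighted least squares. A minimal sketch, assuming van Genuchten retention (one of the soil-hydraulic models Hydrus offers) and hypothetical observations and weights:

    import numpy as np
    from scipy.optimize import least_squares

    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        # Water content theta as a function of suction h (positive, cm).
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

    def residuals(p, h_obs, theta_obs, w):
        # Weighted misfit; w plays the role of the weighting distribution.
        return w * (van_genuchten_theta(h_obs, *p) - theta_obs)

    # hypothetical observations, weights and initial parameter set
    h_obs = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
    theta_obs = np.array([0.40, 0.35, 0.27, 0.18, 0.12])
    w = np.ones_like(h_obs)
    p0 = [0.05, 0.43, 0.03, 1.6]
    fit = least_squares(residuals, p0, args=(h_obs, theta_obs, w), method="lm")
    print(dict(zip(["theta_r", "theta_s", "alpha", "n"], fit.x.round(3))))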

Relevance: 40.00%

Abstract:

This work is structured as follows. In Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols, and we present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and try to envisage future work and possible clinical applications of our results. Ancillary studies carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix.

Ventricular mechanical dyssynchrony plays a regulating role already in normal physiology but is especially important in pathological conditions, such as hypertrophy, ischemia, infarction or heart failure (Chapters 1, 2). Several prospective randomized controlled trials supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony. Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapters 3, 4). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed, and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes was extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4). Our analyses were carried out on patients with indications for an electrophysiologic study or device implantation (Chapter 5). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such values. By performing a frequency-domain analysis and applying an optimized coherent averaging procedure (Chapter 6.a), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony, which seemed more frequent in patients with ventricular dysfunction and appeared to play a role in discriminating patients. These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances. We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes which quantify the non-periodic components of the volume signals. Using the new set of indexes, we studied the acute effects of CRT and of right ventricular pacing in patients with heart failure and patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by pacing and the indexes of dyssynchrony, and this may have practical implications for hemodynamic-guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial, and the majority of them do not meet current clinical indications for CRT pacing. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction despite no clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d). Similarly to the previous analysis, using novel parameters describing local synchrony and efficiency, we tested the hypothesis, and demonstrated, that biventricular pacing with alternative right ventricular pacing sites produces an acute improvement of ventricular systolic function and improves mechanical synchrony when compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site that produced the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization episodes, yet the duration of the stabilization periods applied in most cardiac pacing studies varies considerably. With an ad hoc protocol (Chapter 6.e) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the use of stabilization periods during the evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main dyssynchrony and ventricular performance changes occur within a 10 s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It has already been demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance together with heart rate, contractile state, preload and afterload. The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients. To make available an independent system for cardiac output estimation, in order to confirm the results obtained with the conductance volume method, we developed and validated a novel technique to apply the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model; Wesseling et al. 1993) to the left ventricular pressure signal, instead of the arterial pressure used in the classical approach (Chapter 7). The results confirmed that in patients without valve abnormalities undergoing conductance catheter evaluations, the continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable; thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty of CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implant for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, the selection of candidates for CRT with echocardiographic criteria and the prediction of response to the therapy.
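
The coherent averaging step underlying the new indexes can be illustrated in a few lines: segments of the volume signal aligned to a per-beat trigger (e.g. the R-peak) are averaged, so that beat-locked components remain while non-recurrent beat-to-beat variations are isolated in the residuals. A minimal sketch; the trigger source and the residual-power index are illustrative assumptions, not the thesis's exact definitions:

    import numpy as np

    def coherent_average(signal, triggers, length):
        # Stack signal segments aligned to per-beat triggers (e.g. R-peaks):
        # the average keeps beat-locked components; the residuals isolate
        # the non-recurrent beat-to-beat variations discussed above.
        beats = np.array([signal[t:t + length] for t in triggers
                          if t + length <= len(signal)])
        avg = beats.mean(axis=0)
        residual = beats - avg
        # an illustrative (not the thesis's) non-periodicity index:
        index = (residual ** 2).mean() / (avg ** 2).mean()
        return avg, residual, index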

Relevance: 40.00%

Abstract:

In this thesis we present some combinatorial optimization problems and suggest models and algorithms for their effective solution. For each problem, we give its description, followed by a short literature review, provide methods to solve it and, finally, present computational results and comparisons with previous works to show the effectiveness of the proposed approaches. The considered problems are: the Generalized Traveling Salesman Problem (GTSP), the Bin Packing Problem with Conflicts (BPPC) and the Fair Layout Problem (FLOP).

Relevance: 40.00%

Abstract:

Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are more and more requested, in order to quickly find good solutions to complex strategic problems. Resource optimization is nowadays fundamental ground on which to build successful projects. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development seems unable to keep pace with that enjoyed by modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (like Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing us to address instances with dimensions not possible before. We approached four notable Combinatorial Optimization problems: a Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claim by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to 40x.
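
The kind of parallelism meant here is easiest to see in Dynamic Programming: within one stage, the recurrence is independent across states, so a whole DP layer can be updated in parallel. A minimal sketch on the 0/1 knapsack (a toy relative of the packing recursions, not a method from the thesis), where the capacity axis is updated as one vectorized, data-parallel operation:

    import numpy as np

    def knapsack_dp(weights, values, capacity):
        # dp[c] = best value achievable with capacity c. Each item's stage
        # updates every capacity at once from the previous layer: this
        # elementwise maximum is the data-parallel operation a multicore
        # or GPU implementation would distribute across threads.
        dp = np.zeros(capacity + 1)
        for w, v in zip(weights, values):
            take = np.concatenate([dp[:w], dp[:capacity + 1 - w] + v])
            dp = np.maximum(dp, take)
        return dp[capacity]

    print(knapsack_dp([3, 4, 5], [4, 5, 6], capacity=8))   # -> 9.0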

Relevance: 40.00%

Abstract:

Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial advance over the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework was then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
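
The primal decomposition idea for constraint-coupled problems can be sketched on a toy instance: a master allocates the shared resource among agents, each agent solves its own subproblem against its allocation, and the local constraint multipliers (subgradients of the agents' value functions) drive the reallocation. The following centrally-simulated sketch uses scalar quadratic costs and a single budget; costs, step size and budget are illustrative, and the thesis's graph-based, time-varying versions are substantially richer:

    import numpy as np

    def agent_solve(y, a):
        # Agent subproblem: min a*(x-1)^2 s.t. x <= y. Returns the optimal
        # x and the multiplier of the local constraint (the subgradient
        # information the agent reports back to the master).
        x = min(1.0, y)
        lam = max(0.0, 2.0 * a * (1.0 - y))
        return x, lam

    # master: split budget b among agents, keeping sum(y) == b
    a = np.array([1.0, 2.0, 4.0])          # illustrative cost curvatures
    b = 2.0
    y = np.full(3, b / 3)
    for _ in range(300):
        lam = np.array([agent_solve(yi, ai)[1] for yi, ai in zip(y, a)])
        y += 0.01 * (lam - lam.mean())     # reallocate toward needier agents;
                                           # the mean-shift keeps sum(y) == b
    x = [round(agent_solve(yi, ai)[0], 3) for yi, ai in zip(y, a)]
    print(x)   # approaches [3/7, 5/7, 6/7], the coupled optimum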

Relevance: 40.00%

Abstract:

Combinatorial optimization problems have been widely addressed throughout history. Their study involves highly applied problems that must be solved in reasonable times. This doctoral thesis addresses three Operations Research problems. The first is the Traveling Salesman Problem with Pickups and Deliveries and Handling costs, which was approached with two metaheuristics based on Iterated Local Search; the results show that the proposed methods are faster and obtain good results with respect to the metaheuristics from the literature. The second problem is the Quadratic Multiple Knapsack Problem, for which polynomial formulations and relaxations are presented on new instances of the problem; in addition, a metaheuristic and a matheuristic are proposed that are competitive with state-of-the-art algorithms. Finally, an Open-Pit Mining problem is approached and solved with a parallel genetic algorithm that allows excavations using truncated cones. Each of these problems was computationally tested on difficult instances from the literature, obtaining good-quality results in reasonable computational times and making significant contributions to the state of the art of Operations Research techniques.
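
Iterated Local Search alternates a local-search descent with a perturbation of the incumbent, restarting the descent from the perturbed solution. A minimal sketch on the plain TSP (the thesis's variants add pickup/delivery precedence and handling costs on top of this skeleton); the perturbation shown is a simple segment shuffle rather than a tuned double-bridge move:

    import numpy as np

    def tour_len(tour, D):
        return D[tour, np.roll(tour, -1)].sum()

    def two_opt(tour, D):
        # Local search: keep reversing segments while the tour improves
        # (written for clarity, not speed).
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    cand = np.concatenate([tour[:i], tour[i:j][::-1], tour[j:]])
                    if tour_len(cand, D) < tour_len(tour, D):
                        tour, improved = cand, True
        return tour

    def ils(D, n_iter=50, seed=None):
        # Iterated Local Search: perturb the incumbent, re-run the local
        # search from the perturbed tour, keep the better of the two.
        rng = np.random.default_rng(seed)
        best = two_opt(np.arange(len(D)), D)
        for _ in range(n_iter):
            kick = best.copy()
            i, j = np.sort(rng.choice(np.arange(1, len(D)), 2, replace=False))
            kick[i:j] = rng.permutation(kick[i:j])        # perturbation
            cand = two_opt(kick, D)
            if tour_len(cand, D) < tour_len(best, D):
                best = cand
        return best

    pts = np.random.default_rng(0).random((15, 2))        # toy instance
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(tour_len(ils(D, seed=1), D))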

Relevance: 40.00%

Abstract:

Several decision and control tasks involve networks of cyber-physical systems that need to be coordinated and controlled according to a fully-distributed paradigm involving only local communications, without any central unit. This thesis focuses on distributed optimization and games over networks from a system-theoretical perspective. In the addressed frameworks, we consider agents communicating only with neighbors and running distributed algorithms with optimization-oriented goals. The distinctive feature of this thesis is to interpret these algorithms as dynamical systems and, thus, to resort to powerful system-theoretical tools for both their analysis and design. We first address the so-called consensus optimization setup. In this context, we provide an original system-theoretical analysis of the well-known Gradient Tracking algorithm in the general case of nonconvex objective functions. Then, inspired by this method, we provide and study a series of extensions to improve performance and to deal with more challenging settings, such as the derivative-free or online frameworks. Subsequently, we tackle the recently emerged framework named distributed aggregative optimization. For this setup, we develop and analyze novel schemes to handle (i) online instances of the problem, (ii) "personalized" optimization frameworks, and (iii) feedback optimization settings. Finally, we adopt a system-theoretical approach to address aggregative games over networks, both in the presence and in the absence of linear coupling constraints among the decision variables of the players. In this context, we design and inspect novel fully-distributed algorithms, based on tracking mechanisms, that outperform state-of-the-art methods in finding the Nash equilibrium of the game.
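
The Gradient Tracking algorithm referred to here maintains, alongside each agent's estimate x_i, an auxiliary variable s_i that tracks the network-wide average gradient: x is updated by a consensus step minus a step along s, and s by a consensus step plus the change in the local gradient. A minimal scalar-per-agent sketch of the standard scheme (the weight matrix, costs and step size are toy choices):

    import numpy as np

    def gradient_tracking(grads, W, x0, alpha=0.05, n_iter=300):
        # W: doubly stochastic weights of a connected graph; each agent
        # mixes neighbors' estimates and keeps a tracker s of the average
        # gradient, updated with the change of its local gradient.
        x = x0.astype(float).copy()
        g = np.array([grads[i](x[i]) for i in range(len(x))])
        s = g.copy()
        for _ in range(n_iter):
            x_new = W @ x - alpha * s
            g_new = np.array([grads[i](x_new[i]) for i in range(len(x))])
            s = W @ s + g_new - g
            x, g = x_new, g_new
        return x        # every agent converges to the minimizer of sum_i f_i

    # toy: three agents with f_i(x) = (x - c_i)^2; the optimum is mean(c)
    c = np.array([0.0, 2.0, 7.0])
    grads = [lambda x, ci=ci: 2.0 * (x - ci) for ci in c]
    W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
    print(gradient_tracking(grads, W, x0=np.zeros(3)))    # ~[3, 3, 3]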

Relevance: 40.00%

Abstract:

This thesis deals with the efficient solution of optimization problems of practical interest. The first part of the thesis deals with bin packing problems. The bin packing problem (BPP) is one of the oldest and most fundamental combinatorial optimization problems, and it and its generalizations arise often in real-world applications, from the manufacturing industry, logistics and transportation of goods, to scheduling. After an introductory chapter, I present two applications of two of the most natural extensions of bin packing: Chapter 2 is dedicated to an application of bin packing in two dimensions to a problem of scheduling a set of computational tasks on a computer cluster, while Chapter 3 deals with the generalization of the BPP in three dimensions that arises frequently in logistics and transportation, often complemented with additional constraints on the placement of items and on the characteristics of the solution, like, for example, guarantees on the stability of the items (to avoid potential damage to the transported goods), on the distribution of the total weight among the bins, and on compatibility with loading and unloading operations. The second part of the thesis, and in particular Chapter 4, considers the Transmission Expansion Problem (TEP), where an electrical transmission grid must be expanded so as to satisfy future energy demand at minimum cost, while maintaining some guarantees of robustness to potential line failures. These problems are gaining importance in a world where the shift towards renewable energy can impose a significant geographical reallocation of generation capacities, resulting in the necessity of expanding current power transmission grids.
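
For the one-dimensional BPP at the root of these generalizations, the flavor of the constructive heuristics involved can be conveyed by first-fit decreasing, a classic baseline (not a method from the thesis): sort items by non-increasing size and place each one into the first bin that still has room.

    def first_fit_decreasing(items, capacity):
        # Sort items by non-increasing size; put each into the first bin
        # with enough residual capacity, opening a new bin if none fits.
        free = []       # residual capacity per open bin
        packing = []    # items per bin
        for size in sorted(items, reverse=True):
            for i, slack in enumerate(free):
                if size <= slack:
                    free[i] -= size
                    packing[i].append(size)
                    break
            else:
                free.append(capacity - size)
                packing.append([size])
        return packing

    print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
    # -> [[8, 2], [4, 4, 1, 1]]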

Relevance: 30.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of the seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; in fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test.

This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the considerable geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference), was substantially reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, belonging as they do to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (with and without the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further justification for the modest quality of the results given by cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: (a) it is necessary to correlate waveform segments corresponding to the same seismic phases; (b) it is not essential to select the exact first arrivals; and (c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long time to process the data, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
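
The core measurement in both settings is a differential arrival time estimated from waveform similarity. A minimal sketch: cross-correlate two equal-length windowed traces and refine the peak lag by parabolic (sub-sample) interpolation, a simple stand-in for the signal interpolation mentioned above; the traces and sampling interval are placeholders:

    import numpy as np

    def cc_delay(a, b, dt):
        # Delay of trace b relative to trace a via cross-correlation, with
        # parabolic interpolation of the peak to resolve sub-sample lags;
        # a and b are equal-length windows sampled at interval dt.
        n = len(a)
        cc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
        k = int(cc.argmax())
        if 0 < k < len(cc) - 1:
            y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
            k = k + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
        return (k - (n - 1)) * dt          # positive: b lags a

    # toy usage: b is a delayed by 3 ms, sampled at 1 ms
    t = np.arange(1024) * 1e-3
    a = np.exp(-((t - 0.500) / 0.02) ** 2)
    b = np.exp(-((t - 0.503) / 0.02) ** 2)
    print(cc_delay(a, b, dt=1e-3))         # ~0.003 s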

Relevance: 30.00%

Abstract:

In the present work, multi-objective optimization by genetic algorithms is investigated and applied to heat transfer problems. Firstly, the work compares different reproduction processes employed by genetic algorithms, and two new promising processes are suggested. Secondly, two heat transfer problems are studied from the multi-objective point of view: wavy fins and the corrugated-wall channel. Both cases had already been studied with a single-objective optimizer; this work therefore extends the previous works into a more comprehensive study.
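
The step that distinguishes a multi-objective genetic algorithm from a single-objective one is ranking by Pareto dominance rather than by a scalar fitness. A minimal sketch of the non-dominated filter such an algorithm applies at every generation; the objective values are illustrative (e.g. pressure drop versus inverse heat transfer rate, both to be minimized):

    import numpy as np

    def pareto_front(F):
        # Return indices of non-dominated rows of F (minimization):
        # row j dominates row i if it is no worse in every objective
        # and strictly better in at least one.
        nd = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            for j in range(len(F)):
                if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                    nd[i] = False
                    break
        return np.where(nd)[0]

    F = np.array([[1.0, 5.0], [2.0, 2.0], [5.0, 1.0], [4.0, 4.0]])
    print(pareto_front(F))   # -> [0 1 2]; point 3 is dominated by point 1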