18 results for direct search optimization algorithm
at Universidade do Minho
Abstract:
The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method when compared with other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem of a biological system shows that the EM implementation can be applied with promising results in the control optimization area.
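For readers who want to see the mechanism concretely, below is a minimal sketch of one EM attraction-repulsion move in Python, following Birbil and Fang's original formulation; the function name, the exponential charge formula and the random step length are illustrative choices, not the paper's Matlab code.

```python
import numpy as np

def em_move(X, f_vals, lb, ub, rng):
    """One attraction-repulsion move over a population X of shape (m, n)."""
    m, n = X.shape
    best = f_vals.min()
    # Better points (lower f) receive larger charges (Birbil & Fang, 2003).
    denom = max(f_vals.sum() - m * best, 1e-12)
    q = np.exp(-n * (f_vals - best) / denom)
    F = np.zeros_like(X)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            d = X[j] - X[i]
            dist2 = max(float(d @ d), 1e-12)
            if f_vals[j] < f_vals[i]:
                F[i] += d * q[i] * q[j] / dist2   # attraction towards better point
            else:
                F[i] -= d * q[i] * q[j] / dist2   # repulsion from worse point
    i_best = int(f_vals.argmin())
    for i in range(m):
        if i == i_best:                           # keep the current best fixed
            continue
        norm = np.linalg.norm(F[i])
        if norm > 0:
            X[i] = np.clip(X[i] + rng.random() * F[i] / norm, lb, ub)
    return X
```

One such move per iteration, re-evaluating the objective afterwards, e.g. `X = em_move(X, f_vals, lb, ub, np.random.default_rng())`.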
Abstract:
In this paper, we propose an extension of the firefly algorithm (FA) to multi-objective optimization. FA is a swarm intelligence optimization algorithm, inspired by the flashing behavior of fireflies at night, that is capable of computing global solutions to continuous optimization problems. Our proposal relies on a fitness assignment scheme that gives lower fitness values to the positions of fireflies that correspond to non-dominated points with a smaller aggregation of objective function distances to the minimum values. Furthermore, FA randomness is based on the spread metric to reduce the gaps between consecutive non-dominated solutions. The results obtained from preliminary computational experiments show that our proposal produces a dense and well-distributed approximate Pareto front with a large number of points.
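A minimal sketch of the kind of fitness assignment described above, assuming a plain sum of distances to the per-objective minima as the aggregation; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def mo_fitness(obj):
    """Lower fitness is better. obj: (m, k) matrix of objective values.
    Non-dominated points with a smaller aggregated distance to the
    per-objective minima receive the lowest fitness values."""
    m, _ = obj.shape
    z_min = obj.min(axis=0)                 # per-objective minimum (ideal point)
    agg = np.abs(obj - z_min).sum(axis=1)   # aggregated distance to the minima
    dominated = np.array([
        any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(m))
        for i in range(m)
    ])
    # Dominated points are pushed behind every non-dominated one.
    return agg + dominated * (agg.max() + 1.0)
```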
Abstract:
Earthworks tasks are often regarded as some of the most demanding processes in transportation projects. In fact, sequential tasks such as excavation, transportation, spreading and compaction rely heavily on mechanical equipment and repetitive processes, making them as economically demanding as they are time-consuming. Moreover, current construction requirements impose higher demands for productivity and safety in earthwork constructions. Given the share of earthworks in the costs and duration of infrastructure construction, the optimal usage of every resource in these tasks is paramount. Considering the characteristics of an earthwork construction, it can be regarded as a production line based on resources (mechanical equipment) and dependency relations between sequential tasks, and is hence susceptible to optimization. To date, the steady development of Information Technology areas such as databases, artificial intelligence and operations research has resulted in the emergence of several technologies with potential application to this purpose. Among these, modern optimization methods (also known as metaheuristics), such as evolutionary computation, have the potential to find high-quality solutions with a reasonable use of computational resources. In this context, this work describes an optimization algorithm for earthworks equipment allocation based on a modern optimization approach, which takes advantage of the concept that an earthwork construction can be regarded as a production line.
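The production-line analogy can be made concrete with a small data structure; the sketch below is an illustrative simplification under assumed names (Task, rate, predecessor), not the model used in this work.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str                       # e.g. "excavation", "compaction"
    equipment: list[str]            # machines allocated to this task
    rate: float                     # production rate (m3/h) of the allocation
    predecessor: "Task | None" = None

    def throughput(self) -> float:
        """Effective rate: a task can never outpace the task feeding it."""
        if self.predecessor is None:
            return self.rate
        return min(self.rate, self.predecessor.throughput())

excavation = Task("excavation", ["excavator-1"], rate=120.0)
transport = Task("transport", ["truck-1", "truck-2"], rate=100.0,
                 predecessor=excavation)
compaction = Task("compaction", ["compactor-1"], rate=90.0,
                  predecessor=transport)
print(compaction.throughput())      # 90.0: the line runs at its bottleneck
```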
Abstract:
Dissertation for the Integrated Master's degree in Civil Engineering
Abstract:
A novel framework for probabilistic structural assessment of existing structures, which combines model identification and reliability assessment procedures and considers different sources of uncertainty in an objective way, is presented in this paper. A short description of structural assessment applications reported in the literature is given first. Then, the developed model identification procedure, supported by a robust optimization algorithm, is presented. Special attention is given to both experimental and numerical errors, which are considered in this algorithm's convergence criterion. An updated numerical model is obtained from this process. The reliability assessment procedure, which considers a probabilistic model for the structure under analysis, is then introduced, incorporating the results of the model identification procedure. The developed model is then updated as new data are acquired, through a Bayesian inference algorithm that explicitly addresses statistical uncertainty. Finally, the developed framework is validated with a set of reinforced concrete beams, which were loaded up to failure in the laboratory.
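As an illustration of the Bayesian updating step mentioned above, the sketch below performs a conjugate normal update of a single structural parameter as new measurements arrive; the names and the normal-normal model are assumptions for demonstration, not the framework's actual inference algorithm.

```python
import numpy as np

def bayes_update_normal(mu0, var0, data, noise_var):
    """Conjugate normal update of one parameter.
    mu0/var0: prior mean and variance; data: new measurements;
    noise_var: assumed measurement-error variance."""
    n = len(data)
    var_post = 1.0 / (1.0 / var0 + n / noise_var)
    mu_post = var_post * (mu0 / var0 + np.sum(data) / noise_var)
    return mu_post, var_post

# Prior belief about a stiffness parameter, updated with three measurements.
mu, var = bayes_update_normal(30.0, 4.0, [28.5, 29.1, 28.8], noise_var=1.0)
print(mu, var)   # posterior mean shifts towards the data, variance shrinks
```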
Abstract:
PhD thesis in Industrial and Systems Engineering.
Abstract:
Earthworks involve the levelling or shaping of a target area through the moving or processing of the ground surface. Most construction projects require earthworks, which are heavily dependent on mechanical equipment (e.g., excavators, trucks and compactors). Often, earthworks are the most costly and time-consuming component of infrastructure constructions (e.g., roads, railways and airports), and current pressure for higher productivity and safety highlights the need to optimize earthworks, which is a nontrivial task. Most previous attempts at tackling this problem focus on single-objective optimization of partial processes or aspects of earthworks, overlooking the advantages of a multi-objective and global optimization. This work describes a novel optimization system based on an evolutionary multi-objective approach, capable of globally optimizing several objectives simultaneously and dynamically. The proposed system views an earthwork construction as a production line, where the goal is to optimize resources under two crucial criteria (costs and duration), and focuses the evolutionary search (non-dominated sorting genetic algorithm II, NSGA-II) on compaction allocation, using linear programming to distribute the remaining equipment (e.g., excavators). Several experiments were held using real-world data from a Portuguese construction site, showing that the proposed system is quite competitive when compared with current manual earthwork equipment allocation.
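A minimal sketch of the two-level idea described above: with the compactor allocation fixed by the evolutionary search, a small linear program distributes the remaining equipment. The model below (variables, rates and constraints) is an assumed simplification for illustration, not the system's actual formulation.

```python
from scipy.optimize import linprog

def distribute_excavators(comp_rates, exc_rate, n_excavators):
    """Assign (fractional) excavators to fronts so that production at each
    front does not exceed its compaction capacity, maximizing total output.
    comp_rates: compaction capacity per front (m3/h);
    exc_rate: production rate of one excavator (m3/h)."""
    n = len(comp_rates)
    c = [-exc_rate] * n                  # maximize total production
    A_ub, b_ub = [], []
    for j in range(n):                   # per-front capacity constraint
        row = [0.0] * n
        row[j] = exc_rate
        A_ub.append(row)
        b_ub.append(comp_rates[j])
    A_ub.append([1.0] * n)               # fleet size constraint
    b_ub.append(n_excavators)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
    return res.x                         # excavators per front

print(distribute_excavators([90.0, 60.0], exc_rate=50.0, n_excavators=3))
```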
Abstract:
A search is presented for the direct pair production of a chargino and a neutralino, pp → χ̃₁±χ̃₂⁰, where the chargino decays to the lightest neutralino and the W boson, χ̃₁± → χ̃₁⁰(W± → ℓ±ν), while the neutralino decays to the lightest neutralino and the 125 GeV Higgs boson, χ̃₂⁰ → χ̃₁⁰(h → bb/γγ/ℓ±νqq). The final states considered for the search have large missing transverse momentum, an isolated electron or muon, and one of the following: either two jets identified as originating from bottom quarks, or two photons, or a second electron or muon with the same electric charge. The analysis is based on 20.3 fb⁻¹ of √s = 8 TeV proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with the Standard Model expectations, and limits are set in the context of a simplified supersymmetric model.
Abstract:
The artificial fish swarm algorithm has recently emerged in continuous global optimization. It uses points of a population in space to identify the position of fish in the school. Many real-world optimization problems are described by 0-1 multidimensional knapsack problems, which are NP-hard. In the last decades, several exact as well as heuristic methods have been proposed for solving these problems. In this paper, a new simplified binary version of the artificial fish swarm algorithm is presented, where a point/fish is represented by a binary string of 0/1 bits. Trial points are created by using crossover and mutation in the different fish behaviors, which are randomly selected by using two user-defined probability values. In order to make the points feasible, the presented algorithm uses a random heuristic drop-item procedure followed by an add-item procedure aiming to increase the profit through the addition of more items to the knapsack. A cyclic reinitialization of 50% of the population, and a simple local search that allows a small percentage of points to progress towards optimality and then refines the best point in the population, greatly improve the quality of the solutions. The presented method is tested on a set of benchmark instances and a comparison with other methods available in the literature is shown. The comparison shows that the proposed method can be an alternative method for solving these problems.
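A minimal sketch of the drop/add repair idea described above for the 0-1 multidimensional knapsack; the random drop order and profit-greedy add order are illustrative assumptions, not necessarily the paper's exact heuristics.

```python
import random

def repair(x, weights, profits, capacities):
    """Make a binary point x feasible, then pack more items.
    weights[k][i]: weight of item i in constraint k; capacities[k]: limits."""
    n, m = len(x), len(capacities)
    load = [sum(weights[k][i] * x[i] for i in range(n)) for k in range(m)]

    def feasible():
        return all(load[k] <= capacities[k] for k in range(m))

    # Drop phase: remove randomly chosen packed items until feasible.
    packed = [i for i in range(n) if x[i]]
    random.shuffle(packed)
    for i in packed:
        if feasible():
            break
        x[i] = 0
        for k in range(m):
            load[k] -= weights[k][i]
    # Add phase: insert unpacked items (highest profit first) while they fit.
    for i in sorted((i for i in range(n) if not x[i]),
                    key=lambda i: -profits[i]):
        if all(load[k] + weights[k][i] <= capacities[k] for k in range(m)):
            x[i] = 1
            for k in range(m):
                load[k] += weights[k][i]
    return x
```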
Abstract:
The present paper reports the precipitation process of Al3Sc structures in an aluminum-scandium alloy, which has been simulated with a synchronous parallel kinetic Monte Carlo (spkMC) algorithm. The spkMC implementation is based on the vacancy diffusion mechanism. To filter the raw data generated by the spkMC simulations, the density-based spatial clustering of applications with noise (DBSCAN) method has been employed. The spkMC and DBSCAN algorithms were implemented in the C language using the MPI library. The simulations were conducted on the SeARCH cluster located at the University of Minho. The Al3Sc precipitation was successfully simulated at the atomistic scale with the spkMC. DBSCAN proved to be a valuable aid in identifying the precipitates by performing a cluster analysis of the simulation results. The achieved simulation results are in good agreement with those reported in the literature using sequential kinetic Monte Carlo (kMC) simulations. The parallel implementation of kMC provided a 4x speedup over the sequential version.
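As an illustration of the clustering step, the snippet below applies DBSCAN to a set of (placeholder) Sc atom positions to count precipitates; the paper's implementation is in C with MPI, so scikit-learn and the eps/min_samples values here are assumptions for demonstration only.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder for atom positions extracted from a kMC snapshot.
sc_positions = np.random.rand(1000, 3) * 50.0

# Dense groups of Sc atoms are treated as precipitates; label -1 is noise.
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(sc_positions)
n_precipitates = len(set(labels)) - (1 if -1 in labels else 0)
print(f"precipitates found: {n_precipitates}")
```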
Abstract:
PhD thesis in Bioengineering
Abstract:
Results of a search for decays of massive particles to fully hadronic final states are presented. This search uses 20.3 fb⁻¹ of data collected by the ATLAS detector in √s = 8 TeV proton-proton collisions at the LHC. Signatures based on high jet multiplicities without requirements on the missing transverse momentum are used to search for R-parity-violating supersymmetric gluino pair production with subsequent decays to quarks. The analysis is performed using a requirement on the number of jets, in combination with separate requirements on the number of b-tagged jets, as well as a topological observable formed from the scalar sum of the mass values of large-radius jets in the event. Results are interpreted in the context of all possible branching ratios of direct gluino decays to various quark flavors. No significant deviation is observed from the expected Standard Model backgrounds estimated using jet-counting as well as data-driven templates of the total-jet-mass spectra. Gluino pair decays to ten or more quarks via intermediate neutralinos are excluded for a gluino with mass m(g̃) < 1 TeV for a neutralino mass m(χ̃₁⁰) = 500 GeV. Direct gluino decays to six quarks are excluded for m(g̃) < 917 GeV for light-flavor final states, and results for various flavor hypotheses are presented.
Abstract:
Results of a search for new phenomena in events with an energetic photon and large missing transverse momentum with the ATLAS experiment at the LHC are reported. Data were collected in proton-proton collisions at a center-of-mass energy of 8 TeV and correspond to an integrated luminosity of 20.3 fb⁻¹. The observed data are well described by the expected Standard Model backgrounds. The expected (observed) upper limit on the fiducial cross section for the production of such events is 6.1 (5.3) fb at 95% confidence level. Exclusion limits are presented on models of new phenomena with large extra spatial dimensions, supersymmetric quarks, and direct pair production of dark-matter candidates.
Abstract:
Natural selection favors the survival and reproduction of organisms that are best adapted to their environment. The selection mechanism in evolutionary algorithms mimics this process, aiming to create environmental conditions in which artificial organisms can evolve, solving the problem at hand. This paper proposes a new selection scheme for evolutionary multiobjective optimization. The similarity measure that defines the concept of the neighborhood is a key feature of the proposed selection. Contrary to commonly used approaches, usually defined on the basis of distances between either individuals or weight vectors, it is suggested to consider similarity and neighborhood based on the angle between individuals in the objective space. The smaller the angle, the more similar the individuals. This notion is exploited during the mating and environmental selections. Convergence is ensured by minimizing distances from individuals to a reference point, whereas diversity is preserved by maximizing angles between neighboring individuals. Experimental results reveal a highly competitive performance and useful characteristics of the proposed selection. Its strong diversity-preserving ability allows it to produce significantly better performance on some problems when compared with state-of-the-art algorithms.
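A minimal sketch of the angle-based notion of similarity and a greedy environmental selection built on it; this illustrates the idea under assumed names and a simplified scheme, not the paper's algorithm.

```python
import numpy as np

def angle(u, v):
    """Angle between two objective vectors (translated to a reference point)."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angle_based_select(obj, z, k):
    """Greedy selection of k individuals from obj (m, n_obj): convergence is
    driven by distance to the reference point z, diversity by pairwise angles."""
    U = obj - z                             # translate to the reference point
    dist = np.linalg.norm(U, axis=1)
    chosen = [int(np.argmin(dist))]         # start from the closest individual
    while len(chosen) < k:
        rest = [i for i in range(len(obj)) if i not in chosen]
        # Pick the individual whose smallest angle to the chosen set is largest.
        score = [min(angle(U[i], U[j]) for j in chosen) for i in rest]
        chosen.append(rest[int(np.argmax(score))])
    return chosen
```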
Abstract:
A search for flavour-changing neutral-current decays of a top quark to an up-type quark (q = u, c) and the Standard Model Higgs boson, where the Higgs boson decays to bb̄, is presented. The analysis searches for top quark pair events in which one top quark decays to Wb, with the W boson decaying leptonically, and the other top quark decays to Hq. The search is based on pp collisions at √s = 8 TeV recorded in 2012 with the ATLAS detector at the CERN Large Hadron Collider and uses an integrated luminosity of 20.3 fb⁻¹. Data are analysed in the lepton-plus-jets final state, characterised by an isolated electron or muon and at least four jets. The search exploits the high multiplicity of b-quark jets characteristic of signal events, and employs a likelihood discriminant that uses the kinematic differences between the signal and the background, which is dominated by tt̄ → WbWb decays. No significant excess of events above the background expectation is found, and observed (expected) 95% CL upper limits of 0.56% (0.42%) and 0.61% (0.64%) are derived for the t → Hc and t → Hu branching ratios respectively. The combination of this search with other ATLAS searches in the H → γγ, H → WW* and H → ττ decay modes significantly improves the sensitivity, yielding observed (expected) 95% CL upper limits on the t → Hc and t → Hu branching ratios of 0.46% (0.25%) and 0.45% (0.29%) respectively. The corresponding combined observed (expected) upper limits on the |λ_tcH| and |λ_tuH| couplings are 0.13 (0.10) and 0.13 (0.10) respectively. These are the most restrictive direct bounds on tqH interactions measured so far.