924 results for global optimization algorithms
Abstract:
This paper presents an optimization approach for the job shop scheduling problem (JSSP). The JSSP is a difficult combinatorial optimization problem for which extensive research has been devoted to the development of efficient algorithms. The proposed approach is based on a genetic algorithm technique. Scheduling rules such as SPT and MWKR are integrated into the process of genetic evolution. The chromosome representation of the problem is based on random keys. Schedules are constructed using a priority rule in which the priorities and delay times of the operations are defined by the genetic algorithm, via a procedure that generates parameterized active schedules. After a schedule is obtained, a local search heuristic is applied to improve the solution. The approach is tested on a set of standard instances taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed approach.
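As a rough illustration of the random-keys idea, the Python sketch below decodes a chromosome of genes in [0, 1) into a schedule by always dispatching the ready operation whose gene value is highest. All names are illustrative, and the sketch builds only semi-active schedules; the paper's decoder additionally uses delay genes to generate parameterized active schedules.

    def decode(chromosome, jobs):
        """Decode a random-key chromosome into a schedule; return its makespan.

        jobs: one routing per job, each a list of (machine, duration) pairs.
        chromosome: one gene in [0, 1) per operation; a higher gene value
        means a higher dispatch priority for that operation.
        """
        n_jobs = len(jobs)
        next_op = [0] * n_jobs                    # next unscheduled op per job
        job_free = [0.0] * n_jobs                 # time each job becomes free
        n_mach = 1 + max(m for job in jobs for m, _ in job)
        mach_free = [0.0] * n_mach                # time each machine becomes free
        offset = [sum(len(j) for j in jobs[:i]) for i in range(n_jobs)]
        makespan, remaining = 0.0, sum(len(j) for j in jobs)
        while remaining:
            ready = [j for j in range(n_jobs) if next_op[j] < len(jobs[j])]
            j = max(ready, key=lambda j: chromosome[offset[j] + next_op[j]])
            machine, duration = jobs[j][next_op[j]]
            start = max(job_free[j], mach_free[machine])
            job_free[j] = mach_free[machine] = start + duration
            makespan = max(makespan, start + duration)
            next_op[j] += 1
            remaining -= 1
        return makespan

A genetic algorithm would evaluate each individual by calling decode on its gene vector, e.g. decode([0.7, 0.1, 0.9, 0.4], [[(0, 3), (1, 2)], [(1, 4), (0, 1)]]).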
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Materials selection is a matter of great importance to engineering design, and software tools are valuable to inform decisions in the early stages of product development. However, when a set of alternative materials is available for the different parts a product is made of, the question of which optimal material mix to choose for a group of parts is not trivial, and the engineer/designer typically goes about it in a part-by-part procedure. Optimizing each part per se can lead to a globally sub-optimal solution from the product point of view. An optimization procedure able to deal with products with multiple parts, each with discrete design variables, and to determine the optimal solution under different objectives is therefore needed. To solve this multiobjective optimization problem, a new routine based on the Direct MultiSearch (DMS) algorithm is created. Results from the Pareto front can help the designer align his/her materials selection for a complete set of materials with product attribute objectives, depending on the relative importance of each objective.
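The DMS algorithm itself is considerably more involved; the minimal Python sketch below shows only the Pareto-dominance filter at the heart of any such multiobjective routine, assuming all objectives are minimized. Names and the example data are illustrative, not from the paper.

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Keep only the non-dominated objective vectors of a finite set."""
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # e.g. (cost, mass) trade-offs for four hypothetical material mixes
    candidates = [(3.0, 1.2), (2.5, 1.5), (4.0, 0.9), (3.5, 1.6)]
    print(pareto_front(candidates))   # (3.5, 1.6) is dominated and dropped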
Abstract:
The optimal design of cold-formed steel columns is addressed in this paper, with two objectives: maximize the local-global buckling strength and maximize the distortional buckling strength. The design variables of the problem are the angles of orientation of the cross-section wall elements; the thickness and width of the steel sheet that forms the cross-section are fixed. The elastic local, distortional and global buckling loads are determined using the Finite Strip Method (CUFSM), and the strength of cold-formed steel columns (of given length) is calculated using the Direct Strength Method (DSM). The bi-objective optimization problem is solved using the Direct MultiSearch (DMS) method, which does not use any derivatives of the objective functions. Trade-off Pareto optimal fronts are obtained separately for symmetric and anti-symmetric cross-section shapes. The results are analyzed and discussed, and some interesting conclusions about the individual strengths (local-global and distortional) are drawn.
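For orientation, the sketch below implements the standard DSM column curves (AISI S100) that map elastic buckling loads to nominal strengths; whether the paper uses exactly these curves and factors is an assumption here. The elastic loads P_cre, P_crl and P_crd would come from a finite strip analysis such as CUFSM.

    def dsm_column_strengths(P_y, P_cre, P_crl, P_crd):
        """Nominal strengths per the standard DSM column curves (AISI S100).

        P_y: squash (yield) load; P_cre, P_crl, P_crd: elastic global,
        local and distortional buckling loads. Returns the local-global
        strength P_nl and the distortional strength P_nd, i.e. the two
        quantities taken as objectives in the paper.
        """
        lam_c = (P_y / P_cre) ** 0.5              # global slenderness
        if lam_c <= 1.5:
            P_ne = 0.658 ** (lam_c ** 2) * P_y
        else:
            P_ne = 0.877 / lam_c ** 2 * P_y
        lam_l = (P_ne / P_crl) ** 0.5             # local slenderness
        if lam_l <= 0.776:
            P_nl = P_ne                           # no local-global interaction
        else:
            P_nl = (1 - 0.15 * (P_crl / P_ne) ** 0.4) * (P_crl / P_ne) ** 0.4 * P_ne
        lam_d = (P_y / P_crd) ** 0.5              # distortional slenderness
        if lam_d <= 0.561:
            P_nd = P_y
        else:
            P_nd = (1 - 0.25 * (P_crd / P_y) ** 0.6) * (P_crd / P_y) ** 0.6 * P_y
        return P_nl, P_nd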
Abstract:
A multiobjective approach to the optimization of passive damping for vibration reduction in sandwich structures is presented in this paper. Constrained optimization is conducted for maximization of modal loss factors and minimization of the weight of sandwich beams and plates with elastic laminated constraining layers and a viscoelastic core, with layer thicknesses, materials and laminate layer ply orientation angles as design variables. The problem is solved using the Direct MultiSearch (DMS) solver for derivative-free multiobjective optimization, and the solutions are compared with alternative ones obtained using genetic algorithms.
Abstract:
We agree with the comments of Ling-Yun et al. [5] and of Zhang and Duan [2] about the typing error in equation (9) of the manuscript [8]. The correct formula was initially proposed in [6, 7]. The formula adopted in the algorithms discussed in our papers [1, 3, 4, 8] is, in fact, the following: ...
Abstract:
Submitted in partial fulfillment of the requirements for the degree of PhD in Mathematics, in the speciality of Statistics, at the Faculdade de Ciências e Tecnologia
Abstract:
In developed countries, civil infrastructures are one of the most significant investments of governments, corporations, and individuals. Among these, transportation infrastructures, including highways, bridges, airports, and ports, are of huge economic and social importance. Most developed countries have built a fairly complete network of highways to fit their needs. As a result, the required investment in building new highways has diminished during the last decade and should be further reduced in the following years. On the other hand, significant structural deterioration has been detected in transportation networks, and a huge investment is necessary to keep these infrastructures safe and serviceable. Due to the significant importance of bridges to the serviceability of highway networks, maintenance of these structures plays a major role. In this paper, recent progress in probabilistic maintenance and optimization strategies for deteriorating civil infrastructures, with emphasis on bridges, is summarized. A novel model is presented that includes the interaction between structural safety analysis, through the safety index, and visual inspections and non-destructive tests, through the condition index. Single-objective optimization techniques leading to maintenance strategies associated with minimum expected cumulative cost and acceptable levels of condition and safety are presented. Furthermore, multi-objective optimization is used to simultaneously consider several performance indicators, such as safety, condition, and cumulative cost. Realistic examples of the application of some of these techniques and strategies are also presented.
Abstract:
This article introduces schedulability analysis for global fixed priority scheduling with deferred pre-emption (gFPDS) for homogeneous multiprocessor systems. gFPDS is a superset of global fixed priority pre-emptive scheduling (gFPPS) and global fixed priority non-pre-emptive scheduling (gFPNS). We show how schedulability can be improved under gFPDS via an appropriate choice of priority assignment and final non-pre-emptive region lengths, and provide algorithms which optimize schedulability in this way. Via an experimental evaluation we compare the performance of multiprocessor scheduling using the global approaches gFPDS, gFPPS, and gFPNS, and also partitioned approaches employing FPDS, FPPS, and FPNS on each processor.
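The paper's own assignment algorithms are specified there; as background, priority-assignment schemes of this kind are often built on Audsley's optimal priority assignment, sketched below in Python. The function name and the caller-supplied schedulability test fits_lowest are assumptions for illustration.

    def audsley(tasks, fits_lowest):
        """Audsley-style priority assignment, lowest priority first.

        tasks: iterable of task objects; fits_lowest(t, higher): True when
        task t meets its deadline at the lowest remaining priority level
        with the set `higher` of tasks assumed to run at higher priority.
        Returns tasks ordered from lowest to highest priority, or None.
        """
        unassigned = set(tasks)
        order = []
        while unassigned:
            t = next((t for t in unassigned
                      if fits_lowest(t, unassigned - {t})), None)
            if t is None:
                return None    # no ordering found by this greedy scheme
            order.append(t)
            unassigned.remove(t)
        return order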
Abstract:
10th Conference on Telecommunications (Conftele 2015), Aveiro, Portugal.
Abstract:
8th International Workshop on Multiple Access Communications (MACOM2015), Helsinki, Finland.
Abstract:
Redundant manipulators have some advantages over classical arms because they allow trajectory optimization, both in free space and in the presence of obstacles, and the resolution of singularities. For this type of manipulator, several kinematic algorithms adopt generalized inverse matrices. In this line of thought, the generalized inverse control scheme is tested through several experiments that reveal the difficulties that often arise. Motivated by these problems, this paper presents a new method that optimizes manipulability through a least-squares polynomial approximation to determine the joint positions. Moreover, the article studies the influence on the dynamics when controlling redundant and hyper-redundant manipulators. The experiments confirm the superior performance of the proposed algorithm for redundant and hyper-redundant manipulators, reveal several fundamental properties of the chaotic phenomena, and give a deeper insight towards the future development of superior trajectory control algorithms.
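For reference, the classical generalized-inverse scheme the experiments test can be sketched as below (Python with numpy assumed). The null-space term with a manipulability gradient is the textbook construction; the paper's contribution replaces this with a least-squares polynomial approximation, which is not shown here.

    import numpy as np

    def manipulability(J):
        """Yoshikawa's manipulability measure w(q) = sqrt(det(J J^T))."""
        return np.sqrt(np.linalg.det(J @ J.T))

    def resolve_step(J, dx, grad_w, k=0.1):
        """One step of the classical generalized-inverse scheme.

        J: m x n Jacobian (n > m for a redundant arm); dx: desired
        end-effector velocity; grad_w: gradient of w(q) w.r.t. the joints.
        The null-space term exploits the self-motion of the arm to raise
        manipulability without disturbing the end-effector trajectory.
        """
        J_pinv = np.linalg.pinv(J)
        null_proj = np.eye(J.shape[1]) - J_pinv @ J
        return J_pinv @ dx + k * null_proj @ grad_w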
Abstract:
According to the new KDIGO (Kidney Disease: Improving Global Outcomes) guidelines, the term renal osteodystrophy should be used exclusively in reference to the invasive diagnosis of bone abnormalities. Due to the low sensitivity and specificity of biochemical serum markers of bone remodelling, the performance of bone biopsies is strongly encouraged in dialysis patients and after kidney transplantation. Tartrate-resistant acid phosphatase (TRACP) is an iso-enzyme of the group of acid phosphatases which is highly expressed by activated osteoclasts and macrophages. In osteoclasts, TRACP is found in intracytoplasmic vesicles that transport the products of bone matrix degradation. Being present in activated osteoclasts, the identification of this enzyme by histochemistry in undecalcified bone biopsies is an excellent method to quantify bone resorption. Since it is an enzymatic histochemical method for a thermolabile enzyme, the temperature at which it is performed is particularly relevant. This study aimed to determine the optimal temperature for the identification of TRACP in activated osteoclasts in undecalcified bone biopsies embedded in methylmethacrylate. We selected 10 cases of undecalcified bone biopsies from hemodialysis patients with the diagnosis of secondary hyperparathyroidism. Sections of 5 μm were stained to identify TRACP at different incubation temperatures (37°, 45°, 60°, 70° and 80°C) for 30 minutes. Activated osteoclasts stained red, and trabecular (mineralized) bone was counterstained with toluidine blue. This approach also increased the visibility of the trabecular bone resorption areas (Howship lacunae). Unlike what is suggested in the literature and in several international protocols, we found that the best results were obtained at temperatures between 60°C and 70°C. For technical reasons and according to the results of the present study, we recommend that, for an incubation time of 30 minutes, the reaction be carried out at 60°C. As active osteoclasts are usually scarce in a bone section, the standardization of the histochemistry method is of great relevance to optimize the identification of these cells and increase the accuracy of the histomorphometric results. Our results, allowing an increase in osteoclast contrast, also support the use of semi-automatic histomorphometric measurements.
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
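The paper defines its exact merit function; the Python sketch below shows one plausible erf-based repulsion construction to convey the idea. The penalty form and the constants beta and rho are assumptions for illustration, not the paper's formula.

    import math

    def merit(x, F, roots, beta=10.0, rho=5.0):
        """Penalty-type merit function for multiple-root finding.

        F: the system of equations, F(x) -> list of residuals; the base
        merit is the squared residual norm, zero exactly at a root.
        Each previously found root r adds a bump that fades with distance
        (erf(0) = 0, so the penalty peaks on top of r), pushing the
        Nelder-Mead search away from roots already located.
        """
        base = sum(f * f for f in F(x))
        dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
        repulsion = sum(beta * (1.0 - math.erf(rho * dist(x, r))) for r in roots)
        return base + repulsion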
Abstract:
Nowadays, real-time systems are growing in importance and complexity. With the move from the uniprocessor to the multiprocessor environment, the work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly due to the existence of multiple processors in the system. It was soon realized that the complexity of the problem does not grow linearly as processors are added. In fact, this complexity stands as a barrier to scientific progress in this area that, for now, remains largely uncharted, and this is witnessed essentially in the case of task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to carry out work that would never be possible in the former, thus creating new performance guarantees, lower monetary costs and lower energy consumption. This last factor proved early on to be perhaps the greatest barrier to the development of new processors in the uniprocessor field, given that, as new processors reached the market offering ever higher performance, a heat-generation limit became apparent that forced the emergence of the multiprocessor field. In the future, the number of processors on a chip is expected to increase and, obviously, new techniques to exploit their inherent advantages have to be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor algorithms have been developed to address this problem, most notably global, partitioned and semi-partitioned algorithms. The global approach assumes the existence of a global queue accessible by all available processors. This makes task migration possible, i.e., the execution of a task can be stopped and resumed on a different processor. At any given instant, the m highest-priority tasks in the task set are selected for execution. This type promises high utilization bounds, at the high cost of task preemptions/migrations. In contrast, partitioned algorithms place tasks in partitions, each of which is assigned to one of the available processors, i.e., each processor is assigned one partition. For that reason, task migration is not possible, which means the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the previous cases: some tasks are split so as to be executed exclusively by a group of processors, while others are assigned to a single processor. The result is a solution capable of distributing the work to be performed in a more efficient and balanced way. Unfortunately, in all these cases there is a discrepancy between theory and practice, since assumptions end up being made that do not hold in real life. To address this problem, these scheduling algorithms must be implemented in real operating systems and their applicability assessed so that, where they fall short, the necessary changes can be made, both at the theoretical and at the practical level.
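The global dispatch rule described above (at any instant, the m highest-priority ready tasks run on the m processors) can be sketched in a few lines of Python; names here are illustrative.

    def global_dispatch(ready_tasks, m):
        """Pick the tasks to run under global fixed-priority scheduling.

        ready_tasks: list of (priority, task) pairs, where a lower number
        means a higher priority; m: number of identical processors.
        At every scheduling decision the m highest-priority ready tasks
        execute, so a task may be pre-empted on one processor and later
        resume (i.e., migrate) on another.
        """
        return [t for _, t in sorted(ready_tasks, key=lambda p: p[0])[:m]]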