950 results for Railways, Scheduling, Heuristics, Search Algorithms
Abstract:
This chapter presents some of the issues surrounding holonic manufacturing systems. It starts by presenting the current manufacturing scenario and trends, and then provides background information on the holonic concept and its application to manufacturing. The current limitations and future trends of manufacturing suggest more autonomous and distributed organisations for manufacturing systems; holonic manufacturing systems are proposed as a way to achieve such autonomy and decentralisation. After a brief literature survey, a specific research work is presented that handles scheduling in holonic manufacturing systems. This work is based on task and resource holons that cooperate with each other through a variant of the contract net protocol that allows the propagation of constraints between operations in the execution plan. The chapter ends by presenting some challenges and future research opportunities.
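As an illustration of the cooperation mechanism described above, here is a minimal, hypothetical Java sketch (not the chapter's implementation) of a contract-net-style exchange: a task holon announces each operation, resource holons bid the completion time they can offer, and the winning finish time is propagated as the earliest-start constraint of the next operation. All class and method names are assumptions made for illustration.

```java
// Minimal sketch of contract-net negotiation between task and resource
// holons, with constraint propagation between consecutive operations.
// All names here are hypothetical illustrations, not the chapter's code.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class ResourceHolon {
    final String name;
    private double nextFree;               // time at which the resource becomes available

    ResourceHolon(String name, double nextFree) {
        this.name = name;
        this.nextFree = nextFree;
    }

    // Answer a call-for-proposals: bid the completion time achievable given
    // the operation's earliest admissible start (the propagated constraint).
    double bid(double earliestStart, double duration) {
        return Math.max(nextFree, earliestStart) + duration;
    }

    void award(double completion) { nextFree = completion; }
}

public class TaskHolon {
    public static void main(String[] args) {
        List<ResourceHolon> resources = new ArrayList<>();
        resources.add(new ResourceHolon("machine-1", 0.0));
        resources.add(new ResourceHolon("machine-2", 3.0));

        double[] durations = {5.0, 2.0, 4.0};  // three sequential operations
        double earliestStart = 0.0;            // constraint propagated along the plan

        for (double d : durations) {
            final double es = earliestStart, dur = d;
            // Contract net: collect proposals, award to the best completion time.
            ResourceHolon best = resources.stream()
                    .min(Comparator.comparingDouble(r -> r.bid(es, dur)))
                    .orElseThrow();
            double completion = best.bid(es, dur);
            best.award(completion);
            System.out.printf("op scheduled on %s, finishes at %.1f%n", best.name, completion);
            earliestStart = completion;        // propagate constraint to next operation
        }
    }
}
```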
Abstract:
In real optimization problems, the analytical expression of the objective function and of its derivatives is usually unknown, or too complex to work with. In these cases it becomes essential to use optimization methods that do not require computing the derivatives or verifying their existence: Direct Search Methods, also called Derivative-free Methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, choosing the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem: a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method for general constrained optimization, based on simplex methods, that combines the features of the simplex method and filter methods. This method does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
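Since the abstract notes the methods were implemented in Java, a short Java sketch of the filter acceptance rule may help fix ideas: a trial point with objective value f and aggregated constraint violation h is accepted unless some stored filter entry dominates it in both measures. The class, the dominance test without margins, and the example values are illustrative assumptions, not the authors' code.

```java
// Sketch of the filter acceptance rule: a trial point (f, h) is accepted
// if no filter entry dominates it, i.e. if it improves either the
// objective f or the aggregated constraint violation h.
import java.util.ArrayList;
import java.util.List;

public class FilterSketch {
    // One filter entry: objective value f and constraint violation h.
    record Entry(double f, double h) {}

    private final List<Entry> filter = new ArrayList<>();

    // Accept the trial point if no stored entry dominates it.
    boolean acceptable(double f, double h) {
        for (Entry e : filter) {
            if (e.f() <= f && e.h() <= h) return false;   // dominated: reject
        }
        return true;
    }

    // Add an accepted point and drop entries it now dominates.
    void add(double f, double h) {
        filter.removeIf(e -> f <= e.f() && h <= e.h());
        filter.add(new Entry(f, h));
    }

    public static void main(String[] args) {
        FilterSketch filter = new FilterSketch();
        filter.add(10.0, 2.0);
        System.out.println(filter.acceptable(9.0, 3.0));  // true: objective improved
        System.out.println(filter.acceptable(11.0, 1.0)); // true: violation improved
        System.out.println(filter.acceptable(11.0, 3.0)); // false: dominated
    }
}
```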
Abstract:
In this paper, a solution to a highly constrained and non-convex economic dispatch (ED) problem is presented, obtained with a meta-heuristic technique named Sensing Cloud Optimization (SCO). The proposed meta-heuristic is based on a cloud of particles whose central point represents the objective function value, while the remaining particles act as sensors that "fill" the search space and "guide" the central particle so that it moves in the best direction. To demonstrate its performance, a case study with multi-fuel units and valve-point effects is presented.
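SCO is the authors' own meta-heuristic, so the following is only a rough, hypothetical sketch of the central-particle-plus-sensors idea as the abstract describes it: sensor particles sample the neighbourhood of a central point, and the central point moves towards the best sensor. The objective function, cloud size, shrink factor and iteration budget are all invented for illustration.

```java
// Rough, hypothetical sketch of a sensing-cloud search loop, not the
// authors' SCO code: sensors probe around the central particle and the
// central particle follows the best probe.
import java.util.Random;

public class SensingCloudSketch {
    // Illustrative 2-D objective (stands in for the ED cost function).
    static double cost(double[] x) {
        return x[0] * x[0] + x[1] * x[1];
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double[] center = {4.0, -3.0};
        double radius = 1.0;

        for (int it = 0; it < 100; it++) {
            double[] best = center.clone();
            double bestCost = cost(center);
            // Sensors "fill" the search space around the central particle.
            for (int s = 0; s < 20; s++) {
                double[] probe = {
                    center[0] + radius * (2 * rng.nextDouble() - 1),
                    center[1] + radius * (2 * rng.nextDouble() - 1)
                };
                if (cost(probe) < bestCost) { bestCost = cost(probe); best = probe; }
            }
            center = best;        // sensors "guide" the central particle
            radius *= 0.97;       // gradually tighten the cloud
        }
        System.out.printf("best found: (%.4f, %.4f)%n", center[0], center[1]);
    }
}
```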
Abstract:
Master's in Electrical and Computer Engineering
Abstract:
This work aims at investigating the impact of treating breast cancer with different radiation therapy (RT) techniques – forwardly-planned intensity-modulated (f-IMRT), inversely-planned IMRT and dynamic conformal arc (DCART) RT – and their effects on whole-breast irradiation and on the undesirable irradiation of the surrounding healthy tissues. Two algorithms of the iPlan BrainLAB treatment planning system were compared: Pencil Beam Convolution (PBC) and commercial Monte Carlo (iMC). Seven patients with left-sided breast cancer submitted to breast-conserving surgery were enrolled in the study. For each patient, four RT techniques – f-IMRT, IMRT using 2 fields and 5 fields (IMRT2 and IMRT5, respectively) and DCART – were applied. The dose distributions in the planned target volume (PTV) and the dose to the organs at risk (OAR) were compared by analyzing dose–volume histograms; further statistical analysis was performed using IBM SPSS v20 software. For PBC, all techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques in the PTV, in the OAR and also in the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses into greater volumes of normal tissue, right breast, right lung and heart than the tangential techniques. However, IMRT5 plans improved the distributions for the PTV, exhibiting better conformity and homogeneity in the target and reduced high-dose percentages in the ipsilateral OAR. DCART did not present advantages over any of the techniques investigated. Differences were also found when comparing the calculation algorithms: PBC estimated higher doses for the PTV, ipsilateral lung and heart than the iMC algorithm predicted.
Abstract:
Introduction: Image resizing is a standard feature incorporated into Nuclear Medicine digital imaging. Upsampling is done by manufacturers to better fit the acquired images to the display screen, and resizing is applied whenever there is a need to increase – or decrease – the total number of pixels. This paper intends to compare the "hqnx" and "nxSaI" magnification algorithms with two interpolation algorithms – "nearest neighbor" and "bicubic interpolation" – in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual inspection of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not very noteworthy, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images again presenting the best results. Hqnx-resized images presented better quality than the 4xSaI and nearest neighbor interpolated images; however, their intense "halo effect" greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its ever wider range of applications appears to confirm this, establishing it as an efficient algorithm across image types.
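Of the four resizing algorithms compared, nearest-neighbor is simple enough to sketch in a few lines of Java: each output pixel copies the closest source pixel, which is what produces the blocky results the study observes. (Bicubic, hqnx and nxSaI are considerably more involved and are not reproduced here.)

```java
// Minimal sketch of nearest-neighbour upsampling by an integer factor.
public class NearestNeighbourUpsample {
    static int[][] upsample(int[][] src, int factor) {
        int h = src.length, w = src[0].length;
        int[][] dst = new int[h * factor][w * factor];
        for (int y = 0; y < h * factor; y++) {
            for (int x = 0; x < w * factor; x++) {
                dst[y][x] = src[y / factor][x / factor];  // copy nearest source pixel
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] img = {{0, 255}, {255, 0}};
        int[][] big = upsample(img, 2);          // 2x enlargement, as in the study
        for (int[] row : big) {
            for (int p : row) System.out.printf("%4d", p);
            System.out.println();
        }
    }
}
```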
Abstract:
Introduction: A major focus of the data mining process – especially of machine learning research – is to automatically learn to recognize complex patterns and to help make adequate decisions strictly based on the acquired data. Since imaging techniques like MPI – Myocardial Perfusion Imaging in Nuclear Cardiology – can take up a large part of the daily workflow and generate gigabytes of data, Computerized Analysis of the data could have advantages over Human Analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in the evaluation of MPI Stress studies and in the decision process concerning whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify each patient test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" tests would proceed directly to the Rest part of the exam, "Negative" tests would be directly exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinicians' effort, increasing workflow fluidity at the technologists' level and probably saving patients' time. Methods: The WEKA v3.6.2 open-source software was used to make a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study of the "SPECT Heart Dataset", available at the University of California – Irvine Machine Learning Repository, using the corresponding clinical results, signed by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed – and apparently supported by the findings – that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained in MPI, namely after Stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. In the ongoing continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
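A hedged sketch of the kind of WEKA evaluation the study describes follows: 10-fold cross-validation of Naïve Bayes on an ARFF version of the SPECT dataset, reporting the three criteria mentioned. The file name and the class-attribute position are placeholders; the WEKA calls themselves are standard v3.6 API.

```java
// Cross-validated Naive Bayes evaluation with WEKA, in the spirit of the
// study above. "SPECT.arff" is a placeholder path; in the UCI SPECT data
// the class is the first attribute, but verify for your own file.
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SpectNaiveBayes {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("SPECT.arff");  // placeholder path
        data.setClassIndex(0);                           // assumed class position

        NaiveBayes nb = new NaiveBayes();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(nb, data, 10, new Random(1));

        // The criteria used in the study: precision, incorrectly
        // classified instances and ROC area.
        System.out.printf("Precision (class 0):    %.3f%n", eval.precision(0));
        System.out.printf("Incorrectly classified: %.1f%%%n", eval.pctIncorrect());
        System.out.printf("ROC area (class 0):     %.3f%n", eval.areaUnderROC(0));
    }
}
```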
Abstract:
Master's in Electrical and Computer Engineering. Specialization Area: Automation and Systems.
Abstract:
Dissertation submitted for the degree of Master in Civil Engineering, Specialization Area of Roads and Transport
Abstract:
Project work carried out to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
In the hustle and bustle of daily life, how often do we stop to pay attention to the tiny details around us, some of them right beneath our feet? Such is the case of the interesting decorative patterns that can be found in squares and sidewalks beautified by the traditional Portuguese pavement. Its most common colors are the black and the white of the basalt and limestone used; the result is a large variety and richness of patterns. No doubt it is worth devoting some of our time to enjoying the lovely Portuguese pavement, a true worldwide attraction. The interesting patterns found on Azorean handicrafts are just as fascinating and culturally substantial. The patterns found in sidewalks and crafts can be studied from the mathematical point of view, thus allowing a thorough and rigorous cataloguing of this heritage. The mathematical classification is based on the concept of symmetry, a unifying principle of geometry. Symmetry is a unique tool for helping us relate things that at first glance may appear to have no common ground at all. By interlacing different fields of endeavor, the mathematical approach to sidewalks and crafts is particularly interesting, and an excellent source of inspiration for the development of highly motivating recreational activities. This text is an invitation to visit the nine islands of the Azores and to identify a wide range of patterns, namely rosettes and friezes, by getting to know different arts, crafts and sidewalks.
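For readers unfamiliar with the classification mentioned above, the following note states the standard results it relies on; these are well-known theorems, not claims specific to this text.

```latex
% Standard symmetry-classification results behind the text above.
By Leonardo's theorem, the symmetry group of a rosette is either
cyclic, $C_n$ (rotations only), or dihedral, $D_n$ (rotations and
reflections), for some $n \ge 1$; every frieze pattern, in turn,
belongs to exactly one of the $7$ frieze groups, generated by
combinations of translations, half-turns, reflections and glide
reflections.
```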
Abstract:
The introduction of electricity markets and the integration of Distributed Generation (DG) have been driving changes in the power system's structure. Recently, the smart grid concept was introduced to guarantee a more efficient operation of the power system, taking advantage of this new paradigm. Basically, a smart grid is a structure that integrates different players, with constant communication between them, to improve power system operation and management. One of the players of major importance in this context is the Virtual Power Player (VPP). In the transportation sector, the Electric Vehicle (EV) is arising as an alternative to conventional vehicles propelled by fossil fuels. The power system can benefit from the massive introduction of EVs, taking advantage of the EVs' ability to connect to the electric network to charge, and of the expected future ability of EVs to discharge to the network using the Vehicle-to-Grid (V2G) capacity. This thesis proposes alternative strategies to control these two EV modes with the objective of enhancing the management of the power system. Moreover, the power system must ensure the trips of the EVs that connect to the electric network: the EV user specifies a certain amount of energy that must be charged in order to cover the distance to travel. The introduction of EVs turns Energy Resource Management (ERM) in a smart grid environment into a complex problem that can take several minutes or hours to reach the optimal solution. Adequate optimization techniques are required to accommodate this kind of complexity while solving the ERM problem in a reasonable execution time. This thesis presents a tool that solves the ERM considering the intensive use of EVs in the smart grid context. The objective is to obtain the minimum ERM cost considering: the operation cost of DG, the cost of the energy acquired from external suppliers, the EV users' payments, and remuneration and penalty costs. This tool is directed at VPPs that manage specific network areas where a high penetration level of EVs is expected. The ERM is solved using two methodologies: the adaptation of a deterministic technique proposed in a previous work, and the adaptation of the Simulated Annealing (SA) technique. With the purpose of improving the SA performance for this case, three heuristics are additionally proposed, taking advantage of the particularities and specificities of an ERM with these characteristics. A set of case studies is presented in this thesis, considering a 32-bus distribution network and up to 3000 EVs. The first case study solves the scheduling without considering EVs, to be used as a reference for comparison with the proposed approaches. The second case study evaluates the complexity of the ERM with the integration of EVs. The third case study evaluates the performance of the scheduling with different control modes for EVs. These control modes, combined with the proposed SA approach and the developed heuristics, aim at improving the quality of the ERM while drastically reducing its execution time. The proposed control modes are: uncoordinated charging, smart charging and V2G capability. The fourth and final case study presents the ERM approach applied to consecutive days.
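As a sketch of the kind of Simulated Annealing adaptation the thesis describes (not the thesis code), the skeleton below shows the standard SA loop: a random neighbour move, the Metropolis acceptance test exp(-delta/T), and geometric cooling. The stand-in cost function, move and cooling schedule are illustrative; in the thesis the energy would be the ERM cost and the moves would be shaped by the three proposed heuristics.

```java
// Generic Simulated Annealing skeleton; cost, move and schedule are
// illustrative stand-ins for the thesis's ERM formulation.
import java.util.Random;

public class SimulatedAnnealingSketch {
    // Illustrative stand-in for the ERM cost (DG operation, supplier
    // energy, EV payments/remuneration, penalties).
    static double cost(double[] schedule) {
        double c = 0;
        for (double x : schedule) c += (x - 1.0) * (x - 1.0);
        return c;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        double[] current = new double[24];   // e.g. one decision per hourly period
        double currentCost = cost(current);
        double temperature = 10.0;

        while (temperature > 1e-3) {
            // Neighbour: perturb one randomly chosen period.
            double[] candidate = current.clone();
            int i = rng.nextInt(candidate.length);
            candidate[i] += rng.nextGaussian() * 0.1;

            double delta = cost(candidate) - currentCost;
            // Always accept improvements; accept worsenings with
            // probability exp(-delta / T) (Metropolis criterion).
            if (delta < 0 || rng.nextDouble() < Math.exp(-delta / temperature)) {
                current = candidate;
                currentCost = cost(candidate);
            }
            temperature *= 0.999;            // geometric cooling
        }
        System.out.printf("final cost: %.4f%n", currentCost);
    }
}
```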
Abstract:
The growing number of available digital resources makes the task of searching for the most relevant ones increasingly difficult. A new type of tool, capable of recommending the resources most appropriate to the user's needs, therefore becomes more and more necessary. The objective of this R&D work is to implement an intelligent recommendation module for e-learning platforms. The recommendations are based, on the one hand, on the user's profile during the training process and, on the other hand, on the requests made by the user through searches [Tavares, Faria e Martins, 2012]. E-learning 3.0 is a QREN project developed by a group of organisations whose main objective is to implement an e-learning platform. This work is part of the e-learning 3.0 project and consists of the development of an intelligent recommendation module (MRI). The MRI uses different recommendation techniques already applied in other recommender systems. These techniques are used to create a hybrid recommender system targeted at the e-learning platform. To represent the relevant information about each user, a user model was built. All the information needed to make a recommendation is represented in the user model, which is updated whenever necessary. The data in the user model are used to personalise the recommendations produced. The recommendations are divided into two types, formal and non-formal. In the formal recommendation, the objective is to make suggestions related to a specific course. In the non-formal recommendation, the objective is to make broader suggestions, where the recommendations are not associated with any course. The proposed system is able to suggest learning resources, based on the user's profile, by combining word-similarity techniques, a clustering algorithm and filtering techniques [Tavares, Faria e Martins, 2012].
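As an illustration of the word-similarity component mentioned above (a hypothetical sketch, not the MRI implementation), the snippet below compares a user profile and a learning resource as bags of words under cosine similarity; the clustering and filtering stages are omitted.

```java
// Bag-of-words cosine similarity between a user profile and a resource.
// A hypothetical illustration of one ingredient of a hybrid recommender.
import java.util.HashMap;
import java.util.Map;

public class WordSimilaritySketch {
    static Map<String, Integer> bagOfWords(String text) {
        Map<String, Integer> bag = new HashMap<>();
        for (String w : text.toLowerCase().split("\\W+")) {
            if (!w.isEmpty()) bag.merge(w, 1, Integer::sum);
        }
        return bag;
    }

    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : b.values()) nb += v * v;
        return dot == 0 ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        Map<String, Integer> profile = bagOfWords("java programming course exercises");
        Map<String, Integer> resource = bagOfWords("introduction to java programming");
        System.out.printf("similarity: %.3f%n", cosine(profile, resource));
    }
}
```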
Abstract:
Optimisation and learning in Multi-Agent Systems are considered two promising but relatively unexplored areas. Optimisation in these environments must be able to cope with dynamism. Agents may change their behaviour based on recent learning or on optimisation goals. Learning strategies can improve system performance by giving agents the ability to learn, for example, which optimisation technique is most suitable for solving a particular class of problems, or which parameterisation is most appropriate in a given scenario. This dissertation studies techniques for solving Combinatorial Optimisation problems, especially Meta-heuristics, and reviews the state of the art of Learning in Multi-Agent Systems. A learning module for solving new scheduling problems, based on previous experience, is also proposed. The developed Self-Optimisation module, inspired by Autonomic Computing, allows the system to automatically select the Meta-heuristic to use in the optimisation process, as well as its parameterisation. To this end, Case-Based Reasoning was used, so that the resulting system is able to learn from the experience acquired in solving similar problems. The results obtained show the advantage of its use and its ability to adapt to new scenarios.
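As a hypothetical illustration of the Case-Based Reasoning step described above (not the dissertation's module), the sketch below retrieves, for a new scheduling problem, the stored case with the most similar problem features and reuses its meta-heuristic and parameterisation. The features, distance measure and case base are invented for illustration.

```java
// CBR retrieval sketch: find the most similar past scheduling case and
// reuse its meta-heuristic configuration. All data here is illustrative.
import java.util.List;

public class CbrRetrievalSketch {
    // A stored case: simple numeric features of a past scheduling problem
    // plus the configuration that solved it well.
    record Case(double jobs, double machines, String metaHeuristic, String params) {}

    static double distance(Case c, double jobs, double machines) {
        return Math.hypot(c.jobs() - jobs, c.machines() - machines);
    }

    public static void main(String[] args) {
        List<Case> caseBase = List.of(
            new Case(20, 5, "TabuSearch", "tenure=7"),
            new Case(100, 10, "SimulatedAnnealing", "T0=50,alpha=0.995"),
            new Case(500, 20, "GeneticAlgorithm", "pop=200,pc=0.9"));

        double jobs = 120, machines = 12;            // the new problem
        Case best = caseBase.stream()
                .min((a, b) -> Double.compare(
                        distance(a, jobs, machines), distance(b, jobs, machines)))
                .orElseThrow();
        System.out.println("reuse: " + best.metaHeuristic() + " with " + best.params());
    }
}
```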