856 results for penalty-based genetic algorithm


Relevance: 100.00%

Abstract:

Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU), with high HU values representing high-density materials. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations; this problem of increased HU values in the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes; however, very limited information is available about the effect of artefact correction on dose calculation accuracy. This study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), while the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, were designed using the Volumetric Modulated Arc Therapy (VMAT) technique to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning target volume (PTV) were compared and the homogeneity index (HI) was calculated.

Results: (1) Without the GSI-based MAR algorithm, a percent error between the mean dose and the absolute dose ranging from 3.4% to 5.7% per fraction was observed. With the GSI-based MAR algorithm, the error decreased to 0.09-2.3% per fraction, a difference of 1.7-4.2% per fraction between plans with and without the algorithm. (2) Differences of 0.1-3.2% were observed in the maximum dose values, 1.5-10.4% in the minimum doses, and 1.4-1.7% in the mean doses. Homogeneity indices (HI) of 0.065-0.068 for the Dual-Energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.
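As a rough illustration of the quantities reported above (the abstract does not state which HI formulation was used, so a common definition, HI = (Dmax - Dmin)/Dprescribed, is assumed here, and all dose values are invented for illustration):

```python
def percent_error(calculated, measured):
    """Percent error between a calculated dose and the measured (absolute) dose."""
    return abs(calculated - measured) / measured * 100.0

def homogeneity_index(d_max, d_min, d_prescribed):
    """HI = (Dmax - Dmin) / Dprescribed; 0 would be a perfectly homogeneous PTV dose."""
    return (d_max - d_min) / d_prescribed

# Invented per-fraction doses (Gy), chosen only to land inside the reported ranges:
print(percent_error(2.07, 2.00))            # 3.5, within the 3.4-5.7% band
print(homogeneity_index(2.10, 1.97, 2.00))  # 0.065, at the low end of the HI values
```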

Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%, an error that undermines the goal of radiation therapy to deliver precise treatment. The GSI-based MAR algorithm is therefore desirable for its better dose calculation accuracy. (2) By direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the Dual-Energy method nearly reached the ideal value of zero. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning target volume (PTV) for images with metal artefacts than images processed with or without the GE MAR algorithm.

Relevance: 100.00%

Abstract:

The VLT-FLAMES Tarantula Survey (VFTS) has secured mid-resolution spectra of over 300 O-type stars in the 30 Doradus region of the Large Magellanic Cloud. A homogeneous analysis of such a large sample requires automated techniques, an approach that will also be needed for the upcoming analysis of the spectroscopic surveys of the Northern and Southern Hemisphere supplementing the Gaia measurements. We point out the importance of Gaia for the study of O stars, summarize the O-star science case of VFTS and present a test of the automated modeling technique using synthetically generated data. This method employs a genetic algorithm-based optimization technique in combination with fastwind model atmospheres. The method is found to be robust and able to recover the main photospheric parameters accurately. Precise wind parameters can be obtained as well; however, as expected, for dwarf stars the rate of acceleration of the flow is poorly constrained.
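A minimal sketch of the kind of fitness evaluation and generation step such a pipeline uses; the real method runs fastwind model atmospheres, which are replaced here by an arbitrary `model` callable, and all operator settings are illustrative assumptions:

```python
import numpy as np

def chi2(params, wavelengths, observed, sigma, model):
    """Fitness of one candidate: chi-squared misfit between the observed spectrum
    and a synthetic one (the real pipeline would run a fastwind model here)."""
    return np.sum(((observed - model(params, wavelengths)) / sigma) ** 2)

def next_generation(pop, fit, rng, mut=0.05):
    """Tournament selection, uniform crossover and Gaussian mutation on
    real-valued parameter vectors (Teff, log g, wind parameters, ...)."""
    n, d = pop.shape
    children = np.empty_like(pop)
    for i in range(n):
        parents = []
        for _ in range(2):                 # two binary tournaments (lower chi2 wins)
            a, b = rng.integers(n, size=2)
            parents.append(pop[a] if fit[a] < fit[b] else pop[b])
        mask = rng.random(d) < 0.5         # uniform crossover
        children[i] = np.where(mask, parents[0], parents[1]) + rng.normal(0.0, mut, d)
    return children
```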

Relevance: 100.00%

Abstract:

Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (the gradient of the objective function with respect to surface movement) with the parametric design velocities (the movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables, or the parameterisation scheme used for the model to be optimised, plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, preserving the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach to optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the “Parametric Design Velocity” is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can easily be integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real-valued”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as an API provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD model before and after the parameter perturbation: the geometric movement is measured along the normal direction between the discrete representations of the original and the perturbed geometry. Parametric design velocities can then be linked directly with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
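A minimal sketch of the two computations this paragraph describes, under the simplifying assumption that the original and perturbed discretisations share a one-to-one point correspondence (the described implementation measures movement between two distinct discrete representations); all names are invented:

```python
import numpy as np

def design_velocity(pts, pts_perturbed, normals, dp):
    """Vp: normal movement of each boundary point per unit change of the CAD
    parameter p, from finite differences of the exported geometries."""
    return np.einsum('ij,ij->i', pts_perturbed - pts, normals) / dp

def cad_gradient(adjoint_sensitivity, vp, areas):
    """dJ/dp as a discrete surface integral of (adjoint surface sensitivity) x Vp."""
    return float(np.sum(adjoint_sensitivity * vp * areas))
```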
A flow optimisation problem is presented in which the power dissipated by the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and continue the optimisation process.
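For completeness, a sketch of the kind of backtracking line search that could drive the parameter update; the paper does not specify its line-search variant, so this is an illustrative assumption:

```python
def line_search_step(params, grad, cost, step=1.0, shrink=0.5, tries=8):
    """Backtracking line search: walk down the gradient, halving the step
    until the cost function actually decreases."""
    j0 = cost(params)
    for _ in range(tries):
        trial = [p - step * g for p, g in zip(params, grad)]
        if cost(trial) < j0:
            return trial
        step *= shrink
    return params  # no descent within the step budget; keep the current design
```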

Relevance: 100.00%

Abstract:

An optimal day-ahead scheduling method (ODSM) for the integrated urban energy system (IUES) is introduced that considers the reconfigurable capability of an electric distribution network. The hourly topology of the electric distribution network, the natural gas network, the energy centers (including combined heat and power (CHP) units and other energy conversion devices), and demand responsive loads (DRLs) are optimized to minimize the day-ahead operation cost of the IUES. The hourly reconfigurable capability of the electric distribution network utilizing remotely controlled switches (RCSs) is explored and discussed. Operational constraints from the unbalanced three-phase electric distribution network, the natural gas network, and the energy centers are considered. The electric distribution network and the natural gas network interact through the conversion of energy among different energy vectors in the energy centers; an energy conversion analysis model for the energy center was developed based on the energy hub model. A hybrid optimization method combining a genetic algorithm (GA) with a nonlinear interior point method (IPM) is utilized to solve the ODSM model. Numerical studies demonstrate that the proposed ODSM provides the IUES with an effective and economical day-ahead scheduling scheme and reduces its operational cost.
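A small sketch of the energy hub idea the conversion model is based on: delivered loads are a coupling matrix times the input energy carriers. The matrix entries and dispatch factor below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Energy hub: delivered loads L = C @ P, with C built from device efficiencies.
eta_tr, eta_ge, eta_gh, eta_f = 0.98, 0.35, 0.45, 0.90  # transformer, CHP el/heat, furnace
nu = 0.6                       # dispatch factor: share of gas routed to the CHP unit
C = np.array([
    [eta_tr, nu * eta_ge],                      # electricity out
    [0.0,    nu * eta_gh + (1 - nu) * eta_f],   # heat out
])
P = np.array([100.0, 80.0])    # electricity and gas drawn from the networks (kW)
L = C @ P                      # delivered electricity and heat (kW)
print(L)                       # [114.8  50.4]
```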

Relevance: 100.00%

Abstract:

This paper presents a study on the implementation of Real-Time Pricing (RTP)-based Demand Side Management (DSM) of water pumping at a clean water pumping station in Northern Ireland, with the intention of minimising electricity costs and maximising the usage of electricity from wind generation. A Genetic Algorithm (GA) was used to create pumping schedules based on system constraints and electricity tariff scenarios. Implementing this method would allow the water network operator to make significant savings on electricity costs while also helping to mitigate the variability of wind generation.
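A minimal sketch of how a GA chromosome (a 24-hour on/off pumping schedule) could be costed against an RTP tariff and checked against reservoir constraints; infeasible schedules would typically be penalised or discarded in the GA. All names and units are illustrative assumptions:

```python
def schedule_cost(schedule, tariff, pump_kw):
    """Electricity cost of a 24-hour on/off pumping schedule under an RTP tariff."""
    return sum(on * price * pump_kw for on, price in zip(schedule, tariff))

def feasible(schedule, demand, level0, v_min, v_max, pump_rate):
    """The reservoir level must stay within bounds in every hour."""
    level = level0
    for on, out in zip(schedule, demand):
        level += on * pump_rate - out
        if not v_min <= level <= v_max:
            return False
    return True
```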

Relevance: 100.00%

Abstract:

This paper provides an overview of IDS types and how they work, along with configuration considerations and issues that affect them. Advanced methods of increasing the performance of an IDS are explored, such as specification-based IDS for protecting Supervisory Control And Data Acquisition (SCADA) and cloud networks. By reviewing varied studies, ranging from configuration issues and specific problems to custom techniques and cutting-edge research, the paper also provides a reference for others interested in learning about and developing IDS solutions. Intrusion detection is an area in which much further study is required to provide solutions for evolving services and for the networks and systems that support them. This paper aims to be a reference on IDS technologies for other researchers and developers interested in the field of intrusion detection.

Relevance: 100.00%

Abstract:

The blast furnace is the main ironmaking production unit in the world; it converts iron ore, with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, due to high temperatures and pressure, a hostile atmosphere and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging.
The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulating the distribution of the burden material with a bell-less top charging system. The model developed is fast and can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified against findings from charging experiments using a small-scale charging rig in the laboratory.
A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions.
Even though the burden distribution model provides information about the layer structure, it neglects some effects that influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was therefore used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and its voidage estimated; the mixed layer was found to have about 12% less voidage than layers of the individual burden components.
Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for several charging programs were used. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
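A toy sketch of the charging-program search described above: a GA evolves charging parameters so that a stand-in burden/gas model matches a target radial temperature profile. The quadratic `gas_temperature` stand-in and all constants are invented for illustration; the thesis couples real burden-distribution and gas-flow models here:

```python
import random

def gas_temperature(params):
    """Toy stand-in for the coupled burden-distribution/gas-flow models:
    maps two charging parameters to a radial gas temperature profile."""
    a, b = params
    return [300 + a * r + b * r * r for r in (0.0, 0.5, 1.0)]

TARGET = [300.0, 400.0, 600.0]   # target radial temperatures (invented)

def fitness(params):
    return sum((t - t0) ** 2 for t, t0 in zip(gas_temperature(params), TARGET))

def evolve(pop_size=30, generations=50, rng=random.Random(1)):
    pop = [(rng.uniform(0, 500), rng.uniform(0, 500)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]              # truncation selection
        pop = elite + [(a + rng.gauss(0, 10), b + rng.gauss(0, 10))
                       for a, b in rng.choices(elite, k=pop_size - len(elite))]
    return min(pop, key=fitness)                 # best charging parameters found
```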

Relevance: 100.00%

Abstract:

The selection of a set of requirements from among all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements is then developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe the problem formally, extending an earlier version of it, and introduce a method based on the Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of the Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
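A minimal Ant Colony System-style construction step for requirement selection, assuming the usual ACS ingredients (a pheromone trail per requirement, a value/cost heuristic, the pseudo-random proportional rule, and a budget constraint); the data and parameters are invented, and the paper's formulation is richer than this:

```python
import random

values = [10, 6, 8, 4, 9]    # stakeholder value per requirement (invented)
costs  = [5, 2, 4, 1, 6]     # development effort per requirement (invented)
BUDGET, Q0 = 9, 0.9          # effort bound; ACS exploitation probability
tau = [1.0] * len(values)    # pheromone per requirement

def build_selection(rng):
    """One ant builds a requirement subset under the effort bound."""
    chosen, spent = [], 0
    cand = [i for i in range(len(values)) if costs[i] <= BUDGET]
    while cand:
        score = [tau[i] * values[i] / costs[i] for i in cand]
        if rng.random() < Q0:                      # exploit the best candidate
            i = cand[max(range(len(cand)), key=score.__getitem__)]
        else:                                      # explore proportionally
            i = rng.choices(cand, weights=score)[0]
        chosen.append(i)
        spent += costs[i]
        cand = [j for j in cand if j != i and spent + costs[j] <= BUDGET]
    return chosen
```

A global pheromone update would then evaporate `tau` slightly and reinforce it along the best selection found so far.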

Relevance: 100.00%

Abstract:

A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented in a computationally efficient way and are robust to noise; however, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We sought to overcome these shortcomings by creating novel tile unwrapping and merging algorithms, together with a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches, and showed that an appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for the efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available.
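For intuition, a minimal one-dimensional unwrapping sketch in Python (Itoh's method), which is the basic idea that tile-based methods generalise and make robust; the framework described above is C++ and far more involved:

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh's method: correct successive differences into (-pi, pi] and
    integrate. Assumes slow phase variation and no residues, precisely the
    failure modes that tile-based schemes are designed to handle."""
    d = np.diff(wrapped)
    d -= 2 * np.pi * np.round(d / (2 * np.pi))   # remove the 2*pi jumps
    return wrapped[0] + np.concatenate(([0.0], np.cumsum(d)))

true = np.linspace(0, 12, 200) ** 1.5            # an arbitrary smooth phase
wrapped = np.angle(np.exp(1j * true))            # wrap into (-pi, pi]
assert np.allclose(unwrap_1d(wrapped), true)     # recovered exactly (no noise)
```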

Relevance: 100.00%

Abstract:

This work proposes a heuristic methodology for the Arc Covering Problem applied to sanitation services, specifically water-meter reading. In this context, an application was developed that plans routes so that the distance-travelled costs are reduced and remain approximately the same across all routes. The methodology was divided into stages. In the first stage, to understand the problem better, a field study was carried out and the data made available by a sanitation company were organised. The second stage consisted of defining points at the midpoint of each block segment and at street intersections, which were registered on a georeferenced map. This map covered the region chosen for the study, and the registered points were used to determine and subsequently assign the related medians, which constitutes the third stage. For this, the Teitz and Bart algorithm modified by CADP and an adapted version of the Gillett and Johnson assignment algorithm were used, respectively. At the end of this stage, subsectors were formed within a specific sector. In the final stage, the route for each subsector was found using a genetic algorithm. The application developed allows flexible operation, giving the user autonomy in choosing the calculation options. Its graphical interface supports the production of maps and the visualisation of the routes in each subsector. In addition, the application minimised the routes and distributed the subsectors with approximately equal distances. The efficiency of the heuristics underlying the application was confirmed by the tests performed, which produced results of good quality.
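A minimal sketch of the vertex-substitution idea behind the Teitz and Bart median-location step (the CADP modification and the assignment and routing stages are not reproduced); `dist` is assumed to be a precomputed distance matrix over the registered points:

```python
def assignment_cost(medians, dist):
    """Total distance from every registered point to its nearest median."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def teitz_bart(dist, p):
    """Vertex substitution: swap a median for an outside point whenever the
    swap lowers the total assignment cost, until no swap improves it."""
    medians = list(range(p))                     # arbitrary initial medians
    improved = True
    while improved:
        improved = False
        for v in range(len(dist)):
            if v in medians:
                continue
            for k in range(p):
                trial = medians[:k] + [v] + medians[k + 1:]
                if assignment_cost(trial, dist) < assignment_cost(medians, dist):
                    medians, improved = trial, True
                    break
    return medians
```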

Relevance: 100.00%

Abstract:

The mobile robot route planning problem consists of determining the best route for a robot, in a static and/or dynamic environment, capable of taking it from an initial point to a final point, also known as the goal state. This work applies an approach based on Genetic Algorithms to route planning for multiple robots in a complex environment composed of fixed and moving obstacles. Implementing the model in NetLogo, a tool for simulating multi-agent applications, allowed the robots and the obstacles present in the environment to be modelled as interacting agents, thus enabling the development of obstacle detection and avoidance behaviour. The approach searches for the best route for the robots and comprises the basic reproduction and mutation operators plus a new double refinement operator, capable of improving the best solutions found by eliminating useless movements. Furthermore, the route of each robot is computed by generating sub-stretches: rather than calculating a single route connecting the initial and final points of the scenario, several short sub-routes are calculated which, connected together, form a single path capable of taking the robot to the goal state. Two scenarios were developed to evaluate scalability: the first is a simple scenario composed of just one robot, one moving obstacle and a few fixed obstacles; the second is a more demanding, larger scenario composed of multiple robots and several fixed and moving obstacles. Finally, comparative performance tests were carried out between the Genetic Algorithm approach and the A* algorithm, using as the comparison criterion the length of the routes obtained in the twenty simulations executed with each approach. The results were analysed using Student's t-test.
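A tiny sketch of the "useless movement" elimination idea behind the refinement operator, here reduced to cancelling immediately opposite moves on a grid; the double refinement operator described above is more elaborate:

```python
OPPOSITE = {"N": "S", "S": "N", "E": "W", "W": "E"}

def refine(route):
    """Drop move pairs that immediately cancel (e.g. N then S): the route
    gets shorter while its endpoint is unchanged."""
    out = []
    for step in route:
        if out and OPPOSITE[step] == out[-1]:
            out.pop()                # the two moves cancel out
        else:
            out.append(step)
    return out

print(refine(["N", "S", "E", "W", "E"]))  # ['E'] -- two cancelling pairs removed
```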

Relevance: 100.00%

Abstract:

This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations search a larger search space at a lower resolution, whilst lower-level sub-populations search a smaller search space at a higher resolution. The effects of different partner selection schemes for (sub-)fitness evaluation purposes are examined for two multiple-choice optimisation problems. It is shown that random partnering strategies perform best, by providing better sampling and more diversity.
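A minimal sketch of two partnering schemes for (sub-)fitness evaluation; the random scheme is the one reported to perform best. Names and signatures are illustrative assumptions:

```python
import random

def random_partner(idx, sub_population, rng=random.Random(0)):
    """Random partnering: pair the individual with a uniformly drawn peer;
    reported to give better sampling and more diversity."""
    peers = [p for j, p in enumerate(sub_population) if j != idx]
    return rng.choice(peers)

def best_partner(sub_population, fitness):
    """Elitist alternative: always partner with the current best individual."""
    return max(sub_population, key=fitness)

def sub_fitness(part, partner, combine, fitness):
    """A partial solution is scored by evaluating it together with a partner."""
    return fitness(combine(part, partner))
```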

Relevance: 100.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used to search an encoded solution space and separate decoding routines then build solutions to the original problem.
In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, so using fixed rules at each move is unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others; he can identify good parts and is aware of the solution quality even before the scheduling process is complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules.
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, each new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning amounts to 'counting' in the case of multinomial distributions.
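Two of the learning steps described here admit very small sketches: the BOA-style 'counting' estimate of per-stage rule probabilities (for fully observed multinomial nodes), and the LCS-style strength reinforcement discussed next. All names and the smoothing prior are illustrative assumptions:

```python
from collections import Counter

def rule_probabilities(good_strings, stage, n_rules, prior=1.0):
    """BOA-style 'counting' for a fully observed multinomial node: the
    probability of rule r at a construction stage is its (smoothed)
    relative frequency among the current promising rule strings."""
    counts = Counter(s[stage] for s in good_strings)
    total = len(good_strings) + prior * n_rules
    return [(counts[r] + prior) / total for r in range(n_rules)]

def reinforce(strengths, used_rules, reward=0.1):
    """LCS-style update: strengthen the rules used in the previous solution,
    leaving the strength of unused rules unchanged."""
    return [s + reward if r in used_rules else s for r, s in enumerate(strengths)]

print(rule_probabilities([[0, 1], [0, 2], [1, 1]], stage=0, n_rules=3))
# [0.5, 0.333..., 0.166...]  -- rule 0 was used most often at stage 0
```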
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette wheel strategy. The reinforcement step strengthens the rules used in the previous solution, keeping the strength of unused rules unchanged. Finally, the selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm.
This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.