842 results for Optimization algorithm
Abstract:
This paper describes the development of a novel metaheuristic that combines an electromagnetism-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known timetabling problem assigns lectures to a given number of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm inspired by the physics of attraction and repulsion, which moves sample points toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on a given upper boundary, or 'level'. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the 'level' within the search process. The proposed method has been applied to a range of benchmark university course timetabling test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other results reported in the literature, demonstrating that the method is able to produce solutions that improve on those currently published. We believe this is due to the combination of the two approaches and the ability of the resultant algorithm to drive all solutions toward convergence throughout the search process.
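A minimal sketch of the great-deluge acceptance rule described above, with the 'level' lowered by a force-like decay term, might look as follows (the `neighbour`, `cost` and `force` callables are problem-specific placeholders, not the authors' implementation):

```python
def great_deluge_em(initial, neighbour, cost, force, iterations=10_000):
    """Great-deluge search whose acceptance 'level' is lowered at a rate
    given by an attraction-repulsion (EM-style) force term.
    `neighbour`, `cost` and `force` are problem-specific callables."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    level = current_cost                      # start the 'water level' at the initial cost
    for _ in range(iterations):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        if candidate_cost <= level:           # accept anything not worse than the level
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        level -= force(current)               # lower the level by the dynamic force
    return best
```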
Abstract:
The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for workflow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to deal with the formulation of efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling workflow applications.
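For reference, the canonical particle swarm update that such adaptations build on looks roughly like the sketch below (a generic continuous formulation; the paper's discrete scheduling encoding, neighborhood moves and security-constraint handling are not shown):

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain continuous PSO for minimization; returns the best position found."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```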
Abstract:
Mathematical modelling has become an essential tool in the design of modern catalytic systems. Emissions legislation is becoming increasingly stringent, and so mathematical models of aftertreatment systems must become more accurate in order to provide confidence that a catalyst will convert pollutants over the required range of conditions.
Automotive catalytic converter models contain several sub-models that represent processes such as mass and heat transfer, and the rates at which the reactions proceed on the surface of the precious metal. Of these sub-models, the prediction of the surface reaction rates is by far the most challenging, due to the complexity of the reaction system and the large number of gas species involved. The reaction rate sub-model uses global reaction kinetics to describe the surface reaction rate of the gas species and is based on the Langmuir-Hinshelwood equation further developed by Voltz et al. [1]. The reactions can be modelled using the pre-exponential factors and activation energies of the Arrhenius equations, together with the inhibition terms.
The reaction kinetic parameters of aftertreatment models are found from experimental data, where a measured light-off curve is compared against a predicted curve produced by a mathematical model. The kinetic parameters are usually manually tuned to minimize the error between the measured and predicted data. This process is most commonly long, laborious and prone to misinterpretation due to the large number of parameters and the risk of multiple sets of parameters giving acceptable fits. Moreover, the number of coefficients increases greatly with the number of reactions. Therefore, with the growing number of reactions, the task of manually tuning the coefficients is becoming increasingly challenging.
In the presented work, the authors have developed and implemented a multi-objective genetic algorithm to automatically optimize reaction parameters in AxiSuite® [2], a commercial aftertreatment model. The genetic algorithm was developed and expanded from the code presented by Michalewicz et al. [3] and was linked to AxiSuite using the Simulink add-on for Matlab.
The default kinetic values stored within the AxiSuite model were used to generate a series of light-off curves under rich conditions for a number of gas species, including CO, NO, C3H8 and C3H6. These light-off curves were used to generate an objective function.
This objective function was used to generate a measure of fit for the kinetic parameters. The multi-objective genetic algorithm was subsequently used to search between specified limits to attempt to match the objective function. In total the pre-exponential factors and activation energies of ten reactions were simultaneously optimized.
The results reported here demonstrate that, given accurate experimental data, the optimization algorithm is successful and robust in defining the correct kinetic parameters of a global kinetic model describing aftertreatment processes.
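As a rough illustration of the kind of objective being optimized (not the AxiSuite® model itself), the sketch below compares measured light-off conversion curves with ones predicted from Arrhenius parameters and returns a sum-of-squares error over all species; the first-order rate expression is a deliberate simplification of the global Langmuir-Hinshelwood kinetics:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def predicted_conversion(temps_K, pre_exp, activation_energy):
    """Toy first-order light-off curve from an Arrhenius rate constant.
    Stands in for the full global-kinetics surface reaction model."""
    k = pre_exp * np.exp(-activation_energy / (R * temps_K))
    return 1.0 - np.exp(-k)          # crude conversion surrogate in [0, 1]

def objective(params, temps_K, measured):
    """Sum of squared errors between measured and predicted conversion,
    summed over all species; `params` and `measured` map species names to
    (A, Ea) pairs and conversion arrays respectively."""
    error = 0.0
    for species, (A, Ea) in params.items():
        error += np.sum((measured[species]
                         - predicted_conversion(temps_K, A, Ea)) ** 2)
    return error
```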
Abstract:
As is now well established, a first-order expansion of the Hohenberg-Kohn total energy density functional about a trial input density, namely the Harris-Foulkes functional, can be used to rationalize a non-self-consistent tight binding model. If the expansion is taken to second order, then the energy and electron density matrix need to be calculated self-consistently, and from this functional one can derive a charge-self-consistent tight binding theory. In this paper we have used this to describe a polarizable ion tight binding model which has the benefit of treating charge transfer in point multipoles. This admits a ready description of ionic polarizability and crystal field splitting. It is necessary in constructing such a model to find a number of parameters that mimic their more exact counterparts in the density functional theory. We describe in detail how this is done using a combination of intuition, exact analytical fitting, and a genetic optimization algorithm. Having obtained model parameters, we show that this constitutes a transferable scheme that can be applied rather universally to small and medium-sized organic molecules. We have shown that the model gives a good account of static structural and dynamic vibrational properties of a library of molecules, and finally we demonstrate the model's capability by showing a real-time simulation of an enolization reaction in aqueous solution. In two subsequent papers, we show that the model is a great deal more general, in that it will describe solvents and solid substrates, and that therefore we have created a self-consistent quantum mechanical scheme that may be applied to simulations in heterogeneous catalysis.
Abstract:
PURPOSE: We have been developing an image-guided single vocal cord irradiation technique to treat patients with stage T1a glottic carcinoma. In the present study, we compared the dose coverage to the affected vocal cord and the dose delivered to the organs at risk using conventional, intensity-modulated radiotherapy (IMRT) coplanar, and IMRT non-coplanar techniques.
METHODS AND MATERIALS: For 10 patients, conventional treatment plans using two laterally opposed wedged 6-MV photon beams were calculated in XiO (Elekta-CMS treatment planning system). An in-house IMRT/beam angle optimization algorithm was used to obtain the coplanar and non-coplanar optimized beam angles. Using these angles, the IMRT plans were generated in Monaco (IMRT treatment planning system, Elekta-CMS) with the implemented Monte Carlo dose calculation algorithm. The organs at risk included the contralateral vocal cord, arytenoids, swallowing muscles, carotid arteries, and spinal cord. The prescription dose was 66 Gy in 33 fractions.
RESULTS: For the conventional plans and coplanar and non-coplanar IMRT plans, the population-averaged mean dose ± standard deviation to the planning target volume was 67 ± 1 Gy. The contralateral vocal cord dose was reduced from 66 ± 1 Gy in the conventional plans to 39 ± 8 Gy and 36 ± 6 Gy in the coplanar and non-coplanar IMRT plans, respectively. IMRT consistently reduced the doses to the other organs at risk.
CONCLUSIONS: Single vocal cord irradiation with IMRT resulted in good target coverage and provided significant sparing of the critical structures. This has the potential to improve the quality-of-life outcomes after RT and maintain the same local control rates.
Abstract:
Clean and renewable energy generation and supply has drawn much attention worldwide in recent years; proton exchange membrane (PEM) fuel cells and solar cells are among the most popular technologies. Accurately modeling PEM fuel cells as well as solar cells is critical to their application, and this involves the identification and optimization of model parameters. This is, however, challenging due to the highly nonlinear and complex nature of the models. In particular for PEM fuel cells, the model has to be optimized under different operating conditions, making the solution space extremely complex. In this paper, an improved and simplified teaching-learning based optimization algorithm (STLBO) is proposed to identify and optimize parameters for these two types of cell models. This is achieved by introducing an elite strategy to improve the quality of the population, and a local search is employed to further enhance the performance of the global best solution. To improve the diversity of the local search, a chaotic map is also introduced. Compared with the basic TLBO, the structure of the proposed algorithm is much simplified and the searching ability is significantly enhanced. The performance of the proposed STLBO is first tested and verified on two low-dimension decomposable problems and twelve large-scale benchmark functions, and then on the parameter identification of PEM fuel cell and solar cell models. Intensive experimental simulations show that the proposed STLBO exhibits excellent performance in terms of accuracy and speed, in comparison with results reported in the literature.
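A minimal sketch of the basic TLBO teacher and learner phases that the proposed STLBO simplifies and extends is given below; the elite strategy, chaotic local search and cell-model objective functions are not shown, and the details are assumptions rather than the authors' code:

```python
import numpy as np

def tlbo(fitness, lower, upper, pop_size=20, iters=100):
    """Basic teaching-learning-based optimization for a minimization problem."""
    rng = np.random.default_rng(1)
    dim = len(lower)
    pop = rng.uniform(lower, upper, (pop_size, dim))
    vals = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        # Teacher phase: move the class toward the best learner.
        teacher = pop[vals.argmin()]
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        new_pop = pop + rng.random((pop_size, dim)) * (teacher - tf * pop.mean(axis=0))
        # Learner phase: each learner interacts with a random classmate.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            step = (pop[i] - pop[j]) if vals[i] < vals[j] else (pop[j] - pop[i])
            new_pop[i] += rng.random(dim) * step
        new_pop = np.clip(new_pop, lower, upper)
        new_vals = np.array([fitness(x) for x in new_pop])
        better = new_vals < vals                     # greedy selection
        pop[better], vals[better] = new_pop[better], new_vals[better]
    return pop[vals.argmin()]
```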
Abstract:
This work analyses the drying kinetics of wet salted codfish (Gadus morhua) in a convective dryer. The physico-chemical composition of the codfish used in the experimental trials is presented, together with a study of the product's sorption isotherms, carried out through experiments and mathematical modelling. Of the models used to fit the sorption isotherms of wet salted codfish, the Modified GAB model best matched the experimental results, with correlation coefficients between 0.992 and 0.998. Fuzzy logic was used to control the drying process (namely the parameters temperature, relative humidity and air velocity), through the development of fuzzy controllers for the humidifier, dehumidifier, heating resistances and fan. The drying process was modelled using artificial neural networks (ANN), the semi-empirical Page model and the Fick diffusion model. The comparison between experimental and simulated data gave the following errors for each model: between 1.43 and 11.58 for the Page model, between 0.34 and 4.59 for the Fick model, and between 1.13 and 6.99 for the ANN, with averages of 4.38, 1.67 and 2.93 respectively. The model obtained with the artificial neural networks was submitted to an optimization algorithm in order to find the ideal drying parameters, so as to minimize the process time and maximize the water loss of the codfish. The optimal parameters obtained for the drying process, after optimization, to reach a final dimensionless moisture content of 0.65 were: a time of 68.6 h, a temperature of 21.45 °C, a relative humidity of 51.6% and an air velocity of 1.5 m/s. The drying costs for the different operating conditions of the experimental installation were also determined. The consumption per hour of drying varied between 1.15 kWh and 2.87 kWh, with an average of 1.94 kWh.
Abstract:
The large penetration of intermittent resources, such as solar and wind generation, involves the use of storage systems in order to improve power system operation. Electric vehicles (EVs) with vehicle-to-grid (V2G) capability can operate as a means of storing energy. This paper proposes an algorithm to be included in a SCADA (Supervisory Control and Data Acquisition) system that performs intelligent management for three types of consumers: domestic, commercial and industrial. This management covers loads and the charging/discharging of EV batteries jointly. The proposed methodology has been implemented in a SCADA system developed by the authors of this paper – the SCADA House Intelligent Management (SHIM) system. Any event in the system, such as a demand response (DR) event, triggers an optimization algorithm that performs the optimal scheduling of energy resources (including loads and EVs), taking into account the priorities defined for each load by the installation's users. A case study considering a specific consumer with several loads and EVs is presented in this paper.
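The sketch below is only a schematic illustration of the core idea of shedding loads by user-defined priority while using EV batteries as storage during a demand-response event; the data structures and greedy rule are hypothetical and not the SHIM implementation:

```python
def schedule_dr_event(loads, ev_batteries, available_power_kw):
    """Greedy resource scheduling for a demand-response event.
    `loads`: list of dicts {"name", "power_kw", "priority"} (1 = most important).
    `ev_batteries`: list of dicts {"name", "max_discharge_kw"}.
    Returns the loads kept on, the loads shed, and EV discharge set-points."""
    # Discharge EV batteries first so fewer loads have to be shed.
    supply = available_power_kw + sum(ev["max_discharge_kw"] for ev in ev_batteries)
    kept, shed = [], []
    for load in sorted(loads, key=lambda l: l["priority"]):
        if load["power_kw"] <= supply:
            kept.append(load["name"])
            supply -= load["power_kw"]
        else:
            shed.append(load["name"])
    discharge = {ev["name"]: ev["max_discharge_kw"] for ev in ev_batteries}
    return kept, shed, discharge
```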
Abstract:
One of the most well-known bio-inspired algorithms used in optimization problems is particle swarm optimization (PSO), which is essentially a machine-learning technique loosely inspired by birds flocking in search of food. More specifically, it consists of a number of particles that collectively move through the search space in search of the global optimum. The Darwinian particle swarm optimization (DPSO) is an evolutionary algorithm that extends PSO using natural selection, or survival of the fittest, to enhance the ability to escape from local optima. This paper first presents a survey of PSO algorithms, focusing mainly on the DPSO. Afterward, a method for controlling the convergence rate of the DPSO using fractional calculus (FC) concepts is proposed. The fractional-order optimization algorithm, denoted FO-DPSO, is tested on several well-known functions, and the relationship between the fractional-order velocity and the convergence of the algorithm is observed. Moreover, experimental results show that the FO-DPSO significantly outperforms the previously presented FO-PSO.
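A rough sketch of the fractional-calculus idea applied to the velocity term: the inertial component is replaced by a Grünwald-Letnikov expansion truncated to the last four velocities (the coefficients below are the standard four-term truncation; the Darwinian selection layer is omitted, so this is not the full FO-DPSO):

```python
def fractional_velocity(history, alpha, cognitive, social):
    """Fractional-order velocity update (Grünwald-Letnikov, truncated to 4 terms).
    `history` holds the last four velocities, most recent first; `cognitive` and
    `social` are the usual c1*r1*(pbest - x) and c2*r2*(gbest - x) terms."""
    v1, v2, v3, v4 = history
    memory = (alpha * v1
              + 0.5 * alpha * (1 - alpha) * v2
              + (1.0 / 6.0) * alpha * (1 - alpha) * (2 - alpha) * v3
              + (1.0 / 24.0) * alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * v4)
    return memory + cognitive + social
```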
Abstract:
Quality of life is a concept influenced by social, economic, psychological, spiritual and medical factors. More specifically, the perceived quality of an individual's daily life is an assessment of their well-being or lack of it. In this context, information technologies may help in the management of healthcare services for chronic patients, for example by estimating each patient's quality of life and helping the medical staff take appropriate measures to improve it. This paper describes a quality-of-life estimation system developed using information technologies and data mining algorithms applied to the clinical data of cancer patients from the Otorhinolaryngology and Head and Neck services of an oncology institution. The system was evaluated with a sample composed of 3013 patients. The results show that some variables may be significant predictors of a patient's quality of life: years of smoking (p value 0.049) and size of the tumor (p value < 0.001). For classifying quality of life from these variables, the best accuracy was obtained by applying John Platt's sequential minimal optimization algorithm to train a support vector classifier. In conclusion, data mining techniques give access to additional patient information, helping physicians assess quality of life and make well-informed clinical decisions.
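As a hedged sketch of the final classification step, a support vector classifier trained with an SMO-type solver could be set up as follows; the feature values are purely illustrative, and scikit-learn's SVC is used here as a stand-in for the SMO implementation referenced in the abstract:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: years of smoking, tumour size (cm), age.
X = np.array([[20, 3.1, 61], [0, 1.2, 45], [35, 4.0, 70], [5, 0.8, 52]])
y = np.array([0, 1, 0, 1])   # quality-of-life class labels (illustrative only)

model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(model, X, y, cv=2)   # cross-validated accuracy estimate
print("mean CV accuracy:", scores.mean())
```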
Abstract:
To enrich bilingual parallel corpus data, it can be worthwhile to work with so-called comparable corpora. In this type of corpus, even if the documents in the target language are not exact translations of those in the source language, words or sentences in a translation relationship can still be found. The free encyclopedia Wikipédia constitutes a multilingual comparable corpus of several million documents. Our work consists in finding a general, endogenous method for extracting as many parallel sentences as possible. We work with the French-English language pair, but our method, which uses no external bilingual resource, can be applied to any other language pair. It proceeds in two steps. The first detects the article pairs most likely to contain translations, using a neural network trained on a small data set of articles aligned at the sentence level. The second step selects the sentence pairs using another neural network, whose outputs are then reinterpreted by a combinatorial optimization algorithm and an extension heuristic. Adding the roughly 560,000 sentence pairs extracted from Wikipédia to the training corpus of a baseline statistical machine translation system improves the quality of the translations produced. We make the aligned data and the extracted corpus available to the scientific community.
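One generic way to reinterpret network scores for sentence pairs as a one-to-one selection is a combinatorial assignment, sketched below with SciPy's linear_sum_assignment; this illustrates the idea only and is not the thesis's exact optimization algorithm or extension heuristic:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def select_parallel_pairs(scores, threshold=0.5):
    """`scores[i, j]` is the network's estimate that source sentence i and
    target sentence j are translations. Returns the selected (i, j) pairs."""
    # Maximize the total score of a one-to-one matching, then threshold.
    rows, cols = linear_sum_assignment(-scores)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if scores[i, j] >= threshold]

scores = np.array([[0.9, 0.2, 0.1],
                   [0.1, 0.8, 0.3],
                   [0.2, 0.1, 0.4]])
print(select_parallel_pairs(scores))   # -> [(0, 0), (1, 1)]
```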
Abstract:
We study the management of multi-skill call centers, with several call types and agent groups. A call center is a very complex queueing system, whose performance generally has to be evaluated with a simulator. First, we develop a call center simulator based on the simulation of a continuous-time Markov chain (CTMC), which is faster than conventional discrete-event simulation. Using a uniformization method for the CTMC, the simulator simulates the discrete-time Markov chain embedded in the CTMC. We propose strategies for using this simulator efficiently in the optimization of agent staffing. In particular, we study the use of common random numbers. Second, we optimize agent schedules over several periods by proposing an algorithm based on subgradient cuts and simulation. This problem is generally too large to be optimized by integer programming, so we relax the integrality of the variables and propose methods for rounding the solutions. We present a local search to improve the final solution. Next, we study the optimization of call routing to agents. We propose a new routing policy based on weights, call waiting times, and agent idle times or the number of idle agents. We develop a modified genetic algorithm to optimize the routing parameters. Instead of performing mutations or crossovers, this algorithm optimizes the parameters of the probability distributions that generate the population of solutions. We then develop an agent staffing algorithm based on aggregation, queueing theory and the delay probability. This heuristic algorithm is fast because it does not use simulation; the service-level constraint is converted into a constraint on the delay probability. Afterwards, we propose a variant of a CTMC model based on the waiting time of the customer at the head of the queue. Finally, we present an extension of a cutting-plane algorithm for the stochastic optimization with recourse of agent staffing in a multi-skill call center.
Abstract:
Molecular Quantum Similarity Measures (MSQM) require the maximization of the overlap of the electron densities of the molecules being compared. This work presents a maximization algorithm for the MSQM that is global in the limit of electron densities deformed into Dirac delta functions. From this algorithm, the equivalent algorithm for non-deformed densities is derived.
Abstract:
In this thesis I propose a novel method to estimate the dose and injection-to-meal time for low-risk intensive insulin therapy. This dosage-aid system uses an optimization algorithm to determine the insulin dose and injection-to-meal time that minimizes the risk of postprandial hyper- and hypoglycaemia in type 1 diabetic patients. To this end, the algorithm applies a methodology that quantifies the risk of experiencing different grades of hypo- or hyperglycaemia in the postprandial state induced by insulin therapy according to an individual patient’s parameters. This methodology is based on modal interval analysis (MIA). Applying MIA, the postprandial glucose level is predicted with consideration of intra-patient variability and other sources of uncertainty. A worst-case approach is then used to calculate the risk index. In this way, a safer prediction of possible hyper- and hypoglycaemic episodes induced by the insulin therapy tested can be calculated in terms of these uncertainties.
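A highly simplified sketch of the worst-case idea (not the modal interval analysis machinery, and with a placeholder glucose model): the postprandial prediction is evaluated over interval-valued patient parameters and the risk is taken from the worst excursion outside the target range:

```python
import itertools

def worst_case_risk(predict_glucose, param_intervals, dose, timing,
                    hypo=70.0, hyper=180.0):
    """Evaluate a (placeholder) glucose model at every corner of the parameter
    box and return the worst hypo/hyper excursion in mg/dL.
    `param_intervals`: one (low, high) tuple per uncertain patient parameter.
    `predict_glucose` returns (min, max) postprandial glucose for given
    parameters, dose and injection-to-meal time. True interval methods bound
    all interior points, not just the corners checked here."""
    worst = 0.0
    for corner in itertools.product(*param_intervals):
        g_min, g_max = predict_glucose(corner, dose, timing)
        worst = max(worst, hypo - g_min, g_max - hyper, 0.0)
    return worst
```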
Abstract:
A two-stage linear-in-the-parameter model construction algorithm is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then two regularization parameters are optimized using a particle swarm optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be computed analytically without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for the classification of noisy data.
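A generic sketch of the upper-level loop being described: choosing two elastic-net regularization parameters by minimizing a leave-one-out misclassification rate. Here the LOO error is computed by brute force with scikit-learn rather than the paper's analytic orthogonality-based shortcut, and a small grid stands in for the particle swarm:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut

def loo_misclassification(X, y, alpha, l1_ratio):
    """LOO error of an elastic-net 'prefilter' thresholded at 0; X, y are
    NumPy arrays with class labels in {-1, +1}."""
    errors = 0
    for train, test in LeaveOneOut().split(X):
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X[train], y[train])
        errors += np.sign(model.predict(X[test]))[0] != y[test][0]
    return errors / len(y)

def tune(X, y, alphas=(0.01, 0.1, 1.0), l1_ratios=(0.2, 0.5, 0.8)):
    """Pick the (alpha, l1_ratio) pair with the lowest LOO misclassification."""
    grid = [(a, r) for a in alphas for r in l1_ratios]
    return min(grid, key=lambda p: loo_misclassification(X, y, *p))
```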