917 results for Optimization algorithm
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation; that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% average Dice score), and the proposed algorithm compares favorably with the literature.
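As a rough illustration of the seeded patch-graph idea, the sketch below solves a discrete min-cut analogue with networkx (the paper uses a continuous minimum cut; the Gaussian edge weights, hard seed ties, and patch features here are assumptions for brevity):

```python
import networkx as nx
import numpy as np

def segment_patches(patch_feats, edges, fg_seeds, bg_seeds, lam=1.0):
    """Discrete min-cut sketch of seeded patch-graph segmentation.

    patch_feats : (n, d) array of patch descriptors (hypothetical features)
    edges       : iterable of (i, j) pairs linking neighbouring patches
    fg_seeds, bg_seeds : indices of user-labelled patches (the priors)
    """
    g = nx.DiGraph()
    src, sink = "S", "T"
    for i, j in edges:
        # Similar neighbouring patches are expensive to separate.
        w = lam * np.exp(-np.linalg.norm(patch_feats[i] - patch_feats[j]) ** 2)
        g.add_edge(i, j, capacity=w)
        g.add_edge(j, i, capacity=w)
    big = 1e9  # hard ties for the user seeds
    for i in fg_seeds:
        g.add_edge(src, i, capacity=big)
    for i in bg_seeds:
        g.add_edge(i, sink, capacity=big)
    _, (fg, _) = nx.minimum_cut(g, src, sink)
    return fg - {src}  # patch indices labelled foreground
```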
Abstract:
Molecular Quantum Similarity Measures (MSQM) require the maximization of the overlap of the electron densities of the molecules being compared. This work presents a maximization algorithm for the MSQM that is global in the limit of electron densities deformed into Dirac delta functions. From this algorithm, the equivalent algorithm for non-deformed densities is derived.
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MU and dose per MU of every beamlet. Owing to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares fit optimization algorithm (NNLS). The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU with the Monte Carlo/NNLS MU. For validation, treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution which is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow this technique to be used for other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
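The NNLS step described above takes a few lines with SciPy; the arrays below are toy stand-ins for the BEAMnrc per-beamlet doses and the planned dose:

```python
import numpy as np
from scipy.optimize import nnls

# d_per_mu[i, j]: Monte Carlo absorbed dose to water per MU of beamlet j at
# point i; d_target: the planned dose at the same points. Toy stand-ins here;
# in practice these come from the BEAMnrc simulations and the TPS.
rng = np.random.default_rng(0)
d_per_mu = rng.random((200, 12))
mu_true = rng.random(12) * 50
d_target = d_per_mu @ mu_true

# Solve d_per_mu @ mu ~= d_target subject to mu >= 0 (the NNLS step above).
mu, residual = nnls(d_per_mu, d_target)

# The Monte Carlo plan: per-MU beamlet doses weighted by the fitted MUs.
d_mc = d_per_mu @ mu
```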
Abstract:
The aim of this work was to develop an automatic optimization system for a small combined heat and power (CHP) plant owned by an energy company. The need for optimization arises from the energy company's electricity procurement from the power exchange, the purchase price of gas, the local electricity and heat loads at the site, and other factors affecting the economy of the plant. In the future, the optimization system under development is intended to manage several distributed energy production units centrally. An algorithm was developed that optimizes the economy of the power plant using operating models that adjust the electrical power output and a direct electrical power setpoint. The benefits produced by the developed algorithm were investigated using historical measurement data from the CHP plant of Harjun oppimiskeskus. For optimizing the operation of CHP plants, a system based on centralized computation and distributed control was created. It controls the CHP plants in real time and predicts future plant operation with a time-series model based on historical data. The functionality and the obtained benefit of the optimization system were evaluated at the Harjun oppimiskeskus CHP plant by comparing the realized benefit calculated from measurements with the predicted benefit calculated by the optimization system.
Abstract:
The optimization of various industrial production processes is a highly topical subject. Many control systems date from a time when the computing power of computers was very modest compared with today. This work presents a production process that involves the problem of forming a cutting plan for steel. The casting process is one of the intermediate stages of steel manufacturing: in it, molten steel brought to the appropriate grade is cast into a line, where it solidifies and is cut into billets. In later stages, the steel billets are worked into smaller units, the final products of the plant. Depending on the order book, continuously cast billets can be cut in many different ways. A cutting plan is needed for this, and forming it requires solving a mixed-integer optimization problem. Mixed-integer optimization problems are the most challenging form of optimization, and they have been studied little compared with simpler optimization problems. However, the computing power of modern computers has made it possible to use and develop heavier and more complex optimization algorithms. This work uses and presents a stochastic optimization method, the differential evolution algorithm, and presents a steel cutting optimization algorithm. The developed optimization method operates dynamically in a plant environment according to user-defined parameters. The work is part of a control system delivered by Syncron Tech Oy to Ovako Bar Oy Ab.
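For reference, the core of differential evolution is a simple mutate-crossover-select loop. Below is a generic DE/rand/1/bin sketch, not the thesis implementation; for the mixed-integer cutting problem, the integer variables would additionally be rounded inside the objective:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=40, F=0.8, CR=0.9, n_gen=200, seed=None):
    """Minimal DE/rand/1/bin loop: minimize f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # Mutation: combine three distinct random members.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutated gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

# Usage on a toy objective:
best_x, best_f = differential_evolution(lambda x: np.sum(x ** 2), [(-5, 5)] * 3, seed=0)
```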
Abstract:
A procedure for the compositional characterization of a microalgae oil is presented and applied to investigate a microalgae-based biodiesel production process through process simulation. The methodology consists of: proposing a set of triacylglycerides (TAG) present in the oil; assuming an initial TAG composition and simulating the transesterification reaction (UNISIM Design, Honeywell) to obtain FAME characterization values (methyl ester composition); evaluating the deviations of the calculated values from the experimental ones; and minimizing the sum of squared deviations with a non-linear optimization algorithm, with the TAG molar fractions as decision variables. Biodiesel from the characterized oil is compared to a rapeseed-based biodiesel.
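A minimal sketch of the minimization step follows, with a toy stoichiometric matrix standing in for the UNISIM Design transesterification simulation and SLSQP as one possible non-linear optimizer; the equality constraint keeps the decision variables a valid molar composition:

```python
import numpy as np
from scipy.optimize import minimize

# A[i, j]: fraction of methyl ester i obtained from full transesterification of
# candidate TAG j (toy stand-in; in the procedure above these values come from
# the process simulation). y: measured FAME composition.
rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(6), size=4).T        # 6 esters x 4 candidate TAGs
y = A @ np.array([0.4, 0.3, 0.2, 0.1])         # synthetic "experimental" data

def sq_dev(x):                                  # sum of squared deviations
    return np.sum((A @ x - y) ** 2)

n = A.shape[1]
res = minimize(sq_dev, x0=np.full(n, 1.0 / n),  # uniform initial TAG mix
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
               method="SLSQP")
tag_fractions = res.x                           # fitted TAG molar fractions
```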
Abstract:
This master's thesis specifies an online production optimization method for a power plant fired with biofuel. The specification work is part of a further development project of MW Power's MultiPower CHP power plant concept. From among the existing optimization approaches, a method suitable for the purpose is selected, based on a plant model and a cost function, whose results are passed to the automation system in the form of setpoints for the PID controllers. The energy and mass balances of the plant are calculated from process measurements, and their results are used as input data for the next optimization instant. The objective function of the optimization is a cost function whose terms are the revenues and costs arising from operating the power plant. The process is optimized, taking into account the limit values given to the controllers, so that the total margin is maximized. Once the plant accumulates operating age and history data, the optimization can be sped up by statistically searching the history data for a moment whose conditions correspond to the current situation. The margin of that historical moment is compared with the margin obtained by optimizing the cost function, and the setpoints computed by the method giving the better margin are taken into use for process control. If neither the cost-function calculation nor the history-data search yields an improving margin, their computed setpoints are not taken into use; instead, the optimum is sought with a deterministic optimization algorithm that searches the neighbourhood of the current operating point for controller setpoints giving a better margin. The control system can also be implemented in a predictive form. In the practical part of the work, the power plant model is created with two different modelling programs, one describing the boiler and the other the operation of the power plant process. The process values obtained from the modelling are used as input data in the calculation of the operating margin. The margin is calculated from the cost function. The largest revenues relate to the sale of electricity and heat and to production subsidies, and the largest costs relate to the repayment of the investment and the purchase of fuel. A sensitivity analysis is performed on the cost function, following the change in the margin as the technical values of the process are varied. The results are compared with the results of verification measurements performed at a reference power plant, and it is found that they are not fully congruent. The differences are due both to shortcomings in the modelling and to the rather short observation periods of the measurements. The practical implementation of the automated optimization system is initiated by specifying the optimization approach to be adopted, the related control loops, and the required input data. The project will continue with the programming, testing, and tuning of the system in a real power plant environment, and later with the implementation of predictive control.
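The candidate-selection logic described above can be condensed into a short sketch; all interfaces here are hypothetical:

```python
def select_setpoints(optimize_cost_fn, history_lookup, local_search, current_margin, state):
    """Sketch of the setpoint-selection logic (hypothetical interfaces).

    Each callable returns (setpoints, predicted_margin) for the current state.
    """
    candidates = [optimize_cost_fn(state), history_lookup(state)]
    best_sp, best_margin = max(candidates, key=lambda c: c[1])
    if best_margin > current_margin:
        return best_sp  # apply the better of the two candidates
    # Neither improves the margin: probe the neighbourhood deterministically.
    return local_search(state)[0]
```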
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure for each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the optimal distance measure is selected from a predefined pool of alternative measures systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, the values of the possible control parameters related to the selected distance measure are also optimized. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure, which is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining high classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previously presented differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
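As a rough illustration of the training principle, the sketch below uses SciPy's differential evolution to optimize prototype vectors together with one distance control parameter; a single Minkowski order stands in for the pool of distances, and the data are toy stand-ins:

```python
import numpy as np
from scipy.optimize import differential_evolution

def make_objective(X, y, n_classes):
    n_feat = X.shape[1]

    def objective(theta):
        # First n_classes * n_feat entries: prototype vectors; last entry:
        # Minkowski order p, the distance's control parameter.
        protos = theta[:-1].reshape(n_classes, n_feat)
        p = theta[-1]
        d = (np.abs(X[:, None, :] - protos[None, :, :]) ** p).sum(axis=2)
        pred = d.argmin(axis=1)                 # nearest prototype wins
        return np.mean(pred != y)               # misclassification rate

    return objective

# X: (n, d) training data scaled to [0, 1]; y: integer labels (toy stand-ins).
X = np.random.rand(100, 4); y = np.random.randint(0, 3, 100)
n_classes = 3
bounds = [(0.0, 1.0)] * (n_classes * X.shape[1]) + [(0.5, 4.0)]
res = differential_evolution(make_objective(X, y, n_classes), bounds, seed=0, maxiter=60)
protos, p = res.x[:-1].reshape(n_classes, X.shape[1]), res.x[-1]
```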
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that provides a reliable haptic sense of the interaction between the human and the manipulator, and ideal position control of the interaction between the manipulator and the task environment. The proposed method has the characteristics of a universal technique independent of the actual control algorithm, and it can be applied with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method with a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A Particle Swarm Optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost efficient and faster in comparison to traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems. The new approach aims to avoid dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as in radioactive areas. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
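A plain particle swarm loop with simulator-in-the-loop evaluation sketches the directed random search described above; the bacterial-foraging hybridization is omitted for brevity, and all names are illustrative:

```python
import numpy as np

def pso_tune(simulate, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO sketch: each candidate controller parameter set is scored on the
    real-time simulator (`simulate` returns a tracking-error cost) before
    anything is applied to the real process."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([simulate(p) for p in x])
    g = pbest[pcost.argmin()]                   # global best so far
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([simulate(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()]
    return g  # best parameters found, validated on the simulator only
```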
Abstract:
The accelerating adoption of electrical technologies in vehicles over recent years has led to an increase in research on electrochemical energy storage systems, which are among the key elements in these technologies. The application of electrochemical energy storage systems, for instance in hybrid electric vehicles (HEVs) or hybrid mobile working machines, makes it possible to tolerate high power peaks, providing an opportunity to downsize the internal combustion engine and reduce fuel consumption, and therefore CO2 and other emissions. Further, the application of electrochemical energy storage systems provides an option for kinetic and potential energy recuperation. Presently, the lithium-ion (Li-ion) battery is considered the most suitable electrochemical energy storage type in HEVs and hybrid mobile working machines. However, an intensive operating cycle produces high heat losses in the Li-ion battery, which increase its operating temperature. Li-ion battery operation at high temperatures accelerates the ageing of the battery and, in the worst case, may lead to thermal runaway and fire. Therefore, an appropriate Li-ion battery cooling system should be provided for temperature control in applications such as HEVs and mobile working machines. In this doctoral dissertation, methods are presented to set up a thermal model of a single Li-ion cell and of a more complex battery module, which can be used if full information about the battery chemistry is not available. In addition, a non-destructive method is developed for cell thermal characterization, which makes it possible to measure the thermal parameters at different states of charge and at different points on the cell surface. The proposed models and the cell thermal characterization method have been verified by experimental measurements. The minimization of the high thermal non-uniformity, which was detected in the pouch cell during its operation with a high C-rate current, was analysed by applying a simplified 3D thermal model of the pouch cell. In the analysis, heat pipes were incorporated into the pouch cell cooling system, and an optimization algorithm was developed for estimating the optimal placement of the heat pipes in the pouch cell cooling system. The analysis shows that heat pipes significantly decrease the temperature non-uniformity on the cell surface, and therefore, heat pipes were recommended for the enhancement of the pouch cell cooling system.
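A toy sketch of the placement optimization follows, with a synthetic temperature field standing in for the 3D pouch cell thermal model and differential evolution as one possible optimizer; every name and number here is illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution

def temp_spread(positions, thermal_model):
    """Objective: surface temperature non-uniformity for given heat-pipe
    positions, encoded as a flat array of (x, y) pairs."""
    T = thermal_model(positions.reshape(-1, 2))
    return T.max() - T.min()

def toy_model(pipes):
    """Toy stand-in: a hot spot near the tab, cooled around each heat pipe."""
    xx, yy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    T = 40.0 + 10.0 * np.exp(-((xx - 0.5) ** 2 + yy ** 2) / 0.05)
    for px, py in pipes:
        T -= 8.0 * np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / 0.02)
    return T

n_pipes = 3                                   # assumed number of heat pipes
bounds = [(0.0, 1.0)] * (2 * n_pipes)         # normalized cell coordinates
res = differential_evolution(lambda p: temp_spread(p, toy_model), bounds,
                             seed=0, maxiter=30)
# res.x.reshape(-1, 2) holds the suggested (x, y) heat-pipe positions.
```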
Abstract:
In order to enrich parallel bilingual corpus data, it can be worthwhile to work with so-called comparable corpora. In such corpora, even if the documents in the target language are not exact translations of those in the source language, words or sentences in a translation relationship can still be found. The free encyclopedia Wikipédia constitutes a multilingual comparable corpus of several million documents. Our work consists in finding a general and endogenous method for extracting as many parallel sentences as possible. We work with the French-English language pair, but our method, which uses no external bilingual resources, can be applied to any other language pair. It consists of two steps. The first detects the article pairs that are most likely to contain translations, using a neural network trained on a small data set of articles aligned at the sentence level. The second step selects the sentence pairs using another neural network, whose outputs are then reinterpreted by a combinatorial optimization algorithm and an extension heuristic. Adding the roughly 560,000 sentence pairs extracted from Wikipédia to the training corpus of a baseline statistical machine translation system improves the quality of the produced translations. We make the aligned data and the extracted corpus available to the scientific community.
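One plausible way to reinterpret the network's sentence-pair scores combinatorially is as a maximum-score one-to-one assignment; the sketch below applies the Hungarian algorithm from SciPy to toy scores (the thesis' actual optimization algorithm and extension heuristic may differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# score[i, j]: neural-network parallelism score for source sentence i and
# target sentence j of an article pair (toy stand-in for the model outputs).
score = np.random.rand(30, 40)
threshold = 0.75            # assumed acceptance threshold

# Maximum-score one-to-one assignment, then keep only confident pairs.
rows, cols = linear_sum_assignment(score, maximize=True)
pairs = [(i, j) for i, j in zip(rows, cols) if score[i, j] >= threshold]
```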
Abstract:
We study the management of multi-skill call centers, with several call types and agent groups. A call center is a very complex queueing system, whose performance must generally be evaluated with a simulator. First, we develop a call center simulator based on the simulation of a continuous-time Markov chain (CTMC), which is faster than conventional discrete-event simulation. Using a uniformization method for the CTMC, the simulator simulates the embedded discrete-time Markov chain of the CTMC. We propose strategies for using this simulator efficiently in the optimization of agent staffing; in particular, we study the use of common random numbers. Second, we optimize agent schedules over several periods by proposing an algorithm based on subgradient cuts and simulation. This problem is generally too large to be optimized by integer programming, so we relax the integrality of the variables and propose methods for rounding the solutions. We present a local search to improve the final solution. Next, we study the optimization of call routing to agents. We propose a new routing policy based on weights, call waiting times, and agent idle times or the number of idle agents. We develop a modified genetic algorithm to optimize the routing parameters: instead of performing mutations or crossovers, this algorithm optimizes the parameters of the probability distributions that generate the population of solutions. Subsequently, we develop an agent staffing algorithm based on aggregation, queueing theory, and the delay probability. This heuristic algorithm is fast because it does not use simulation; the constraint on the service level is converted into a constraint on the delay probability. We then propose a variant of a CTMC model based on the waiting time of the customer at the head of the queue. Finally, we present an extension of a cutting-plane algorithm for the stochastic optimization with recourse of agent staffing in a multi-skill call center.
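The uniformization idea behind the simulator can be sketched generically: draw jump times from a Poisson process at the uniformization rate and draw moves from the embedded discrete-time chain. This is a textbook sketch of the technique, not the actual simulator:

```python
import numpy as np

def simulate_uniformized(Q, x0, horizon, seed=0):
    """Simulate a CTMC with generator Q by uniformization."""
    rng = np.random.default_rng(seed)
    Lam = -Q.diagonal().min()                 # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam          # embedded DTMC (allows self-jumps)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += rng.exponential(1.0 / Lam)       # Poisson-process jump times
        if t > horizon:
            return path
        x = rng.choice(Q.shape[0], p=P[x])    # move of the embedded chain
        path.append((t, x))

# Example: a two-state chain (0 = agent idle, 1 = agent busy).
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
trace = simulate_uniformized(Q, x0=0, horizon=10.0)
```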
Abstract:
In this thesis I propose a novel method to estimate the dose and injection-to-meal time for low-risk intensive insulin therapy. This dosage-aid system uses an optimization algorithm to determine the insulin dose and injection-to-meal time that minimizes the risk of postprandial hyper- and hypoglycaemia in type 1 diabetic patients. To this end, the algorithm applies a methodology that quantifies the risk of experiencing different grades of hypo- or hyperglycaemia in the postprandial state induced by insulin therapy according to an individual patient’s parameters. This methodology is based on modal interval analysis (MIA). Applying MIA, the postprandial glucose level is predicted with consideration of intra-patient variability and other sources of uncertainty. A worst-case approach is then used to calculate the risk index. In this way, a safer prediction of possible hyper- and hypoglycaemic episodes induced by the insulin therapy tested can be calculated in terms of these uncertainties.
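As a crude illustration of the worst-case approach (not modal interval analysis, which yields guaranteed bounds), one can scan the uncertain patient parameters and keep the extreme postprandial predictions; all interfaces below are hypothetical:

```python
import numpy as np

def worst_case_glucose(predict, params_lo, params_hi, dose, t_inj, n_grid=3):
    """Grid-scan sketch of a worst-case postprandial prediction.

    predict(params, dose, t_inj) -> predicted glucose extreme for one
    parameter vector (hypothetical patient model). params_lo/params_hi
    bound the uncertain intra-patient parameters.
    """
    grids = [np.linspace(lo, hi, n_grid) for lo, hi in zip(params_lo, params_hi)]
    combos = np.stack(np.meshgrid(*grids), -1).reshape(-1, len(grids))
    preds = [predict(p, dose, t_inj) for p in combos]
    return min(preds), max(preds)  # candidate hypo / hyper extremes
```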
Abstract:
A two-stage linear-in-the-parameter model construction algorithm is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing the model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and two regularization parameters are then optimized at the upper level using a particle swarm optimization algorithm by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be computed analytically without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for classifying noisy data.
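The analytic leave-one-out idea can be illustrated for a linear smoother: the LOO residual is e_i / (1 - h_ii), so the LOO misclassification rate needs no data splitting. The sketch below substitutes ridge regularization for the elastic-net/SVD stage and a simple parameter scan for the particle swarm search:

```python
import numpy as np

def loo_misclass(X, y, lam):
    """Analytic LOO misclassification for a ridge classifier via the hat
    matrix; y is coded in {-1, +1}. Ridge stands in here for the paper's
    elastic-net stage."""
    d = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)  # hat matrix
    yhat = H @ y
    loo_pred = y - (y - yhat) / (1.0 - np.diag(H))           # LOO predictions
    return np.mean(np.sign(loo_pred) != y)

# Upper level: tune the regularization by its LOO rate (the paper optimizes
# two elastic-net parameters with particle swarm optimization instead).
X = np.random.randn(80, 5); y = np.sign(np.random.randn(80))  # toy stand-ins
best_lam = min(np.logspace(-3, 2, 30), key=lambda l: loo_misclass(X, y, l))
```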