876 results for RELIABILITY-BASED OPTIMIZATION
Abstract:
Polysaccharides are gaining increasing attention as potentially environmentally friendly and sustainable building blocks in many fields of the (bio)chemical industry. The microbial production of polysaccharides is envisioned as a promising path, since higher biomass growth rates are possible and therefore higher productivities may be achieved compared to vegetable or animal polysaccharide sources. This Ph.D. thesis focuses on the modeling and optimization of a particular microbial polysaccharide process, namely the production of extracellular polysaccharides (EPS) by the bacterial strain Enterobacter A47. Enterobacter A47 was found to be a metabolically versatile organism in terms of its adaptability to complex media, notably capable of achieving high growth rates in media containing glycerol byproduct from the biodiesel industry. However, the industrial implementation of this production process is still hampered by a largely unoptimized process. Kinetic rates in the bioreactor are heavily dependent on operational parameters such as temperature, pH, stirring and aeration rate. The increase of culture broth viscosity is a common feature of this culture and has a major impact on the overall performance. This fact complicates the mathematical modeling of the process, limiting the ability to understand, control and optimize productivity. To tackle this difficulty, data-driven mathematical methodologies such as Artificial Neural Networks can be employed to incorporate additional process data that complements the known mathematical description of the fermentation kinetics. In this Ph.D. thesis, we have adopted such a hybrid modeling framework, which enabled the incorporation of temperature, pH and viscosity effects on the fermentation kinetics and thereby improved the dynamical modeling and optimization of the process. A model-based optimization method was implemented that enabled the design of optimal bioreactor control strategies aimed at maximizing EPS productivity. It is also critical to understand EPS synthesis at the level of the bacterial metabolism, since the production of EPS is a tightly regulated process. Methods of pathway analysis provide a means to unravel the fundamental pathways and their controls in bioprocesses. In the present Ph.D. thesis, a novel methodology called Principal Elementary Mode Analysis (PEMA) was developed and implemented that enabled the identification of the cellular fluxes activated under different conditions of temperature and pH. It is shown that differences in these two parameters affect the chemical composition of EPS, hence they are critical for the regulation of product synthesis. In future studies, the knowledge provided by PEMA could foster the development of metabolically meaningful control strategies that target the EPS sugar content and other product quality parameters.
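To illustrate the hybrid modeling idea described in this abstract, here is a minimal Python sketch (with invented weights and parameter values; not the thesis model) that couples a mechanistic mass balance with a small neural network mapping temperature, pH and viscosity to the specific growth rate:

    import numpy as np

    # Minimal hybrid (mechanistic + ANN) model sketch: the mass balance
    # dX/dt = mu * X is mechanistic, while the specific growth rate mu is
    # supplied by a small neural network fed with process conditions.
    rng = np.random.default_rng(0)
    W1, b1 = 0.1 * rng.normal(size=(8, 3)), np.zeros(8)  # untrained, illustrative weights
    w2, b2 = 0.1 * rng.normal(size=8), 0.0

    def mu_ann(temperature, ph, viscosity):
        """ANN kinetic sub-model: process conditions -> specific growth rate (1/h)."""
        z = np.tanh(W1 @ np.array([temperature, ph, viscosity]) + b1)
        return float(np.exp(w2 @ z + b2))                # exp keeps the rate positive

    def simulate(x0=0.1, hours=24.0, dt=0.1, temperature=30.0, ph=7.0, viscosity=1.0):
        """Explicit Euler integration of the hybrid mass balance dX/dt = mu * X."""
        x = x0
        for _ in range(int(hours / dt)):
            x += dt * mu_ann(temperature, ph, viscosity) * x
        return x

    print(simulate())

In the actual framework, the network weights would be fitted to fermentation data so that the hybrid model reproduces observed kinetics before being used for optimization.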
Abstract:
Our docking program, Fitted, implemented in our computational platform, Forecaster, has been modified to carry out automated virtual screening of covalent inhibitors. With this modified version of the program, virtual screening and further docking-based optimization of a selected hit led to the identification of potential covalent reversible inhibitors of prolyl oligopeptidase (POP) activity. After visual inspection, a virtual hit molecule together with four analogues was selected for synthesis; the compounds were made in one to five chemical steps. Biological evaluations on recombinant POP and FAPα enzymes, cell extracts, and living cells demonstrated high potency and selectivity for POP over FAPα and DPPIV. Three compounds even exhibited high nanomolar inhibitory activities in intact living human cells and acceptable metabolic stability. This small set of molecules also demonstrated that covalent binding and/or geometrical constraints on the ligand/protein complex may lead to an increase in bioactivity.
Abstract:
This thesis presents methods for measuring fatigue loading, post-processing the measurement data, and performing fatigue design. The methods were applied to a forestry machine crane, a welded structure subjected to fatigue loading. The theoretical part describes the fatigue phenomenon and fatigue design methods, as well as methods for load identification and for post-processing the measurements. Alongside the most commonly used fatigue design methods, a reliability-based fatigue design approach is presented. In crane design, taking fatigue into account is particularly important because of weight and service-life requirements. These structures characteristically contain certain welded details that are necessary for their function and that often determine the service life of the entire structure. Since these problem spots can usually be identified already at the design stage, the service life of the whole structure can often be improved considerably by shaping these details. Optimizing such details is partly possible without load spectrum data, but in most cases identifying the loads is a prerequisite for finding the best solution. At present, the best way to identify the true fatigue loading is long-term field measurement, in which the loads acting on the structure are determined with strain gauges. Load identification is especially important when the service life of a structure is to be determined. Fatigue and fatigue loading are, however, statistical variables, and an exact service life cannot be determined for an individual structure. Using statistical methods it is nevertheless possible to determine the risk of failure of a structure. When the risk of failure is calculated for a large number of individual structures, even accurate predictions of the number of possible failures can be made. Load spectrum data can then be useful beyond ordinary design, for example in warranty handling. In this thesis the presented theories were applied in practice to the fatigue analysis of the boom assembly of a forestry harvester. The loads on this structure were measured for a total of 35 hours over a two-week period, and from these measurements the statistical distribution of the fatigue loading was computed for the example case. The measurements did not, however, permit conclusions about the loads over the product's whole life cycle or about the loads on other similar products, because the measured sample was relatively short and limited to a single operator and a few work sites. For testing the methods the sample was nevertheless sufficient. The load spectrum data were also used in formulating quality specifications for the example case. A fracture mechanics based method was used to estimate the largest allowable size of possible casting defects in the harvester pillar casting. The need for reliability-based design procedures appears to be growing, so efficient use of long-term field measurements will be a central part of fatigue design in the near future. The methods could be made more effective by combining the load spectrum with known quantities that correlate with the loads, such as the diameter of the tree being handled. True product-specific statistical load distributions could possibly be formed more efficiently if, for example, the dependence of the loads on the forest type could first be determined.
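As a hedged illustration of the kind of measurement post-processing this abstract describes, the sketch below accumulates Palmgren-Miner fatigue damage from counted stress cycles against an S-N curve; the curve constants and cycle data are invented, and the statistical treatment of the thesis is not reproduced:

    import numpy as np

    # Palmgren-Miner damage accumulation sketch for measured stress ranges.
    # S-N curve: N = C / S^m (constants below are illustrative, not from the thesis).
    C, m = 2.0e12, 3.0

    stress_ranges = np.array([80.0, 120.0, 60.0, 150.0])  # MPa, e.g. from rainflow counting
    cycle_counts = np.array([5e4, 1e4, 2e5, 2e3])         # cycles observed per range

    allowable = C / stress_ranges**m                      # cycles to failure at each range
    damage = np.sum(cycle_counts / allowable)
    print(f"Miner damage sum: {damage:.3f} (failure expected near 1.0)")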
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
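For reference, here is a minimal random-walk Metropolis sketch of the MCMC machinery the abstract builds on, with an invented Gaussian posterior; the adaptive, computation-saving variants developed in the thesis go well beyond this:

    import numpy as np

    # Random-walk Metropolis sketch: sample a parameter posterior known up to a constant.
    def log_post(theta):
        return -0.5 * np.sum((theta - 1.0) ** 2)   # illustrative Gaussian posterior

    rng = np.random.default_rng(1)
    theta = np.zeros(2)
    chain = []
    for _ in range(5000):
        proposal = theta + 0.5 * rng.normal(size=2)        # symmetric proposal
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal                               # accept; otherwise keep theta
        chain.append(theta)
    print(np.mean(chain, axis=0))   # posterior mean estimate, close to [1, 1]

The resulting chain approximates the whole parameter distribution, which is exactly what downstream tasks such as optimal experiment design and model-based optimization then consume.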
Abstract:
Centrifugal pumps are a notable end consumer of electrical energy. A typical application of a centrifugal pump is the filling or emptying of a reservoir tank, where the pump is often operated at a constant speed until the process is completed. Replacing the traditional fixed-speed pumping system with a frequency converter controlling the motor allows the rotational speed profile of the pumping task to be optimized and enables the estimation of the rotational speed and shaft torque of an induction motor without any additional measurements from the motor shaft. Variable-speed operation thus provides the possibility to decrease the overall energy consumption of the pumping task. The static head of the pumping process may change during the pumping task. In such systems, the minimum rotational speed changes during reservoir filling or emptying, and the minimum energy consumption cannot be achieved with a fixed rotational speed. This thesis presents embedded algorithms to automatically identify, optimize and monitor pumping processes between supply and destination reservoirs, and evaluates the optimization method based on the changing static head.
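The sketch below uses the standard pump affinity laws (not the thesis' embedded algorithms; the pump curve coefficients are invented) to show why the minimum feasible rotational speed rises as the static head grows during filling:

    import numpy as np

    # Affinity-law sketch: minimum rotational speed needed to overcome a rising
    # static head. Pump curve at nominal speed: H(Q) = H0 - k * Q^2 (illustrative).
    H0, k = 40.0, 0.05            # shut-off head (m) and curve coefficient
    n_nom = 1450.0                # nominal speed (rpm)

    def min_speed(static_head):
        """Smallest speed at which the pump can still deliver flow (Q -> 0).

        By the affinity laws, head scales with the square of speed, so the
        shut-off head at relative speed r is H0 * r^2; flow requires
        H0 * r^2 > static_head.
        """
        return n_nom * np.sqrt(static_head / H0)

    for h in (5.0, 15.0, 30.0):   # static head grows as the reservoir fills
        print(f"static head {h:5.1f} m -> minimum speed {min_speed(h):7.1f} rpm")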
Abstract:
The Two-Connected Network with Bounded Ring (2CNBR) problem is a network design problem addressing the connection of servers to create a survivable network with limited redirections in the event of failures. Particle Swarm Optimization (PSO) is a stochastic population-based optimization technique modeled on the social behaviour of flocking birds or schooling fish. This thesis applies PSO to the 2CNBR problem. As PSO is originally designed to handle a continuous solution space, modification of the algorithm was necessary in order to adapt it for such a highly constrained discrete combinatorial optimization problem. Presented are an indirect transcription scheme for applying PSO to such discrete optimization problems and an oscillating mechanism for averting stagnation.
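For reference, here is a minimal sketch of standard continuous PSO on an invented objective; the indirect transcription scheme and oscillating mechanism that adapt it to the discrete, constrained 2CNBR problem are not shown:

    import numpy as np

    # Standard continuous PSO sketch. Each particle tracks its personal best
    # (pbest) and is pulled toward it and toward the global best (gbest).
    rng = np.random.default_rng(2)

    def sphere(x):                       # illustrative objective to minimize
        return float(np.sum(x * x))

    n_particles, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(sphere, 1, x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(200):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(sphere, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print(sphere(gbest))   # near 0 after convergence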
Abstract:
Call centers are key elements of almost any large organization. The workforce management problem has received a great deal of attention in the literature. A typical formulation is based on performance measures over an infinite horizon, and the agent staffing problem is usually solved by combining optimization and simulation methods. In this thesis, we consider an agent staffing problem for call centers subject to chance constraints. We introduce a formulation that requires the quality-of-service (QoS) constraints to be satisfied with high probability, and define a sample-average approximation of this problem in a multi-skill setting. We establish the convergence of the solution of the approximate problem to that of the original problem as the sample size grows. For the particular case in which all agents have all skills (a single agent group), we design three simulation-based optimization methods for the sample-average problem. Given an initial staffing level, we increase the number of agents in the periods where the constraints are violated, and decrease the number of agents in the periods where the constraints remain satisfied after the reduction. Numerical experiments are conducted on several low-occupancy call center models, in which the algorithms yield good solutions, i.e., most of the chance constraints are satisfied, and the staffing cannot be reduced in any given period without introducing a constraint violation. One advantage of these algorithms over other methods is their ease of implementation.
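The add/remove heuristic can be sketched as follows against a toy per-period service-level simulator; the demand figures, the QoS condition and the replication count are all invented stand-ins for the call-center simulation:

    import numpy as np

    # Sketch of the add/remove staffing heuristic, against a toy per-period
    # service-level simulator (not a real call-center model).
    rng = np.random.default_rng(3)
    demand = np.array([20, 35, 50, 30])       # illustrative offered load per period
    target, n_reps = 0.8, 200                 # QoS target and simulation replications

    def qos(agents):
        """Toy Monte-Carlo estimate of P(service level met) in each period."""
        load = rng.poisson(demand, size=(n_reps, demand.size))
        return np.mean(agents >= load * 1.05, axis=0)   # invented service condition

    agents = demand.copy()
    while np.any(qos(agents) < target):       # add agents where constraints fail
        agents += (qos(agents) < target).astype(int)
    for p in range(agents.size):              # try removing agents where slack remains
        while agents[p] > 0:
            agents[p] -= 1
            if qos(agents)[p] < target:
                agents[p] += 1                # restore the last feasible level
                break
    print(agents)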
Abstract:
Analog-to-digital converters (ADCs) have an important impact on the overall performance of signal processing systems. This research explores efficient techniques for the design of sigma-delta ADCs, especially for multi-standard wireless transceivers. In particular, the aim is to develop novel models and algorithms to address this problem and to implement software tools which are able to assist the designer's decisions in the system-level exploration phase. To this end, this thesis presents a framework of techniques to design sigma-delta analog-to-digital converters. A 2-2-2 reconfigurable sigma-delta modulator is proposed which can meet the design specifications of three wireless communication standards, namely GSM, WCDMA and WLAN. A sigma-delta modulator design tool is developed using the Graphical User Interface Development Environment (GUIDE) in MATLAB. A Genetic Algorithm (GA) based search method is introduced to find the optimum values of the scaling coefficients and to maximize the dynamic range of the sigma-delta modulator.
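A minimal GA sketch of such a coefficient search is shown below; the fitness function is a toy stand-in for the modulator's dynamic range, and the population size and rates are illustrative:

    import numpy as np

    # Minimal GA sketch for tuning scaling coefficients. The fitness below is a
    # toy surrogate, not a sigma-delta modulator model.
    rng = np.random.default_rng(4)

    def fitness(coeffs):
        return -float(np.sum((coeffs - 0.5) ** 2))    # illustrative peak at 0.5

    pop = rng.uniform(0, 1, (30, 6))                  # 30 candidates, 6 coefficients
    for _ in range(100):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-10:]]       # truncation selection
        kids = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(10, size=2)]
            mask = rng.uniform(size=6) < 0.5          # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, 0.02, 6)   # Gaussian mutation
            kids.append(np.clip(child, 0, 1))
        pop = np.array(kids)

    print(max(pop, key=fitness))   # coefficients near 0.5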
Abstract:
Cancer treatment is most effective when the cancer is detected early, and progress in treatment will be closely related to the ability to reduce the proportion of misses in the cancer detection task. The effectiveness of algorithms for detecting cancers can be greatly increased if these algorithms work synergistically with those for characterizing normal mammograms. This research work combines computerized image analysis techniques and neural networks to separate out some fraction of the normal mammograms with extremely high reliability, based on normal tissue identification and removal. The presence of clustered microcalcifications is one of the most important, and sometimes the only, sign of cancer on a mammogram: 60% to 70% of non-palpable breast carcinomas demonstrate microcalcifications on mammograms [44], [45], [46]. Wavelet transform (WT) based techniques are applied to the remaining mammograms, that is, those not identified as normal, to detect possible microcalcifications. The goal of this work is to improve the detection performance and throughput of screening mammography, thus providing a 'second opinion' to the radiologists. State-of-the-art DWT computation algorithms are not suitable for practical applications with memory and delay constraints, as the DWT is not a block transform. Hence in this work, the development of a Block DWT (BDWT) computational structure having a low processing memory requirement has also been taken up.
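Assuming the PyWavelets package as a stand-in, the sketch below only illustrates the naive idea of applying a single-level DWT block by block so that one block is held in memory at a time; the boundary handling of the actual BDWT structure is not reproduced here:

    import numpy as np
    import pywt

    # Naive block-wise DWT sketch: a single-level DWT is applied to fixed-size
    # blocks of a signal so that only one block needs to be in memory at a time.
    signal = np.random.default_rng(5).normal(size=1024)
    block = 128

    for start in range(0, signal.size, block):
        cA, cD = pywt.dwt(signal[start:start + block], 'db4')
        # cA: approximation (low-pass) and cD: detail (high-pass) coefficients;
        # microcalcification-like transients would be sought in the detail sub-bands.
        print(start, float(np.max(np.abs(cD))))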
Abstract:
Optimal control theory is a powerful tool for solving control problems in quantum mechanics, ranging from the control of chemical reactions to the implementation of gates in a quantum computer. Gradient-based optimization methods are able to find high fidelity controls, but require considerable numerical effort and often yield highly complex solutions. We propose here to employ a two-stage optimization scheme to significantly speed up convergence and achieve simpler controls. The control is initially parametrized using only a few free parameters, such that optimization in this pruned search space can be performed with a simplex method. The result, considered now simply as an arbitrary function on a time grid, is the starting point for further optimization with a gradient-based method that can quickly converge to high fidelities. We illustrate the success of this hybrid technique by optimizing a geometric phase gate for two superconducting transmon qubits coupled with a shared transmission line resonator, showing that a combination of Nelder-Mead simplex and Krotov’s method yields considerably better results than either one of the two methods alone.
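Here is a sketch of the two-stage scheme on a toy pulse-matching functional: a Nelder-Mead search over three ansatz parameters seeds a gradient-based refinement over the full time grid. A generic BFGS optimizer stands in for Krotov's method, and the 'infidelity' is invented for the example:

    import numpy as np
    from scipy.optimize import minimize

    # Two-stage optimization sketch: simplex search in a pruned parameter space,
    # then gradient-based refinement of the pulse as free values on a time grid.
    t = np.linspace(0, 1, 100)
    target = np.exp(-((t - 0.5) / 0.1) ** 2)          # illustrative optimal pulse

    def infidelity(pulse):
        return float(np.sum((pulse - target) ** 2))

    def ansatz(p):                                    # amplitude, center, width
        return p[0] * np.exp(-((t - p[1]) / p[2]) ** 2)

    # Stage 1: Nelder-Mead over only three free parameters.
    stage1 = minimize(lambda p: infidelity(ansatz(p)), x0=[0.5, 0.4, 0.2],
                      method='Nelder-Mead')

    # Stage 2: treat the resulting pulse as an arbitrary function on the grid.
    stage2 = minimize(infidelity, x0=ansatz(stage1.x), method='BFGS')
    print(stage1.fun, stage2.fun)                     # stage 2 refines stage 1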
Abstract:
In an ever more competitive environment, power distribution companies need to continuously monitor and improve the reliability indices of their systems. The network reconfiguration (NR) of a distribution system is a technique that adapts well to this new deregulated environment, for it allows improvement of system reliability indices without the onus involved in procuring new equipment. This paper presents a reliability-based NR methodology that uses metaheuristic techniques to search for the optimal network configuration. Three metaheuristics, i.e. Tabu Search, Evolution Strategy, and Differential Evolution, are tested using a Brazilian distribution network and the results are discussed. © 2009 IEEE.
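For reference, a sketch of one of the three metaheuristics, Differential Evolution, via SciPy on a toy continuous objective; the actual problem scores discrete switch configurations by reliability indices, which is not reproduced here:

    import numpy as np
    from scipy.optimize import differential_evolution

    # Differential Evolution sketch on a multimodal toy function (Rastrigin),
    # standing in for the reliability-based configuration score.
    def objective(x):
        return float(np.sum(x ** 2) + 10 * np.sum(1 - np.cos(2 * np.pi * x)))

    result = differential_evolution(objective, bounds=[(-5, 5)] * 4, seed=6)
    print(result.x, result.fun)   # near the global optimum at the origin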
Abstract:
Besides optimizing classifier predictive performance and addressing the curse of dimensionality, feature selection techniques help keep a classification model as simple as possible. In this paper, we present a wrapper feature selection approach based on the Bat Algorithm (BA) and the Optimum-Path Forest (OPF), in which we model the problem of feature selection as a binary optimization task, guided by BA and using the OPF accuracy over a validation set as the fitness function to be maximized. Moreover, we present a methodology to better estimate the quality of the reduced feature set. Experiments conducted over six public datasets demonstrated that the proposed approach provides statistically significantly more compact sets and, in some cases, can indeed improve the classification effectiveness. © 2013 Elsevier Ltd. All rights reserved.
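A pared-down sketch of a binary Bat Algorithm for feature selection follows: a sigmoid of each bat's velocity gives per-feature selection probabilities. The fitness is a toy stand-in for OPF accuracy on a validation set, and the loudness and pulse-rate dynamics of the full algorithm are omitted:

    import numpy as np

    # Pared-down binary Bat Algorithm sketch for feature selection.
    rng = np.random.default_rng(7)
    n_bats, n_feats = 15, 10
    relevant = np.arange(n_feats) < 4         # pretend only the first 4 features help

    def fitness(mask):
        # toy accuracy: reward relevant features, penalize superfluous ones
        return np.sum(mask & relevant) - 0.2 * np.sum(mask & ~relevant)

    x = rng.integers(0, 2, (n_bats, n_feats)).astype(bool)
    v = np.zeros((n_bats, n_feats))
    best = x[np.argmax([fitness(m) for m in x])].copy()

    for _ in range(100):
        freq = rng.uniform(0, 2, (n_bats, 1))                 # random frequencies
        v += freq * (x.astype(float) - best.astype(float))
        prob = 1.0 / (1.0 + np.exp(-v))                       # sigmoid transfer function
        x = rng.uniform(size=x.shape) < prob                  # binary positions
        i = int(np.argmax([fitness(m) for m in x]))
        if fitness(x[i]) > fitness(best):
            best = x[i].copy()

    print(best.astype(int), fitness(best))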
Abstract:
Since the beginning, some pattern recognition techniques have faced the problem of a high computational burden for dataset learning. Among the most widely used techniques, we may highlight Support Vector Machines (SVM), which have obtained very promising results for data classification. However, this classifier requires an expensive training phase, which is dominated by a parameter optimization that aims to make the SVM less prone to errors over the training set. In this paper, we model the problem of finding such parameters as a metaheuristic-based optimization task, which is performed through Harmony Search (HS) and some of its variants. The experimental results have shown the robustness of HS-based approaches for this task in comparison with an exhaustive (grid) search and with a Particle Swarm Optimization-based implementation.
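A minimal Harmony Search sketch for tuning the SVM parameters (C, gamma) in log space, using scikit-learn on a synthetic dataset; the HS parameters and bounds are illustrative, not taken from the paper:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Harmony Search sketch: each harmony is a (log10 C, log10 gamma) pair and
    # the fitness is cross-validated SVM accuracy.
    rng = np.random.default_rng(8)
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    def accuracy(theta):
        C, gamma = 10.0 ** theta
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

    low, high = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # log10 bounds
    hm = rng.uniform(low, high, (10, 2))                       # harmony memory
    scores = np.array([accuracy(t) for t in hm])

    for _ in range(50):
        new = np.empty(2)
        for d in range(2):
            if rng.uniform() < 0.9:                            # HMCR: reuse memory
                new[d] = hm[rng.integers(10), d]
                if rng.uniform() < 0.3:                        # PAR: pitch adjustment
                    new[d] += rng.uniform(-0.3, 0.3)
            else:                                              # random consideration
                new[d] = rng.uniform(low[d], high[d])
        new = np.clip(new, low, high)
        s = accuracy(new)
        worst = int(np.argmin(scores))
        if s > scores[worst]:                                  # replace worst harmony
            hm[worst], scores[worst] = new, s

    print(hm[np.argmax(scores)], scores.max())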
Abstract:
In this paper we deal with the problem of boosting the Optimum-Path Forest (OPF) clustering approach using evolutionary optimization techniques. As the OPF classifier performs an exhaustive search to find the size of each sample's neighborhood that allows it to reach the minimum graph cut as a quality measure, we compared several optimization techniques that can obtain graph cut values close to the ones obtained by brute force. Experiments on two public datasets in the context of unsupervised network intrusion detection have shown that the evolutionary optimization techniques can find suitable values for the neighborhood faster than the exhaustive search. Additionally, we have shown that it is not necessary to employ many agents for this task, since the neighborhood size is defined by discrete values, which constrains the set of possible solutions to a few ones.
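A toy sketch of the comparison follows: exhaustive search over the discrete neighborhood size k versus a handful of agents doing a simple random local search; the graph-cut function is an invented surrogate for the OPF clustering measure:

    import numpy as np

    # Exhaustive vs. few-agent search over a discrete neighborhood size k.
    rng = np.random.default_rng(9)

    def graph_cut(k):
        return (k - 17) ** 2 + rng.normal(0, 0.1)   # illustrative minimum at k = 17

    k_max = 100
    exhaustive = min(range(1, k_max + 1), key=graph_cut)    # k_max evaluations
    agents = rng.integers(1, k_max + 1, size=5)             # only a few agents
    for _ in range(10):                                     # 50 evaluations total
        best = min(agents, key=graph_cut)
        agents = np.clip(best + rng.integers(-5, 6, size=5), 1, k_max)

    print(exhaustive, best)   # both land near k = 17, the agents far more cheaply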