911 results for Robust Optimization
Abstract:
Optimization models in metabolic engineering and systems biology typically focus on a single criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimizing for maximum yield under a given condition may force unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsically non-linear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. To overcome the numerical difficulties of handling multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to laboratory testing, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes ethanol production in the fermentation of Saccharomyces cerevisiae.
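As a hedged illustration of the epsilon constraint idea described above, the following sketch traces a set of Pareto points for a two-objective problem by maximizing one objective while constraining the other and sweeping the constraint level; the toy objectives, bounds, and the use of scipy.optimize are illustrative assumptions, not the paper's GMA model or solver.

```python
# Minimal epsilon-constraint sketch (toy objectives, not the paper's
# GMA model): maximize f1 subject to f2 >= eps, sweeping eps to trace
# a set of Pareto optimal alternatives.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def f1(x):  # toy criterion 1, e.g. a synthesis rate
    return x[0] * np.exp(-x[1])

def f2(x):  # toy criterion 2, e.g. a growth rate
    return x[1] * np.exp(-x[0])

pareto = []
for eps in np.linspace(0.05, 0.5, 10):
    res = minimize(lambda x: -f1(x), x0=[0.5, 0.5],
                   bounds=[(0.0, 1.0), (0.0, 1.0)],
                   constraints=NonlinearConstraint(f2, eps, np.inf))
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))  # one point per eps

print(pareto)  # each entry achieves a distinct trade-off
```

Each accepted solution corresponds to one Pareto point; a filter (e.g. discarding dominated or near-duplicate points) would then narrow the set down, as the abstract describes.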
Abstract:
In this work, we have studied the texturization process of (100) c-Si wafers using a low-concentration potassium hydroxide solution in order to obtain good-quality textured wafers. The optimization of the etching conditions has led to random but uniform pyramidal structures with good optical properties. Symmetric heterojunctions were then deposited by Hot-Wire CVD onto these substrates, and the Quasi-Steady-State PhotoConductance technique was used to measure passivation quality. Little degradation in the effective lifetime and implicit open-circuit voltage of these devices (< 20 mV) was observed in all cases. It is especially remarkable that, for large uniform pyramids, the open-circuit voltage is comparable to the values obtained on flat substrates.
Abstract:
Self-nanoemulsifying drug delivery systems (SNEDDS) of gemfibrozil were developed under a Quality by Design approach to improve dissolution and oral absorption. Preliminary screening was performed to select a proper combination of components. A Box-Behnken experimental design was employed as the statistical tool to optimize the formulation variables: X1 (Cremophor® EL), X2 (Capmul® MCM-C8), and X3 (lemon essential oil). Systems were assessed for visual characteristics (emulsification efficacy), turbidity, droplet size, polydispersity index, and drug release. Different pH media were also assayed for optimization. Following optimization, the values of the formulation components (X1, X2, and X3) were 32.43%, 29.73%, and 21.62%, respectively (16.22% gemfibrozil). Transmission electron microscopy demonstrated spherical droplet morphology. The SNEDDS release study was compared to commercial tablets. The optimized SNEDDS formulation of gemfibrozil showed a significant increase in dissolution rate compared to conventional tablets. Both formulations followed the Weibull mathematical release model, with a significant difference in the td parameter in favor of the SNEDDS. Model-independent parameters were also calculated, the dissolution efficiency being significantly higher for the SNEDDS, confirming that the developed SNEDDS formulation was superior to the commercial formulation with respect to the in vitro dissolution profile. This paper provides an overview of gemfibrozil SNEDDS as a promising alternative to improve oral absorption.
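For reference, a common parameterization of the Weibull release model mentioned above is sketched below; the exact notation, and the omission of a lag time, are assumptions, since the abstract does not write the model out. The td parameter compared between formulations is the time at which 63.2% of the dose has dissolved.

```latex
% Weibull release model (assumed parameterization, lag time omitted):
% M_t / M_inf = fraction of drug released at time t.
\[
  \frac{M_t}{M_\infty} \;=\; 1 - \exp\!\left[-\left(\frac{t}{t_d}\right)^{b}\right],
  \qquad
  \frac{M_{t_d}}{M_\infty} \;=\; 1 - e^{-1} \approx 0.632
\]
```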
Abstract:
Drug combinations can improve the efficacy of angiostatic cancer treatment and enable the reduction of side effects and drug resistance. Combining drugs is non-trivial due to the high number of possibilities. We applied a feedback system control (FSC) technique with a population-based stochastic search algorithm to navigate the large parametric space of nine angiostatic drugs at four concentrations and identify optimal low-dose drug combinations. This entailed an iterative approach of in vitro testing of endothelial cell viability followed by algorithm-based analysis. The optimal synergistic drug combination, containing erlotinib, BEZ-235 and RAPTA-C, was reached in a small number of iterations. Final drug combinations showed enhanced endothelial cell specificity and synergistically inhibited proliferation (p < 0.001), but not migration, of endothelial cells, and drove increased numbers of endothelial cells into apoptosis (p < 0.01). Successful translation of this drug combination was achieved in two preclinical in vivo tumor models. Tumor growth was inhibited synergistically and significantly (p < 0.05 and p < 0.01, respectively) using reduced drug doses as compared to optimal single-drug concentrations. Under the conditions applied, single-drug monotherapies had no or negligible activity in these models. We suggest that FSC can be used for the rapid identification of effective, reduced-dose, multi-drug combinations for the treatment of cancer and other diseases.
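The abstract does not detail the population-based stochastic search, so the sketch below uses a generic genetic-style loop over 9 drugs at 4 concentration levels as a stand-in; the `assay` placeholder and the selection/crossover/mutation scheme are assumptions, since in the real FSC loop each candidate combination is scored by an in vitro viability measurement.

```python
# Hedged sketch of a population-based stochastic search over drug
# combinations (9 drugs, 4 concentration levels each). The assay is
# simulated; in FSC it is replaced by wet-lab viability testing.
import random

N_DRUGS, N_LEVELS, POP, GENERATIONS = 9, 4, 20, 15

def assay(combo):
    # placeholder readout (lower = better); the real loop measures
    # endothelial cell viability in vitro for each combination
    return sum(level * random.uniform(0.5, 1.5) for level in combo)

population = [[random.randrange(N_LEVELS) for _ in range(N_DRUGS)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    ranked = sorted(population, key=assay)
    parents = ranked[:POP // 2]                 # keep the best half
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # crossover
        child[random.randrange(N_DRUGS)] = random.randrange(N_LEVELS)
        children.append(child)                  # one-gene mutation
    population = parents + children

print(min(population, key=assay))  # best low-dose combination found
```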
Abstract:
The aim of this Master's thesis was to optimize the secondary pre-flotation at the Stora Enso Sachsen GmbH mill. The amount of froth was used as the optimization variable, with ISO brightness, yields, and ash content as the optimization parameters. In addition, the effect of flotation consistency on the mill's other flotation processes was studied. The literature part examined the flotation event, the contact between the particles to be removed and the air bubbles, froth formation, and the most important deinking flotation cell designs in use. The experimental part investigated the effects of reducing flotation consistency on the mill's flotation processes in terms of ash content, ISO brightness, and the light scattering and light absorption coefficients. The optimization of the secondary pre-flotation was carried out by varying the amount of froth with three different injector sizes (8 mm, 10 mm and 13 mm), of which the middle one increases the volumetric flow of the stock by 30% in the form of air content. The purpose of the optimization was to increase the ISO brightness of the accepted stock fraction and to increase the fiber and total yields of the secondary pre-flotation. Reducing the flotation consistency had beneficial effects on ISO brightness and the light scattering coefficient in each flotation stage. Ash content decreased more in the secondary flotations at lower consistency, whereas in the primary flotations the effect was the opposite. The light absorption coefficient improved in the post-flotations at lower consistency, whereas in the pre-flotations the effect was the opposite. The optimization of the secondary pre-flotation resulted in an almost 5% higher ISO brightness in the accepted stock fraction. The total yield improved by 5% and the fiber yield by 2% as a result of the optimization. The increased yields generate annual savings as the production capacity of the deinking plant rises by 0.5%. In addition, the reduced reject stock flow from the secondary pre-flotation produces further savings at the mill's power plant.
Abstract:
Reinsurance is one of the tools that an insurer can use to mitigate underwriting risk and thereby control its solvency. In this paper, we focus on proportional reinsurance arrangements and examine several optimization and decision problems of the insurer with respect to the reinsurance strategy. To this end, we use as decision tools not only the probability of ruin but also the deficit at ruin, given that ruin occurs. The discounted penalty function (Gerber & Shiu, 1998) is employed to calculate, as particular cases, the probability of ruin and the moments and distribution function of the deficit at ruin given that ruin occurs.
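For orientation, the discounted penalty function of Gerber & Shiu (1998) referred to above is standardly written as follows; the notation is the usual risk-theory convention, not taken from this paper.

```latex
% Gerber-Shiu expected discounted penalty function (standard notation):
% T = time of ruin, U(T^-) = surplus just before ruin, |U(T)| = deficit
% at ruin, delta = force of interest, w = penalty function.
\[
  \phi(u) \;=\; \mathbb{E}\!\left[\, e^{-\delta T}\,
      w\!\big(U(T^{-}),\,|U(T)|\big)\,\mathbf{1}_{\{T<\infty\}}
      \;\middle|\; U(0)=u \,\right]
\]
```

Setting delta = 0 and w identically 1 recovers the probability of ruin, while suitable choices of w yield the moments and the distribution function of the deficit at ruin, which is how the paper uses it as a decision tool.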
Abstract:
We consider robust parametric procedures for univariate discrete distributions, focusing on the negative binomial model. The procedures are based on three steps: first, a very robust, but possibly inefficient, estimate of the model parameters is computed; second, this initial model is used to identify outliers, which are then removed from the sample; third, a corrected maximum likelihood estimator is computed with the remaining observations. The final estimate inherits the breakdown point (bdp) of the initial one, and its efficiency can be significantly higher. Analogous procedures were proposed in [1], [2], [5] for the continuous case. A comparison of the asymptotic bias of various estimates under point contamination points to the minimum Neyman's chi-squared disparity estimate as a good choice for the initial step. Various minimum disparity estimators were explored by Lindsay [4], who showed that the minimum Neyman's chi-squared estimate has a 50% bdp under point contamination; in addition, it is asymptotically fully efficient at the model. However, the finite-sample efficiency of this estimate under the uncontaminated negative binomial model is usually much lower than 100%, and the bias can be strong. We show that its performance can then be greatly improved using the three-step procedure outlined above. In addition, we compare the final estimate with the procedure described in
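A minimal sketch of the three-step procedure, assuming SciPy's negative binomial parameterization and a simple quantile rule for the outlier-removal step (the paper's correction of the final MLE for the trimming is omitted here):

```python
# Three-step robust fit for a negative binomial sample (sketch):
# (1) minimum Neyman chi-squared disparity estimate (very robust),
# (2) removal of observations flagged as outliers under that fit,
# (3) maximum likelihood on the cleaned sample (uncorrected here).
import numpy as np
from scipy import stats, optimize

def neyman_chi2(params, counts, n):
    r, p = params
    ks = np.arange(len(counts))
    expected = n * stats.nbinom.pmf(ks, r, p)
    # Neyman's chi-squared divides by the observed counts; empty cells
    # are floored at 1 in this sketch to avoid division by zero
    return np.sum((counts - expected) ** 2 / np.maximum(counts, 1))

def three_step_fit(sample, cutoff=0.995):
    """sample: 1-D integer NumPy array of counts."""
    n, counts = len(sample), np.bincount(sample)
    # Step 1: very robust, possibly inefficient, initial estimate
    init = optimize.minimize(neyman_chi2, x0=[1.0, 0.5],
                             args=(counts, n),
                             bounds=[(1e-3, 100.0), (1e-3, 1 - 1e-3)])
    r0, p0 = init.x
    # Step 2: drop observations in the extreme tail of the fitted model
    clean = sample[sample <= stats.nbinom.ppf(cutoff, r0, p0)]
    # Step 3: maximum likelihood on the remaining observations
    negloglik = lambda q: -np.sum(stats.nbinom.logpmf(clean, q[0], q[1]))
    final = optimize.minimize(negloglik, x0=[r0, p0],
                              bounds=[(1e-3, 100.0), (1e-3, 1 - 1e-3)])
    return final.x
```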
Abstract:
We prove the existence and local uniqueness of invariant tori on the verge of breakdown for two systems: the quasi-periodically driven logistic map and the quasi-periodically forced standard map. These systems exemplify two scenarios: the Heagy-Hammel route for the creation of strange non-chaotic attractors and the nonsmooth bifurcation of saddle invariant tori. Our proofs are computer-assisted and are based on a tailored version of the Newton-Kantorovich theorem. The proofs cannot be performed using classical perturbation theory because the two scenarios are very far from the perturbative regime, and fundamental hypotheses such as reducibility or hyperbolicity either do not hold or are very close to failing. Our proofs are based on a reliable computation of the invariant tori and a careful study of their dynamical properties, leading to the rigorous validation of the numerical results with our novel computational techniques.
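The tailored version used in the proofs is not stated in the abstract; for orientation, a standard form of the Newton-Kantorovich theorem reads as follows (constants, norms, and hypotheses are the textbook ones, not the paper's tailored ones).

```latex
% Standard Newton-Kantorovich theorem (textbook form): F maps between
% Banach spaces, DF is Lipschitz, x_0 is the initial guess.
\[
  \|DF(x_0)^{-1}\| \le \beta, \qquad
  \|DF(x_0)^{-1} F(x_0)\| \le \eta, \qquad
  \|DF(x) - DF(y)\| \le L\,\|x - y\|,
\]
\[
  h := \beta L \eta \le \tfrac12
  \;\Longrightarrow\;
  x_{k+1} = x_k - DF(x_k)^{-1}F(x_k)
  \text{ converges to the unique zero of } F \text{ near } x_0 .
\]
```

In a computer-assisted proof, rigorous bounds on beta, eta and L are computed around a numerically obtained torus (e.g. with interval arithmetic), so that verifying h <= 1/2 certifies the existence and local uniqueness of the true invariant object.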
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often referred to as super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The task is usually modeled as an inverse problem in which the regularization term plays a central role in reconstruction quality. The literature has been particularly drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques had been applied for optimization. In a preliminary work, it was shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal data and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms in the presence of residual registration errors, and we also present a novel strategy for automatically selecting the weight of the regularization term relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
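The abstract does not write out the variational model; a generic form of the TV-regularized super-resolution problem it refers to is the following, where the operator names (D for downsampling, B for blur, M_k for the motion of the k-th low-resolution stack) are assumptions for illustration.

```latex
% Generic TV-regularized super-resolution model (assumed notation):
% y_k = k-th observed LR stack, x = HR volume to recover.
\[
  \hat{x} \;=\; \arg\min_{x}\;
    \frac{\lambda}{2} \sum_{k} \big\| D\,B\,M_k\, x - y_k \big\|_2^2
    \;+\; \mathrm{TV}(x),
  \qquad
  \mathrm{TV}(x) = \int \lvert \nabla x \rvert
\]
```

The weight lambda balances data fidelity against regularization; the automatic selection strategy mentioned above amounts to choosing lambda without manual tuning.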
Abstract:
This thesis optimizes the cooling system of the medium-speed Wärtsilä 32 diesel engine and investigates the feasibility of using frequency converters with the circulation pumps, so that only the amount of water actually needed at any given time circulates in the system. The modeling for the study was carried out by combining previously used simple simulation models into a single model that includes both the flow and heat transfer calculations, which had earlier been modeled with separate programs. The thesis is part of a project carried out in cooperation with Mikko Pääkkönen, a researcher at the Department of Electrical Engineering. This thesis concentrates mainly on the fluid dynamics and heat transfer aspects, whereas the electrical engineering part is presented in Mikko Pääkkönen's report. Based on the results, it can be said that variable-frequency drives are worthwhile for the circulation pumps. By using flow control of the pumps, many components, such as thermostatic valves, can be left out of the cooling system. The simple circuit solutions that were modeled appear to work, at least at a general level. Research on pump control and on the cooling system configurations created in this project is worth continuing.
Abstract:
Lexical diversity measures are notoriously sensitive to variations of sample size and recent approaches to this issue typically involve the computation of the average variety of lexical units in random subsamples of fixed size. This methodology has been further extended to measures of inflectional diversity such as the average number of wordforms per lexeme, also known as the mean size of paradigm (MSP) index. In this contribution we argue that, while random sampling can indeed be used to increase the robustness of inflectional diversity measures, using a fixed subsample size is only justified under the hypothesis that the corpora that we compare have the same degree of lexematic diversity. In the more general case where they may have differing degrees of lexematic diversity, a more sophisticated strategy can and should be adopted. A novel approach to the measurement of inflectional diversity is proposed, aiming to cope not only with variations of sample size, but also with variations of lexematic diversity. The robustness of this new method is empirically assessed and the results show that while there is still room for improvement, the proposed methodology considerably attenuates the impact of lexematic diversity discrepancies on the measurement of inflectional diversity.
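A minimal sketch of the baseline method discussed above, assuming the corpus is given as (lexeme, wordform) token pairs: the MSP of a fixed-size random subsample is the number of distinct wordforms divided by the number of distinct lexemes, averaged over trials. This is the fixed-subsample-size baseline that the proposed method refines when lexematic diversity differs across corpora.

```python
# Mean size of paradigm (MSP) via random subsampling (sketch of the
# fixed-subsample-size baseline, not the paper's refined method).
import random

def msp_subsampled(tokens, subsample_size, n_trials=100, seed=0):
    """tokens: list of (lexeme, wordform) pairs for one corpus."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_trials):
        sub = rng.sample(tokens, subsample_size)
        lexemes = {lex for lex, _ in sub}              # distinct lexemes
        wordforms = {(lex, wf) for lex, wf in sub}     # distinct forms
        values.append(len(wordforms) / len(lexemes))
    return sum(values) / len(values)                   # average MSP
```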
Abstract:
Computed tomography (CT) is a modality of choice for the study of the musculoskeletal system for various indications including the study of bone, calcifications, internal derangements of joints (with CT arthrography), as well as periprosthetic complications. However, CT remains intrinsically limited by the fact that it exposes patients to ionizing radiation. Scanning protocols need to be optimized to achieve diagnostic image quality at the lowest radiation dose possible. In this optimization process, the radiologist needs to be familiar with the parameters used to quantify radiation dose and image quality. CT imaging of the musculoskeletal system has certain specificities, including the focus on high-contrast objects (e.g., in CT of bone or CT arthrography). These characteristics need to be taken into account when defining a strategy to optimize dose and when choosing the best combination of scanning parameters. In the first part of this review, we present the parameters used for the evaluation and quantification of radiation dose and image quality. In the second part, we discuss different strategies to optimize radiation dose and image quality at CT, with a focus on the musculoskeletal system and the use of novel iterative reconstruction techniques.
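As a pointer to the dose parameters discussed in the first part, the quantities most commonly reported on CT consoles are related as follows; this summary is added here for orientation rather than taken from the review, and the conversion coefficient k is region-dependent and tabulated.

```latex
% Common CT dose descriptors: CTDI_vol (mGy), scan length L (cm),
% dose-length product DLP (mGy.cm), effective dose E (mSv) with a
% tabulated, body-region-dependent coefficient k (mSv/(mGy.cm)).
\[
  \mathrm{DLP} \;=\; \mathrm{CTDI_{vol}} \times L,
  \qquad
  E \;\approx\; k \times \mathrm{DLP}
\]
```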