940 results for Evolutionary optimization methods
Abstract:
Dissertation submitted for the degree of Doctor of Mathematics
Abstract:
INTRODUCTION: The authors describe the human schistosomal granuloma in its late chronic phase from the morphological and evolutionary viewpoints. METHODS: The study was based on a histological analysis of two fragments obtained from a surgical biopsy of the peritoneum and large intestine of a 42-year-old patient with a pseudotumoral form mimicking peritoneal carcinomatosis, associated with the hepatointestinal form of schistosomiasis. RESULTS: Two hundred and three granulomas were identified in the pseudotumor and 27 in the intestinal biopsy, with similar morphological features, most in the late chronic phase, in fibrotic healing. A new structural classification was suggested for granulomas: zone 1 (internal), zone 2 (intermediate) and zone 3 (external). CONCLUSIONS: Regarding the granuloma as a whole, we may conclude that fibrosis is likely to be controlled by different and independent mechanisms in the three zones of the granuloma. Lamellar fibrosis in zone 3 seems to be controlled by matrix mesenchymal cells (fibroblasts and myoepithelial cells) and by inflammatory exudate cells (lymphocytes, plasmocytes, neutrophils, eosinophils). Annular fibrosis in zone 2, comprising a dense fibrous connective tissue with few cells in the advanced phase, would be controlled by the epithelioid cells that surround zone 1 in recent granulomas. In zone 1, replacing the periovular necrosis, an initially loose and delicate connective neoformation, housing stellate cells or cells with fusiform nuclei, gives way to a dense, paucicellular nodular connective tissue, probably induced by fibroblasts. In several granulomas one of the zones is missing and the granuloma is represented by the remaining two (Z3 and Z2, Z3 and Z1, or Z2 and Z1) and, ultimately, by a scar.
Abstract:
The objective of this work was to propose a way of using Tocher's clustering method to obtain a matrix analogous to the cophenetic matrix obtained for hierarchical methods, which would allow the calculation of a cophenetic correlation. To illustrate how the proposed cophenetic matrix is obtained, we used two dissimilarity matrices - one based on the generalized squared Mahalanobis distance and the other on the Euclidean distance - between 17 garlic cultivars, computed from six morphological characters. In essence, the proposal for obtaining the cophenetic matrix was to use the average distances within and between clusters after the clustering is performed. A function in the R language was proposed to compute the cophenetic matrix for Tocher's method, and the empirical distribution of the resulting correlation coefficient was briefly studied. For both dissimilarity measures, the cophenetic correlation values obtained for Tocher's method were higher than those obtained with the hierarchical methods (Ward's algorithm and average linkage - UPGMA). Clusterings produced by agglomerative hierarchical methods and by Tocher's method can therefore be compared using a common criterion: the correlation between the matrices of original and cophenetic distances.
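To make the construction concrete, here is a minimal sketch in Python (rather than the R function the abstract proposes) of the core idea, assuming the Tocher clusters have already been formed: every pair of items sharing a cluster receives the average within-cluster distance, and every pair in different clusters receives the average between-cluster distance.

```python
import numpy as np

def tocher_cophenetic(dist, clusters):
    """Cophenetic-like matrix for Tocher's method: the entry for a pair
    (i, j) is the mean within-cluster distance if they share a cluster,
    and the mean between-cluster distance otherwise."""
    n = dist.shape[0]
    labels = sorted(set(clusters))
    mean_d = {}
    for a in labels:
        ia = [i for i in range(n) if clusters[i] == a]
        for b in labels:
            ib = [j for j in range(n) if clusters[j] == b]
            if a == b:
                pairs = [dist[i, j] for i in ia for j in ib if i < j]
                mean_d[a, b] = np.mean(pairs) if pairs else 0.0
            else:
                mean_d[a, b] = np.mean([dist[i, j] for i in ia for j in ib])
    coph = np.zeros_like(dist)
    for i in range(n):
        for j in range(n):
            if i != j:
                coph[i, j] = mean_d[clusters[i], clusters[j]]
    return coph

def cophenetic_correlation(dist, coph):
    # Pearson correlation between the upper triangles of the original
    # and cophenetic distance matrices.
    iu = np.triu_indices(dist.shape[0], k=1)
    return np.corrcoef(dist[iu], coph[iu])[0, 1]
```

The cophenetic correlation computed this way gives Tocher's method the same score that Ward's algorithm or UPGMA receive from their dendrogram-based cophenetic matrices, which is what makes the comparison in the abstract possible.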
Abstract:
The Thesis presents a decision support framework with significant impact on the economic performance and viability of a hydropower company. The study addresses the short-term hydropower planning problem in the Nordic deregulated electricity market. The basics of the Nordic electricity market, trading mechanisms, hydropower system characteristics and production planning are presented in the Thesis, and the related modelling theory and optimization methods are covered as well. The Thesis provides a mixed integer linear programming model, applied in a successive linearization method, for optimal bidding and scheduling decisions in short-term hydropower system operation. A scenario-based deterministic approach is exploited for modelling uncertainty in market price and inflow. The Thesis proposes a calibration framework to examine the physical accuracy and economic optimality of the decisions suggested by the model. A calibration example is provided with data from a real hydropower system, using a commercial modelling application with the mixed integer linear programming solver CPLEX.
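As an illustration of the kind of model involved, the following is a deliberately tiny scheduling MILP for one reservoir over a few hours, written in Python with PuLP and the open-source CBC solver rather than the commercial application and CPLEX used in the thesis; all plant data, prices and inflows are invented, and bidding, head effects and scenario handling are omitted.

```python
# Toy short-term hydropower scheduling MILP (illustrative only).
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

T = range(6)                       # planning hours
price = [30, 45, 60, 55, 40, 25]   # EUR/MWh, assumed known in this scenario
inflow = [50, 50, 40, 40, 60, 60]  # volume units per hour, assumed known
v0, v_max, q_max, eff = 500.0, 1000.0, 120.0, 0.9

m = LpProblem("hydro_schedule", LpMaximize)
q = {t: LpVariable(f"q_{t}", 0, q_max) for t in T}      # turbine discharge
v = {t: LpVariable(f"v_{t}", 0, v_max) for t in T}      # reservoir volume
u = {t: LpVariable(f"u_{t}", cat=LpBinary) for t in T}  # unit on/off

m += lpSum(price[t] * eff * q[t] for t in T)            # maximize revenue
for t in T:
    prev = v0 if t == 0 else v[t - 1]
    m += v[t] == prev + inflow[t] - q[t]                # water balance
    m += q[t] <= q_max * u[t]                           # discharge only when on
    m += q[t] >= 0.2 * q_max * u[t]                     # minimum stable load

m.solve(PULP_CBC_CMD(msg=False))
print([q[t].value() for t in T])
```

The binary on/off variables are what make the problem mixed-integer; in a successive linearization scheme the nonlinear head-dependent production would additionally be re-linearized around the previous solution at each pass.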
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account, and based on the analysis a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant, so achieving the minimum current requires control of the reference. An on-line method to minimize the current by controlling the stator flux linkage reference is presented, and the control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved; an adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
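To show the estimation problem in miniature, here is a sketch of the voltage-model stator flux estimator with the pure integrator replaced by a first-order low-pass filter, one common way to curb integrator drift; this is a generic textbook variant, not the specific improved integration or adaptive correction developed in the thesis.

```python
import numpy as np

def voltage_model_flux(u_s, i_s, R_s, dt, tau=0.05):
    """Estimate the stator flux linkage from the voltage model,
    psi = integral(u_s - R_s * i_s) dt, with the pure integrator
    replaced by a first-order low-pass filter 1/(s + 1/tau) so that
    offset errors decay instead of accumulating. u_s and i_s are
    complex stator space vectors sampled at interval dt."""
    psi = np.zeros_like(u_s, dtype=complex)
    a = dt / (tau + dt)  # backward-Euler low-pass coefficient
    for k in range(1, len(u_s)):
        emf = u_s[k] - R_s * i_s[k]
        # Filtered integration: psi relaxes toward tau * emf.
        psi[k] = (1 - a) * psi[k - 1] + a * tau * emf
    return psi
```

At frequencies well above 1/tau the filter behaves like the ideal integrator, while at low frequencies it limits drift; the price is a magnitude and phase error near 1/tau, which is exactly what the improved and adaptively corrected estimators in the thesis aim to remove.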
Abstract:
Background: Optimization methods allow designing changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering. However, they are not directly applicable for investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection and result in well-adapted systems, we can hardly expect actual metabolic processes to be at the theoretical optimum that would result from an optimization analysis. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on iteratively solving reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems, which provide valid upper and lower bounds, respectively, on the global solution of the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasible parametric regions that allow a system to meet a set of physiological constraints that can be represented mathematically through algebraic equations. This technique applies the outer-approximation algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard others in which no feasible solution exists. As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can be used in other important applications such as the evaluation of parameter changes that are compatible with health and disease states.
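The bounding loop at the heart of such an outer-approximation scheme can be sketched as follows; solve_milp_relaxation and solve_nlp_local are hypothetical stand-ins for the MILP master and reduced NLP slave described above, not the authors' actual formulation.

```python
# Schematic outer-approximation loop for a minimization problem: the MILP
# master over accumulated linear cuts yields a valid lower bound, and the
# NLP slave started from the master's candidate yields a feasible point,
# hence a valid upper bound. Both solver callables are hypothetical.
def outer_approximation(solve_nlp_local, solve_milp_relaxation,
                        tol=1e-4, max_iter=50):
    cuts = []                                  # linear estimators gathered so far
    upper, lower = float("inf"), float("-inf")
    incumbent = None
    for _ in range(max_iter):
        lower, candidate = solve_milp_relaxation(cuts)   # master: lower bound
        obj, point, new_cuts = solve_nlp_local(candidate)  # slave: upper bound
        if obj < upper:
            upper, incumbent = obj, point
        cuts.extend(new_cuts)                  # tighten the master problem
        if upper - lower <= tol * max(1.0, abs(upper)):
            break                              # bounds meet: global optimum found
    return incumbent, lower, upper
```

The feasibility-region method in the abstract reuses this machinery, running the loop over subdivided parameter boxes and keeping only those boxes in which a feasible solution is found.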
Abstract:
This Master's thesis deals with methods for computing the loads, displacements and stresses acting in the bodies of an actuator or mechanical system modelled as a multibody system. The methods included in the work were selected on the basis of articles published during the 2000s in scientific journals on virtual design. The purpose of the work is to form a literature review of the properties and methodology of new computation methods that can, where needed, be applied to the needs of virtual design. Two of the presented methods are optimization methods (RBDO and ESL). The other methods deal with, among other things, strain reconstruction and the stresses imposed on components by sliding friction. The moving frame method applies the principle of the floating frame of reference, one of the methods is clearly based on substructuring, and in one the flexible behaviour of bodies is modelled by means of shape functions. In addition, three applied examples of load monitoring in industrial machines are included. The computation methods differ in character and applicability: the optimization methods are at their best in the further development of structures, whereas the other methods are suited either to modelling existing structures or as design tools for entirely new systems. This difference can be regarded as an advantage, since it allows the method best suited to one's own purposes to be chosen.
Abstract:
A worldwide novel method is presented for adapting interfaces between humans and machines to individual operators. By applying abstractions of evolutionary mechanisms such as selection, recombination and mutation in the EOGUI methodology (Evolutionary Optimization of Graphical User Interfaces), a computer-based implementation of the method for graphical user interfaces, in particular for industrial processes, can be provided. The evolutionary optimization incorporates both objective, i.e. measurable, quantities such as selection frequencies and selection times, and the operators' subjective impressions recorded by means of online questionnaires. In this way, the visualization of systems is adapted to the needs and preferences of individual operators. Within this work, the operator can choose, from four user interfaces of different abstraction levels for the example process MIPS (MIschungsProzess-Simulation, a mixing-process simulation), the objects that best support him in process control. Via the EOGUI algorithm, these objects are selected, modified if necessary, and combined into a new graphical user interface adapted to the operator. Using the MIPS process, experiments were carried out with the EOGUI methodology in order to examine the applicability, acceptance and effectiveness of the method for the control of industrial processes. The investigations largely show that the developed methodology for the evolutionary optimization of human-machine interfaces does adapt industrial process visualizations to the individual operator and improves process control.
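A minimal sketch of such an evolutionary loop follows, assuming a GUI variant is encoded as a list of display-object choices and its fitness blends measured interaction data with questionnaire scores; the encoding, fitness weights and rates are illustrative assumptions, not the EOGUI specifics.

```python
import random

# Toy evolutionary loop in the spirit of EOGUI: individuals are GUI
# variants encoded as lists of display-object choices. The fitness
# stand-ins below mimic blending objective measurements (selection
# times) with subjective questionnaire scores; both are assumptions.
OBJECT_CHOICES = [0, 1, 2, 3]     # e.g. four abstraction levels per slot
N_SLOTS, POP, GENERATIONS = 6, 12, 20

def fitness(gui):
    objective = -sum(gui)               # stand-in for measured selection times
    subjective = random.uniform(0, 1)   # stand-in for questionnaire score
    return 0.7 * objective + 0.3 * subjective

def evolve():
    pop = [[random.choice(OBJECT_CHOICES) for _ in range(N_SLOTS)]
           for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                    # selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SLOTS)      # recombination
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                child[random.randrange(N_SLOTS)] = random.choice(OBJECT_CHOICES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

In the real methodology each fitness evaluation is an interaction session with the operator, which is why the population and generation counts must stay far smaller than in conventional genetic algorithms.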
Abstract:
Optimization and harmonization are key factors for good performance in the chemical industry. BASF has developed a project called Accelerator, whose objective has been the harmonization and integration of supply chain processes worldwide. The basic inventory management process was left out of the project and had to be analysed. The inventory management department at BASF SE has been developing its own strategy for the definition of global manufacturing processes. This work reports on the phases of the strategy formulation and establishes some guidelines for the implementation phase taking place in 2012 and 2013.
Abstract:
Dynamic optimization methods have become increasingly important in economics in recent years. Among the dynamic optimization techniques employed, optimal control has emerged as the most powerful tool for theoretical economic analysis. However, there is a need to advance further and recognize that many dynamic economic processes depend, in addition, on some parameter other than time. One can think of relaxing the assumption of a representative (homogeneous) agent in macro- and micro-economic applications, allowing for heterogeneity among agents. For instance, the optimal adaptation and diffusion of a new technology over time may depend on the age of the person who adopted it. Economic models must therefore account for heterogeneity within the dynamic framework. This thesis intends to accomplish two goals. The first is to analyze and revise existing environmental policies that define the optimal management of natural resources over time, by taking account of the heterogeneity of environmental conditions. The thesis thus makes a policy-oriented contribution in the field of environmental policy by defining the changes necessary to transform an environmental policy based on the assumption of homogeneity into one that takes account of heterogeneity. The newly defined environmental policy will be more efficient, and likely also politically more acceptable, since it is tailored more specifically to the heterogeneous environmental conditions. In addition to this policy-oriented contribution, the thesis aims to make a methodological contribution by applying a new optimization technique for solving problems where the control variables depend on two or more arguments - the so-called two-stage solution approach - and by applying a numerical method - the Escalator Boxcar Train method - for solving distributed optimal control problems, i.e., problems where the state variables, in addition to the control variables, depend on two or more arguments. Chapter 2 presents a theoretical framework to determine the optimal resource allocation over time for the production of a good by heterogeneous producers who generate a stock externality, and derives government policies to modify the behavior of competitive producers in order to achieve optimality. Chapter 3 illustrates the method in a more specific context, integrating the aspects of quality and time and presenting a theoretical model that determines the socially optimal outcome over time and space for the problem of waterlogging in irrigated agricultural production. Chapter 4 concentrates on forestry resources and analyses the optimal selective-logging regime of a size-distributed forest.
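For the Escalator Boxcar Train ingredient, the core numerical idea can be sketched as follows: the continuous distribution over the structuring variable (here, size) is replaced by discrete cohorts transported along that axis, with a new boundary cohort collecting newborns at each step. The growth, mortality and fecundity functions below are illustrative placeholders, not the thesis's forestry model.

```python
import numpy as np

# Minimal Escalator Boxcar Train sketch for a size-structured population:
# the density is represented by cohorts (N_i, x_i); sizes grow, abundances
# decay with mortality, and births open a new cohort at the boundary.
def g(x):    return 0.5 * (1.0 - x)   # growth rate toward maximum size 1
def mu(x):   return 0.1 + 0.05 * x    # size-dependent mortality (assumed)
def beta(x): return 0.3 * x           # per-capita birth rate (assumed)

def ebt(T=50.0, dt=0.5):
    cohorts = [(100.0, 0.1)]          # (abundance N_i, mean size x_i)
    t = 0.0
    while t < T:
        births = sum(N * beta(x) for N, x in cohorts) * dt
        # Transport step: each cohort grows in size and shrinks in number.
        cohorts = [(N * np.exp(-mu(x) * dt), x + g(x) * dt)
                   for N, x in cohorts]
        if births > 0:
            cohorts.append((births, 0.0))  # boundary cohort of newborns
        t += dt
    return cohorts
```

In the distributed optimal control setting, the control (e.g. the logging intensity per size class) enters the cohort dynamics, turning the infinite-dimensional problem into a finite, if large, system of ODE-constrained decisions.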
Abstract:
Different optimization methods can be employed to optimize a numerical estimate for the match between an instantiated object model and an image. In order to take advantage of gradient-based optimization methods, perspective inversion must be used in this context. We show that convergence can be very fast by extrapolating to maximum goodness-of-fit with Newton's method. This approach is related to methods which either maximize a similar goodness-of-fit measure without use of gradient information, or else minimize distances between projected model lines and image features. Newton's method combines the accuracy of the former approach with the speed of convergence of the latter.
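The extrapolation step the abstract refers to is ordinary Newton iteration on the stationarity condition of the goodness-of-fit function; a generic sketch follows, with hypothetical grad_F and hess_F callables standing in for the paper's measure.

```python
import numpy as np

# Newton's method for locating a maximum of a goodness-of-fit function
# F(p) over pose parameters p: iterate p <- p - H^{-1} grad, where grad
# and H are the gradient and Hessian of F. Near a well-behaved maximum
# (H negative definite) the iteration converges quadratically, which is
# the speed advantage the abstract describes. grad_F and hess_F are
# assumed callables, not part of the paper's published interface.
def newton_maximize(p0, grad_F, hess_F, tol=1e-8, max_iter=20):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess_F(p), grad_F(p))
        p = p - step
        if np.linalg.norm(step) < tol:
            break
    return p
```

The perspective inversion mentioned in the abstract is what makes F differentiable in the pose parameters, so that grad_F and hess_F exist in the first place.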
Abstract:
Advances made over the past decade in structure determination from powder diffraction data are reviewed with particular emphasis on algorithmic developments and the successes and limitations of the technique. While global optimization methods have been successful in the solution of molecular crystal structures, new methods are required to make the solution of inorganic crystal structures more routine. The use of complementary techniques such as NMR to assist structure solution is discussed and the potential for the combined use of X-ray and neutron diffraction data for structure verification is explored. Structures that have proved difficult to solve from powder diffraction data are reviewed and the limitations of structure determination from powder diffraction data are discussed. Furthermore, the prospects of solving small protein crystal structures over the next decade are assessed.
Abstract:
Parameters to be determined in a least squares refinement calculation to fit a set of observed data may sometimes usefully be 'predicated' to values obtained from some independent source, such as a theoretical calculation. An algorithm for achieving this in a least squares refinement calculation is described, which leaves the operator in full control of the weight that he may wish to attach to the predicate values of the parameters.
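One standard way to realize this, consistent with the description though not necessarily the paper's exact algorithm, is to append each predicate value as an extra weighted pseudo-observation, so that the operator-chosen weight controls how strongly the parameter is pulled toward its predicate.

```python
import numpy as np

# Sketch of 'predicating' parameters in a least squares refinement: each
# predicate value p_j with weight w_j is appended as an extra observation
# row enforcing x_j ~= p_j, adding w_j * (x_j - p_j)^2 to the objective.
# A, y, predicates and weights below are illustrative placeholders.
def predicated_lsq(A, y, predicates, weights):
    """A: (m, n) design matrix; y: (m,) observations;
    predicates: (n,) target values from the independent source;
    weights: (n,) predicate weights (0 leaves a parameter free,
    large values pin it to its predicate)."""
    w = np.sqrt(np.asarray(weights, dtype=float))
    A_aug = np.vstack([A, np.diag(w)])          # one pseudo-row per parameter
    y_aug = np.concatenate([y, w * predicates])
    x, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return x
```

With all weights at zero this reduces to the ordinary refinement, which is exactly the "full control" property the abstract emphasizes.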
Abstract:
A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods, using a new conceptual categorization that reflects each method's strategy for defining types. Methods using predefined types include manual and threshold-based classifications, while methods deriving types from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. In order to allow direct comparisons between the methods, the circulation input data and the methods' configurations were harmonized to produce a subset of standard catalogs for the automated methods. The harmonization covers the data source, the climatic parameters used, the classification period, and the spatial domain and number of types. Frequency-based characteristics of the resulting catalogs are presented, including variation of class sizes, persistence, seasonal and inter-annual variability, and trends of the annual frequency time series. The methodological concept of the classifications is partly reflected in these properties of the resulting catalogs. It is shown that, compared to automated methods, subjective classifications produce types with higher persistence, inter-annual variation and long-term trends. Among the automated classifications, optimization methods show a tendency toward longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role in the properties of the resulting classification as the algorithm used for type definition and assignment.
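Two of the frequency-based catalog characteristics mentioned above are simple to compute from a daily type series; the sketch below, with a made-up stand-in for a COST 733 catalog, shows class sizes as relative frequencies and persistence as the mean run length of consecutive days with the same type.

```python
import numpy as np
from itertools import groupby

# Illustrative daily circulation-type catalog (invented, not COST 733 data).
catalog = ["W", "W", "NW", "A", "A", "A", "W", "C", "C", "W"]

# Class sizes: relative frequency of each type in the catalog.
types, counts = np.unique(catalog, return_counts=True)
class_sizes = dict(zip(types, counts / len(catalog)))

# Persistence: mean length of runs of consecutive identical types.
run_lengths = [len(list(run)) for _, run in groupby(catalog)]
persistence = float(np.mean(run_lengths))

print(class_sizes, persistence)
```

Computed over full multi-decade catalogs, statistics of this kind underlie the paper's comparison of subjective versus automated methods.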