940 results for Evolutionary optimization methods
Abstract:
Microalgae have many applications, such as biodiesel production or use as a food supplement. Depending on the application, certain fractions of the biochemical composition (proteins, carbohydrates, and lipids) must be optimized, so samples obtained under different culture conditions must be analyzed in order to compare the content of these fractions. Traditional methods, however, require lengthy analytical procedures with long sample turnaround times. Results for the biochemical composition of Nannochloropsis oculata samples with different protein, carbohydrate, and lipid contents obtained by conventional analytical methods were compared with those obtained by thermogravimetry (TGA) and by a Pyroprobe device coupled to a gas chromatograph with a mass spectrometer detector (Py–GC/MS), showing a clear correlation. These results suggest that these techniques are potentially applicable as fast and simple methods for qualitatively comparing the biochemical composition of microalgal samples.
Abstract:
Modern compilers offer a large and ever-increasing number of options that modify the features and behavior of a compiled program. Many of these options go unused because exploiting them requires comprehensive knowledge of both the underlying architecture and the compiler's internal processes. In this context, it is common to have not a single design goal but a more complex set of objectives, and the dependencies between different goals are difficult to infer a priori. This paper proposes a strategy for tuning the compilation of any given application by automatically varying the compilation options through multi-objective optimization and evolutionary computation driven by the NSGA-II algorithm. This makes it possible to find compilation options that optimize several objectives simultaneously. The advantages of the proposal are illustrated with a case study based on the well-known Apache web server. The strategy found improvements of up to 7.5% in context switches and up to 27% in L2 cache misses, and it also uncovered the most important bottlenecks affecting application performance.
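To illustrate the kind of search described above, here is a minimal sketch of Pareto-based evolutionary search over a bit-vector of compiler flags. It is not the paper's NSGA-II implementation: the flag list, population sizes, and the two synthetic objectives (standing in for measured context switches and L2 cache misses) are all invented for demonstration.

```python
import random

random.seed(7)

# Hypothetical flag set; the paper tunes real compiler options.
FLAGS = ["-funroll-loops", "-fomit-frame-pointer", "-finline-functions",
         "-ftree-vectorize", "-fprefetch-loop-arrays", "-fno-plt"]

def evaluate(bits):
    """Stand-in for building and benchmarking the application: the
    paper measures real objectives (context switches, L2 cache
    misses); here two conflicting synthetic costs are returned."""
    enabled = sum(bits)
    cache_misses = 1000 - 90 * enabled + random.uniform(-20, 20)
    ctx_switches = 200 + 25 * enabled + random.uniform(-10, 10)
    return (cache_misses, ctx_switches)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(pop):
    """Individuals whose objectives no other individual dominates."""
    return [p for p in pop
            if not any(dominates(q[1], p[1]) for q in pop if q is not p)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.15):
    return [b ^ (random.random() < rate) for b in bits]

POP = 20
pop = [(bits, evaluate(bits))
       for bits in ([random.randint(0, 1) for _ in FLAGS] for _ in range(POP))]

for gen in range(30):
    parents = pareto_front(pop)
    children = []
    while len(children) < POP:
        child = mutate(crossover(random.choice(parents)[0],
                                 random.choice(parents)[0]))
        children.append((child, evaluate(child)))
    pop = pareto_front(pop + children)[:2 * POP]  # keep the joint front

for bits, (miss, ctx) in sorted(pareto_front(pop), key=lambda p: p[1]):
    chosen = " ".join(f for f, on in zip(FLAGS, bits) if on)
    print(f"~{miss:.0f} misses, ~{ctx:.0f} switches: {chosen or '(baseline)'}")
```

The full NSGA-II adds non-dominated sorting into ranked fronts and crowding-distance selection; the simplified front-keeping loop above is only meant to show the shape of the search.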
Abstract:
This paper studies stability properties of linear optimization problems with finitely many variables and an arbitrary number of constraints, when only the left-hand-side coefficients can be perturbed. The coefficients of the constraints are assumed to be continuous functions of an index ranging over a certain compact Hausdorff topological space, and these properties are preserved under the admissible perturbations. In more detail, the paper analyzes the continuity properties of the feasible set, the optimal set, and the optimal value, as well as the preservation of desirable properties (boundedness, uniqueness) of the feasible and optimal sets under sufficiently small perturbations.
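For concreteness, the class of problems described can be written as follows; this is a minimal formalization consistent with the abstract, and the symbols c, a_t, b_t are assumed rather than taken from the paper.

```latex
\begin{align*}
  (P)\qquad & \min_{x \in \mathbb{R}^n} \; c^{\top} x \\
  & \text{s.t. } a_t^{\top} x \ge b_t, \quad \text{for all } t \in T,
\end{align*}
% where T is a compact Hausdorff index space, t \mapsto a_t and
% t \mapsto b_t are continuous, and only the left-hand-side map
% t \mapsto a_t is perturbed, with perturbations that keep it
% continuous on T.
```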
Abstract:
Superstructure approaches address the difficult problem of the rigorous economic design of a distillation column, but they require complex initialization procedures and are hard to solve, which is why they have not been used extensively. In this work, we present a methodology for the rigorous optimization of chemical processes implemented in a commercial simulator, using surrogate models based on kriging interpolation. Several examples were studied; in this paper we optimize a superstructure for a non-sharp separation to show the efficiency and effectiveness of the method. Notably, it is possible to obtain sufficiently accurate surrogate models with up to seven degrees of freedom.
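The surrogate-based loop behind such a method can be sketched in a few lines. This is a generic kriging (Gaussian-process) illustration, not the authors' implementation: the one-dimensional expensive_simulation, the bounds, and the iteration count are invented, and scikit-learn's GaussianProcessRegressor stands in for a kriging model fitted to simulator outputs.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for one rigorous simulator run (e.g. the total
    annualized cost of a column configuration)."""
    return float(np.sin(3 * x[0]) + 0.5 * x[0] ** 2)

bounds = [(-2.0, 2.0)]
X = np.linspace(-2, 2, 6).reshape(-1, 1)              # initial samples
y = np.array([expensive_simulation(x) for x in X])

for it in range(10):
    # Fit the kriging (Gaussian-process) surrogate to all data so far.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X, y)
    # Optimize the cheap surrogate mean instead of the simulator.
    res = minimize(lambda z: float(gp.predict(z.reshape(1, -1))[0]),
                   x0=X[np.argmin(y)], bounds=bounds)
    # Verify the candidate with one true evaluation, then refit.
    X = np.vstack([X, res.x.reshape(1, -1)])
    y = np.append(y, expensive_simulation(res.x))

best = np.argmin(y)
print(f"best x = {X[best][0]:.4f}, cost = {y[best]:.4f}")
```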
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserve the linkage disequilibrium deriving from recent generations of immigrants and reflect the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking these aspects into account was proposed and its efficiency evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using a large sample size (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
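The basic resampling idea, before the paper's refinements, can be illustrated as follows. This naive sketch simulates resident genotypes from estimated allele frequencies and takes a low quantile of the resulting log-likelihood distribution as the critical value for flagging putative F0 immigrants; it deliberately ignores the linkage-disequilibrium and sampling-variance issues that the paper's improved method addresses, and all frequencies and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical allele frequencies for 10 loci with 4 alleles each,
# standing in for frequencies estimated from the sampled population.
n_loci, n_alleles = 10, 4
freqs = rng.dirichlet(np.ones(n_alleles), size=n_loci)

def simulate_genotype(freqs):
    """Draw a resident multilocus genotype under Hardy-Weinberg."""
    return [(rng.choice(n_alleles, p=f), rng.choice(n_alleles, p=f))
            for f in freqs]

def log_likelihood(genotype, freqs):
    """Log-likelihood that the genotype originated in this population."""
    ll = 0.0
    for locus, (a1, a2) in enumerate(genotype):
        p, q = freqs[locus, a1], freqs[locus, a2]
        ll += np.log(2 * p * q) if a1 != a2 else np.log(p * p)
    return ll

# Resample residents to build the null likelihood distribution, and
# take its 1% quantile as the critical value for flagging immigrants.
null = np.array([log_likelihood(simulate_genotype(freqs), freqs)
                 for _ in range(10_000)])
critical = np.quantile(null, 0.01)

test = simulate_genotype(freqs)
flagged = log_likelihood(test, freqs) < critical
print(f"critical value = {critical:.2f}, flagged as F0 immigrant: {flagged}")
```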
Abstract:
DNA microarrays are a powerful tool for measuring the levels of a mixed population of nucleic acids at one time, which has had a great impact on many aspects of life sciences research. In order to distinguish nucleic acids of very similar composition by hybridization, it is necessary to design microarray probes with high specificity and sensitivity. Highly specific probes correspond to probes with unique DNA sequences, whereas highly sensitive probes correspond to those with melting temperatures within a desired range and no secondary structure. The selection of these probes from a set of functional DNA sequences (exons) constitutes a computationally expensive discrete non-linear search problem. We delegate the search task to a simple yet effective Evolution Strategy algorithm. The computational efficiency is also greatly improved by making use of an available bioinformatics tool.
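A toy version of such an Evolution Strategy is sketched below. The Wallace-rule melting temperature and the crude hairpin penalty are illustrative stand-ins for the real sensitivity and specificity criteria, and a (1+lambda) ES over the probe's start position replaces whatever encoding the authors actually use.

```python
import random

random.seed(1)
BASES = "ACGT"
exon = "".join(random.choice(BASES) for _ in range(500))  # toy exon
PROBE_LEN, TM_WINDOW = 25, (52.0, 58.0)
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def melting_temp(seq):
    """Wallace rule for short oligos: Tm = 2(A+T) + 4(G+C)."""
    at = seq.count("A") + seq.count("T")
    return 2 * at + 4 * (len(seq) - at)

def fitness(start):
    """Penalize Tm outside the window plus crude self-complementarity
    (a stand-in for secondary-structure and specificity checks)."""
    probe = exon[start:start + PROBE_LEN]
    tm = melting_temp(probe)
    lo, hi = TM_WINDOW
    tm_pen = max(0.0, lo - tm) + max(0.0, tm - hi)
    hairpin = sum(probe[i] == COMP[probe[-1 - i]]
                  for i in range(PROBE_LEN // 2))
    return -(tm_pen + hairpin)

# (1 + lambda) Evolution Strategy over the discrete start position.
parent = random.randrange(len(exon) - PROBE_LEN)
step = 20
for gen in range(200):
    offspring = [min(len(exon) - PROBE_LEN,
                     max(0, parent + random.randint(-step, step)))
                 for _ in range(8)]
    best = max(offspring + [parent], key=fitness)
    if best == parent:
        step = max(1, step - 1)  # shrink the step on stagnation
    parent = best

probe = exon[parent:parent + PROBE_LEN]
print("probe:", probe, " Tm:", melting_temp(probe))
```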
Abstract:
Background: Renal transplant recipients were noted to appear cushingoid while on low doses of steroid as part of triple-therapy immunosuppression with cyclosporin A (CsA), prednisolone, and azathioprine. Methods: The study group comprised adult renal transplant recipients with stable graft function who had received their renal allograft at least 1 year previously (43 studies undertaken in 22 men and 20 women), with a median daily prednisone dose of 7 mg (range 3-10). The control group was healthy nontransplant subjects [median dose 10 mg (10-30)]. Prednisolone bioavailability was measured using a limited 6-hour area under the curve (AUC), with prednisolone measured by a specific HPLC assay. Results: The median prednisolone AUC/mg dose for all transplant recipients was approximately 50% greater than in the control group (316 nmol·h/L/mg prednisolone versus 218). The AUC was significantly higher in female recipients (median 415 versus 297 for men) and in recipients receiving cyclosporin (348 versus 285). The highest AUC was in women on estrogen supplements who were receiving cyclosporin (median 595). A significantly higher proportion of patients on triple therapy had steroid side effects compared with those on steroid and azathioprine (17/27 versus 4/15), more women than men had side effects (14/16 versus 7/22), and the AUC/mg prednisone was greater in those with side effects than in those without (median 377 versus 288 nmol·h/L/mg). Discussion: The results are consistent with the hypothesis that CsA increases the bioavailability of prednisolone, most likely through inhibition of P-glycoprotein. The increased exposure to steroid increased the steroid side-effect profile in the majority of patients. Because the major contributor to the AUC is the maximum postdose concentration, it may be possible to use single-point monitoring (2 hours postdose) for routine clinical studies.
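For readers unfamiliar with the dose-normalized metric used here, a limited AUC is simply a trapezoidal integral of the concentration-time profile divided by the dose. The sketch below uses invented concentrations, not data from the study.

```python
import numpy as np

# Hypothetical 6-hour prednisolone profile (all values invented).
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])              # hours post-dose
c = np.array([0.0, 600.0, 1100.0, 1400.0, 700.0, 250.0])  # nmol/L
dose_mg = 5.0

# Trapezoidal limited AUC over 0-6 h, in nmol*h/L.
auc = float(np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2))
print(f"AUC/mg dose = {auc / dose_mg:.0f} nmol*h/L per mg")
```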
Abstract:
The research literature on metaheuristic and evolutionary computation has proposed a large number of algorithms for the solution of challenging real-world optimization problems. It is often not possible to study the performance of these algorithms theoretically unless significant assumptions are made about either the algorithm itself or the problems to which it is applied, or both. As a consequence, metaheuristics are typically evaluated empirically using a set of test problems. Unfortunately, relatively little attention has been given to the development of methodologies and tools for the large-scale empirical evaluation and/or comparison of metaheuristics. In this paper, we propose a landscape (test-problem) generator that can be used to generate optimization problem instances for continuous, bound-constrained optimization problems. The landscape generator is parameterized by a small number of parameters, and the values of these parameters have a direct and intuitive interpretation in terms of the geometric features of the landscapes they produce. An experimental space is defined over algorithms and problems, via a tuple of parameters for any specified algorithm and problem class (here determined by the landscape generator). An experiment is then clearly specified as a point in this space, in a way that is analogous to other areas of experimental algorithmics, and more generally to experimental design. Experimental results are presented demonstrating the use of the landscape generator. In particular, we analyze some simple continuous estimation-of-distribution algorithms and gain new insights into their behavior using the landscape generator.
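A minimal landscape generator in the spirit described might look like the following. It is a guess at the construction, not the paper's specification: each landscape is taken to be the maximum of randomly placed Gaussian components, whose number, widths, and heights serve as the intuitive geometric parameters.

```python
import numpy as np

def make_landscape(n_components=10, dim=2, rng=None):
    """Build a bound-constrained test landscape on [0, 1]^dim as the
    max of randomly placed Gaussian components; their number, widths
    and heights are the intuitive 'geometric' parameters."""
    rng = rng if rng is not None else np.random.default_rng()
    centers = rng.uniform(0, 1, size=(n_components, dim))
    widths = rng.uniform(0.05, 0.2, size=n_components)
    heights = rng.uniform(0.5, 1.0, size=n_components)
    heights[0] = 1.0  # component 0 carries the known global optimum

    def f(x):
        d2 = ((centers - x) ** 2).sum(axis=1)
        return float(np.max(heights * np.exp(-d2 / (2 * widths ** 2))))
    return f, centers[0]

f, x_star = make_landscape(rng=np.random.default_rng(42))
print("value at the known optimum:", f(x_star))   # 1.0 by construction
print("value elsewhere:", f(np.array([0.3, 0.7])))
```

Because the optimum's location and value are known by construction, any algorithm's progress on a generated instance can be measured exactly, which is what makes such generators useful for large-scale empirical studies.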
Abstract:
In empirical studies of evolutionary algorithms, it is usually desirable to evaluate and compare algorithms using as many different parameter settings and test problems as possible, in order to obtain a clear and detailed picture of their performance. Unfortunately, the total number of experiments required may be very large, which often makes such research work computationally prohibitive. In this paper, the application of a statistical method called racing is proposed as a general-purpose tool for reducing the computational requirements of large-scale experimental studies of evolutionary algorithms. Experimental results are presented showing that racing typically requires only a small fraction of the cost of an exhaustive experimental study.
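The racing idea can be sketched as follows: interleave runs of all surviving configurations and discard a configuration as soon as a statistical test says it is worse than the current best, so weak candidates stop consuming experiments early. The four candidate configurations, their noise model, and the use of a plain t-test (published racing methods use more careful tests) are all illustrative.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical EA configurations with different true mean solution
# qualities (lower is better); each run is a noisy measurement.
true_mean = {"cfg_A": 10.0, "cfg_B": 10.2, "cfg_C": 12.0, "cfg_D": 14.0}

def run(cfg):
    return rng.normal(true_mean[cfg], 2.0)     # one experiment

alive = {c: [] for c in true_mean}
step = 0
while len(alive) > 1 and step < 100:
    for cfg in alive:
        alive[cfg].append(run(cfg))            # one more run per survivor
    step += 1
    if step < 6:
        continue                               # collect a few runs first
    best = min(alive, key=lambda c: np.mean(alive[c]))
    for cfg in [c for c in alive if c != best]:
        stat, p = ttest_ind(alive[cfg], alive[best])
        if p < 0.05 and np.mean(alive[cfg]) > np.mean(alive[best]):
            del alive[cfg]                     # raced out: stop testing it

print("survivors:", {c: round(float(np.mean(r)), 2) for c, r in alive.items()},
      "after", step, "rounds")
```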
Abstract:
Multiresolution (or multi-scale) techniques make it possible for Web-based GIS applications to access large datasets. The performance of such systems relies on data transmission over the network and on multiresolution query processing. In the literature, the latter has received little research attention so far, and existing methods are not capable of processing large datasets. In this paper, we aim to improve multiresolution query processing in an online environment. A cost model for such queries is proposed first, followed by three strategies for its optimization. Significant theoretical improvement is observed in comparison with available methods. The application of these strategies is also discussed, and similar performance gains can be expected if they are implemented in online GIS applications.
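The abstract does not reproduce its cost model, so the sketch below is purely hypothetical: it models the total cost of a multiresolution query as server-side processing time plus network transfer time, both shrinking as coarser resolution levels retain fewer vertices. All constants are invented.

```python
# All constants below are invented for illustration only.
def transfer_seconds(n_vertices, bandwidth_bps=2e6, bytes_per_vertex=24):
    """Network cost of shipping the retrieved geometry to the client."""
    return n_vertices * bytes_per_vertex * 8 / bandwidth_bps

def query_seconds(n_vertices, per_vertex_s=2e-6, overhead_s=0.05):
    """Server-side cost of selecting vertices at the requested level."""
    return overhead_s + n_vertices * per_vertex_s

def total_seconds(level, full_resolution=1_000_000):
    # Assume each coarser level keeps roughly a quarter of the vertices.
    n = full_resolution // 4 ** level
    return query_seconds(n) + transfer_seconds(n)

for level in range(5):
    print(f"resolution level {level}: ~{total_seconds(level):.2f} s")
```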
Abstract:
The optimization of resource allocation in sparse networks with real variables is studied using methods of statistical physics. Efficient distributed algorithms are devised on the basis of insight gained from the analysis and are examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
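As a generic illustration of distributed resource balancing on a sparse network (not the paper's algorithm, which is derived from the statistical-physics analysis), the sketch below repeatedly shifts a fraction of the local surplus across each edge, using only neighbor-to-neighbor information, until node imbalances even out.

```python
import random

random.seed(3)

# Hypothetical sparse network: a ring plus a few random shortcuts.
n = 20
edges = [(i, (i + 1) % n) for i in range(n)]
edges += [tuple(random.sample(range(n), 2)) for _ in range(5)]
resource = [random.uniform(-1.0, 1.0) for _ in range(n)]  # surplus/deficit

# Diffusive balancing: each sweep moves a fraction of the local
# imbalance across every edge; the update is fully distributed.
for sweep in range(200):
    for i, j in edges:
        delta = 0.2 * (resource[i] - resource[j])
        resource[i] -= delta
        resource[j] += delta

print(f"residual imbalance: {max(resource) - min(resource):.4f}")
```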