947 results for nonlinear optimization problems
Abstract:
One major methodological problem in analysis of sequence data is the determination of costs from which distances between sequences are derived. Although this problem is currently not optimally dealt with in the social sciences, it has some similarity with problems that have been solved in bioinformatics for three decades. In this article, the authors propose an optimization of substitution and deletion/insertion costs based on computational methods. The authors provide an empirical way of determining costs for cases, frequent in the social sciences, in which theory does not clearly promote one cost scheme over another. Using three distinct data sets, the authors tested the distances and cluster solutions produced by the new cost scheme in comparison with solutions based on cost schemes associated with other research strategies. The proposed method performs well compared with other cost-setting strategies, while it alleviates the justification problem of cost schemes.
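The cost-derivation idea can be sketched in a few lines: substitution costs are computed from observed transition rates between states, so that substitutions between states that rarely follow one another in the data become expensive. The `2 - p(a->b) - p(b->a)` formula and all names below are a common data-driven scheme offered only as an illustration, not the authors' exact procedure.

```python
# Sketch: derive substitution costs from observed transition rates between
# states in a set of sequences. Illustrative only, not the authors' code.

from collections import defaultdict

def transition_rates(sequences):
    """Estimate p(b | a): probability that state a at time t is followed by b."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    rates = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        rates[a] = {b: n / total for b, n in nxt.items()}
    return rates

def substitution_cost(rates, a, b):
    """Common heuristic: 2 - p(a->b) - p(b->a); rare transitions cost more."""
    if a == b:
        return 0.0
    return 2.0 - rates.get(a, {}).get(b, 0.0) - rates.get(b, {}).get(a, 0.0)

# Toy sequences over two states, e.g. E = employed, U = unemployed.
seqs = ["EEUUEE", "EEEUEE", "UUEEEE"]
r = transition_rates(seqs)
print(substitution_cost(r, "E", "U"))
```

Costs obtained this way feed directly into an optimal-matching distance, which is then used for clustering as in the article.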
Abstract:
The highway departments of all fifty states were contacted to find the extent of application of integral abutment bridges, to survey the different guidelines used for analysis and design of integral abutment bridges, and to assess the performance of such bridges through the years. The variation in design assumptions and length limitations among the various states in their approach to the use of integral abutments is discussed. The problems associated with lateral displacements at the abutment, and the solutions developed by the different states for most of the ill effects of abutment movements, are summarized in the report. An algorithm based on a state-of-the-art nonlinear finite element procedure was developed and used to study piling stresses and pile-soil interaction in integral abutment bridges. The finite element idealization consists of beam-column elements with geometric and material nonlinearities for the pile and nonlinear springs for the soil. An idealized soil model (a modified Ramberg-Osgood model) was introduced in this investigation to obtain the tangent stiffness of the nonlinear spring elements. Several numerical examples are presented in order to establish the reliability of the finite element model and the computer software developed. Three problems with known analytical solutions were solved first and the numerical results compared with theory. A 40-ft H pile (HP 10 X 42) in six typical Iowa soils was then analyzed by first applying a horizontal displacement (to simulate bridge motion) with no rotation at the top and then applying a vertical load V incrementally until failure occurred. Based on the numerical results, the failure mechanisms were generalized to be of two types: (a) lateral type failure and (b) vertical type failure. It appears that most piles in Iowa soils (sand, soft clay and stiff clay) failed when the applied vertical load reached the ultimate soil frictional resistance (vertical type failure).
In very stiff clays, however, the lateral type failure occurs before vertical type failure because the soil is sufficiently stiff to force a plastic hinge to form in the pile as the specified lateral displacement is applied. Preliminary results from this investigation showed that the vertical load-carrying capacity of H piles is not significantly affected by lateral displacements of 2 inches in soft clay, stiff clay, loose sand, medium sand and dense sand. However, in very stiff clay (average blow count of 50 from standard penetration tests), it was found that the vertical load-carrying capacity of the H pile is reduced by about 50 percent for 2 inches of lateral displacement and by about 20 percent for 1 inch of lateral displacement. On the basis of the preliminary results of this investigation, the 265-foot length limitation in Iowa for integral abutment concrete bridges appears to be very conservative.
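The nonlinear soil springs described above can be sketched as follows. This is a generic Ramberg-Osgood-type p-y curve, not necessarily the report's exact modified formulation; `k_h` (initial stiffness), `p_u` (ultimate resistance) and `n` (shape parameter), and the numbers used, are illustrative assumptions.

```python
# Sketch of a Ramberg-Osgood-type nonlinear soil spring, as commonly used
# in p-y analyses of laterally loaded piles.

def soil_resistance(y, k_h, p_u, n):
    """Lateral soil resistance p at deflection y.

    Starts with slope k_h at y = 0 and saturates toward p_u for large |y|.
    """
    u = abs(k_h * y / p_u)
    sign = 1.0 if y >= 0 else -1.0
    return sign * k_h * abs(y) / (1.0 + u**n) ** (1.0 / n)

def tangent_stiffness(y, k_h, p_u, n):
    """dp/dy, the tangent stiffness fed to the nonlinear spring element."""
    u = abs(k_h * y / p_u)
    return k_h / (1.0 + u**n) ** ((n + 1.0) / n)

# At zero deflection the tangent stiffness equals k_h; at large deflection
# the resistance approaches the ultimate value p_u.
print(tangent_stiffness(0.0, k_h=100.0, p_u=10.0, n=2.0))   # 100.0
print(soil_resistance(10.0, k_h=100.0, p_u=10.0, n=2.0))    # ~10
```

In an incremental finite element solution, the tangent stiffness is re-evaluated at each load step, which is how the soil nonlinearity enters the pile analysis.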
Abstract:
The highway departments of the states which use integral abutments in bridge design were contacted in order to study the extent of integral abutment use in skewed bridges and to survey the different guidelines used for analysis and design of integral abutments in skewed bridges. The variation in design assumptions and pile orientations among the various states in their approach to the use of integral abutments on skewed bridges is discussed. The problems associated with the treatment of the approach slab, backfill, and pile cap, and the reasons for using different pile orientations, are summarized in the report. An algorithm based on a state-of-the-art nonlinear finite element procedure previously developed by the authors was modified and used to study the influence of different factors on the behavior of piles in integral abutment bridges. An idealized integral abutment was introduced by assuming that the pile is rigidly cast into the pile cap and that the approach slab offers no resistance to lateral thermal expansion. Passive soil and shear resistance of the cap are neglected in design. A 40-foot H pile (HP 10 X 42) in six typical Iowa soils was analyzed for a fully restrained pile head and a pinned pile head. According to the numerical results, the maximum safe length for a fully restrained pile head is one-half the maximum safe length for a pinned pile head. If the pile head is partially restrained, the maximum safe length will lie between the two limits. The numerical results from an investigation of the effect of predrilled oversized holes indicate that if the predrilled oversized hole extends at least 4 feet below the ground, the vertical load-carrying capacity of the H pile is reduced by only 10 percent for 4 inches of lateral displacement in very stiff clay. With no predrilled oversized hole, the pile failed before the 4-inch lateral displacement was reached. Thus, the maximum safe lengths for integral abutment bridges may be increased by predrilling.
Four different typical Iowa layered soils were selected and used in this investigation. In certain situations, compacted soil (> 50 blow count in standard penetration tests) is used as fill on top of natural soil. The numerical results showed that the critical conditions will depend on the length of the compacted soil. If the length of the compacted soil exceeds 4 feet, the failure mechanism for the pile is similar to one in a layer of very stiff clay. That is, the vertical load-carrying capacity of the H pile will be greatly reduced as the specified lateral displacement increases.
Abstract:
As a result of forensic investigations of problems across Iowa, a research study was developed aimed at providing solutions to identified problems through better management and optimization of the available pavement geotechnical materials and through ground improvement, soil reinforcement, and other soil treatment techniques. The overall goal was worked out through simple laboratory experiments, such as particle size analysis, plasticity tests, compaction tests, permeability tests, and strength tests. A review of the problems suggested three areas of study: pavement cracking due to improper management of pavement geotechnical materials, permeability of mixed-subgrade soils, and settlement of soil above the pipe due to improper compaction of the backfill. This resulted in the following three areas of study: (1) The optimization and management of earthwork materials through general soil mixing of various select and unsuitable soils and a specific example of optimization of materials in earthwork construction by soil mixing; (2) An investigation of the saturated permeability of compacted glacial till in relation to validation and prediction with the Enhanced Integrated Climatic Model (EICM); and (3) A field investigation and numerical modeling of culvert settlement. For each area of study, a literature review was conducted, research data were collected and analyzed, and important findings and conclusions were drawn. It was found that optimum mixtures of select and unsuitable soils can be defined that allow the use of unsuitable materials in embankment and subgrade locations. An improved model of saturated hydraulic conductivity was proposed for use with glacial soils from Iowa. The use of proper trench backfill compaction or the use of flowable mortar will reduce the potential for developing a bump above culverts.
Abstract:
Mixture materials, mix design, and pavement construction are not isolated steps in the concrete paving process. Each affects the other in ways that determine overall pavement quality and long-term performance. However, equipment and procedures commonly used to test concrete materials and concrete pavements have not changed in decades, leaving gaps in our ability to understand and control the factors that determine concrete durability. The concrete paving community needs tests that will adequately characterize the materials, predict interactions, and monitor the properties of the concrete. The overall objectives of this study are (1) to evaluate conventional and new methods for testing concrete and concrete materials to prevent material and construction problems that could lead to premature concrete pavement distress and (2) to examine and refine a suite of tests that can accurately evaluate concrete pavement properties. The project included three phases. In Phase I, the research team contacted each of 16 participating states to gather information about concrete and concrete material tests. A preliminary suite of tests to ensure long-term pavement performance was developed. The tests were selected to provide useful and easy-to-interpret results that can be performed reasonably and routinely in terms of time, expertise, training, and cost. The tests examine concrete pavement properties in five focal areas critical to the long life and durability of concrete pavements: (1) workability, (2) strength development, (3) air system, (4) permeability, and (5) shrinkage. The tests were relevant at three stages in the concrete paving process: mix design, preconstruction verification, and construction quality control. In Phase II, the research team conducted field testing in each participating state to evaluate the preliminary suite of tests and demonstrate the testing technologies and procedures using local materials. 
A Mobile Concrete Research Lab was designed and equipped to facilitate the demonstrations. This report documents the results of the 16 state projects. Phase III refined and finalized lab and field tests based on state project test data. The results of the overall project are detailed herein. The final suite of tests is detailed in the accompanying testing guide.
Abstract:
MOTIVATION: The detection of positive selection is widely used to study gene and genome evolution, but its application remains limited by the high computational cost of existing implementations. We present a series of computational optimizations for more efficient estimation of the likelihood function on large-scale phylogenetic problems. We illustrate our approach using the branch-site model of codon evolution. RESULTS: We introduce novel optimization techniques that substantially outperform both CodeML from the PAML package and our previously optimized sequential version SlimCodeML. These techniques can also be applied to other likelihood-based phylogeny software. Our implementation scales well for large numbers of codons and/or species. It can therefore analyse substantially larger datasets than CodeML. We evaluated FastCodeML on different platforms and measured average sequential speedups of FastCodeML (single-threaded) versus CodeML of up to 5.8, average speedups of FastCodeML (multi-threaded) versus CodeML on a single node (shared memory) of up to 36.9 for 12 CPU cores, and average speedups of the distributed FastCodeML versus CodeML of up to 170.9 on eight nodes (96 CPU cores in total). AVAILABILITY AND IMPLEMENTATION: ftp://ftp.vital-it.ch/tools/FastCodeML/. CONTACT: selectome@unil.ch or nicolas.salamin@unil.ch.
Abstract:
An alternative to the Pareto-dominance relation is proposed. The new relation is based on ranking a set of solutions according to each separate objective and on an aggregation function that calculates a scalar fitness value for each solution. The relation is called ranking-dominance, and it attempts to tackle the curse of dimensionality commonly observed in evolutionary multi-objective optimization. Ranking-dominance can be used to sort a set of solutions even for a large number of objectives, when the Pareto-dominance relation can no longer distinguish solutions from one another. This permits search to advance even with a large number of objectives. It is also shown that ranking-dominance does not violate Pareto-dominance. Results indicate that selection based on ranking-dominance is able to advance search toward the Pareto front in some cases where selection based on Pareto-dominance stagnates. However, in some cases it is also possible that search does not proceed toward the Pareto front, because the ranking-dominance relation permits deterioration of individual objectives. Results also show that when the number of objectives increases, selection based on just Pareto-dominance without diversity maintenance is able to advance search better than with diversity maintenance. Diversity maintenance thus aggravates the curse of dimensionality.
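The rank-then-aggregate idea can be sketched briefly (minimization assumed). Summing per-objective ranks is one possible choice of aggregation function, used here only for illustration:

```python
# Sketch of a ranking-based comparison in the spirit of ranking-dominance:
# rank the population separately under each objective, then aggregate the
# ranks into one scalar per solution (lower is better).

def rank_sums(population):
    """Sum of per-objective ranks for each solution (minimization assumed)."""
    num_objectives = len(population[0])
    sums = [0] * len(population)
    for k in range(num_objectives):
        order = sorted(range(len(population)), key=lambda i: population[i][k])
        for rank, i in enumerate(order):
            sums[i] += rank
    return sums

# No solution here Pareto-dominates another, yet the aggregated ranks still
# induce a total order -- which is the point of the relation.
pop = [(1.0, 2.0, 3.0), (2.0, 1.0, 4.0), (3.0, 3.0, 1.0)]
print(rank_sums(pop))   # prints [2, 3, 4]
```

Note the caveat from the abstract: a solution can improve its aggregated rank while one individual objective deteriorates, so search can drift away from the Pareto front.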
Abstract:
Background: Optimization methods allow designing changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering. However, they are not directly applicable for investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection and result in well-adapted systems, we can hardly expect that actual metabolic processes are at the theoretical optimum that could result from an optimization analysis. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways that are based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on solving iteratively reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems that provide valid upper and lower bounds, respectively, on the global solution to the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasibility parametric regions that allow a system to meet a set of physiological constraints that can be represented in mathematical terms through algebraic equations. This technique is based on applying the outer-approximation based algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard others in which no feasible solution exists.
As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can be used in other important applications such as the evaluation of parameter changes that are compatible with health and disease states.
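For readers unfamiliar with the GMA representation the optimization is posed over, each flux is a power law in the metabolite concentrations. A minimal sketch with illustrative numbers (not taken from the yeast model):

```python
# Sketch: evaluating one flux in a Generalized Mass Action (GMA) model.
# Each flux has the power-law form  v = gamma * prod_j X_j ** f_j,
# where gamma is a rate constant and f_j are kinetic orders.

import math

def gma_flux(gamma, exponents, X):
    """One GMA flux: gamma * prod_j X_j ** f_j."""
    return gamma * math.prod(x**f for x, f in zip(X, exponents))

X = [2.0, 0.5]                               # metabolite concentrations
v = gma_flux(gamma=3.0, exponents=[0.8, -0.3], X=X)
print(round(v, 4))
```

Because the kinetic orders can be fractional and negative, the resulting NLP is nonconvex, which is why a global scheme such as outer approximation is needed rather than a local solver.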
Abstract:
The use of carbon paste electrodes (CPE) of mineral sulfides can be useful in electrochemical studies to overcome the problems encountered with massive electrodes. Using a chalcopyrite CPE, several variables were evaluated electrochemically: (i) the atmosphere (air or argon) in which the CPE was prepared and the time elapsed until its use; (ii) the scan rate for voltammetric measurements; and (iii) the chalcopyrite concentration in the CPE. Based on cyclic voltammetry, open-circuit potential and electrochemical impedance results, the recommendations are: an oxygen-free atmosphere to prepare and keep the CPE for up to about two hours, scan rates from 10 to 40 mV s-1, and chalcopyrite concentrations > 20%.
Abstract:
A neural network procedure to solve inverse chemical kinetic problems is discussed in this work. Rate constants are calculated from the product concentrations of an irreversible consecutive reaction: the hydrogenation of the citral molecule, a process of industrial interest. Simulated and experimental data are considered. Errors of up to 7% in the simulated concentrations were assumed to investigate the robustness of the inverse procedure. The proposed method is also compared with two methods common in nonlinear analysis: the Simplex and Levenberg-Marquardt approaches. In all situations investigated, the neural network approach was numerically stable and robust with respect to deviations in the initial conditions or experimental noise.
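The forward model behind this kind of inverse problem is the classic first-order consecutive reaction A -> B -> C, whose concentrations have a closed form; an inverse method (a neural network, Simplex, or Levenberg-Marquardt) then fits k1 and k2 to observed concentrations. A sketch of the forward model with illustrative values:

```python
# Sketch: analytic concentrations for the irreversible consecutive reaction
# A -> B -> C with first-order rate constants k1, k2 (assumes k1 != k2).

import math

def concentrations(t, a0, k1, k2):
    """A(t), B(t), C(t) with A(0) = a0 and B(0) = C(0) = 0."""
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    c = a0 - a - b           # mass balance: A + B + C = a0
    return a, b, c

a, b, c = concentrations(t=1.0, a0=1.0, k1=2.0, k2=1.0)
print(round(a, 4), round(b, 4), round(c, 4))
```

An inverse procedure of the kind the abstract describes would minimize the mismatch between these model concentrations and the (possibly noisy) measured ones over candidate values of k1 and k2.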
Abstract:
An optimization tool has been developed to help companies optimize their production cycles and thus improve their overall supply chain management processes. The application combines the functionality of traditional APS (Advanced Planning System) and ARP (Automatic Replenishment Program) systems into one optimization run. A qualitative study was organized to investigate opportunities to expand the product’s market base. Twelve personal interviews were conducted, and the results were collected in industry-specific production planning analyses. Five process industries were analyzed to identify the product’s suitability for each industry sector and the most important product development areas. Based on the research, the paper and plastic film industries are the most promising industry sectors at this point. To succeed in other industry sectors, some product enhancements would be required, including capabilities to optimize multiple sequential and parallel production cycles, to handle the sequencing of complex finishing operations, and to include master planning capabilities that support overall supply chain optimization. In product sales and marketing, the key to success is to find and reach the people who deal directly with the problems that the optimization tool can help to solve.
Abstract:
The purpose of this thesis was to create a design guideline for an LCL filter. The thesis briefly reviews the relevant harmonics standards, old filter designs and the problems faced with previous filters. It proposes a modified design method based on “Liserre’s method” presented in the literature; the modified method takes network parameters into account better. As input parameters, the method uses the nominal power, the allowed ripple current on the converter and network sides, and the desired resonant frequency of the filter. Essential component selection issues for the LCL filter, such as heating, voltage strength and current rating, are also discussed. Furthermore, a simulation model used to verify the operation of the designed filter at nominal power and in transient situations is included in the thesis.
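A central check in any LCL-filter design is the resonant frequency implied by the chosen inductances and filter capacitance. The relation below is the standard one for the LCL topology; the component values are illustrative, not taken from the thesis:

```python
# Sketch: resonant frequency of an LCL filter,
#   f_res = (1 / (2*pi)) * sqrt((L_conv + L_grid) / (L_conv * L_grid * C_f)),
# where L_conv is the converter-side inductance, L_grid the grid-side
# inductance and C_f the filter capacitance.

import math

def resonant_frequency(L_conv, L_grid, C_f):
    """Resonant frequency of the LCL filter in Hz."""
    return math.sqrt((L_conv + L_grid) / (L_conv * L_grid * C_f)) / (2 * math.pi)

# A common rule of thumb places f_res well above the line frequency and
# well below the switching frequency (e.g. 10*f_line < f_res < 0.5*f_sw).
f = resonant_frequency(L_conv=2e-3, L_grid=1e-3, C_f=10e-6)
print(round(f))   # prints 1949
```

Checking this value against the desired resonant frequency is one of the verification steps a design guideline of this kind would iterate on.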
Abstract:
The goal of this Master’s thesis is to develop and analyze an optimization method for finding the geometry of a classical horizontal wind turbine blade based on a set of criteria. The thesis develops a technique that allows the designer to determine the weights of factors such as the power coefficient, the sound pressure level and the cost function in the overall blade shape optimization process. The optimization technique applies the desirability function, which had not previously been used for this kind of technical problem; in this sense the research can claim originality. To make the analysis and optimization processes more convenient, a software application was developed.
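The desirability approach can be sketched as follows: each criterion is mapped to a score in [0, 1] and the scores are combined by a weighted geometric mean (the Derringer-Suich formulation). The target ranges and weights below are illustrative assumptions, not the thesis's values:

```python
# Sketch of the desirability-function approach to multi-criteria design.
# Each criterion gets an individual desirability d in [0, 1]; the overall
# desirability is their weighted geometric mean, so any d = 0 vetoes the
# whole design.

def desirability_larger_is_better(y, lo, hi, s=1.0):
    """d = 0 below lo, 1 above hi, power-law ramp in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** s

def overall_desirability(ds, weights):
    """Weighted geometric mean of individual desirabilities."""
    total = sum(weights)
    prod = 1.0
    for d, w in zip(ds, weights):
        prod *= d ** (w / total)
    return prod

d_power = desirability_larger_is_better(0.45, lo=0.30, hi=0.50)     # power coeff.
d_noise = desirability_larger_is_better(-52.0, lo=-60.0, hi=-40.0)  # negated SPL
d_cost  = desirability_larger_is_better(-900.0, lo=-1200.0, hi=-600.0)  # negated cost
print(round(overall_desirability([d_power, d_noise, d_cost], [2, 1, 1]), 4))
```

The weights (here 2:1:1) are exactly the designer-chosen factor importances the thesis's technique is meant to determine.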
Abstract:
This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource-constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized formulation of the MINMAX resource allocation formulation of the apportionment problem and it can be solved efficiently and optimally. We construct an exact algorithm specialized for the nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed.
We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well studied geometric optimization problem), and provide a heuristic algorithm for this problem.
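For context on the tool switching and loading problem, the textbook baseline for its loading subproblem is the Keep-Tool-Needed-Soonest (KTNS) policy: with the job sequence fixed, whenever the magazine is full, evict the loaded tool whose next use lies furthest in the future. A minimal sketch of that baseline (not the thesis's improved heuristics):

```python
# Sketch: KTNS (Keep Tool Needed Soonest) for the tool loading subproblem.
# jobs is a sequence of tool sets; capacity is the magazine size.

def ktns_switches(jobs, capacity):
    """Count tool loads that require evicting a tool (initial fill is free)."""
    magazine, switches = set(), 0
    for i, tools in enumerate(jobs):
        for tool in tools:
            if tool in magazine:
                continue
            if len(magazine) == capacity:
                def next_use(t):
                    # Position of the next job (from job i on) needing tool t.
                    for j in range(i, len(jobs)):
                        if t in jobs[j]:
                            return j
                    return len(jobs)   # never needed again
                evict = max(magazine, key=next_use)
                magazine.remove(evict)
                switches += 1
            magazine.add(tool)
    return switches

jobs = [{1, 2}, {2, 3}, {1, 3}, {1, 4}]
print(ktns_switches(jobs, capacity=2))   # prints 3
```

KTNS is optimal only for a fixed job sequence; the hard part of the problem, which the heuristics address, is choosing the sequence itself.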
Abstract:
This work contains a series of studies on the optimization of three real-world scheduling problems: school timetabling, sports scheduling and staff scheduling. These challenging problems are solved to customer satisfaction using the proposed PEAST algorithm; customer satisfaction here refers to the fact that implementations of the algorithm are in industrial use. The PEAST algorithm is a product of long-term research and development. Its first version was introduced in 1998, and this thesis is the result of five years of further development of the algorithm. One of the most valuable characteristics of the algorithm has proven to be its ability to solve a wide range of scheduling problems, and it is likely that it can be tuned to tackle a range of other combinatorial problems as well. The algorithm uses features from numerous different metaheuristics, which is the main reason for its success. In addition, the implementation of the algorithm is fast enough for real-world use.