906 results for Normalization-based optimization
Abstract:
In this paper we present a new population-based implant design methodology, which advances state-of-the-art approaches by combining shape and bone quality information into the design strategy. The method may enhance the mechanical stability of the fixation and reduce the intra-operative in-plane bending which might impede the functionality of the locking mechanism. The computational method is presented for the case of mandibular locking fixation plates, where the mandibular angle and the bone quality at screw locations are taken into account. The method automatically derives the mandibular angle and the bone thickness and intensity values along the path of every screw from a set of computed tomography images. An optimization strategy is then used to optimize the two parameters of plate angle and screw position. The method was applied to two populations of different genders. Results for the new design are presented along with a comparison with a commercially available mandibular locking fixation plate (MODUS® TriLock® 2.0/2.3/2.5, Medartis AG, Basel, Switzerland). The proposed designs resulted in a statistically significant improvement in the available bone thickness when compared to the standard plate. There is a higher probability that the proposed implants cover areas of thicker cortical bone without compromising the bone mineral density around the screws. The obtained results allowed us to conclude that a plate angle and screw separation of 129° and 9 mm for females and 121° and 10 mm for males are more suitable designs than the commercially available 120° and 9 mm.
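The two-parameter optimization the abstract describes can be sketched as an exhaustive search over candidate (plate angle, screw separation) designs. This is a hypothetical illustration: the scoring function below is a toy stand-in for the CT-derived bone thickness and intensity measurements, not the authors' actual population model.

```python
# Hypothetical sketch: search plate angle (degrees) and screw
# separation (mm) for the design that maximizes a population score,
# e.g. mean available bone thickness along the screw paths.
# The score function is an illustrative stand-in, not real CT data.

def best_design(angles_deg, separations_mm, score):
    """Exhaustive search over the two design parameters."""
    return max(((a, s) for a in angles_deg for s in separations_mm),
               key=lambda design: score(*design))

# Toy score peaking at (129 deg, 9 mm), the female-population result
score = lambda a, s: -((a - 129) ** 2 + 4 * (s - 9) ** 2)
print(best_design(range(115, 136), range(7, 12), score))  # (129, 9)
```

In practice the score would be evaluated against the whole image population, but the search structure is the same.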
Abstract:
Investigation uses simulation to explore the inherent tradeoffs ofcontrolling high-speed and highly robust walking robots while minimizing energy consumption. Using a novel controller which optimizes robustness, energy economy, and speed of a simulated robot on rough terrain, the user can adjust their priorities between these three outcome measures and systematically generate a performance curveassessing the tradeoffs associated with these metrics.
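The priority-adjustment idea can be sketched as a weighted-sum sweep: vary the weights on robustness, economy, and speed, pick the best candidate for each weighting, and collect the winners as a performance curve. The candidate (robustness, economy, speed) triples below are illustrative stand-ins for simulation results, not outputs of the actual robot controller.

```python
import itertools

# Sketch of a weighted-sum tradeoff sweep. Candidate controllers are
# represented by toy (robustness, economy, speed) score triples.
candidates = [(0.9, 0.2, 0.3), (0.5, 0.8, 0.4), (0.4, 0.5, 0.9),
              (0.7, 0.6, 0.6), (0.2, 0.9, 0.2)]

def performance_curve(candidates, steps=5):
    """Winners of the weighted-sum objective over a simplex of weights."""
    curve = []
    grid = [i / (steps - 1) for i in range(steps)]
    for w_robust, w_econ in itertools.product(grid, repeat=2):
        if w_robust + w_econ > 1:
            continue  # weights must sum to one
        w_speed = 1 - w_robust - w_econ
        best = max(candidates, key=lambda c: w_robust * c[0]
                   + w_econ * c[1] + w_speed * c[2])
        if best not in curve:
            curve.append(best)
    return curve

print(performance_curve(candidates))
```

Corner weightings recover the single-metric champions; intermediate weightings expose the compromise designs that make up the curve.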
Abstract:
Bioplastics are polymers (such as polyesters) produced from bacterial fermentations that are biodegradable and nonhazardous. They are produced by a wide variety of bacteria and are made only when stress conditions allow, such as when nutrient levels are low, more specifically levels of nitrogen and oxygen. These stress conditions cause certain bacteria to build up excess carbon deposits as energy reserves in the form of polyhydroxyalkanoates (PHAs). PHAs can be extracted and formed into actual plastic with the same strength as conventional, synthetic-based plastics without the need to rely on foreign petroleum. The overall goal of this project was to select for a bacterium that could grow on sugars found in lignocellulosic biomass and induce it to produce PHAs and peptidoglycan. Once this was accomplished, the goal was to extract PHAs and peptidoglycan in order to make a stronger, more rigid plastic by combining them into a co-polymer. The individual goals of this project were to: (1) select and screen bacteria that are capable of producing PHAs by utilizing the carbon/energy sources found in lignocellulosic biomass; (2) maximize the utilization of those sugars present in woody biomass in order to produce optimal levels of PHAs; and (3) use room temperature ionic liquids (RTILs) in order to separate the cell membrane and peptidoglycan, allowing for better extraction of PHAs and more intact peptidoglycan. B. megaterium, a Gram-positive PHA-producing bacterium, was selected for study in this project. It was grown on a variety of different substrates in order to maximize both its growth and production of PHAs. The optimal conditions were found to be 30°C, pH 6.0, and a sugar concentration of 30 g/L of either glucose or xylose. After optimal growth was obtained, both RTIL and enzymatic treatments were used to break the cell wall in order to extract the PHAs and peptidoglycan.
PHAs and peptidoglycan were successfully extracted from the cells and will be used in the future to create a new, stronger co-polymer. Peptidoglycan recovery yield was 16% of the cells’ dry weight.
Abstract:
Ethanol from lignocellulosic feedstocks is not currently competitive with corn-based ethanol in terms of yields and commercial feasibility. Through optimization of the pretreatment and fermentation steps this could change. The overall goal of this study was to evaluate, characterize, and optimize ethanol production from lignocellulosic feedstocks by the yeasts Saccharomyces cerevisiae (strain Ethanol Red, ER) and Pichia stipitis CBS 6054. Through a series of fermentations and growth studies, P. stipitis CBS 6054 and S. cerevisiae (ER) were evaluated on their ability to produce ethanol from both single substrate (xylose and glucose) and mixed substrate (five sugars present in hemicellulose) fermentations. The yeasts were also evaluated on their ability to produce ethanol from dilute acid pretreated hydrolysate and enzymatic hydrolysate. Hardwood (aspen), softwood (balsam), and herbaceous (switchgrass) hydrolysates were also tested to determine the effect of the feedstock source. P. stipitis produced ethanol at 66-98% of the theoretical yield throughout the fermentation studies completed over the course of this work. S. cerevisiae (ER) was determined not to be ideal for dilute acid pretreated lignocellulose because it was not able to utilize all the sugars found in hemicellulose. S. cerevisiae (ER) was instead used to optimize enzymatically pretreated lignocellulose that contained only glucose monomers. It was able to produce ethanol from enzymatically pretreated hydrolysate, but the sugar level was so low (<3 g/L) that it would not be commercially feasible. Two lignocellulosic degradation products, furfural and acetic acid, were evaluated to determine whether they had an inhibitory effect on biomass production, substrate utilization, and ethanol production by P. stipitis and S. cerevisiae (ER). It was determined that inhibition is directly related to the concentration of the inhibitor and the organism. The final phase of this thesis focused on adapting P.
stipitis CBS 6054 to toxic compounds present in dilute acid pretreated hydrolysate through directed evolution. Cultures were transferred to increasing concentrations of dilute acid pretreated hydrolysate in the fermentation media. The adapted strain’s fermentation capabilities were tested against the unadapted parent strain at each hydrolysate concentration. The fermentation capabilities of the adapted strain were significantly improved over those of the unadapted parent strain. On media containing 60% hydrolysate, the adapted strain yielded 0.30 ± 0.033 g ethanol/g sugar and the unadapted parent strain yielded 0.11 ± 0.028 g/g. The culture has been successfully adapted to growth on media containing 65%, 70%, 75%, and 80% hydrolysate, but with below-optimal ethanol yields (0.14-0.19 g/g). Cell recycle could be a viable option for improving ethanol yields in these cases. A study was conducted to determine the optimal media for production of ethanol from xylose and mixed substrate fermentations by P. stipitis. Growth, substrate utilization, and ethanol production were the three factors used to evaluate the media. The three media tested were Yeast Peptone (YP), Yeast Nitrogen Base (YNB), and Corn Steep Liquor (CSL). The ethanol yields (g/g) for each medium were as follows: YP, 0.40-0.42; YNB, 0.28-0.30; and CSL, 0.44-0.51. The results show that media containing CSL result in slightly higher ethanol yields than the other fermentation media. P. stipitis was successfully adapted to dilute acid pretreated aspen hydrolysate in increasing concentrations in order to produce higher ethanol yields compared to the unadapted parent strain. S. cerevisiae (ER) produced ethanol from enzymatically pretreated cellulose containing low concentrations of glucose (1-3 g/L). These results show that fermentations of lignocellulosic feedstocks can be optimized based on the substrate and organism for increased ethanol yields.
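The yield bookkeeping used throughout this abstract can be made explicit: ethanol yield is grams of ethanol produced per gram of sugar consumed, and percent of theoretical yield divides by the stoichiometric maximum of 0.511 g ethanol per g glucose or xylose (two ethanol molecules per glucose). A minimal sketch:

```python
# Yield conventions for lignocellulosic fermentations. The functions
# are illustrative helpers, not the study's analysis code.

THEORETICAL_MAX = 0.511  # g ethanol per g sugar (stoichiometric limit)

def ethanol_yield(ethanol_g_per_l, sugar_consumed_g_per_l):
    """Yield in g ethanol per g sugar consumed."""
    return ethanol_g_per_l / sugar_consumed_g_per_l

def percent_theoretical(yield_g_per_g):
    """Yield expressed as a percentage of the stoichiometric maximum."""
    return 100.0 * yield_g_per_g / THEORETICAL_MAX

# The adapted strain's 0.30 g/g on 60% hydrolysate (value from the text)
print(round(percent_theoretical(0.30), 1))  # 58.7
```

By this convention the adapted strain's 0.30 g/g corresponds to roughly 59% of theoretical yield, consistent with the 66-98% range reported for the unimpaired fermentations.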
Abstract:
In developing countries many water distribution systems are branched networks with little redundancy. If any component in the distribution system fails, many users are left relying on secondary water sources. These sources oftentimes do not provide potable water, and prolonged use leads to increased cases of waterborne illnesses. Increasing redundancy in branched networks increases the reliability of the networks, but is oftentimes viewed as unaffordable. This paper presents a procedure for water system managers to determine which loops, when added to a branched network, provide the most benefit for users. Two methods are presented: one ranking the loops based on the total number of users benefited, and one ranking the loops by the number of vulnerable users benefited. A case study is presented using the water distribution system of Medina Bank Village, Belize. It was found that forming loops in upstream pipes connected to the main line had the potential to benefit the most users.
Abstract:
An optimizing compiler's internal representation fundamentally affects the clarity, efficiency, and feasibility of the optimization algorithms employed by the compiler. Static Single Assignment (SSA), the state-of-the-art program representation, has great advantages but can still be improved. This dissertation explores the domain of single assignment beyond SSA and presents two novel program representations: Future Gated Single Assignment (FGSA) and Recursive Future Predicated Form (RFPF). Both FGSA and RFPF embed control flow and data flow information, enabling efficient traversal of program information and thus leading to better and simpler optimizations. We introduce the future value concept, the design basis of both FGSA and RFPF, which permits a consumer instruction to be encountered before the producer of its source operand(s) in a control flow setting. We show that FGSA is efficiently computable by using a series of T1/T2/TR transformations, yielding an expected linear time algorithm for combining the construction of the pruned single assignment form with live analysis for both reducible and irreducible graphs. As a result, the approach results in an average reduction of 7.7%, with a maximum of 67%, in the number of gating functions compared to the pruned SSA form on the SPEC2000 benchmark suite. We present a solid and near-optimal framework to perform inverse transformation from single assignment programs. We demonstrate the importance of unrestricted code motion and present RFPF. We develop algorithms which enable instruction movement in acyclic as well as cyclic regions, and show the ease of performing optimizations such as Partial Redundancy Elimination on RFPF.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. The die is fed by a screw extruder when polymers are used. The extruder melts, mixes, and pressurizes the material by the rotation of either a single or double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet. The extruded section is then cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties such as temperature uniformity and residence time are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometry and of polymer material properties, the design of complex dies by analytical methods is difficult; for complex dies, iterative methods must be used. An automated iterative method is desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work a simplex method and a modified trust region method were employed for automated optimization of die geometries. For the trust region method, a discrete derivative and a BFGS Hessian approximation were used.
To deal with the noise in the function, the trust region method was modified to automatically adjust the discrete derivative step size and the trust region based on changes in noise and function contour. Generally, uniformity of velocity at the exit of the extrusion die can be improved by increasing resistance across the die, but this is limited by the pressure capabilities of the extruder. In optimization, a penalty factor that increases exponentially from the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs which exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
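The two penalty schemes can be sketched as follows. The exponential form and the coefficient `k` below are assumptions chosen for illustration, not the authors' exact formulation; pressures are normalized by the extruder's limit.

```python
import math

# Sketch of the two pressure-penalty schemes described above.
# One-sided: only designs over the limit are penalized.
# Two-sided: every design is penalized, growing exponentially
# through the limit. Coefficients are illustrative assumptions.

def penalty_one_sided(pressure, limit, k=5.0):
    """Penalize only designs that exceed the pressure limit."""
    if pressure <= limit:
        return 0.0
    return math.exp(k * (pressure - limit) / limit) - 1.0

def penalty_two_sided(pressure, limit, k=5.0):
    """Penalize every design, growing exponentially past the limit."""
    return math.exp(k * (pressure - limit) / limit)

def objective(velocity_nonuniformity, pressure, limit, two_sided=False):
    """Combined objective: outlet velocity non-uniformity plus penalty."""
    pen = penalty_two_sided if two_sided else penalty_one_sided
    return velocity_nonuniformity + pen(pressure, limit)
```

The one-sided scheme leaves the feasible region untouched, while the two-sided scheme also discourages designs that sit just below the limit, which can steer the optimizer toward more conservative geometries.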
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as “biomass”). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass based on a set of evaluation criteria, such as accessibility to biomass, railway/road transportation network, water body and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.
Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, and torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals, such as Biomass and Bioenergy and Renewable Energy, and presentations at conferences, such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
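The quantity the report proposes to approximate with hardware monitoring, the reuse (stack) distance, can be computed exactly from a memory trace. The sketch below uses a simple O(N·M) list-based stack for clarity; production profilers use balanced trees to make each lookup logarithmic.

```python
# Exact reuse-distance computation over an address trace: for each
# access, count the distinct addresses touched since the previous
# access to the same address (infinite on a first access).

def reuse_distances(trace):
    stack = []   # addresses ordered by recency, most recent last
    dists = []
    for addr in trace:
        if addr in stack:
            i = stack.index(addr)
            dists.append(len(stack) - 1 - i)  # distinct addrs since last use
            stack.pop(i)
        else:
            dists.append(float("inf"))        # cold (first) access
        stack.append(addr)                    # addr is now most recent
    return dists

print(reuse_distances(["a", "b", "c", "a", "b"]))
# [inf, inf, inf, 2, 2]
```

A hardware-sampled profile yields only a sparse, possibly biased subset of these distances, which is precisely the gap between the monitored profile and true reuse behavior that the report identifies.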
Abstract:
A range of societal issues have been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, the dependence on imported oil, and other oil-related national security concerns. Biofuels production from various lignocellulosic biomass types such as wood, forest residues, and agriculture residues has the potential to replace a substantial portion of the total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling methods. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential biofuel facility locations for biofuel production from forest biomass by employing a series of decision factors; the resulting candidate sites served as inputs for the simulation and optimization models. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, a railroad transportation network, a state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, a population census, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation, and storage. The built onsite storage served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited.
Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to be consistent with cost. Compared with the optimization model, the simulation model represents a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated and the inventory level year-round was tracked. Through the exchange of information across different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of the potential biofuel facility is set up with an upper bound of 50 MGY and a lower bound of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
Abstract:
The problem of optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems where the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. Independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations. Full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized. Standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations have different sizes from their initial sizes. The process repeats, leading to increasing sizes of the sub-populations of more fit solutions. Both techniques are applied to several MGADSM problems. They have the capability to determine the number of swing-bys, the planets to swing by, launch and arrival dates, and the number of deep space maneuvers, as well as their locations, magnitudes, and directions, in an optimal sense. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the global trajectory optimization competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved.
The J2 perturbation and zonal coverage are considered to design repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced. The orbit parameters are optimized such that the shadow of a spacecraft on the Earth visits the same locations periodically every desired number of days.
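The hidden-genes mechanism described above can be sketched in a few lines: every chromosome carries the maximum number of genes, a leading count gene decides how many are effective, and the hidden tail is carried through standard full-length genetic operations but ignored by the cost function. The toy objective and population sizes below are assumptions for illustration, not the MGADSM cost model.

```python
import random

# Minimal hidden-genes GA sketch: fixed-length chromosomes whose
# ineffective (hidden) genes are excluded from cost evaluation.

MAX_GENES = 6

def cost(chrom):
    n_eff = chrom[0]                 # e.g. number of deep space maneuvers
    effective = chrom[1:1 + n_eff]   # hidden genes chrom[1+n_eff:] ignored
    return sum((g - 0.5) ** 2 for g in effective)  # toy objective

def crossover(a, b):
    cut = random.randrange(1, MAX_GENES + 1)
    return a[:cut] + b[cut:]         # standard full-length operation

random.seed(0)
pop = [[random.randrange(1, MAX_GENES + 1)] +
       [random.random() for _ in range(MAX_GENES)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [crossover(random.choice(elite), random.choice(elite))
                   for _ in range(10)]
best = min(pop, key=cost)
```

Because hidden genes survive crossover intact, a solution that later "reveals" them (by inheriting a larger count gene) resumes evolving them from where they left off, which is what lets a single fixed-length population search over solution-dependent numbers of variables.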
Abstract:
Bioenergy and biobased products offer new opportunities for strengthening rural economies, enhancing environmental health, and providing a secure energy future. Realizing these benefits will require the development of many different biobased products and biobased production systems. The biomass feedstocks that will enable such development must be sustainable, widely available across many different regions, and compatible with industry requirements. The purpose of this research is to develop an economic model that will help decision makers identify the optimal size of a forest resource based biofuel production facility. The model must be applicable to decision makers anywhere, though the modeled case analysis will focus on a specific region: the Upper Peninsula (U.P.) of Michigan. This work will illustrate that several factors influence the optimal facility size. Further, this effort will reveal that the location of the facility does affect size. The results of the research show that an optimal facility size can be determined for a given location based on variables including forest biomass availability, transportation cost rate, and economy-of-scale factors. These variables, acting alone and interacting together, can influence the optimal size and the decision of where to locate the biofuel production facility. Further, adjustments to model variables like biomass resource and storage costs have no effect on facility size, but do affect the unit cost of the biofuel produced.
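The core tradeoff behind an optimal facility size can be sketched as a per-unit cost curve: capital cost per unit falls with size (economy of scale), while transport cost per unit rises because a larger plant must draw biomass from a wider radius. All coefficients below are illustrative assumptions, not the study's calibrated values.

```python
# Hypothetical facility-size tradeoff. Per-unit capital cost scales as
# size^(s-1) with an economy-of-scale exponent s < 1; per-unit
# transport cost grows with the haul radius, which scales roughly as
# the square root of the supply area (and hence of plant size).

def unit_cost(size_mgy, scale_factor=0.7, capital_coeff=20.0,
              transport_rate=0.5, biomass_density=1.0):
    capital = capital_coeff * size_mgy ** scale_factor / size_mgy
    transport = transport_rate * (size_mgy / biomass_density) ** 0.5
    return capital + transport

sizes = range(10, 201, 10)
best = min(sizes, key=unit_cost)
```

With these toy coefficients the curve has an interior minimum, and lowering `biomass_density` (sparser forest biomass) pushes the optimum toward smaller plants, mirroring the abstract's claim that location-dependent biomass availability and transportation cost rate shape the optimal size.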
Abstract:
With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, the fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories with a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of this thesis is to address the above problem by developing a methodology to simplify and facilitate the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution will not only provide mission designers with a better understanding of the problem and solution, but also serve as a good initial guess for high-fidelity optimal control solvers and increase their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, a robust technique is sought for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, which significantly reduces the number of design variables. Emphasis is placed on simplifying the equations of motion to the extent possible and on avoiding approximation of the controls. These facts contribute to speeding up the solution finding procedure.
Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
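The Fourier-series representation mentioned above can be illustrated on a planar transfer: write a trajectory coordinate as a finite Fourier series in a scaled time variable, so the whole path is described by a handful of coefficients, some of which are fixed by boundary conditions. This is a generic sketch of the shape-based idea under assumed boundary values; the thesis's exact parameterization may differ.

```python
import math

# Shape-based sketch: the radial coordinate of a planar transfer as a
# finite Fourier series in tau in [0, 1]. Coefficient values are
# illustrative; boundary conditions pin down a0 and the cosine terms,
# leaving the remaining coefficients as the optimizer's design variables.

def radius(tau, a0, coeffs):
    """r(tau) = a0 + sum_k [a_k cos(k*pi*tau) + b_k sin(k*pi*tau)]."""
    r = a0
    for k, (a_k, b_k) in enumerate(coeffs, start=1):
        r += a_k * math.cos(k * math.pi * tau) + b_k * math.sin(k * math.pi * tau)
    return r

# Chosen so that r(0) = 1.0 and r(1) = 1.5 (e.g. AU): the sine terms
# vanish at the endpoints, so only a0 and the cosine terms matter there.
coeffs = [(-0.25, 0.05), (0.0, 0.02)]
a0 = 1.25
print(round(radius(0.0, a0, coeffs), 3), round(radius(1.0, a0, coeffs), 3))
# 1.0 1.5
```

Two harmonics already give a smooth boundary-satisfying shape with only a few free numbers, which is the source of the claimed reduction in design variables compared to discretizing the full control history.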
Abstract:
Heuristic optimization algorithms are of great importance for reaching solutions to various real-world problems. These algorithms have a wide range of applications such as cost reduction, artificial intelligence, and medicine. Here, the cost is the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of a function in order to achieve an optimum, or to maximize another parameter which increases as the cost (the value of this function) decreases. Heuristic cost reduction algorithms work by finding the values of the independent variables for which the value of the function (the “cost”) is minimal. There is an abundance of heuristic cost reduction algorithms to choose from. We start with a discussion of various optimization algorithms such as memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we examine the working of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the working of three different heuristic optimization algorithms, and to conclude with a comparative study of their performance when implemented in MATLAB. The three algorithms taken into consideration are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random-restart hill climbing algorithm. The algorithms are heuristic in nature; that is, the solutions they achieve may not be the best of all possible solutions, but they provide a means of quickly reaching a reasonably good solution without taking an indefinite amount of time.
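The first of the three algorithms, non-adaptive simulated annealing, can be sketched compactly. The report's implementations are in MATLAB; the Python version below is for illustration only, and the cost function, step size, and geometric cooling schedule are assumptions.

```python
import math
import random

# Non-adaptive simulated annealing: accept better moves always, worse
# moves with probability exp(-delta/T), and cool T on a fixed schedule
# (the "non-adaptive" part: the schedule never reacts to progress).

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000):
    random.seed(1)                       # deterministic for illustration
    x, fx = list(x0), cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                     # fixed geometric cooling
    return best, fbest

# Minimize a simple quadratic bowl with optimum at (1, -2)
best, fbest = simulated_annealing(
    lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [5.0, 5.0])
```

The adaptive variant differs only in letting the temperature (or step size) respond to the acceptance rate, and random-restart hill climbing is the `t0 = 0` special case restarted from fresh random points.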
Abstract:
Quantitative reverse transcriptase real-time PCR (QRT-PCR) is a robust method to quantitate RNA abundance. The procedure is highly sensitive and reproducible as long as the initial RNA is intact. However, breaks in the RNA due to chemical or enzymatic cleavage may reduce the number of RNA molecules that contain intact amplicons. As a consequence, the number of molecules available for amplification decreases. We determined the relation between RNA fragmentation and threshold values (Ct values) in subsequent QRT-PCR for four genes in an experimental model of intact and partially hydrolyzed RNA derived from a cell line and we describe the relation between RNA integrity, amplicon size and Ct values in this biologically homogenous system. We demonstrate that degradation-related shifts of Ct values can be compensated by calculating delta Ct values between test genes and the mean values of several control genes. These delta Ct values are less sensitive to fragmentation of the RNA and are unaffected by varying amounts of input RNA. The feasibility of the procedure was demonstrated by comparing Ct values from a larger panel of genes in intact and in partially degraded RNA. We compared Ct values from intact RNA derived from well-preserved tumor material and from fragmented RNA derived from formalin-fixed, paraffin-embedded (FFPE) samples of the same tumors. We demonstrate that the relative abundance of gene expression can be based on FFPE material even when the amount of RNA in the sample and the extent of fragmentation are not known.
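The compensation step described above can be made concrete: fragmentation shifts the Ct values of test and control genes alike, so subtracting the mean Ct of several control genes cancels the shift. The gene names and Ct values below are illustrative, not data from the study.

```python
from statistics import mean

# Delta-Ct compensation: express each test gene's Ct relative to the
# mean Ct of a panel of control genes. A uniform degradation-related
# shift then cancels out in the difference.

def delta_ct(ct_values, control_genes):
    ctrl = mean(ct_values[g] for g in control_genes)
    return {g: ct - ctrl for g, ct in ct_values.items()
            if g not in control_genes}

intact = {"GAPDH": 18.0, "ACTB": 19.0, "TP53": 25.0}
degraded = {"GAPDH": 21.0, "ACTB": 22.0, "TP53": 28.0}  # +3 cycle shift

controls = ["GAPDH", "ACTB"]
print(delta_ct(intact, controls))    # {'TP53': 6.5}
print(delta_ct(degraded, controls))  # {'TP53': 6.5} -- shift cancels
```

The same arithmetic also removes dependence on the amount of input RNA, which scales all Ct values together just as fragmentation does; it fails only when amplicon-length effects shift genes unequally, which is why the study examines the relation between amplicon size and Ct.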