855 results for Evolutionary optimization methods


Relevance: 30.00%

Abstract:

An increasing focus in evolutionary biology is on the interplay between mesoscale ecological and evolutionary processes such as population demographics, habitat tolerance, and especially geographic distribution, as potential drivers of patterns of diversification and extinction over geologic time. However, few studies to date connect organismal processes such as survival and reproduction through mesoscale patterns to long-term macroevolutionary trends. In my dissertation, I investigate how the mechanism of seed dispersal, mediated through geographic range size, influences diversification rates in the Rosales (Plantae: Anthophyta). In my first chapter, I validate the phylogenetic comparative methods that I use in my second and third chapters. Available state speciation and extinction (SSE) models make assumptions about evolution that fossil data show to be false. I show, however, that as long as net diversification rates remain positive (a condition likely true for the Rosales), these violations of SSE's assumptions do not cause significantly biased results. With SSE methods validated, my second chapter reconstructs three associations that appear to increase diversification rate for Rosalean genera: (1) herbaceous habit; (2) a three-way interaction combining animal dispersal, high within-genus species richness, and geographic range on multiple continents; and (3) a four-way interaction combining woody habit with the three characteristics of (2). I suggest that the three- and four-way interactions represent colonization ability and the resulting extinction resistance in the face of late Cenozoic climate change; however, other possibilities remain that I hope to investigate in future research. My third chapter reconstructs the phylogeographic history of the Rosales using both non-fossil-assisted SSE methods and fossil-informed traditional phylogeographic analysis.
Ancestral state reconstructions indicate that the Rosaceae diversified in North America while the other Rosalean families diversified elsewhere, possibly in Eurasia. SSE is able to successfully identify groups of genera that were likely to have been ancestrally widespread, but has poorer taxonomic resolution than methods that use fossil data. In conclusion, these chapters together suggest several potential causal links between organismal, mesoscale, and geologic scale processes, but further work will be needed to test the hypotheses that I raise here.

Relevance: 30.00%

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state, and local agencies must determine how to implement projects at the lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state, and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve on the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses McCormick inequalities (McCormick, 1976) to re-express constraints involving products of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
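
The exact linearization via McCormick inequalities mentioned above can be illustrated with a small, hypothetical sketch (not the dissertation's actual model): for binary variables x and y, an auxiliary variable z constrained by the McCormick envelope is forced to equal the product x*y.

```python
from itertools import product

def mccormick_feasible_z(x, y):
    """All z in {0, 1} satisfying the McCormick envelope of z = x*y:
    z <= x, z <= y, z >= x + y - 1 (z >= 0 is implied by z binary)."""
    return {z for z in (0, 1) if z <= x and z <= y and z >= x + y - 1}

# For binary variables the envelope is exact: the only feasible z is x*y,
# so the product constraint can be replaced by these linear inequalities.
for x, y in product((0, 1), repeat=2):
    assert mccormick_feasible_z(x, y) == {x * y}
```

Because the envelope is tight on binaries, a solver never needs the nonlinear product itself, which is what makes the linearization "exact" rather than a relaxation.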

Relevance: 30.00%

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work, which used GAs to implement implicit learning, the learning in the proposed algorithm is explicit; i.e., eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
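
As a rough sketch of the estimate-then-sample loop described above, assuming for brevity an edge-less Bayesian network in which each position of the rule string is modeled independently (the rule strings and counts below are hypothetical, not the paper's data):

```python
import random
from collections import Counter

def estimate_probs(promising, n_rules):
    """Per-position rule probabilities fitted to the promising strings
    (edge-less network: every position is treated independently)."""
    probs = []
    for pos in range(len(promising[0])):
        counts = Counter(s[pos] for s in promising)
        probs.append([counts[r] / len(promising) for r in range(n_rules)])
    return probs

def sample_string(probs):
    """Generate a new rule string position by position from the model."""
    return [random.choices(range(len(p)), weights=p)[0] for p in probs]

random.seed(0)
promising = [[0, 1, 2], [0, 2, 2], [1, 1, 2]]   # hypothetical rule strings
probs = estimate_probs(promising, n_rules=3)
new_string = sample_string(probs)               # a freshly sampled rule string
```

In the full algorithm the new strings would be scored, the promising set updated by fitness selection, and the probabilities re-estimated until the stopping conditions are met.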

Relevance: 30.00%

Abstract:

Purpose: To develop and optimize variables that influence the formulation of fluoxetine orally disintegrating tablets (ODTs). Methods: Fluoxetine ODTs were prepared using a direct compression method. A three-factor, three-level Box-Behnken design was used to develop and optimize the fluoxetine ODT formulation. The design suggested 15 formulations of different lubricant concentration (X1), lubricant mixing time (X2), and compression force (X3), whose effects were monitored on tablet weight (Y1), thickness (Y2), hardness (Y3), % friability (Y4), and disintegration time (Y5). Results: All powder blends showed acceptable flow properties, ranging from good to excellent. Disintegration time (Y5) was directly affected by lubricant concentration (X1). Lubricant mixing time (X2) had a direct effect on tablet thickness (Y2) and hardness (Y3), while compression force (X3) had a direct impact on tablet hardness (Y3), % friability (Y4), and disintegration time (Y5). Accordingly, the Box-Behnken design suggested an optimized formula of 0.86 mg (X1), 15.3 min (X2), and 10.6 kN (X3). The prediction error percentages for responses Y1, Y2, Y3, Y4, and Y5 were 0.31, 0.52, 2.13, 3.92, and 3.75%, respectively. Formulas 4 and 8 achieved 90% drug release within the first 5 min of the dissolution test. Conclusion: A fluoxetine ODT formulation was developed and optimized successfully using a Box-Behnken design and manufactured efficiently using a direct compression technique.
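
For reference, the 15-run layout of a three-factor, three-level Box-Behnken design can be generated in coded units as follows (a generic sketch of the standard design, not the paper's software):

```python
from itertools import combinations, product

def box_behnken_3factor(n_center=3):
    """Coded 3-factor Box-Behnken design: +/-1 on each pair of factors
    with the third factor held at its centre level, plus centre points."""
    runs = []
    for i, j in combinations(range(3), 2):       # each pair of factors
        for a, b in product((-1, 1), repeat=2):  # four corners of the pair
            run = [0, 0, 0]
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0, 0, 0]] * n_center)          # replicated centre points
    return runs

design = box_behnken_3factor()
assert len(design) == 15   # matches the 15 suggested formulations
```

The 12 edge midpoints plus 3 replicated centre points give exactly the 15 formulations cited in the abstract; actual factor values (mg, min, kN) are obtained by rescaling the coded levels.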

Relevance: 30.00%

Abstract:

It is remarkable that there are no deployed military hybrid vehicles, given that battlefield fuel costs approximately 100 times as much as civilian fuel. In the commercial marketplace, where fuel prices are much lower, electric hybrid vehicles have become increasingly common due to their increased fuel efficiency and the associated operating cost benefit. The absence of military hybrid vehicles is not due to a lack of investment in research and development, but rather because applying hybrid vehicle architectures to a military application has unique challenges. These challenges include inconsistent duty cycles for propulsion requirements and the absence of methods to look at vehicle energy in a holistic sense. This dissertation addresses these challenges by presenting a method to quantify the benefits of a military hybrid vehicle by regarding that vehicle as a microgrid. This concept enabled the creation of an expandable multiple-input numerical optimization method that was implemented for both real-time control and system design optimization, and an example of each implementation is presented. Optimization in the loop using this new method was compared to a traditional closed-loop control system and proved to be more fuel efficient. System design optimization using this method successfully illustrated battery size optimization by iterating through various electric duty cycles. By utilizing this new multiple-input numerical optimization method, a holistic view of duty cycle synthesis, vehicle energy use, and vehicle design optimization can be achieved.

Relevance: 30.00%

Abstract:

Combinatorial optimization is a complex engineering subject. Although formulations depend on the nature of the problems, which differ in their setup, design, constraints, and implications, establishing a unifying framework is essential. This dissertation investigates the unique features of three important optimization problems that span small-scale design automation to large-scale power system planning: (1) feeder remote terminal unit (FRTU) planning strategy considering the cybersecurity of the secondary distribution network in the electrical distribution grid, (2) physical-level synthesis for microfluidic lab-on-a-chip, and (3) discrete gate sizing in very-large-scale integration (VLSI) circuits. First, a cross-entropy optimization technique is proposed to handle FRTU deployment in the primary network while considering the cybersecurity of the secondary distribution network. Constrained by a monetary budget on the number of deployed FRTUs, the proposed algorithm identifies pivotal locations of a distribution feeder at which to install the FRTUs over different time horizons. Then, multi-scale optimization techniques are proposed for digital microfluidic lab-on-a-chip physical-level synthesis. The proposed techniques handle variation-aware lab-on-a-chip placement and routing co-design while satisfying all constraints and accounting for contamination and defects. Last, the first fully polynomial time approximation scheme (FPTAS) is proposed for the delay-driven discrete gate sizing problem, providing a theoretical foundation where existing works are heuristics with no performance guarantee. The intellectual contribution of the proposed methods establishes a novel paradigm bridging the gaps between professional communities.
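
The cross-entropy technique mentioned for FRTU deployment can be sketched generically: sample binary deployment vectors from independent Bernoulli probabilities, keep the elite samples by score, and refit the probabilities to the elite. The objective, the unit-cost budget penalty, and all parameters below are illustrative assumptions, not the dissertation's formulation.

```python
import random

def cross_entropy_deploy(score, n, iters=25, samples=150, n_elite=15, seed=1):
    """Cross-entropy method for a binary deployment vector of length n."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(iters):
        pop = [[int(rng.random() < pi) for pi in p] for _ in range(samples)]
        elite = sorted(pop, key=score, reverse=True)[:n_elite]
        # Refit each probability to the elite frequency, with smoothing.
        p = [0.7 * sum(x[i] for x in elite) / n_elite + 0.3 * p[i]
             for i in range(n)]
    return [int(pi > 0.5) for pi in p]

# Toy objective: locations 2 and 5 are valuable; each deployment costs 1.
weights = [0, 0, 5, 0, 0, 4, 0, 0]
best = cross_entropy_deploy(
    lambda x: sum(w * xi for w, xi in zip(weights, x)) - sum(x), n=8)
# best should converge towards deploying only locations 2 and 5.
```

The smoothing term keeps the sampling distribution from collapsing prematurely, a standard safeguard in cross-entropy optimization.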

Relevance: 30.00%

Abstract:

The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. Therefore, all the elements and components of the energy conversion system are modeled numerically and combined to form a behavioral model of the whole system. Previously proposed high-frequency (HF) models of power converters are based on circuit models that account only for the parasitic inner parameters of the power devices and the connections between components. This dissertation aims to obtain physics-based models for power conversion systems that not only represent the steady-state behavior of the components but also predict their high-frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition. The proposed physics-based model enables the accurate design of components such as effective EMI filters, switching algorithms, and circuit topologies [7]. One application of the developed modeling technique is the design of new topologies for high-frequency, high-efficiency converters for variable speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high-power applications with the ability to overcome the blocking voltage limitations of available power semiconductor devices. Another advantage is the selection of the best-matching topology, with an inherent reduction of switching losses that can be utilized to improve overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints.
This includes physical characteristics such as decreasing the size and weight of the package, optimized interactions with neighboring components, and higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions as well as the design of attenuation measures and enclosures.

Relevance: 30.00%

Abstract:

Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperatures, which not only raise packaging/cooling costs but also degrade the performance, reliability, and life span of computing systems. Moreover, high chip temperature also greatly increases leakage power consumption, which is becoming ever more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we address power/thermal issues from the system-level perspective. Specifically, we employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. We first explore the fundamental principles of employing dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further propose a novel real-time scheduling method, "M-Oscillations," to reduce the peak temperature when scheduling a hard real-time periodic task set. We also develop three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extend our research from single-core to multi-core platforms: we investigate the energy estimation problem on multi-core platforms and develop a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform.
Finally, we conclude the dissertation with a discussion of future extensions of our research.
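
The kind of peak-temperature feasibility check described above can be sketched with a toy lumped RC thermal model; the cubic power law and all parameter values here are illustrative assumptions, not the dissertation's model.

```python
def simulate_peak_temp(speeds, dt=0.1, a=1.0, b=0.05, t_amb=0.0):
    """Lumped RC thermal model T' = a*P(v) - b*(T - T_amb), with dynamic
    power taken proportional to v**3 (illustrative parameters only)."""
    temp = peak = t_amb
    for v in speeds:
        temp += dt * (a * v ** 3 - b * (temp - t_amb))
        peak = max(peak, temp)
    return peak

# Feasibility check in the spirit of the abstract: does a candidate
# speed/voltage schedule stay below a peak-temperature constraint?
schedule = [0.8] * 2000                        # constant speed, long horizon
assert simulate_peak_temp(schedule) <= 10.25   # steady state a*v**3/b = 10.24
```

A DVS scheduler would evaluate candidate schedules against such a constraint, trading processing speed against the temperature it induces.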

Relevance: 30.00%

Abstract:

A wide range of non-destructive testing (NDT) methods for monitoring the health of concrete structures has been studied for several years. The recent rapid evolution of wireless sensor network (WSN) technologies has resulted in the development of sensing elements that can be embedded in concrete to monitor the health of infrastructure and to collect and report valuable related data. Such a monitoring system can decrease the high installation time and reduce the maintenance costs associated with wired monitoring systems. The monitoring sensors need to operate for a long period of time, but sensor batteries have a finite life span; hence, novel wireless powering methods must be devised. The optimization of wireless power transfer via Strongly Coupled Magnetic Resonance (SCMR) to sensors embedded in concrete is studied here. First, we analytically derive the optimal geometric parameters for transmission of power in air. This leads to the identification of the local and global optimization parameters and conditions, which were validated through electromagnetic simulations. Second, the optimum conditions were employed in a model for the propagation of energy through plain and reinforced concrete at different humidity conditions and frequencies, using an extended Debye model. This analysis, also validated through electromagnetic simulations, leads to the conclusion that SCMR can be used to efficiently power sensors in plain and reinforced concrete at different humidity levels and depths. The optimization of wireless power transmission via SCMR to Wearable and Implantable Medical Devices (WIMDs) is also explored. The optimum conditions from the analytical work were used in a model for the propagation of energy through different human tissues. Electromagnetic simulations show that SCMR can be used to efficiently transfer power to sensors in human tissue without overheating, which is important because excessive power might overheat the tissue.
Standard SCMR is sensitive to misalignment; both 2-loop and 3-loop SCMR configurations with misalignment-insensitive performance are therefore presented. Power transfer efficiencies above 50% were achieved over the complete misalignment range of 0°-90°, dramatically better than typical SCMR, whose efficiency falls below 10% in extreme misalignment topologies.

Relevance: 30.00%

Abstract:

Process systems design, operation, and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. With a scenario-based formulation, these problems lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD that includes multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality without explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary.
A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable-resource and fossil-fuel-based applications in process systems engineering show that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.

Relevance: 30.00%

Abstract:

This thesis builds a framework for evaluating downside risk from multivariate data via a special class of risk measures (RMs). The analysis is distinctive in avoiding strong assumptions about the data distribution and in its orientation towards the data most critical in risk management: those with asymmetries and heavy tails. At the same time, under typical assumptions such as ellipticity of the data's probability distribution, conformity with classical methods is shown. The constructed class of RMs is a multivariate generalization of the coherent distortion RMs, which possess valuable properties for a risk manager. The framework has two parts. The first contains new computational geometry methods for high-dimensional data; the developed algorithms demonstrate the computability of the geometric concepts used to construct the RMs, and these concepts bring visual intuition and simplify the interpretation of the RMs. The second part develops models for applying the framework to actual problems. The spectrum of applications ranges from robust portfolio selection to broader areas such as stochastic conic optimization with risk constraints and supervised machine learning.
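
In one dimension, the coherent distortion risk measures that the thesis generalizes can be sketched empirically as follows; the CVaR-style distortion function used here is a standard textbook example, not the thesis's multivariate construction.

```python
def distortion_risk(losses, g):
    """Empirical distortion risk measure: weight the sorted losses
    (largest first) by increments of the distortion function g."""
    xs = sorted(losses, reverse=True)
    n = len(xs)
    return sum((g((k + 1) / n) - g(k / n)) * x for k, x in enumerate(xs))

def cvar_distortion(u, alpha=0.25):
    """Concave distortion giving Expected Shortfall (CVaR) at level alpha;
    concavity of g is what makes the resulting measure coherent."""
    return min(u / alpha, 1.0)

losses = [1, 2, 3, 4, 5, 6, 7, 8]
# With alpha = 0.25 this is the mean of the worst 25% of the losses.
risk = distortion_risk(losses, cvar_distortion)
assert risk == 7.5
```

The identity distortion g(u) = u recovers the plain mean, illustrating how the choice of g interpolates between expectation and tail-focused measures.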

Relevance: 30.00%

Abstract:

The diagnosis of mixed genotype hepatitis C virus (HCV) infection is rare, and information on its incidence in the UK, where genotypes 1a and 3 are the most prevalent, is sparse. Considerable variation in the efficacy of direct-acting antivirals (DAAs) across the HCV genotypes has been documented, and the ability of DAAs to treat mixed genotype HCV infections remains unclear, with the possibility that genotype switching may occur. To estimate the prevalence of mixed genotype 1a/3 infections in Scotland, a cohort of 512 samples was compiled and screened using a genotype-specific nested PCR assay. Mixed genotype 1a/3 infections were found in 3.8% of the samples tested, with a significantly higher prevalence of 6.7% (p<0.05) in individuals diagnosed with genotype 3 infections than in those with genotype 1a infections (0.8%). Analysis of the samples using genotype-specific qPCR assays found that in two-thirds of the samples tested, the minor strain contributed <1% of the total viral load. The potential of deep sequencing methods for the diagnosis of mixed genotype infections was assessed using two pan-genotypic PCR assays, developed to target the E1-E2 and NS5B regions of the virus, that are compatible with the Illumina MiSeq platform. The E1-E2 assay detected 75% of the mixed genotype infections, proving more sensitive than the NS5B assay, which identified only 25% of the mixed infections. Studies of sequence data and linked patient records also identified significantly more neurological disorders in genotype 3 patients. Evidence of distinctive dinucleotide expression within the genotypes was also uncovered. Taken together, these findings raise interesting questions about the evolutionary history of the virus and indicate that there is still more to understand about the different genotypes. In an era where clinical medicine is increasingly personalised, the development of diagnostic methods for HCV that provide greater patient stratification is increasingly important.
This project has shown that sequence-based genotyping methods can be highly discriminatory and informative, and their use should be encouraged in diagnostic laboratories. Mixed genotype infections were challenging to identify and current deep sequencing methods were not as sensitive or cost-effective as Sanger-based approaches in this study. More research is needed to evaluate the clinical prognosis of patients with mixed genotype infection and to develop clinical guidelines on their treatment.

Relevance: 30.00%

Abstract:

Organizations and their environments are complex systems. Such systems are difficult to understand and to predict. Even so, prediction is a fundamental task for business management and for decision-making, which always involves risk. Classical prediction methods (among them linear regression, autoregressive moving average models, and exponential smoothing) rely on assumptions such as linearity and stability in order to remain mathematically and computationally tractable. The limitations of these methods, however, have been demonstrated in various ways. In recent decades, new prediction methods have therefore emerged that seek to embrace, rather than avoid, the complexity of organizational systems and their environments. Among them, the most promising are bio-inspired prediction methods (e.g., neural networks, genetic/evolutionary algorithms, and artificial immune systems). This article aims to provide an overview of the current and potential applications of bio-inspired prediction methods in management.
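
As a minimal illustration of the evolutionary prediction methods surveyed above (a toy sketch under assumed data and parameters, not from the article): a simple evolutionary search fits the two coefficients of a linear predictor by mutation and elitist selection.

```python
import random

def evolve_predictor(xs, ys, gens=200, pop=30, sigma=0.5, seed=0):
    """Evolve coefficients (a, b) of y ~ a*x + b by Gaussian mutation and
    elitist selection, shrinking the mutation step each generation."""
    rng = random.Random(seed)
    sse = lambda w: sum((w[0] * x + w[1] - y) ** 2 for x, y in zip(xs, ys))
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    for _ in range(gens):
        children = [[g + rng.gauss(0, sigma) for g in best]
                    for _ in range(pop)]
        challenger = min(children, key=sse)
        if sse(challenger) < sse(best):   # elitism: keep the incumbent
            best = challenger
        sigma *= 0.98                     # anneal the mutation strength
    return best

xs = list(range(10))
ys = [2 * x + 1 for x in xs]              # toy noiseless series y = 2x + 1
a, b = evolve_predictor(xs, ys)           # should approach a = 2, b = 1
```

Unlike the classical methods listed above, nothing here assumes linearity of the *search*: the same loop fits any parameterized predictor for which an error function can be evaluated.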