791 results for Interval optimization
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Purpose: The purpose of this study was to examine the influence of three different high-intensity interval training (HIT) regimens on endurance performance in highly trained endurance athletes. Methods: Before, and after 2 and 4 wk of training, 38 cyclists and triathletes (mean ± SD; age = 25 ± 6 yr; mass = 75 ± 7 kg; VO2peak = 64.5 ± 5.2 mL·kg⁻¹·min⁻¹) performed: 1) a progressive cycle test to measure peak oxygen consumption (VO2peak) and peak aerobic power output (PPO); 2) a time-to-exhaustion test (Tmax) at their VO2peak power output (Pmax); and 3) a 40-km time trial (TT40). Subjects were matched and assigned to one of four training groups (G1, N = 8, 8 × 60% of Tmax at Pmax, 1:2 work:recovery ratio; G2, N = 9, 8 × 60% of Tmax at Pmax, recovery at 65% HRmax; G3, N = 10, 12 × 30 s at 175% PPO, 4.5-min recovery; GCON, N = 11). In addition to G1, G2, and G3 performing HIT twice per week, all athletes maintained their regular low-intensity training throughout the experimental period. Results: All HIT groups improved TT40 performance (+4.4 to +5.8%) and PPO (+3.0 to +6.2%) significantly more than GCON (-0.9 to +1.1%; P < 0.05). Furthermore, G1 (+5.4%) and G2 (+8.1%) improved their VO2peak significantly more than GCON (+1.0%; P < 0.05). Conclusion: The present study has shown that when HIT incorporates Pmax as the interval intensity and 60% of Tmax as the interval duration, already highly trained cyclists can significantly improve their 40-km time trial performance. Moreover, the present data confirm prior research, in that repeated supramaximal HIT can significantly improve 40-km time trial performance.
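A minimal sketch, assuming hypothetical test results (Tmax = 300 s, Pmax = 400 W), of how a G1-style session from the protocol above could be written out; this is an illustration, not taken from the paper's methods.

```python
# Hypothetical example: prescribe a G1-style HIT session (8 intervals at Pmax,
# each 60% of Tmax long, 1:2 work:recovery). The Tmax and Pmax values are invented.

def prescribe_g1_session(t_max_s: float, p_max_w: float, reps: int = 8):
    """Return (work_s, recovery_s, intensity_w) for each interval of the session."""
    work = 0.60 * t_max_s
    recovery = 2.0 * work
    return [(work, recovery, p_max_w)] * reps

for i, (work, rec, power) in enumerate(prescribe_g1_session(300.0, 400.0), start=1):
    print(f"Interval {i}: {work:.0f} s at {power:.0f} W, then {rec:.0f} s recovery")
```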
Abstract:
In this paper we address an order processing optimization problem known as the minimization of open stacks problem (MOSP). We present an integer programming model, based on the existence of a perfect elimination scheme in interval graphs, which finds an optimal sequence for the customers' orders.
Abstract:
As one of the most competitive approaches to multi-objective optimization, evolutionary algorithms have been shown to obtain very good results for many real-world multi-objective problems. One of the issues that can affect the performance of these algorithms is uncertainty in the quality of the solutions, which is usually represented as noise in the objective values. Therefore, handling noisy objectives in evolutionary multi-objective optimization algorithms becomes very important and has been gaining more attention in recent years. In this paper we present the α-degree Pareto dominance relation for ordering the solutions in multi-objective optimization when the values of the objective functions are given as intervals. Based on this dominance relation, we propose an adaptation of the non-dominated sorting algorithm for ranking the solutions. This ranking method is then used in a standard multi-objective evolutionary algorithm and in a recently proposed multi-objective estimation of distribution algorithm based on joint variable-objective probabilistic modeling, and applied to a set of multi-objective problems with different levels of independent noise. The experimental results show that the proposed solution-ranking method approximates Pareto sets that are considerably better than those obtained with the dominance probability-based ranking method, which is one of the main methods for noise handling in multi-objective optimization.
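An illustrative sketch of the general idea of comparing interval-valued objectives, not the paper's formal α-degree definition: each objective is treated as uniformly distributed within its interval (a simplifying assumption), and x is said to α-dominate y when the probability that x is better is at least α in every objective.

```python
# Sketch only: Monte-Carlo comparison of interval-valued objectives (minimization),
# assuming uniform distributions within each interval.
import random
from typing import List, Tuple

Interval = Tuple[float, float]  # (lower bound, upper bound)

def prob_leq(a: Interval, b: Interval, samples: int = 10_000) -> float:
    """Estimate P(A <= B) for independent A ~ U(a), B ~ U(b)."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    if a_hi <= b_lo:
        return 1.0
    if b_hi <= a_lo:
        return 0.0
    hits = sum(random.uniform(a_lo, a_hi) <= random.uniform(b_lo, b_hi)
               for _ in range(samples))
    return hits / samples

def alpha_dominates(x: List[Interval], y: List[Interval], alpha: float = 0.8) -> bool:
    return all(prob_leq(xi, yi) >= alpha for xi, yi in zip(x, y))

# Two noisy bi-objective solutions given as intervals
x = [(0.10, 0.20), (1.0, 1.4)]
y = [(0.25, 0.40), (1.3, 1.9)]
print(alpha_dominates(x, y))   # expected: True for alpha = 0.8
```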
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn imply a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
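For readers unfamiliar with the classical greedy baseline that the interpolative and incremental methods accelerate, the hedged Python sketch below shows a generic greedy word-length search driven by Monte-Carlo error estimation. The toy datapath, error metric, and thresholds are illustrative assumptions and do not reflect HOPLITE's actual API.

```python
# Generic greedy word-length descent on a toy datapath y = a*b + a.
# All names, the error model and the error budget are illustrative.
import random

def quantize(value: float, frac_bits: int) -> float:
    """Round to a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** (-frac_bits)
    return round(value / step) * step

def output_error(frac_bits, n_samples=2000):
    """Monte-Carlo estimate of the mean absolute output error when both signals are quantized."""
    err = 0.0
    for _ in range(n_samples):
        a, b = random.uniform(-1, 1), random.uniform(-1, 1)
        qa, qb = quantize(a, frac_bits[0]), quantize(b, frac_bits[1])
        err += abs((a * b + a) - (qa * qb + qa))
    return err / n_samples

def greedy_wordlength(max_error=1e-3, start_bits=16):
    """Shrink each word-length while the error budget is still met."""
    bits = [start_bits, start_bits]
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            bits[i] -= 1                        # tentatively remove one bit
            if output_error(bits) <= max_error:
                improved = True                 # cheaper and still accurate enough
            else:
                bits[i] += 1                    # roll back
    return bits

print(greedy_wordlength())
```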
Abstract:
The concentration of hydrogen peroxide is an important parameter in the azo dye decoloration process through the use of advanced oxidation processes, particularly oxidation via UV/H2O2. It is pointed out that, above a certain concentration, hydrogen peroxide acts as a scavenger of hydroxyl radicals and the oxidizing power of the system therefore decreases. The determination of the process critical point (the maximum amount of hydrogen peroxide to be added) was performed through a "thorough mapping", or discretization, of the target region, founded on the maximization of an objective function (the pseudo-first-order reaction kinetics constant). The discretization of the operational region was carried out with a feedforward backpropagation neural model. The neural model obtained presented a remarkable correlation coefficient between real and predicted values of the absorbance variable, above 0.98. In the present work, the neural model had as its phenomenological basis the Acid Brown 75 dye decoloration process. The hydrogen peroxide addition critical point, represented by a value of the mass ratio (F) between the hydrogen peroxide mass and the dye mass, was established in the interval 50 < F < 60. (C) 2007 Elsevier B.V. All rights reserved.
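As a hedged illustration of the underlying objective, the snippet below estimates a pseudo-first-order rate constant k from ln(A0/A) = k*t for a few hypothetical F values and picks the F with the largest k. The data points are invented for illustration; the paper obtains the critical region 50 < F < 60 from a neural model of the real decoloration data, not from a toy fit like this one.

```python
# Estimate the pseudo-first-order rate constant k from absorbance decay and
# select the H2O2-to-dye mass ratio F that maximizes k. Data are hypothetical.
import math

def rate_constant(times_min, absorbances):
    """Least-squares slope of ln(A0/A) versus t."""
    a0 = absorbances[0]
    xs = times_min
    ys = [math.log(a0 / a) for a in absorbances]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Hypothetical decoloration curves for three mass ratios F
curves = {
    40: [1.00, 0.78, 0.61, 0.48],
    55: [1.00, 0.70, 0.49, 0.34],
    70: [1.00, 0.74, 0.55, 0.41],
}
times = [0.0, 10.0, 20.0, 30.0]
ks = {f: rate_constant(times, a) for f, a in curves.items()}
best_f = max(ks, key=ks.get)
print(ks, "-> critical point near F =", best_f)
```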
Abstract:
This paper proposes a particle swarm optimization (PSO) approach to support electricity producers in multiperiod optimal contract allocation. The producer's risk preference is stated by a utility function (U) expressing the tradeoff between the expectation and the variance of the return. The expected return and its variance are estimated from a forecasted scenario interval determined by a price range forecasting model developed by the authors, and a confidence level is associated with each forecasted scenario interval. The proposed model makes use of contracts with physical (spot and forward) and financial (options) settlement. PSO performance was evaluated by comparing it with a genetic algorithm-based approach. This model can be used by producers in deregulated electricity markets and can easily be adapted to load-serving entities and retailers, as well as to the use of other types of contracts.
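A minimal, generic PSO over a mean-variance utility gives a feel for the kind of search involved. The contract statistics, risk-aversion coefficient, and PSO constants in the Python sketch below are invented and do not correspond to the authors' model or data.

```python
# Tiny PSO maximizing a mean-variance utility U(w) = E[return] - lambda * Var[return]
# over normalized contract allocation weights. All numbers are illustrative.
import random

expected = [52.0, 48.0, 55.0]     # expected return per contract type (hypothetical)
variance = [9.0, 2.0, 16.0]       # return variance per contract type (hypothetical)
lam = 0.3                         # risk aversion (hypothetical)

def utility(w):
    total = sum(w)
    w = [x / total for x in w]                               # normalize to an allocation
    mean = sum(wi * ei for wi, ei in zip(w, expected))
    var = sum(wi * wi * vi for wi, vi in zip(w, variance))   # independence assumed
    return mean - lam * var

def pso(n_particles=30, iters=200, dim=3):
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=utility)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(1e-6, pos[i][d] + vel[i][d]))
            if utility(pos[i]) > utility(pbest[i]):
                pbest[i] = pos[i][:]
                if utility(pbest[i]) > utility(gbest):
                    gbest = pbest[i][:]
    return [x / sum(gbest) for x in gbest], utility(gbest)

print(pso())   # (allocation weights, utility)
```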
Abstract:
In this paper we address an order processing optimization problem known as the Minimization of Open Stacks Problem (MOSP). This problem consists of finding the best sequence for manufacturing the different products required by customers, in a setting where only one product can be made at a time. The objective is to minimize the maximum number of incomplete customer orders being processed simultaneously. We present an integer programming model, based on the existence of a perfect elimination order in interval graphs, which finds an optimal sequence for the customers' orders. Among other economic advantages, manufacturing the products in this optimal sequence reduces the amount of space needed to store incomplete orders.
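The quantity being minimized can be illustrated on a toy instance: the brute-force Python sketch below counts the maximum number of simultaneously open customer stacks for every product sequence of a small invented instance. The paper's integer programming model, of course, replaces this enumeration, which only scales to very small instances.

```python
# Brute-force illustration of the MOSP objective on a hypothetical instance.
from itertools import permutations

# orders[c] = set of products required by customer c (invented instance)
orders = {"c1": {1, 2}, "c2": {2, 3}, "c3": {3, 4}, "c4": {1, 4}}

def max_open_stacks(seq):
    """Maximum number of simultaneously open orders: an order's stack is open
    from the moment its first product is made until its last product is made."""
    pos = {p: i for i, p in enumerate(seq)}
    spans = [(min(pos[p] for p in req), max(pos[p] for p in req))
             for req in orders.values()]
    return max(sum(1 for lo, hi in spans if lo <= t <= hi) for t in range(len(seq)))

products = sorted({p for req in orders.values() for p in req})
best_seq = min(permutations(products), key=max_open_stacks)
print(best_seq, "->", max_open_stacks(best_seq), "open stacks")
```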
Abstract:
Optimization is concerned with finding the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem, while local search techniques, which are more widely used, try to find a locally optimal solution within an area of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering the sets of solutions of the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains) that is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we seek a convenient way to combine the advantages of CCSP branch-and-prune with local search for global optimization applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes output by the CCSP techniques, mainly using the steepest-descent technique with different characteristics such as penalty calculation and step length, and we implement two main local search algorithms. We use "Procure", a constraint reasoning and global optimization framework, to implement our techniques, and we present our results on a set of benchmarks.
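A minimal sketch of the combination, under assumed placeholder boxes, objective, and step rule (this is not the "Procure" API): run a box-constrained steepest descent from the centre of each pruned interval box and keep the best local optimum found.

```python
# Steepest descent restricted to each interval box left by a (hypothetical)
# branch-and-prune stage; the objective and boxes are placeholders.
def objective(x, y):
    return (x - 1.0) ** 2 + (y + 0.5) ** 2 + 0.1 * x * y

def gradient(x, y):
    return (2 * (x - 1.0) + 0.1 * y, 2 * (y + 0.5) + 0.1 * x)

def descend_in_box(box, step=0.05, iters=500):
    """Fixed-step steepest descent, projected back into the box at every iteration."""
    (x_lo, x_hi), (y_lo, y_hi) = box
    x, y = (x_lo + x_hi) / 2, (y_lo + y_hi) / 2      # start at the box centre
    for _ in range(iters):
        gx, gy = gradient(x, y)
        x = min(x_hi, max(x_lo, x - step * gx))
        y = min(y_hi, max(y_lo, y - step * gy))
    return (x, y), objective(x, y)

# Boxes as they might come out of the constraint-pruning stage (hypothetical)
pruned_boxes = [((-2.0, 0.0), (-1.0, 0.0)), ((0.5, 2.0), (-1.0, 0.5))]
best = min((descend_in_box(b) for b in pruned_boxes), key=lambda r: r[1])
print("best local optimum:", best)
```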
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
An economic-statistical model is developed for variable parameters (VP) X̄ charts, in which all design parameters vary adaptively, that is, each of the design parameters (sample size, sampling interval, and control-limit width) varies as a function of the most recent process information. The cost function due to controlling the process quality through a VP X̄ chart is derived. During the optimization of the cost function, constraints are imposed on the expected times to signal when the process is in and out of control, so that the required statistical properties can be assured. Through a numerical example, the proposed economic-statistical design approach for VP X̄ charts is compared to the economic design for VP X̄ charts and to the economic-statistical and economic designs for fixed parameters (FP) X̄ charts in terms of operating cost and expected times to signal. From this example, it is possible to assess the benefits provided by the proposed model. By varying some input parameters, their effect on the optimal cost and on the optimal values of the design parameters was also analysed.
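To make the "economic-statistical" idea concrete, the sketch below performs a heavily simplified grid search for a fixed-parameter X̄ chart: it minimizes a toy hourly cost subject to constraints on the in-control and out-of-control average times to signal. All cost coefficients, the process shift size, and the thresholds are illustrative assumptions; the paper's model handles the adaptive (VP) case.

```python
# Toy economic-statistical design of a fixed-parameter X-bar chart:
# minimize an hourly cost over (n, h, k) subject to time-to-signal constraints.
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def times_to_signal(n, h, k, shift=1.0):
    """In-control and out-of-control average times to signal (hours)."""
    alpha = 2.0 * (1.0 - norm_cdf(k))                          # false-alarm probability
    power = 1.0 - (norm_cdf(k - shift * math.sqrt(n))
                   - norm_cdf(-k - shift * math.sqrt(n)))      # detection probability
    return h / alpha, h / power

def hourly_cost(n, h, k):
    ats_in, ats_out = times_to_signal(n, h, k)
    sampling = (5.0 + 1.0 * n) / h          # fixed + per-unit sampling cost per hour
    false_alarms = 50.0 / ats_in            # false-alarm cost per hour
    poor_quality = 100.0 * ats_out / 8.0    # penalty for slow detection
    return sampling + false_alarms + poor_quality

best = None
for n in range(2, 11):
    for h in (0.25, 0.5, 1.0, 2.0):
        for k in (2.0, 2.5, 3.0, 3.5):
            ats_in, ats_out = times_to_signal(n, h, k)
            if ats_in < 200.0 or ats_out > 2.0:     # statistical constraints
                continue
            c = hourly_cost(n, h, k)
            if best is None or c < best[0]:
                best = (c, n, h, k)
print(best)   # (cost per hour, sample size, sampling interval, limit width)
```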
Abstract:
We are now living through a new period of lunar exploration. China, Japan and India are investing heavily in missions to the Moon, with the aim of eventually establishing manned bases on this satellite. These bases must be installed in the polar regions due to the apparent existence of water there. Therefore, the study of the feasibility of satellite constellations for navigation, control and communication regains importance. The Moon's gravitational potential and the resonant motions due to the proximity of the Earth, such as the Kozai-Lidov resonance, must be considered in addition to other perturbations of smaller magnitude. Usual satellite constellations provide, as a basic feature, continuous and global coverage of the Earth. With this goal, they are designed with the smallest possible number of objects able to perform a specific task, and this number is directly related to the altitude of the orbits and the visibility capabilities of the members of the constellation. However, the problem is different when the area to be covered is reduced to a given zone: the required number of space objects can be reduced and, depending on the mission requirements, it may not be necessary to provide continuous coverage. Taking into account the possibility of setting up a constellation that covers a specific region of the Moon on a non-continuous basis, in this study we seek an optimization criterion related to the time between visits. The propagation of the orbits of the objects in the constellation, in conjunction with the coverage constraints, provides information on the periods of time in which points of the surface are covered by a satellite and the time intervals in which they are not. We therefore minimize the time between visits, considering several sets of possible constellations and using genetic algorithms.
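The optimization criterion can be sketched independently of the orbit propagation: given the visibility windows of a candidate constellation over a target site, the longest gap between visits is the value a genetic algorithm would minimize over the constellation design variables. The windows in the Python example below are invented for illustration.

```python
# Compute the maximum revisit gap (hours) for a target site, given visibility windows.
def max_revisit_gap(windows, horizon):
    """windows: list of (start, end) visibility intervals, possibly overlapping."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))  # merge overlaps
        else:
            merged.append((start, end))
    if not merged:
        return horizon
    gaps = [merged[0][0]]                                          # wait before first visit
    gaps += [nxt[0] - cur[1] for cur, nxt in zip(merged, merged[1:])]
    gaps.append(horizon - merged[-1][1])                           # wait after last visit
    return max(gaps)

# Hypothetical passes of two satellites over a polar site during a 24 h horizon
windows = [(0.5, 1.2), (6.0, 6.8), (6.5, 7.3), (13.0, 13.9), (19.5, 20.2)]
print(max_revisit_gap(windows, horizon=24.0))   # longest wait between visits, in hours
```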
Abstract:
Electrical impedance tomography (EIT) is an imaging technique that attempts to reconstruct the impedance distribution inside an object from the impedances measured between electrodes placed on the object's surface. The EIT reconstruction problem can be approached as a nonlinear, nonconvex optimization problem in which one tries to maximize the match between a simulated impedance problem and the observed data. This nonlinear optimization problem is often ill-posed and not well suited to methods that evaluate derivatives of the objective function. It may be approached by simulated annealing (SA), but at a large computational cost due to the expensive evaluation of the objective function, which involves a full simulation of the impedance problem at each iteration. A variation of SA is proposed in which the objective function is evaluated only partially, while ensuring bounds on the behavior of the modified algorithm.
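The following Python sketch shows the general flavour of simulated annealing with partial objective evaluation, using a toy least-squares "forward problem" in place of an EIT simulation. It is a conceptual illustration under those assumptions, not the authors' algorithm.

```python
# Simulated annealing where each candidate is scored on a random subset of the
# "measurements", so a full (expensive) evaluation is avoided at every iteration.
import math
import random

random.seed(1)
true_x = [0.3, -1.2, 0.8]
# Toy "electrode measurements": linear observations of the unknown vector
A = [[random.gauss(0, 1) for _ in range(3)] for _ in range(60)]
b = [sum(a * x for a, x in zip(row, true_x)) for row in A]

def partial_misfit(x, fraction=0.2):
    """Mean squared misfit computed on a random subset of the measurements."""
    rows = random.sample(range(len(A)), max(1, int(fraction * len(A))))
    return sum((sum(a * xi for a, xi in zip(A[i], x)) - b[i]) ** 2 for i in rows) / len(rows)

def annealing(iters=5000, temp0=1.0):
    x = [0.0, 0.0, 0.0]
    cost = partial_misfit(x)
    for t in range(iters):
        temp = temp0 * (1.0 - t / iters) + 1e-6        # simple linear cooling
        cand = [xi + random.gauss(0, 0.1) for xi in x]
        cand_cost = partial_misfit(cand)
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            x, cost = cand, cand_cost
    return x

print(annealing())   # should land near true_x despite the noisy partial evaluations
```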