469 results for optimality


Relevance: 10.00%

Abstract:

The Solver Add-in of Microsoft Excel is widely used in courses on Operations Research and in industrial applications. Since the 2010 version of Microsoft Excel, the Solver Add-in has included a so-called evolutionary solver. We analyze how this metaheuristic can be applied to the resource-constrained project scheduling problem (RCPSP). We present an implementation of a schedule-generation scheme in a spreadsheet which, combined with the evolutionary solver, can be used to devise good feasible schedules. Our computational results indicate that, using this approach, non-trivial instances of the RCPSP can be solved to optimality or near-optimality.
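The paper implements the schedule-generation scheme in a spreadsheet and lets the evolutionary solver search over activity priorities; the following is only a minimal Python sketch of a serial schedule-generation scheme on a made-up instance (activity names, durations and demands are hypothetical), decoding a priority list into a feasible schedule.

```python
# Illustrative sketch (not the paper's spreadsheet implementation): a serial
# schedule-generation scheme that decodes a precedence-feasible priority list
# into a feasible schedule for a toy RCPSP instance with one renewable resource.
def serial_sgs(durations, demands, precedences, capacity, priority_list):
    """Schedule activities in priority order at the earliest precedence- and
    resource-feasible start time. Assumes every demand <= capacity."""
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)          # resource usage per time period
    start, finish = {}, {}
    for act in priority_list:
        # earliest start respecting precedence constraints
        est = max((finish[p] for p in precedences.get(act, [])), default=0)
        t = est
        while True:
            window = range(t, t + durations[act])
            if all(usage[tau] + demands[act] <= capacity for tau in window):
                break
            t += 1
        start[act], finish[act] = t, t + durations[act]
        for tau in range(t, finish[act]):
            usage[tau] += demands[act]
    return start, max(finish.values(), default=0)

# Toy instance: 4 activities, one renewable resource with capacity 4.
durations   = {"A": 3, "B": 2, "C": 2, "D": 1}
demands     = {"A": 2, "B": 3, "C": 2, "D": 1}
precedences = {"C": ["A"], "D": ["B", "C"]}   # activity: list of predecessors
starts, makespan = serial_sgs(durations, demands, precedences, 4, ["A", "B", "C", "D"])
print(starts, makespan)
```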

Relevance: 10.00%

Abstract:

We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L²-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.
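For orientation, exponential convergence bounds of this type for hp-discretizations in three dimensions are usually stated in terms of the total number of degrees of freedom N; a typical shape is given below (illustrative notation, not quoted from the paper).

```latex
% Typical shape of hp exponential convergence bounds in three dimensions
% (illustrative; C, b > 0 depend on the analytic regularity of u, the geometric
% grading factor sigma and the slope mu of the degree vector):
\| u - u_{\mathrm{DG}} \|_{\mathrm{DG}} \;\le\; C \exp\!\bigl(-b\,\sqrt[5]{N}\bigr),
\qquad N = \text{number of degrees of freedom of the dG space.}
```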

Relevance: 10.00%

Abstract:

We review and extend the core literature on international transfer price manipulation to avoid or evade taxes. Under negotiated transfer pricing with a viable bargaining structure, including performance evaluation disconnected from the transfer price, divisions voluntarily exchange accurate information to obtain firm-wide optimality, a result that does not depend on the divisions refraining from exercising internal market power. For intangible licenses, a larger optimal profit shift for a given tax rate change strengthens the incentives for transfer pricing abuse. In practice, an intangible's arm's length range is viewed as a guideline, a context in which incentives for abuse materialize. Transfer pricing for intangibles therefore warrants greater tax authority scrutiny.

Relevance: 10.00%

Abstract:

An image processing observational technique for the stereoscopic reconstruction of the wave form of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired wave form is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with the other elements presented, reconstructing the surface and enforcing constraints on experimental stereo data, and demonstrating the improvement in the estimation of the observed ocean surface.
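Schematically, the variational formulation described above can be summarized as follows; the symbols (Z for the unknown wave form, weights α and β) are illustrative and not taken from the paper.

```latex
% Schematic cost functional: photometric data term, smoothness prior, and a
% weak statistical constraint enforcing the chosen quasi-Gaussian wave law.
E(Z) \;=\; E_{\mathrm{data}}(Z;\, I_1, I_2)
       \;+\; \alpha\, E_{\mathrm{smooth}}(Z)
       \;+\; \beta\, E_{\mathrm{stat}}(Z), \qquad \alpha,\beta > 0,
% minimized by gradient descent and multigrid applied to the necessary
% optimality (Euler-Lagrange) equations  \delta E / \delta Z = 0.
```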

Relevance: 10.00%

Abstract:

We present a remote sensing observational method for the measurement of the spatio-temporal dynamics of ocean waves. Variational techniques are used to recover a coherent space-time reconstruction of oceanic sea states given stereo video imagery. The stereoscopic reconstruction problem is expressed in a variational optimization framework. There, we design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal regularizers. A nested iterative scheme is devised to numerically solve, via 3-D multigrid methods, the system of partial differential equations resulting from the optimality condition of the energy functional. The output of our method is the coherent, simultaneous estimation of the wave surface height and radiance at multiple snapshots. We demonstrate our algorithm on real data collected offshore. Statistical and spectral analyses are performed, and a comparison with an existing sequential method is presented.
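A minimal numerical sketch of this kind of space-time smoothing is given below, assuming a quadratic data term as a stand-in for the photometric observations and plain gradient descent instead of the nested multigrid scheme; all data and weights are synthetic.

```python
# Minimal illustrative sketch: gradient descent on a discretized space-time
# energy combining a data term with spatial and temporal smoothness regularizers,
# mirroring the structure (not the implementation) of the functional above.
import numpy as np

def laplacian(z):
    """5-point spatial Laplacian with replicated borders."""
    zp = np.pad(z, 1, mode="edge")
    return zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:] - 4.0 * z

def descend(observed, alpha=0.1, beta=0.1, step=0.2, iters=500):
    """observed: (T, H, W) noisy height observations standing in for the
    photometric term; returns a spatio-temporally smoothed height sequence."""
    z = observed.copy()
    for _ in range(iters):
        grad = z - observed                       # gradient of 0.5*||z - obs||^2
        grad -= alpha * np.stack([laplacian(z[t]) for t in range(z.shape[0])])
        zt = np.zeros_like(z)                     # temporal second difference
        zt[1:-1] = z[:-2] - 2.0 * z[1:-1] + z[2:]
        grad -= beta * zt
        z -= step * grad
    return z

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 16))[:, None, None] * np.ones((16, 32, 32))
smoothed = descend(truth + 0.3 * rng.standard_normal(truth.shape))
print(np.abs(smoothed - truth).mean())
```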

Relevance: 10.00%

Abstract:

The aim of this work is to settle a question arising in average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
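The computational core can be stated compactly as a left-inverse condition; the sizes and pencil coefficients below are illustrative notation rather than the paper's.

```latex
% Oversampled setting: the polynomial matrix G(z) collecting the average-sampling
% data is tall (s x r with s > r); compactly supported (FIR) reconstruction
% functions correspond to a polynomial left inverse H(z):
H(z)\, G(z) \;=\; I_r,
\qquad
G(z) \;=\; G_0 + G_1 z^{-1}
\quad \text{(a matrix pencil for a suitable choice of the sampling period).}
```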

Relevance: 10.00%

Abstract:

We develop a novel remote sensing technique for the observation of waves on the ocean surface. Our method infers the 3-D waveform and radiance of oceanic sea states via a variational stereo imagery formulation. In this setting, the shape and radiance of the wave surface are given by minimizers of a composite energy functional that combines a photometric matching term with regularization terms involving the smoothness of the unknowns. The desired ocean surface shape and radiance are the solution of a system of coupled partial differential equations derived from the optimality conditions of the energy functional. The proposed method is naturally extended to study the spatiotemporal dynamics of ocean waves and applied to three sets of stereo video data. Statistical and spectral analyses are carried out. Our results provide evidence that the observed omnidirectional wavenumber spectrum S(k) decays as k^(-2.5), in agreement with Zakharov's theory (1999). Furthermore, the 3-D spectrum of the reconstructed wave surface is exploited to estimate wave dispersion and currents.
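As a small illustration of the kind of spectral analysis mentioned above, the following sketch fits a power-law exponent to a synthetic omnidirectional spectrum; the data are made up, whereas the paper's spectra come from the reconstructed wave surfaces.

```python
# Illustrative check: estimate the decay exponent of a spectrum S(k) ~ k^p by a
# least-squares fit of log S against log k over an assumed inertial range.
import numpy as np

k = np.linspace(0.1, 10.0, 200)                    # wavenumbers, synthetic
rng = np.random.default_rng(1)
S = 0.05 * k**-2.5 * np.exp(0.1 * rng.standard_normal(k.size))  # noisy k^-2.5 decay

fit_range = (k > 0.5) & (k < 5.0)                  # restrict to a mid-range band
slope, intercept = np.polyfit(np.log(k[fit_range]), np.log(S[fit_range]), 1)
print(f"estimated spectral slope: {slope:.2f} (expected near -2.5)")
```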

Relevance: 10.00%

Abstract:

Many existing engineering works model the statistical characteristics of the entities under study as normal distributions. These models are eventually used for decision making, requiring in practice the definition of the classification region corresponding to the desired confidence level. Surprisingly enough, however, a great number of computer vision works using multidimensional normal models leave unspecified or fail to establish correct confidence regions due to misconceptions about the properties of Gaussian functions or to wrong analogies with the unidimensional case. The resulting regions exhibit deviations that can be unacceptable in high-dimensional models. Here we provide a comprehensive derivation of the optimal confidence regions for multivariate normal distributions of arbitrary dimensionality. To this end, we first derive the condition for region optimality of general continuous multidimensional distributions, and then we apply it to the widespread case of the normal probability density function. The obtained results are used to analyze the confidence error incurred by previous works related to vision research, showing that the deviations caused by wrong regions may become unacceptable as dimensionality increases. To support the theoretical analysis, a quantitative example in the context of moving object detection by means of background modeling is given.
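A minimal sketch of the standard region test for a d-dimensional normal, using the chi-square quantile rather than a one-dimensional z-score, is given below; the numbers are illustrative.

```python
# The level-p region of a d-dimensional normal is the Mahalanobis ellipsoid whose
# squared radius is the chi-square quantile with d degrees of freedom -- not the
# 1-D z-score. This is the common pitfall discussed in the abstract above.
import numpy as np
from scipy.stats import chi2

def in_confidence_region(x, mean, cov, level=0.95):
    """True if x lies inside the smallest-volume level-p region."""
    d = len(mean)
    diff = np.asarray(x) - np.asarray(mean)
    mahal_sq = diff @ np.linalg.solve(cov, diff)
    return mahal_sq <= chi2.ppf(level, df=d)

mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
print(in_confidence_region([1.0, 1.0], mu, Sigma, level=0.95))
# The squared radius grows with dimension: ~3.84 (d=1), ~5.99 (d=2), ~18.3 (d=10).
print(chi2.ppf(0.95, df=1), chi2.ppf(0.95, df=2), chi2.ppf(0.95, df=10))
```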

Relevance: 10.00%

Abstract:

This paper presents an ant colony optimization (ACO) algorithm to sequence mixed-model assembly lines while considering the inventory and the replenishment of components. This is an NP-hard problem that cannot be solved to optimality by exact methods as the size of the problem grows. Groups of specialized ants are implemented to solve the different parts of the problem, so that each part is addressed separately. Different types of pheromone structures are created to identify good car sequences and good routes for the component-replenishment vehicle. The contribution of this paper is the collaborative ACO approach to the mixed-model assembly line and the replenishment of components, and the joint solution of both problems.
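For orientation, the sketch below shows a bare-bones ACO loop for a generic sequencing problem with a toy cost function and parameters; the paper's specialized ant groups and the additional pheromone structures for the replenishment routes are not reproduced here.

```python
# Minimal ant-colony sketch for a generic sequencing problem (illustrative only).
import random

def aco_sequence(n_items, cost_of, n_ants=20, n_iter=100, rho=0.1, q=1.0):
    """Build sequences probabilistically from a position-item pheromone matrix,
    evaporate, and reinforce the best sequence of each iteration."""
    tau = [[1.0] * n_items for _ in range(n_items)]      # tau[position][item]
    best_seq, best_cost = None, float("inf")
    for _ in range(n_iter):
        iter_best, iter_cost = None, float("inf")
        for _ in range(n_ants):
            remaining, seq = list(range(n_items)), []
            for pos in range(n_items):
                weights = [tau[pos][i] for i in remaining]
                item = random.choices(remaining, weights=weights)[0]
                seq.append(item)
                remaining.remove(item)
            c = cost_of(seq)
            if c < iter_cost:
                iter_best, iter_cost = seq, c
        if iter_cost < best_cost:
            best_seq, best_cost = iter_best, iter_cost
        for pos in range(n_items):                        # evaporation + deposit
            for i in range(n_items):
                tau[pos][i] *= (1.0 - rho)
            tau[pos][iter_best[pos]] += q / (1.0 + iter_cost)
    return best_seq, best_cost

# Toy cost: penalize placing two "high-demand" models next to each other.
high = {0, 2, 4}
cost = lambda s: sum(1 for a, b in zip(s, s[1:]) if a in high and b in high)
print(aco_sequence(6, cost))
```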

Relevance: 10.00%

Abstract:

Multigroup diffusion codes for three-dimensional LWR core analysis use as input data pre-generated homogenized few-group cross sections and discontinuity factors for certain combinations of state variables, such as temperatures or densities. The simplest way of compiling those data is tabulated libraries, where a grid covering the domain of state variables is defined and the homogenized cross sections are computed at the grid points. Then, during the core calculation, an interpolation algorithm is used to compute the cross sections from the table values. Since interpolation errors depend on the distance between the grid points, a certain degree of mesh refinement is required to reach a target accuracy, which can lead to large data storage volumes and a large number of lattice transport calculations. In this paper, a simple and effective procedure to optimize the distribution of grid points for tabulated libraries is presented. Optimality is considered in the sense of building a non-uniform point distribution with the minimum number of grid points for each state variable satisfying a given target accuracy in k-effective. The procedure consists of determining the sensitivity coefficients of k-effective to the cross sections using perturbation theory, and of estimating the interpolation errors committed with different mesh steps for each state variable. These results make it possible to evaluate the influence of the interpolation errors of each cross section on k-effective for any combination of state variables, and to estimate the optimal distance between grid points.
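The sketch below illustrates the spirit of such a procedure on a single state variable, with a made-up cross-section shape and sensitivity coefficient; it is not the paper's algorithm, only an illustration of refining the grid step until the estimated k-effective error meets a target.

```python
# Illustrative sketch (hypothetical data): estimate the mid-interval error of
# linear interpolation for a candidate grid step, and halve the step until the
# induced k-effective error (sensitivity x interpolation error) meets a target.
import numpy as np

def max_interp_error(f, lo, hi, step):
    """Largest mid-interval error of piecewise-linear interpolation of f."""
    grid = np.arange(lo, hi + step, step)
    mids = 0.5 * (grid[:-1] + grid[1:])
    lin = 0.5 * (f(grid[:-1]) + f(grid[1:]))
    return np.max(np.abs(lin - f(mids)))

def choose_step(f, lo, hi, sensitivity, target_dk, start_step):
    step = start_step
    while sensitivity * max_interp_error(f, lo, hi, step) > target_dk:
        step *= 0.5                  # halve the spacing until accurate enough
    return step

# Hypothetical cross section versus moderator density (arbitrary units).
xs = lambda rho: 0.03 + 0.012 * np.sqrt(rho)
step = choose_step(xs, lo=0.2, hi=1.0, sensitivity=2.0, target_dk=1e-5, start_step=0.4)
print(f"grid step meeting the target: {step}")
```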

Relevance: 10.00%

Abstract:

Dynamic and Partial Reconfiguration (DPR) allows a system to modify certain parts of itself at run-time. This feature gives rise to the capability of evolution: changing parts of the configuration according to the online evaluation of performance or other parameters. The evolution is achieved through a bio-inspired model in which the features of the system are identified as genes. The evolution need not pursue a single objective; in this work, power consumption is taken into consideration together with the quality of filtering of a noisy image, which is taken as the measure of performance. Pareto optimality is applied to the evolutionary process in order to find a representative set of solutions that are optimal with respect to performance and power consumption. The main contributions of this paper are the implementation of an evolvable system on a low-power Spartan-6 FPGA included in a Wireless Sensor Network node and, by making a real measure of power consumption available at run-time, the achievement of multi-objective evolution, which yields different optimal configurations; the one selected will depend on the relative “weights” given to performance and power consumption.
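The Pareto test underlying such a multi-objective selection can be sketched as follows, with hypothetical error/power figures.

```python
# Sketch of the Pareto-optimality test for candidate configurations ranked by two
# objectives measured at run time: filtering error and power, both minimized.
def pareto_front(candidates):
    """Return the non-dominated candidates: no other candidate is at least as
    good in both objectives and strictly better in one."""
    front = []
    for c in candidates:
        dominated = any(
            o["error"] <= c["error"] and o["power"] <= c["power"]
            and (o["error"] < c["error"] or o["power"] < c["power"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

configs = [                       # made-up (error, power) pairs for illustration
    {"id": 0, "error": 0.12, "power": 35.0},
    {"id": 1, "error": 0.09, "power": 42.0},
    {"id": 2, "error": 0.15, "power": 30.0},
    {"id": 3, "error": 0.10, "power": 45.0},   # dominated by id 1
]
print([c["id"] for c in pareto_front(configs)])   # -> [0, 1, 2]
```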

Relevance: 10.00%

Abstract:

This paper presents a registration method for images with global illumination variations. The method is based on a joint iterative optimization (geometric and photometric) of the L1 norm of the intensity error. Two strategies are compared to directly find the appropriate intensity transformation within each iteration: histogram specification and the solution obtained by analyzing the necessary optimality conditions. Such strategies reduce the search space of the joint optimization to that of the geometric transformation between the images.
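Histogram specification, one of the two strategies named above, can be sketched as follows; this is a generic CDF-matching implementation, and the paper's variant may differ in detail.

```python
# Sketch of histogram specification: remap source intensities so that their
# empirical distribution matches the target's, via quantile (CDF) matching.
import numpy as np

def histogram_specification(source, target):
    """Return source remapped so that its intensity histogram matches target's."""
    s_values, s_idx, s_counts = np.unique(source.ravel(), return_inverse=True,
                                          return_counts=True)
    t_values, t_counts = np.unique(target.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / target.size
    mapped = np.interp(s_cdf, t_cdf, t_values)    # match quantiles
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
src = rng.integers(0, 128, size=(64, 64)).astype(float)     # darker image
dst = rng.integers(64, 256, size=(64, 64)).astype(float)    # brighter image
out = histogram_specification(src, dst)
print(src.mean(), dst.mean(), out.mean())
```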

Relevance: 10.00%

Abstract:

This PhD dissertation is framed within the emerging fields of Reverse Logistics and Closed-Loop Supply Chain (CLSC) management. This subarea of supply chain management has gained researchers' and practitioners' attention over the last 15 years and has become a fully recognized subdiscipline of the Operations Management field. More specifically, among all the activities included within the CLSC area, this dissertation focuses on direct reuse. The main contribution of this dissertation to current knowledge is twofold. First, a framework for the so-called reuse CLSC is developed. This conceptual model is grounded in a set of six case studies conducted by the author in real industrial settings. The model has also been contrasted with the existing literature and with academic and professional experts on the topic. The framework encompasses four building blocks. In the first block, a typology for reusable articles is put forward, distinguishing between Returnable Transport Items (RTI), Reusable Packaging Materials (RPM), and Reusable Products (RP). In the second block, the common characteristics that render reuse CLSCs difficult to manage from a logistical standpoint are identified, namely fleet shrinkage, significant investment and limited visibility. In the third block, the main problems arising in the management of reuse CLSCs are analyzed: (1) defining the fleet size, (2) controlling cycle time and promoting article rotation, (3) controlling the return rate and preventing shrinkage, (4) defining purchase policies for new articles, (5) planning and controlling reconditioning activities, and (6) balancing inventory between depots. Finally, in the fourth block some solutions to those issues are developed. First, problems (2) and (3) are addressed through the comparative analysis of alternative strategies for controlling cycle time and return rate. Second, a methodology for calculating the required fleet size is elaborated (problem (1)); this methodology is valid for different configurations of the physical flows in the reuse CLSC. Likewise, some directions are pointed out for the further development of a similar method for defining purchase policies for new articles (problem (4)). The second main contribution of this dissertation is embedded in the solutions part (block 4) of the conceptual framework and comprises a two-level decision problem integrating two mixed integer linear programming (MILP) models that have been formulated and solved to optimality using AIMMS as the modeling language, CPLEX as the solver, and an Excel spreadsheet for data input and output presentation. The results obtained are analyzed in order to measure, in a client-supplier system, the economic impact of two alternative control strategies (recovery policies) in the context of reuse. In addition, the models support decision-making regarding the selection of the appropriate recovery policy according to the characteristics of the demand pattern and the structure of the relevant costs in the system. The triangulation of methods used in this thesis has made it possible to address the same research topic with different approaches, thus strengthening the robustness of the results obtained.
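As a toy illustration of the fleet-sizing question (problem (1)), and explicitly not the dissertation's methodology or its MILP models, a Little's-law style back-of-envelope estimate might look like this; all figures are hypothetical.

```python
# Toy sketch, NOT the dissertation's AIMMS/CPLEX models: a Little's-law style
# estimate of the fleet size needed for reusable articles, under assumed issue
# rate, cycle time, per-cycle loss rate and safety margin.
def required_fleet(issues_per_day, cycle_time_days, loss_per_cycle=0.02,
                   safety_factor=1.10):
    """Articles tied up in the loop ~ issue rate x cycle time (Little's law),
    inflated for shrinkage and for variability in returns."""
    in_circulation = issues_per_day * cycle_time_days
    return int(round(in_circulation * (1.0 + loss_per_cycle) * safety_factor))

print(required_fleet(issues_per_day=120, cycle_time_days=18))   # -> 2424
```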

Relevance: 10.00%

Abstract:

The sizing of a water distribution network involves a set of variables with very particular characteristics, since they are discrete and standardized and therefore evolve in steps (the pipe pressure rating, the pipe diameter and the pipe cost). Moreover, the design pressure of the network is a direct function of the pressure at the network head. During optimization by dynamic programming (the Granados method), the head pressure is gradually reduced at each stage of the process, and the design pressure evolves with it; this in turn produces discrete jumps in the pressure rating of some stretches, and hence in their cost and in their gradient of change (the unit cost of reducing the head losses). This doctoral thesis analyses whether these discrete changes in the gradient of change of some stretches during a stage, caused by the evolution of the head pressure of the network, generate interferences that alter the sequential process of dynamic programming, that is to say, whether the process remains compatible with Bellman's Principle of Optimality, which requires the sequence to follow a route of optimal policies in which past decisions do not condition the remaining ones. The modification of the gradient of change during a stage is called a mutation; a mutation is active when it affects the optimal stretch (the one selected to be changed in that iteration), modifying the conditions of the transaction, and passive when it does not alter that selection. The analysis distinguishes between mutations of the gradient of change of the optimal stretches (which can only arise within the paths that contain them) and the effects that the change of pressure rating produces on the rest of the stretches of the network (including those located downstream of nodes with zero pressure slack) and on the iterative mechanism, studying the compatibility of this phenomenon with Bellman's principle.

The investigation highlights the robustness that the sequential process of the Granados method derives from the fact that the gradient of change always increases as the optimum is approached: the marginal cost of the reduction in head losses achieved in one iteration is always higher than that of the preceding iteration, so loops are avoided in the optimization sequence. The study also reviews the practical constraints imposed on the sizing process, including some that are fully integrated in engineering practice but have rarely been taken into account in previous research, such as the telescopic arrangement of the network (diameters reordered from largest to smallest from the head to the tail of the network) and the use of a single diameter per stretch instead of two contiguous diameters sharing it (with exceptions for very long stretches or other very specific situations). The thesis closes with a chapter of conclusions, contributions and recommendations of practical value for engineering, among them the soundness of the sequential method, the limited significance of the mutations of the gradient of change and the ways in which they can be circumvented, the harmlessness of passive mutations, and the compliance of the whole optimization process with Bellman's principle: the sequence has proved capable of self-correcting the mutations that may arise and of reaching an optimal distribution even when the principle is not strictly fulfilled at an intermediate step. Additionally, insight into the causes of the mutation process is provided and practical rules to avoid it are given, improving the current definition and use of the Granados method.
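Using illustrative notation (not quoted from the thesis), the gradient of change of a stretch can be written as the marginal cost of the head-loss reduction obtained by moving to the next larger standard diameter:

```latex
% Gradient of change of stretch i when its diameter is upgraded from D to the
% next standard size D+ (cost increase per metre of head loss recovered):
G_i \;=\; \frac{C_i(D^{+}) - C_i(D)}{\Delta h_i(D) - \Delta h_i(D^{+})},
% each iteration upgrades a stretch with the smallest current G_i, so G is
% non-decreasing as the head pressure is lowered towards the optimum.
```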

Relevance: 10.00%

Abstract:

The aim of this thesis is the characterization of thermal generation representative of the units actually in operation, in order to model and simulate them within a test power network and to carry out economic-environmental multiobjective optimization studies. First, the current energy and electricity context is analysed, focusing on the Iberian peninsular system, where fuel-oil plants have disappeared and only combined-cycle units and coal plants of different ranks remain. An analysis is then made of the main environmental impacts of combustion-based power plants, represented above all by their CO2, SO2 and NOx emissions, of the measures for their control and mitigation, and of the applicable regulations. Next, from the fuel characteristics and the heat-rate data, the thermal units are characterized through the relevant functions that define their energy, economic and environmental behaviour, expressed as hourly output functions depending on load; the possibility of denitrification and desulfurization is taken into account. Since the objective functions are multiple and in conflict with one another, multiobjective methods are used, which identify the set of optimal points or Pareto front: taking any of its solutions, no other solution improves one objective without worsening another. Several multiobjective optimization methods were analysed, and the ε-constraint technique was selected, since it can find non-convex fronts and the strict Pareto optimality of its solutions can be verified. A balanced mix of anthracite, domestic and imported bituminous coal, lignite and combined-cycle plants was integrated into the IEEE-57 test network, which can accommodate seven units without unduly distorting the actual rated powers of the plants, and an AC optimal power flow with the integrated multiobjective method was programmed in Matlab. The Pareto fronts of the combinations of cost with each of the three types of emission, as well as of the four objectives together, are identified, yielding the optimal system costs over the whole range of emissions. The analysis quantifies how much it costs the system to reduce an additional tonne of any type of emission by shifting towards cleaner generation mixes. The points found guarantee that, for given emission levels, the cost cannot be improved, and that at a given cost the system's emissions cannot be reduced further. It is also shown how the Pareto fronts can be used to devise optimal production strategies in response to hourly load changes.
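For reference, the ε-constraint scheme referred to above has the generic form below; the notation is illustrative, and which objective is minimized and which are constrained is a modelling choice.

```latex
% Generic epsilon-constraint formulation: minimize one objective (here the
% operating cost) over the AC power-flow feasible set X, with the emission
% objectives bounded by sweepable limits that trace out the Pareto front.
\min_{x \in X} \; f_{\mathrm{cost}}(x)
\quad \text{s.t.} \quad
f_{\mathrm{CO_2}}(x) \le \varepsilon_1,\qquad
f_{\mathrm{SO_2}}(x) \le \varepsilon_2,\qquad
f_{\mathrm{NO_x}}(x) \le \varepsilon_3 .
```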