930 results for abstract optimization problems


Relevance:

30.00%

Publisher:

Abstract:

For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided. Copyright © 2010 Royal Meteorological Society and Crown Copyright.

Relevance:

30.00%

Publisher:

Abstract:

In The Conduct of Inquiry in International Relations, Patrick Jackson situates methodologies in International Relations in relation to their underlying philosophical assumptions. One of his aims is to map International Relations debates in a way that ‘capture[s] current controversies’ (p. 40). This ambition is overstated: whilst Jackson’s typology is useful as a clarificatory tool, (re)classifying existing scholarship in International Relations is more problematic. One problem with Jackson’s approach is that he tends to run together the philosophical assumptions which decisively differentiate his methodologies (by stipulating a distinctive warrant for knowledge claims) and the explanatory strategies that are employed to generate such knowledge claims, suggesting that the latter are entailed by the former. In fact, the explanatory strategies which Jackson associates with each methodology reflect conventional practice in International Relations just as much as they reflect philosophical assumptions. This makes it more difficult to identify each methodology at work than Jackson implies. I illustrate this point through a critical analysis of Jackson’s controversial reclassification of Waltz as an analyticist, showing that whilst Jackson’s typology helps to expose inconsistencies in Waltz’s approach, it does not fully support the proposed reclassification. The conventional aspect of methodologies in International Relations also raises questions about the limits of Jackson’s ‘engaged pluralism’.

Relevance:

30.00%

Publisher:

Abstract:

As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e., identifying the underlying model behaviour for a number of different purposes such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F<2>) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
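
For reference, the binary pattern measure referred to above is computed from the wet/dry contingency counts of model and observation; a minimal statement in the conventional notation (the symbols A, B and C are not defined in this abstract and are used here only for illustration) is:

```latex
% Critical Success Index for flood extent, in conventional contingency notation:
%   A = cells flooded in both model and observation (hits)
%   B = cells flooded in the model only (false alarms, overprediction)
%   C = cells flooded in the observation only (misses, underprediction)
F = \frac{A}{A + B + C}, \qquad 0 \le F \le 1, \quad F = 1 \text{ for a perfect fit.}
```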

Relevance:

30.00%

Publisher:

Abstract:

Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply in the form of a nonlinear least squares optimization problem. However, the practical solution of the problem in large systems requires many careful choices to be made in the implementation. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
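
As background, the nonlinear least squares problem mentioned here is conventionally written as the minimisation of a cost function of the following standard form (the notation is the usual one for variational assimilation rather than anything specific to this article):

```latex
% Standard variational assimilation cost function:
%   x_b : background (prior) state,  B : background error covariance
%   y   : observations,  H : observation operator,  R : observation error covariance
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr)^{\mathsf{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr),
\qquad \mathbf{x}_a = \arg\min_{\mathbf{x}} J(\mathbf{x}).
```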

Relevance:

30.00%

Publisher:

Abstract:

We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS), where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the criteria of smallest training error and smallest norm of the output weights. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
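
To make the KKT-based training step concrete, the following is a minimal real-valued sketch of a kernel ELM classifier, which the complex-valued formulation extends to complex RKHS kernels via Wirtinger calculus; the RBF kernel, the regularization parameter C and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y (illustrative choice)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_elm_fit(X, T, C=10.0, gamma=1.0):
    """Solve the ELM dual problem: alpha = (I/C + K)^(-1) T.

    X : (n, d) training inputs, T : (n, m) one-hot class targets.
    The KKT conditions of the equality-constrained ELM (smallest training error
    plus smallest output-weight norm) reduce to this single linear system."""
    K = rbf_kernel(X, X, gamma)
    n = K.shape[0]
    return np.linalg.solve(np.eye(n) / C + K, T)

def kernel_elm_predict(X_new, X, alpha, gamma=1.0):
    """Class scores for new points; argmax gives the predicted class."""
    scores = rbf_kernel(X_new, X, gamma) @ alpha
    return scores.argmax(axis=1)
```

The single linear solve in the fitting step is what gives ELM-type classifiers their low training cost relative to SVMs, as the abstract notes.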

Relevance:

30.00%

Publisher:

Abstract:

The effects of several fat replacement levels (0%, 35%, 50%, 70%, and 100%) by inulin on sponge cake microstructure and physicochemical properties were studied. Oil substitution by inulin decreased batter viscosity significantly (P < 0.05), giving heterogeneous bubble size distributions, as observed by light microscopy. Using confocal laser scanning microscopy, the fat was observed to be located at the bubbles' interface, enabling an optimum crumb structure to develop during baking. Cryo-SEM micrographs of cake crumbs showed a continuous matrix with embedded starch granules, coated with oil; when fat replacement levels increased, starch granules appeared as detached structures. Cakes with fat replacement up to 70% had high crumb air cell values; they were softer and rated as acceptable by an untrained sensory panel (n = 51). Thus, the reformulation of a standard sponge cake recipe to obtain a new product with additional health benefits, accepted by consumers, was achieved.

Relevance:

30.00%

Publisher:

Abstract:

We give an a posteriori analysis of a semidiscrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (2014, SIAM J. Math. Anal., 46, 3518–3539). This framework allows energy-type arguments to be applied to continuous functions. Since we advocate the use of discontinuous Galerkin methods, we make use of two families of reconstructions, a set of discrete reconstructions and a set of elliptic reconstructions, to apply the reduced relative entropy framework in this setting.
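
For orientation, the classical relative entropy of one state about another with respect to a strictly convex energy density is recalled below; the reduced relative entropy of Giesselmann (2014) is a modification of this construction adapted to the strain-gradient energy, and only the standard definition is shown here.

```latex
% Classical relative entropy of u about v for an energy density \eta:
\eta(u \,|\, v) \;=\; \eta(u) - \eta(v) - \mathrm{D}\eta(v)\,(u - v).
```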

Relevance:

30.00%

Publisher:

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and the parts) and trim losses (in the cutting process). We present some sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
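
A minimal sketch of the column generation idea on the cutting side alone (Gilmore–Gomory style) is given below, assuming integer item lengths and SciPy's HiGHS LP solver; the function names, data and stopping tolerance are illustrative, and the paper's combined model additionally couples this with the lot sizing decisions across periods.

```python
import numpy as np
from scipy.optimize import linprog

def knapsack_pricing(duals, sizes, stock_len):
    """Unbounded integer knapsack: most 'valuable' cutting pattern w.r.t. the duals."""
    cap = int(stock_len)
    best = np.zeros(cap + 1)                      # best dual value with remaining length c
    choice = [[0] * len(sizes) for _ in range(cap + 1)]
    for c in range(1, cap + 1):
        best[c] = best[c - 1]
        choice[c] = choice[c - 1][:]
        for i, s in enumerate(sizes):
            if s <= c and best[c - s] + duals[i] > best[c]:
                best[c] = best[c - s] + duals[i]
                choice[c] = choice[c - s][:]
                choice[c][i] += 1
    return best[cap], np.array(choice[cap])

def cutting_stock_column_generation(sizes, demands, stock_len, max_iter=50):
    """LP-relaxed cutting stock (integer pattern frequencies dropped, as in the paper)."""
    m = len(sizes)
    patterns = [np.eye(m)[i] * (stock_len // sizes[i]) for i in range(m)]  # trivial start
    for _ in range(max_iter):
        A = np.column_stack(patterns)
        # Restricted master LP: minimise the number of stock objects cut, meet demand.
        res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-np.array(demands),
                      bounds=[(0, None)] * A.shape[1], method="highs")
        duals = -res.ineqlin.marginals            # dual prices of the demand rows
        value, new_pattern = knapsack_pricing(duals, sizes, stock_len)
        if value <= 1 + 1e-9:                     # reduced cost 1 - value >= 0: optimal
            return res.fun, patterns, res.x
        patterns.append(new_pattern)
    return res.fun, patterns, res.x
```

Each pricing step adds the cutting pattern with the most negative reduced cost, so only a small subset of the exponentially many patterns is ever generated.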

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a new reformulation of the KKT system associated with a variational inequality as a semismooth equation. The reformulation is derived from the concept of differentiable exact penalties for nonlinear programming. The best theoretical results are presented for nonlinear complementarity problems, where simple, verifiable conditions ensure that the penalty is exact. We close the paper with some preliminary computational tests on the use of a semismooth Newton method to solve the equation derived from the new reformulation. We also compare its performance with the Newton method applied to classical reformulations based on the Fischer-Burmeister function and on the minimum function. The new reformulation combines the best features of the classical ones, being as easy to solve as the reformulation that uses the Fischer-Burmeister function while requiring as few Newton steps as the one based on the minimum.
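
For context, the two classical reformulations used for comparison rewrite the nonlinear complementarity problem as a system of semismooth equations, as recalled below; the paper's own exact-penalty-based reformulation is not reproduced here.

```latex
% Nonlinear complementarity problem:  x \ge 0,\; F(x) \ge 0,\; x^{\mathsf{T}} F(x) = 0.
% Classical componentwise reformulations:
\phi_{\mathrm{FB}}(a,b) = \sqrt{a^{2} + b^{2}} - a - b,
\qquad
\phi_{\min}(a,b) = \min\{a,b\},
\qquad
x \text{ solves the NCP} \iff \phi\bigl(x_i, F_i(x)\bigr) = 0 \ \ \forall i.
```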

Relevance:

30.00%

Publisher:

Abstract:

Given a fixed set of identical or different-sized circular items, the problem we deal with consists in finding the smallest object within which the items can be packed. Circular, triangular, square, rectangular and also strip objects are considered. Moreover, 2D and 3D problems are treated. Twice-differentiable models for all these problems are presented. A strategy to reduce the complexity of evaluating the models is employed and, as a consequence, instances with a large number of items can be considered. Numerical experiments show the flexibility and reliability of the new unified approach. (C) 2007 Elsevier Ltd. All rights reserved.
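
The simplest member of this family of models, packing circles inside a circular object of minimal radius, illustrates why the formulations are twice differentiable: both the nonoverlap and the containment constraints are written with squared distances. (The other object shapes and the 3D case are handled analogously in the paper.)

```latex
% Packing N circles with radii r_i (centres c_i) in a circle of radius R:
\min_{R,\;c_1,\dots,c_N} R
\quad \text{s.t.} \quad
\|c_i - c_j\|^{2} \ge (r_i + r_j)^{2} \;\; (i < j),
\qquad
\|c_i\|^{2} \le (R - r_i)^{2}, \quad R \ge r_i.
```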

Relevance:

30.00%

Publisher:

Abstract:

The ever-increasing robustness and reliability of flow-simulation methods have consolidated CFD as a major tool in virtually all branches of fluid mechanics. Traditionally, those methods have played a crucial role in the analysis of flow physics. In more recent years, though, the subject has broadened considerably, with the development of optimization and inverse design applications. Since then, the search for efficient ways to evaluate flow-sensitivity gradients has received the attention of numerous researchers. In this scenario, the adjoint method has emerged as, quite possibly, the most powerful tool for the job, which heightens the need for a clear understanding of its conceptual basis. Yet, some of its underlying aspects are still subject to debate in the literature, despite all the research that has been carried out on the method. Such is the case with the adjoint boundary and internal conditions, in particular. The present work aims to shed more light on that topic, with emphasis on the need for an internal shock condition. By following the path of previous authors, the quasi-1D Euler problem is used as a vehicle to explore those concepts. The results clearly indicate that the behavior of the adjoint solution through a shock wave ultimately depends upon the nature of the objective functional.
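
As a reminder of the construction under discussion, the discrete adjoint removes the flow-state sensitivity from the gradient of an objective J with respect to the design variables, at the cost of one extra linear solve per objective (generic form, not specific to the quasi-1D Euler problem):

```latex
% Discrete adjoint construction for flow residual R(U,\alpha) = 0 and objective J(U,\alpha):
\Bigl(\frac{\partial R}{\partial U}\Bigr)^{\!\mathsf{T}} \lambda
   = -\Bigl(\frac{\partial J}{\partial U}\Bigr)^{\!\mathsf{T}},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}\alpha}
   = \frac{\partial J}{\partial \alpha} + \lambda^{\mathsf{T}}\,\frac{\partial R}{\partial \alpha}.
```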

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, in the world of mass consumption, there is a big demand for distribution centers of larger size. Managing such a center is a very complex and difficult task with regard to the different processes and factors in a typical warehouse when we want to minimize labor costs. Most of the workers' working time is spent travelling between source and destination points, which causes deadheading. Even if a worker knows the structure of a warehouse well, and can therefore find the shortest path between two points, it is still not guaranteed that there won't be long travel times between the locations of two consecutive tasks. We need optimal assignments between tasks and workers. In the scientific literature the Generalized Assignment Problem (GAP) is a well-known problem which deals with the assignment of m workers to n tasks subject to several constraints. The primary purpose of my thesis project was to choose a heuristic (genetic algorithm, tabu search or ant colony optimization) to be implemented in SAP Extended Warehouse Management (SAP EWM) by which task assignment between tasks and resources becomes more effective. After system analysis I had to realize that, due to different constraints and business demands, only 1:1 assignments are allowed in SAP EWM. Because of that, I had to use a different and simpler approach instead of the heuristics introduced above, which could still yield better assignments during the test phase in several cases. In the thesis I describe in detail the most important questions and problems which emerged during the planning of my optimized assignment method.
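
Since only 1:1 assignments are allowed, the core optimization reduces to a linear (one-to-one) assignment problem. A minimal sketch using the Hungarian algorithm on a travel-time cost matrix is shown below; the cost model, data and function names are illustrative assumptions and not part of SAP EWM.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tasks(travel_time):
    """1:1 assignment of resources (rows) to open tasks (columns).

    travel_time[i, j] = estimated deadhead travel time from resource i's
    current position to the source bin of task j. Returns (resource, task)
    pairs minimising the total deadhead time (Hungarian algorithm)."""
    rows, cols = linear_sum_assignment(travel_time)
    return list(zip(rows, cols))

# Illustrative example: 3 resources, 3 tasks (times in seconds).
times = np.array([[90, 40, 120],
                  [35, 80,  60],
                  [50, 70,  30]])
print(assign_tasks(times))   # e.g. [(0, 1), (1, 0), (2, 2)]
```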

Relevance:

30.00%

Publisher:

Abstract:

Train dispatchers face many challenges due to conflicts that cause train delays while they solve the dispatching problems the network faces. The major challenge for train dispatchers is to make the right decision and to have reliable, cost-effective and much faster approaches for solving dispatching problems. This thesis provides detailed information on the implementation of different heuristic algorithms for train dispatchers solving train dispatching problems. The library data files used are in XML format and cover both single and double tracks between main stations. The main objective of this work is to build different heuristic algorithms that resolve unexpected delays faced by train dispatchers and that help them make the right decisions about which steps to take to reach a reliable and cost-effective solution to the problems. The proposed heuristic algorithms were able to help dispatchers make the right decisions when solving train dispatching problems.

Relevance:

30.00%

Publisher:

Abstract:

In a northern European climate a typical solar combisystem for a single family house normally saves between 10 and 30% of the auxiliary energy needed for space heating and domestic water heating. It is considered uneconomical to dimension systems for higher energy savings. Overheating problems may also occur. One way of avoiding these problems is to use a collector that is designed so that it has a low optical efficiency in summer, when the solar elevation is high and the load is small, and a high optical efficiency in early spring and late fall, when the solar elevation is low and the load is large. The study investigates the possibilities of designing the system and, in particular, the collector optics, in order to match the system performance with the yearly variations of the heating load and the solar irradiation. It seems possible to design practically viable load-adapted collectors, and to use them for whole roofs (40 m²) without causing more overheating stress on the system than with a standard 10 m² system. The load-adapted collectors collect roughly as much energy per unit area as flat-plate collectors, but they may be produced at a lower cost due to lower material costs. There is an additional potential for cost reduction since it is possible to design the load-adapted collector for low stagnation temperatures, making it possible to use less expensive materials. One and the same collector design is suitable for a wide range of system sizes and roof inclinations. The report contains descriptions of optimized collector designs, properties of realistic collectors, and results of calculations of system output, stagnation performance and cost performance. Appropriate computer tools for optical analysis, optimization of collectors in systems and a very fast simulation model have been developed.

Relevance:

30.00%

Publisher:

Abstract:

The subgradient optimization method is a simple and flexible iterative algorithm for linear programming. It is much simpler than Newton's method and can be applied to a wider variety of problems; it also converges when the objective function is non-differentiable. Since an efficient algorithm should not only produce a good solution but also take less computing time, we always prefer a simpler algorithm of high quality. In this study a series of step size parameters in the subgradient equation is studied. The performance is compared for a general piecewise function and a specific p-median problem. We examine how the quality of the solution changes under five forms of the step size parameter.
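
A minimal sketch of the iteration under study, with two of the classical step size rules (constant and diminishing), is given below; the five specific parameter forms compared in the study are not reproduced, and all names and the example function are illustrative.

```python
import numpy as np

def subgradient_minimize(f, subgrad, x0, n_iter=500, rule="diminishing", a=1.0):
    """Minimise a (possibly non-differentiable) convex function f.

    Iteration: x_{k+1} = x_k - alpha_k * g_k, with g_k a subgradient at x_k.
    rule = "constant":    alpha_k = a
    rule = "diminishing": alpha_k = a / (k + 1)   (terms -> 0, sum diverges)
    Returns the best point found, since f(x_k) need not decrease monotonically."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(n_iter):
        g = subgrad(x)
        alpha = a if rule == "constant" else a / (k + 1)
        x = x - alpha * g
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example: minimise the piecewise-linear function f(x) = |x1 - 3| + |x2 + 1|.
f = lambda x: abs(x[0] - 3) + abs(x[1] + 1)
sg = lambda x: np.array([np.sign(x[0] - 3), np.sign(x[1] + 1)])
print(subgradient_minimize(f, sg, x0=[0.0, 0.0]))
```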