961 results for "Problem formulation"
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M=|C|+|Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M)=O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F(P)‖_∞ of the map F(P) that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F(P)‖_q for q ∈ [1,∞]. Of these, the best known minimization problem is for the energy ‖F(P)‖_1, which is solved by the classic min-cut/max-flow algorithm, referred to often as the Graph Cut algorithm. We notice that a minimization problem for ‖F(P)‖_q, q ∈ [1,∞), is identical to that for ‖F(P)‖_1, when the original weight function w is replaced by w^q.
Thus, any algorithm GC(sum) solving the ‖F(P)‖_1 minimization problem also solves the one for ‖F(P)‖_q with q ∈ [1,∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F(P)‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F(P)‖_q-minimization problems converge to a solution of the ‖F(P)‖_∞-minimization problem (the identity ‖F(P)‖_∞ = lim_{q→∞} ‖F(P)‖_q alone is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
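The equivalence between the q-norm minimization and the 1-norm minimization with powered weights, and the drift of the q-norm minimizers toward the max-norm minimizer, can be illustrated on a toy set of cuts (the boundaries and weights below are invented for illustration, not taken from the paper):

```python
# Each entry is a hypothetical object boundary P, mapped to the list of
# weights w(e) of its boundary edges, i.e. the values of the map F(P).
cuts = {
    "A": [3.0, 1.0],
    "B": [2.5, 2.0],
    "C": [5.0],
}

def q_norm(weights, q):
    # ||F(P)||_q = (sum over boundary edges of w(e)^q)^(1/q)
    return sum(w ** q for w in weights) ** (1.0 / q)

# Minimizing ||F(P)||_q picks the same cut as minimizing ||F(P)||_1 with
# each weight w replaced by w^q, because x -> x^(1/q) is monotone.
for q in (1, 2, 8):
    by_q_norm = min(cuts, key=lambda c: q_norm(cuts[c], q))
    by_powered_sum = min(cuts, key=lambda c: sum(w ** q for w in cuts[c]))
    assert by_q_norm == by_powered_sum

# GC(max) criterion: minimize the largest boundary weight, ||F(P)||_inf.
best_max = min(cuts, key=lambda c: max(cuts[c]))
```

On this instance the 1-norm minimizer is "A", but already at q = 8 the q-norm minimizer coincides with the max-norm minimizer "B", matching the convergence statement above.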
Abstract:
Setup operations are significant in some production environments. It is mandatory that production plans account for features such as the conservation of the setup state across periods through setup carryover and crossover. Modelling setup crossover allows more flexible decisions and is essential for problems with long setup times. This paper proposes two models for the capacitated lot-sizing problem with backlogging and setup carryover and crossover. The first is in line with other models from the literature, whereas the second considers a disaggregated setup variable, which tracks the starting and completion times of the setup operation. This innovative approach permits a more compact formulation. Computational results show that the proposed models outperform other state-of-the-art formulations.
Abstract:
In the present work, two physical flow experiments on nonwoven fabrics are investigated, which serve to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measured data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with free boundary for the degenerate parabolic Richards equation in its saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and minimize it with iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other results, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as the existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction method, to a linear degenerate parabolic boundary value problem. We describe the numerical realization and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
Abstract:
This thesis addresses the formulation of a referee assignment problem for the Italian Volleyball Serie A Championships. The problem has particular constraints: for instance, a referee must be assigned to different teams within a given period of time, and the minimum/maximum workload of each referee is controlled by considering cost and profit in the objective function. The problem has been solved through an exact method using an integer linear programming formulation and a clique-based decomposition for improving the computing time. Extensive computational experiments on real-world instances have been performed to determine the effectiveness of the proposed approach.
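The flavour of the constraints above, a referee never seeing the same team twice within a window, plus workload bounds and a cost objective, can be sketched on a tiny brute-force toy instance (team names, costs, and bounds are invented for illustration; the paper solves the real problem with an ILP and a clique-based decomposition, not enumeration):

```python
import itertools

# Hypothetical mini-instance: 4 matches, 2 referees.
matches = [("Modena", "Trento"), ("Perugia", "Milano"),
           ("Modena", "Perugia"), ("Trento", "Milano")]
referees = ["r1", "r2"]
cost = {"r1": 1.0, "r2": 2.0}     # per-match cost of fielding each referee

best = None
for assignment in itertools.product(referees, repeat=len(matches)):
    seen = {}                      # referee -> teams already officiated
    ok = True
    for ref, (home, away) in zip(assignment, matches):
        teams = seen.setdefault(ref, set())
        if home in teams or away in teams:   # same team twice: forbidden
            ok = False
            break
        teams.update((home, away))
    workload = {r: assignment.count(r) for r in referees}
    if ok and all(1 <= workload[r] <= 3 for r in referees):
        total = sum(cost[r] for r in assignment)
        if best is None or total < best[0]:
            best = (total, assignment)
```

On this instance the team-repetition rule forces an even 2/2 split of the matches, so the cheapest feasible assignment costs 6.0. Enumeration is only viable at toy scale, which is precisely why the thesis resorts to an exact ILP method.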
Abstract:
Assuming that the heat capacity of a body is negligible outside certain inclusions, the heat equation degenerates to a parabolic-elliptic interface problem. In this work we aim to detect these interfaces from thermal measurements on the surface of the body. We deduce an equivalent variational formulation for the parabolic-elliptic problem and give a new proof of the unique solvability based on Lions’s projection lemma. For the case that the heat conductivity is higher inside the inclusions, we develop an adaptation of the factorization method to this time-dependent problem. In particular, this shows that the locations of the interfaces are uniquely determined by boundary measurements. The method also yields a numerical algorithm to recover the inclusions and thus the interfaces. We demonstrate how measurement data can be simulated numerically by coupling a finite element method with a boundary element method, and finally we present some numerical results for the inverse problem.
Abstract:
This paper presents the first full-fledged branch-and-price (bap) algorithm for the capacitated arc-routing problem (CARP). Prior exact solution techniques either rely on cutting planes or on the transformation of the CARP into a node-routing problem. The drawbacks are either models with inherent symmetry, dense underlying networks, or a formulation where edge flows in a potential solution do not allow the reconstruction of unique CARP tours. The proposed algorithm circumvents all these drawbacks by taking the beneficial ingredients from existing CARP methods and combining them in a new way. The first step is the solution of the one-index formulation of the CARP in order to produce strong cuts and an excellent lower bound. It is known that this bound is typically stronger than relaxations of a pure set-partitioning CARP model. Such a set-partitioning master program results from a Dantzig-Wolfe decomposition. In the second phase, the master program is initialized with the strong cuts, CARP tours are iteratively generated by a pricing procedure, and branching is required to produce integer solutions. This is a cut-first bap-second algorithm, and its main function is, in fact, the splitting of edge flows into unique CARP tours.
Abstract:
We present a new model formulation for a multi-product lot-sizing problem with product returns and remanufacturing subject to a capacity constraint. The given external demand for the products has to be satisfied by remanufactured or newly produced goods. The objective is to determine a feasible production plan which minimizes production, holding, and setup costs. As the LP relaxation of a model formulation based on the well-known CLSP leads to very poor lower bounds, we propose a column-generation approach to determine tighter bounds. The lower bound obtained by column generation can easily be transformed into a feasible solution by a truncated branch-and-bound approach using CPLEX. The results of an extensive numerical study show the high solution quality of the proposed solution approach.
Abstract:
We derive the fermion loop formulation for the supersymmetric nonlinear O(N) sigma model by performing a hopping expansion using Wilson fermions. In this formulation the fermionic contribution to the partition function becomes a sum over all possible closed non-oriented fermion loop configurations. The interaction between the bosonic and fermionic degrees of freedom is encoded in the constraints arising from the supersymmetry and induces flavour changing fermion loops. For N ≥ 3 this leads to fermion loops which are no longer self-avoiding and hence to a potential sign problem. Since we use Wilson fermions the bare mass needs to be tuned to the chiral point. For N = 2 we determine the critical point and present boson and fermion masses in the critical regime.
Abstract:
Simulations of supersymmetric field theories on the lattice with (spontaneously) broken supersymmetry suffer from a fermion sign problem related to the vanishing of the Witten index. We propose a novel approach which solves this problem in low dimensions by formulating the path integral on the lattice in terms of fermion loops. For N=2 supersymmetric quantum mechanics the loop formulation becomes particularly simple and in this paper – the first in a series of three – we discuss in detail the reformulation of this model in terms of fermionic and bosonic bonds for various lattice discretisations including one which is Q-exact.
Abstract:
A proper allocation of resources targeted at solving hunger is essential to optimize the efficacy of actions and maximize results. This requires an adequate measurement and formulation of the problem since, paraphrasing Einstein, the formulation of a problem is essential to reaching a solution. Different measurement methods have been designed to count, score, classify and compare hunger at the local level and to allow comparisons between different places. However, the alternative methods reach significantly different results. These discrepancies make decisions on the targeting of resource allocations difficult. To assist decision makers, a new method taking into account the dimension of hunger and the coping capacities of countries is proposed, enabling both geographical and sectoral priorities to be established for the allocation of resources.
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. Information available on each competence includes its definition, the formulation of indicators providing evidence on the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several of the academic subjects conducted at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
Abstract:
A mathematical formulation for finite strain elasto-plastic consolidation of fully saturated soil media is presented. Strong and weak forms of the boundary-value problem are derived using both the material and spatial descriptions. The algorithmic treatment of finite strain elastoplasticity for the solid phase is based on multiplicative decomposition and is coupled with the algorithm for fluid flow via the Kirchhoff pore water pressure. Balance laws are written for the soil-water mixture following the motion of the soil matrix alone. It is shown that the motion of the fluid phase only affects the Jacobian of the solid phase motion, and therefore can be characterized completely by the motion of the soil matrix. Furthermore, it is shown from energy balance considerations that the effective, or intergranular, stress is the appropriate measure of stress for describing the constitutive response of the soil skeleton since it absorbs all the strain energy generated in the saturated soil-water mixture. Finally, it is shown that the mathematical model is amenable to consistent linearization, and that explicit expressions for the consistent tangent operators can be derived for use in numerical solutions such as those based on the finite element method.
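The two ingredients named above, the multiplicative split of the deformation gradient and the effective-stress measure, can be sketched in standard notation (a generic summary under common conventions, not taken verbatim from the paper; sign conventions for pore pressure vary across the literature):

```latex
% Multiplicative decomposition of the solid-phase deformation gradient
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}, \qquad J = \det\mathbf{F}
% Terzaghi-type effective stress: total Cauchy stress plus pore water
% pressure p_w (taken positive in compression); the Kirchhoff pore
% pressure \theta couples the fluid-flow algorithm to the solid phase
\boldsymbol{\sigma}' = \boldsymbol{\sigma} + p_w\,\mathbf{1}, \qquad
\theta = J\,p_w
```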
Abstract:
Due to the high dependence of photovoltaic energy efficiency on environmental conditions (temperature, irradiation, etc.), it is quite important to perform analyses focusing on the characteristics of photovoltaic devices in order to optimize energy production, even for small-scale users. The use of equivalent circuits is the preferred option to analyze solar cell/panel performance. However, the aforementioned small-scale users rarely have the equipment or expertise to perform large testing/calculation campaigns, the only information available to them being the manufacturer datasheet. The solution to this problem is the development of new and simple methods to define equivalent circuits able to reproduce the behavior of the panel for any working condition from a very small amount of information. In the present work, a direct and completely explicit method to extract solar cell parameters from the manufacturer datasheet is presented and tested. This method is based on an analytical formulation which includes the use of the Lambert W-function to make the series resistor equation explicit. The presented method is used to analyze commercial solar panel performance (i.e., the current–voltage, I–V, curve) at different levels of irradiation and temperature. The analysis performed is based only on the information included in the manufacturer’s datasheet.
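A minimal sketch of how the Lambert W-function makes the single-diode I–V relation explicit, in the well-known Jain-Kapoor form. The five parameter values used in the usage note are illustrative only (not from any datasheet), and lambert_w is a small Newton iteration rather than a library routine:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W(x) for x >= 0, via Newton's method on w*exp(w) = x."""
    w = math.log1p(x)                    # rough initial guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def diode_current(v, i_ph, i_0, r_s, r_sh, n_vt):
    """Explicit current I(V) of the single-diode equivalent circuit.

    Solves  i = i_ph - i_0*(exp((v + i*r_s)/n_vt) - 1) - (v + i*r_s)/r_sh
    in closed form: the series-resistance coupling is absorbed into W.
    i_ph: photocurrent, i_0: saturation current, r_s/r_sh: series/shunt
    resistances, n_vt: ideality factor times thermal voltage.
    """
    k = n_vt * (r_s + r_sh)
    theta = (r_s * r_sh * i_0 / k) * math.exp(r_sh * (r_s * (i_ph + i_0) + v) / k)
    return (r_sh * (i_ph + i_0) - v) / (r_s + r_sh) - (n_vt / r_s) * lambert_w(theta)
```

For hypothetical cell parameters such as i_ph = 5 A, i_0 = 1e-9 A, r_s = 0.01 Ω, r_sh = 100 Ω and n_vt = 0.0335 V, the current returned at any voltage satisfies the implicit single-diode equation to machine precision, which is the point of the explicit formulation: the full I–V curve follows from direct evaluation with no iteration on the circuit equation itself.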