909 results for Convex optimization problem


Relevance:

80.00%

Publisher:

Abstract:

Doctoral thesis in Industrial and Systems Engineering (PDEIS)

Relevance:

80.00%

Publisher:

Abstract:

Hybrid satellite-terrestrial networks provide connectivity to remote and isolated areas and can solve numerous communication problems. However, they present several challenges, since communication takes place over a terrestrial mobile channel and a contiguous satellite channel. One of these challenges is finding mechanisms to perform routing and flow control efficiently and jointly. The goal of this project is to simulate and study existing algorithms that solve these problems, as well as to propose new ones, using various convex optimization techniques. Based on the simulations carried out in this study, the various routing and flow control problems have been analyzed extensively, and the results obtained and the performance of the algorithms employed have been evaluated. In particular, algorithms based on the dual decomposition method, the subgradient method, Newton's method, and the logarithmic barrier method, among others, have been successfully implemented to solve the routing and flow control problems considered.
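As a minimal illustration of two of the techniques named above, the following sketch (our own toy example, not the project's code) applies dual decomposition with a subgradient price update to a small network utility maximization problem, a standard formulation of joint flow control:

```python
import numpy as np

# Toy network: R[l, s] = 1 if source s uses link l; c[l] = capacity of link l.
# Each source s maximizes log(x_s); links are shared, so the coupled problem is
# decomposed via link prices (dual variables) updated by a subgradient step.
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])

lam = np.ones(2)                # link prices (dual variables)
for _ in range(5000):
    # primal step: each source solves max log(x_s) - x_s * (price of its path)
    x = 1.0 / (R.T @ lam + 1e-12)
    # dual subgradient step: raise the price of overloaded links, project >= 0
    lam = np.maximum(lam + 0.01 * (R @ x - c), 0.0)

print(np.round(x, 3))           # rates approximately satisfying R @ x <= c
```

Each source only needs the total price of its own path, which is what makes this decomposition attractive for distributed routing and flow control.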

Relevance:

80.00%

Publisher:

Abstract:

Incorporating adaptive learning into macroeconomics requires assumptions about how agents incorporate their forecasts into their decision-making. We develop a theory of bounded rationality that we call finite-horizon learning. This approach generalizes the two existing benchmarks in the literature: Euler-equation learning, which assumes that consumption decisions are made to satisfy the one-step-ahead perceived Euler equation; and infinite-horizon learning, in which consumption today is determined optimally from an infinite-horizon optimization problem with given beliefs. In our approach, agents hold a finite forecasting/planning horizon. We find for the Ramsey model that the unique rational expectations equilibrium is E-stable at all horizons. However, transitional dynamics can differ significantly depending upon the horizon.
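The one-step-ahead perceived Euler equation on which Euler-equation learning is built can be written, in standard consumption-savings notation (our notation, not necessarily the paper's):

```latex
u'(c_t) \;=\; \beta \, \hat{E}_t \!\left[ (1 + r_{t+1}) \, u'(c_{t+1}) \right]
```

where \(\hat{E}_t\) denotes the agent's subjective (boundedly rational) forecast, \(\beta\) the discount factor, and \(r_{t+1}\) the return on savings; finite-horizon learning replaces the one-step condition with planning over a finite number of such steps.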

Relevance:

80.00%

Publisher:

Abstract:

The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
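For concreteness, the purchase probabilities in a single-segment MNL model of the kind referred to above can be computed as follows (a generic sketch with illustrative attraction weights; not the paper's CDLP or affine formulation):

```python
import numpy as np

# MNL choice: v[j] is the attraction weight of product j, v0 the no-purchase
# weight, and S the offered set. All numbers here are made up for illustration.
v = np.array([2.0, 1.0, 0.5])
v0 = 1.0

def mnl_probs(S):
    """P(buy j | offer set S) = v_j / (v0 + sum_{k in S} v_k) for j in S."""
    denom = v0 + v[S].sum()
    return {j: v[j] / denom for j in S}

print(mnl_probs([0, 1]))   # probabilities when offering products 0 and 1
```

Note how withholding a product reallocates its demand between the remaining products and the no-purchase option; this substitution effect is what the network revenue management controls exploit.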

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we study several questions related to transaction data measured at an individual level. The questions are addressed in three essays. In the first essay we use tick-by-tick data to estimate non-parametrically the jump process of 37 big stocks traded on the Paris Stock Exchange, and of the CAC 40 index. We separate the total daily returns into three components (trading continuous, trading jump, and overnight), and we characterize each one of them. We estimate at the individual and index levels the contribution of each return component to the total daily variability. For the index, the contribution of jumps is smaller and is compensated by the larger contribution of overnight returns. We test formally that individual stocks jump more frequently than the index, and that they do not respond independently to the arrival of news. Finally, we find that daily jumps are larger when their arrival rates are larger. At the contemporaneous level there is a strong negative correlation between the jump frequency and the trading activity measures. The second essay studies the general properties of the trade- and volume-duration processes for two stocks traded on the Paris Stock Exchange: a very illiquid stock and a relatively liquid one. We estimate a class of autoregressive processes with conditional distribution from the family of non-central gamma (up to a scale factor). This process was introduced by Gouriéroux and Jasiak and is known as the autoregressive gamma process. We also evaluate the ability of the process to fit the data, using the Diebold, Gunther and Tay (1998) test, the capacity of the model to reproduce the moments of the observed data, and the empirical serial correlation and partial serial correlation functions. We establish that the model describes correctly the trade duration process of illiquid stocks, but has difficulty fitting the trade duration process of liquid stocks, which presents long-memory characteristics. When the model is adjusted to volume durations, it successfully fits the data. In the third essay we study the economic relevance of optimal liquidation strategies by calibrating a recent and realistic microstructure model with data from the Paris Stock Exchange. We distinguish the case of parameters that are constant through the day from time-varying ones. An optimization problem incorporating this realistic microstructure model is presented and solved. Our model endogenizes the number of trades required before the position is liquidated. A comparative statics exercise demonstrates the realism of our model. We find that a sell decision taken in the morning will be liquidated by the early afternoon. If price impacts increase over the day, the liquidation will take place more rapidly.
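The autoregressive gamma process mentioned above has a simple simulation recipe via a Poisson mixing variable; the following sketch uses illustrative parameter values (not estimates from the thesis) to generate a persistent positive-valued duration series:

```python
import numpy as np

# Autoregressive gamma (ARG) simulation: conditionally on y[t], draw a Poisson
# mixing variable z, then a gamma variate with shape delta + z, scaled by c.
# The conditional law of y[t+1] is then non-central gamma up to the scale c,
# with persistence rho = beta * c (here 0.8) and stationary mean
# c * delta / (1 - rho) = 7.5.
rng = np.random.default_rng(0)
delta, beta, c = 1.5, 0.8, 1.0
T = 20000

y = np.empty(T)
y[0] = delta * c / (1 - beta * c)        # start at the stationary mean
for t in range(T - 1):
    z = rng.poisson(beta * y[t])         # mixing variable driving persistence
    y[t + 1] = c * rng.gamma(delta + z)  # non-central gamma draw

print(y.mean())   # sample mean, close to the stationary mean 7.5
```

Positivity and autocorrelation come for free from this construction, which is why the process is a natural model for trade and volume durations.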

Relevance:

80.00%

Publisher:

Abstract:

This paper derives the HJB (Hamilton-Jacobi-Bellman) equation for sophisticated agents in a finite-horizon dynamic optimization problem with non-constant discounting in a continuous setting, using a dynamic programming approach. A simple example illustrates the applicability of this HJB equation by suggesting a method for constructing the subgame-perfect equilibrium solution to the problem. Conditions for observational equivalence with an associated problem with constant discounting are analyzed. Special attention is paid to the case of free terminal time. Strotz's model (a cake-eating problem of a nonrenewable resource with non-constant discounting) is revisited.
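For reference, the constant-discounting benchmark is the standard current-value HJB equation for a finite-horizon problem with running payoff \(F\), state dynamics \(\dot{x} = g(x,u,t)\), terminal value \(S\), and constant discount rate \(\rho\) (textbook form, our notation; the paper's contribution is the modified equation that non-constant discounting requires):

```latex
\rho V(t,x) - \frac{\partial V}{\partial t}(t,x)
  \;=\; \max_{u}\left\{ F(x,u,t)
  + \frac{\partial V}{\partial x}(t,x)\, g(x,u,t) \right\},
\qquad V(T,x) = S(x)
```

Under non-constant discounting the preferences of the agent's present and future selves disagree, so a sophisticated agent's equilibrium policy solves a modified equation rather than this one.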


Relevance:

80.00%

Publisher:

Abstract:

We present a new framework for large-scale data clustering. The main idea is to modify functional dimensionality reduction techniques to directly optimize over discrete labels using stochastic gradient descent. Compared to methods like spectral clustering, our approach solves a single optimization problem, rather than an ad-hoc two-stage optimization approach, does not require a matrix inversion, can easily encode prior knowledge in the set of implementable functions, and does not have an "out-of-sample" problem. Experimental results on both artificial and real-world datasets show the usefulness of our approach.
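This is not the paper's algorithm, but as a simple flavor of clustering by stochastic gradient descent, the following sketch runs SGD on a k-means objective, updating one centroid per sample (Bottou-style online k-means):

```python
import numpy as np

# Two well-separated synthetic clusters near (0, 0) and (3, 3).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)),
               rng.normal(3, 0.3, (100, 2))])

k = 2
C = X[[0, 100]].copy()          # one init point from each half, for a stable demo
counts = np.zeros(k)
for x in X[rng.permutation(len(X))]:
    j = np.argmin(((C - x) ** 2).sum(axis=1))   # assign to nearest centroid
    counts[j] += 1
    C[j] += (x - C[j]) / counts[j]              # SGD step with 1/n learning rate

print(np.round(C, 1))           # centroids near the two cluster means
```

One pass touches each point once, which is the property that makes SGD-style updates attractive for large-scale clustering.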

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a method to reconstruct 3D surfaces of silicon wafers from 2D images of printed circuits taken with a scanning electron microscope. Our reconstruction method combines the physical model of the optical acquisition system with prior knowledge about the shapes of the patterns in the circuit; the result is a shape-from-shading technique with a shape prior. The reconstruction of the surface is formulated as an optimization problem with an objective functional that combines a data-fidelity term on the microscopic image with two prior terms on the surface. The data term models the acquisition system through the irradiance equation characteristic of the microscope; the first prior is a smoothness penalty on the reconstructed surface, and the second prior constrains the shape of the surface to agree with the expected shape of the pattern in the circuit. In order to account for the variability of the manufacturing process, this second prior includes a deformation field that allows a nonlinear elastic deformation between the expected pattern and the reconstructed surface. As a result, the minimization problem has two unknowns, and the reconstruction method provides two outputs: 1) a reconstructed surface and 2) a deformation field. The reconstructed surface is derived from the shading observed in the image and the prior knowledge about the pattern in the circuit, while the deformation field produces a mapping between the expected shape and the reconstructed surface that provides a measure of deviation between the circuit design models and the real manufacturing process.
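Schematically, the variational problem described above has the following form (our generic notation, not the paper's: \(I\) the microscope image, \(R\) the irradiance model, \(S\) the unknown surface, \(S_0\) the expected pattern shape, \(D\) the deformation field, and \(\alpha, \beta\) regularization weights):

```latex
\min_{S,\,D}\;
\int \big( I(\mathbf{x}) - R\!\left(S(\mathbf{x})\right) \big)^2 \, d\mathbf{x}
\;+\; \alpha \int \left\| \nabla S(\mathbf{x}) \right\|^2 d\mathbf{x}
\;+\; \beta \int \big( S(\mathbf{x}) - S_0(\mathbf{x} + D(\mathbf{x})) \big)^2 \, d\mathbf{x}
```

The first term is the data fidelity through the irradiance equation, the second the smoothness prior, and the third the shape prior, with the deformation field \(D\) absorbing manufacturing variability between the design and the wafer.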

Relevance:

80.00%

Publisher:

Abstract:

In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, consisting of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interferences. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimation. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
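The following is an illustrative sketch (not the letter's algorithm or signal model) of how a Sequential Monte Carlo method can attack a multivariate optimization problem: a particle cloud is iteratively reweighted, resampled, and jittered so that it concentrates on the maximizer of a likelihood-like cost over position.

```python
import numpy as np

rng = np.random.default_rng(2)
true_pos = np.array([1.0, -2.0])

def log_lik(p):
    # stand-in cost, peaked at true_pos (the real GNSS likelihood is far richer)
    return -((p - true_pos) ** 2).sum(axis=-1)

particles = rng.uniform(-10, 10, (500, 2))      # initial cloud over the search area
for _ in range(30):
    ll = log_lik(particles)
    w = np.exp(ll - ll.max())                    # stable weights (subtract max)
    w /= w.sum()
    idx = rng.choice(len(particles), len(particles), p=w)          # resample
    particles = particles[idx] + rng.normal(0, 0.1, particles.shape)  # jitter

estimate = particles.mean(axis=0)
print(np.round(estimate, 2))   # close to true_pos
```

The jitter step keeps the cloud from collapsing prematurely, while resampling by weight drives it toward the high-likelihood region.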

Relevance:

80.00%

Publisher:

Abstract:

Optimizing various industrial production processes is a highly topical subject. Many control systems date from a time when the computing power of computers was very modest compared to today. This thesis presents a production process that involves the problem of forming a cutting plan for steel. Continuous casting is one of the intermediate stages of steel manufacturing: molten steel, brought to a suitable grade, is cast into a line where it solidifies and is cut into billets. In later stages, the steel billets are processed into smaller units, the factory's end products. Depending on the order book, continuously cast billets can be cut in many different ways. This requires a cutting plan, and forming one requires solving a mixed-integer optimization problem. Mixed-integer optimization problems are the most challenging form of optimization, and they have been studied relatively little compared to simpler optimization problems. The computing power of modern computers has, however, made it possible to use and develop heavier and more complex optimization algorithms. This thesis uses and presents a stochastic optimization method, the differential evolution algorithm, and a steel cutting optimization algorithm is developed. The developed optimization method operates dynamically in a factory environment according to user-defined parameters. The work is part of a control system delivered by Syncron Tech Oy to Ovako Bar Oy Ab.
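A compact differential evolution sketch (the classic DE/rand/1/bin scheme on a toy continuous objective; the thesis applies DE to a mixed-integer cutting-plan problem, which additionally requires handling integer variables and constraints):

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):                        # toy objective: sphere function
    return (x ** 2).sum()

dim, NP, F, CR = 4, 20, 0.7, 0.9
pop = rng.uniform(-5, 5, (NP, dim))
fit = np.array([f(x) for x in pop])

for _ in range(300):
    for i in range(NP):
        # pick three distinct population members other than i
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                  # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True           # guarantee at least one gene crosses
        trial = np.where(cross, mutant, pop[i])   # binomial crossover
        if f(trial) <= fit[i]:                    # greedy one-to-one selection
            pop[i], fit[i] = trial, f(trial)

print(fit.min())   # best objective value found, near 0
```

The difference vector `b - c` adapts the step size to the population's spread, which is the property that makes DE robust without gradient information.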

Relevance:

80.00%

Publisher:

Abstract:

This master's thesis defines a simulation model of a backup system, i.e., a backup model. The operation of the backup system is optimized with the help of this backup model. The goal of the optimization is to improve the efficiency of the backup system, seeking improvement through maximal utilization of the backup system's existing resources. The backup model is optimized with an evolutionary algorithm. The optimization has several mutually conflicting objectives. The multiobjective optimization problem is converted into a single-objective optimization problem by forming an objective function with the weighted-sum method. In parallel with this method, Pareto optimization is also used: the search for points on the Pareto-optimal front is steered close to the optimum found by the weighted-sum method. The implementation of the evolutionary algorithm exploits problem-specific knowledge of backup systems. The result of the work is a simulation and optimization tool for backup systems. The simulation tool is used to assess the functioning of the current backup system, and optimization is used to make its operation more efficient. The tool can also be used in designing new backup systems and in extending existing ones.
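The weighted-sum scalarization mentioned above can be sketched on two toy conflicting objectives (our illustration, not the thesis's backup-system model): each weight collapses the two criteria into one objective, and sweeping the weight traces out points on the Pareto front.

```python
import numpy as np

def f1(x):
    return x ** 2            # first criterion (e.g. backup window length)

def f2(x):
    return (x - 2) ** 2      # second, conflicting criterion (e.g. resource usage)

xs = np.linspace(0, 2, 2001)
for w in (0.25, 0.5, 0.75):
    scalar = w * f1(xs) + (1 - w) * f2(xs)   # weighted-sum objective
    x_best = xs[scalar.argmin()]
    print(w, round(float(x_best), 2))        # minimizer is x = 2 * (1 - w)
```

A limitation worth noting: the weighted-sum method can only reach points on the convex part of the Pareto front, which is one reason to run Pareto-based search alongside it, as the thesis does.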

Relevance:

80.00%

Publisher:

Abstract:

In diffusion MRI, traditional tractography algorithms do not recover truly quantitative tractograms and the structural connectivity has to be estimated indirectly by counting the number of fiber tracts or averaging scalar maps along them. Recently, global and efficient methods have emerged to estimate more quantitative tractograms by combining tractography with local models for the diffusion signal, like the Convex Optimization Modeling for Microstructure Informed Tractography (COMMIT) framework. In this abstract, we show the importance of using both (i) proper multi-compartment diffusion models and (ii) adequate multi-shell acquisitions, in order to evaluate the accuracy and the biological plausibility of the tractograms.
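A toy sketch of the convex fit at the heart of COMMIT-style approaches (a drastic simplification, not the actual framework): the measured signal y is modeled as A @ x with x >= 0, where each column of A is the signal contribution of one candidate fiber tract, and x is recovered by projected gradient descent.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((50, 10))              # 50 measurements, 10 candidate tracts
x_true = np.zeros(10)
x_true[[1, 4, 7]] = [2.0, 1.0, 3.0]   # only three tracts truly contribute
y = A @ x_true                        # noiseless synthetic signal

x = np.zeros(10)
step = 1.0 / np.linalg.norm(A.T @ A, 2)           # 1/L step for the quadratic
for _ in range(20000):
    # gradient step on ||A x - y||^2 / 2, then projection onto x >= 0
    x = np.maximum(x - step * (A.T @ (A @ x - y)), 0.0)

print(np.round(x, 2))   # recovers the three contributing tracts
```

The non-negative weights give each streamline a quantitative contribution to the signal, which is what makes the resulting tractogram quantitative rather than a bare fiber count.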

Relevance:

80.00%

Publisher:

Abstract:

This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource constrained problems. The first is the tool switching and loading problem, and occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized formulation of the MINMAX resource allocation formulation of the apportionment problem and it can be solved efficiently and optimally. We construct an exact algorithm specialized for the nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. 
We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well-studied geometric optimization problem), and provide a heuristic algorithm for this problem.

Relevance:

80.00%

Publisher:

Abstract:

Pulse Response Based Control (PRBC) is a recently developed minimum-time control method for flexible structures. The flexible behavior of the structure is represented through a set of discrete-time sequences, which are the responses of the structure to rectangular force pulses. The rectangular force pulses are given by the actuators that control the structure. The set of pulse responses, desired outputs, and force bounds form a numerical optimization problem. The solution of the optimization problem is a minimum-time piecewise-constant control sequence for driving the system to a desired final state. The method was developed for driving positive semi-definite systems. In case the system is positive definite, some final states of the system may not be reachable. Necessary conditions for reachability of the final states are derived for systems with a finite number of degrees of freedom. Numerical results are presented that confirm the derived analytical conditions. Numerical simulations of maneuvers of distributed parameter systems have shown a relationship between the error in the estimated minimum control time and the sampling interval.
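The pulse-response representation underlying PRBC rests on linear superposition: if h[k] is the sampled response to one unit rectangular force pulse, the response to a piecewise-constant force sequence u is the discrete convolution of u with h. A toy sketch (our illustrative numbers, not the paper's structure model):

```python
import numpy as np

h = np.array([0.0, 0.5, 0.8, 0.4, 0.1])   # assumed unit pulse response samples
u = np.array([1.0, 1.0, -0.5])            # piecewise-constant control amplitudes

y = np.convolve(u, h)                     # superposed response of the structure
print(np.round(y, 2))
```

In PRBC, the optimization searches over such amplitude sequences u, subject to force bounds, for the shortest sequence whose superposed response reaches the desired final state.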