962 results for Semi-infinite linear programming


Relevance: 100.00%

Abstract:

Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. Using the Correspondence Principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains epsilon_xx(r, t), epsilon_yy(r, t) and epsilon_zz(r, t), and the bulk strain theta(r, t) at an arbitrary point (x, y, z), in the X, Y and Z directions, produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatial-temporal variation of the bulk strain produced at the ground surface by such a spherical rheological inclusion, interesting results are obtained: the bulk strain produced by a hard inclusion changes with time in three stages (alpha, beta, gamma) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
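
The constitutive relations involved are not reproduced in the abstract; as background (a standard statement, not quoted from the paper), the standard linear rheological model it refers to and the Correspondence Principle used in the derivation can be summarised as follows, where E_1, E_2 are elastic moduli and eta a viscosity:

```latex
% Standard linear (three-parameter) viscoelastic model: a spring E_1 in
% parallel with a Maxwell element (spring E_2 in series with a dashpot \eta):
\sigma + \frac{\eta}{E_2}\,\dot{\sigma}
  = E_1\,\varepsilon + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon}.
% Correspondence Principle (schematic): the Laplace transform of the
% viscoelastic field equals the elastic solution with the elastic moduli
% replaced by their transform-domain counterparts, and is then inverted:
\bar{U}(\mathbf{r}, s) = U^{\mathrm{elastic}}\bigl(\mathbf{r};\, \bar{E}(s)\bigr),
\qquad
U(\mathbf{r}, t) = \mathcal{L}^{-1}\bigl[\bar{U}(\mathbf{r}, s)\bigr](t).
```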

Relevance: 100.00%

Abstract:

The object of this thesis is to develop a method for calculating the losses developed in steel conductors of circular cross-section, at temperatures below 100 °C, by the direct passage of a sinusoidally alternating current. Three cases are considered: (1) an isolated solid or tubular conductor; (2) a concentric arrangement of a tube and a solid return conductor; (3) a concentric arrangement of two tubes. These cases find application in process temperature maintenance of pipelines, resistance heating of bars and the design of bus-bars. The problems associated with the non-linearity of steel are examined. Resistance heating of bars and methods of surface heating of pipelines are briefly described. Magnetic-linear solutions based on Maxwell's equations are critically examined and the conditions under which the various formulae apply are investigated. The conditions under which a tube is electrically equivalent to a solid conductor and to a semi-infinite plate are derived. Existing solutions for the calculation of losses in isolated steel conductors of circular cross-section are reviewed, evaluated and compared. Two methods of solution are developed for the three cases considered. The first is based on the magnetic-linear solutions and offers an alternative to the available methods, which are not universal. The second extends the existing B/H step-function approximation method to small-diameter conductors and to tubes in isolation or in a concentric arrangement. A comprehensive experimental investigation is presented for cases 1 and 2 above, which confirms the validity of the proposed methods of solution. These are further supported by experimental results reported in the literature. Good agreement is obtained between measured and calculated loss values for surface field strengths beyond the linear part of the d.c. magnetisation characteristic. It is also shown that a small-diameter conductor or thin tube behaves differently under resistance heating than under induction heating.
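
For orientation (standard theory, not taken from the thesis), the magnetic-linear analysis of such conductors is organised around the skin depth; a bar or tube behaves approximately as a semi-infinite plate when its radius and wall thickness are large compared with delta:

```latex
% Skin depth for a sinusoidal field of angular frequency \omega in a material
% of resistivity \rho and (assumed constant) permeability \mu:
\delta = \sqrt{\frac{2\rho}{\omega\mu}}.
```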

Relevance: 100.00%

Abstract:

Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least-squares problem that arises at every iteration of interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions are included for improving the solution of problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The performance of these algorithms on randomly generated separable and non-separable problems is also reported.
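
The thesis code is not reproduced here; the sketch below only illustrates the preconditioned conjugate gradient iteration referred to, applied to a generic symmetric positive-definite system of the kind (e.g. normal equations A D Aᵀ y = b) that interior point methods solve at each iteration. The matrix, right-hand side and Jacobi preconditioner are illustrative assumptions.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Solve A x = b for symmetric positive-definite A with the
    preconditioned conjugate gradient method; M_inv applies an
    approximate inverse of a preconditioner M ~ A."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    z = M_inv(r)                   # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # update search direction
        rz = rz_new
    return x

# Toy example with a diagonal (Jacobi) preconditioner on a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = lambda r: r / np.diag(A)
print(preconditioned_cg(A, b, M_inv))
```

In an interior point context the scaling matrix becomes highly ill-conditioned near convergence, which is presumably why the effect of different preconditioners across condition numbers is studied.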

Relevance: 100.00%

Abstract:

Financial institutions are an integral part of any modern economy. In the 1970s and 1980s, Gulf Cooperation Council (GCC) countries made significant progress in financial deepening and in building a modern financial infrastructure. This study aims to evaluate the performance (efficiency) of financial institutions (the banking sector) in GCC countries. Since the selected variables include negative data for some banks and positive data for others, and the available evaluation methods cannot handle this case, we developed a Semi-Oriented Radial Model (SORM) to perform the evaluation. Furthermore, since the SORM results alone provide limited information for decision makers (bankers, investors, etc.), we propose a second-stage analysis using the classification and regression (C&R) method, combining the SORM results with other environmental data (financial, economic and political) to derive rules characterizing the efficient banks; the results are therefore useful to bankers seeking to improve their banks' performance and to investors seeking to maximize their returns. There are mainly two approaches to evaluating the performance of Decision Making Units (DMUs), and under each there are different methods with different assumptions. The parametric approach is based on econometric regression theory, while the nonparametric approach is based on mathematical linear programming. Under the nonparametric approach there are two methods: Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). Under the parametric approach there are three methods: Stochastic Frontier Analysis (SFA), Thick Frontier Analysis (TFA) and Distribution-Free Analysis (DFA). The review shows that DEA and SFA are the most commonly applied methods in the banking sector, with DEA appearing to be the most popular among researchers. However, DEA, like SFA, still faces many challenges; one of them is how to deal with negative data, since DEA requires all input and output values to be non-negative, while in many applications negative outputs can appear, e.g. losses as opposed to profits. Although a few DEA models have been developed to deal with negative data, we believe each of them has its own limitations; we therefore developed the Semi-Oriented Radial Model (SORM) to handle the negativity issue in DEA. The application results using SORM show that the overall efficiency of GCC banking is relatively high (85.6%). Although the efficiency score fluctuates over the study period (1998-2007), owing to the second Gulf War and the international financial crisis, it remains higher than the efficiency scores of counterpart banks in other countries. Banks operating in Saudi Arabia appear to be the most efficient, followed by UAE, Omani and Bahraini banks, while banks operating in Qatar and Kuwait appear to be the least efficient, as these two countries were the most affected by the second Gulf War. The results also show no statistically significant relationship between operating style (Islamic or conventional) and bank efficiency. Even so, Islamic banks appear to be slightly more efficient than conventional banks, with an average efficiency score of 86.33% compared with 85.38%. Furthermore, Islamic banks appear to be more affected by the political crisis (the second Gulf War), whereas conventional banks appear to be more affected by the financial crisis.
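
The SORM formulation itself is not given in this abstract; as background, the sketch below computes a standard input-oriented CCR DEA efficiency score with a small linear program via scipy. The data, names and model are illustrative assumptions (plain DEA with non-negative data, not the SORM extension for negative values).

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR DEA efficiency of DMU `o`.
    X: (n_dmus, n_inputs) inputs, Y: (n_dmus, n_outputs) outputs,
    all assumed non-negative (plain DEA, not SORM)."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]                # efficiency score in (0, 1]

# Toy example: 3 banks, 2 inputs, 1 output (illustrative numbers only).
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 400.0]])
Y = np.array([[100.0], [120.0], [110.0]])
print([round(ccr_input_efficiency(X, Y, o), 3) for o in range(3)])
```

For a bank with negative inputs or outputs this LP is not well defined, which is the gap the SORM model is developed to address.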

Relevance: 100.00%

Abstract:

AMS subject classification: 90C05, 90A14.

Relevance: 100.00%

Abstract:

We develop a framework for proving approximation limits of polynomial-size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP, as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^{1/2-ε})-approximations for CLIQUE require LPs of size 2^{n^{Ω(ε)}}. This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs. Our main technical ingredient is a quantitative improvement of Razborov's [38] rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.
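
The nonnegative rank invoked here is the standard notion, stated below for reference (background, not quoted from the paper); by Yannakakis's factorization theorem, lower bounds on the nonnegative rank of a suitable slack matrix translate into lower bounds on the number of inequalities of any LP extended formulation.

```latex
% Nonnegative rank of a nonnegative matrix M \in \mathbb{R}^{m \times n}_{\ge 0}:
\operatorname{rank}_{+}(M)
  = \min\Bigl\{\, r : M = \sum_{i=1}^{r} u_i\, v_i^{\mathsf{T}},\;
      u_i \in \mathbb{R}^{m}_{\ge 0},\; v_i \in \mathbb{R}^{n}_{\ge 0} \Bigr\}.
```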

Relevance: 100.00%

Abstract:

Game theory models strategic interaction between agents (players), who receive payoffs at the end of the game according to their actions. The best pair of strategies for the players constitutes an equilibrium solution. However, the data of the problem cannot always be estimated precisely. For this reason, the uncertain parameters present in game models are formalized using fuzzy theory; fuzzy theory thus supports game theory, giving rise to fuzzy games. In this way, parameters such as the payoffs become fuzzy numbers. Moreover, when there is uncertainty in the representation of these fuzzy numbers, interval fuzzy numbers are used. In this work, interval fuzzy game models are analysed and computational methods are developed for solving these games. Finally, linear programming simulations are carried out to better observe the application of the theories studied and to evaluate the proposal.
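
The linear programs used in the thesis are not reproduced in this abstract; for the crisp (non-fuzzy) special case, the value v and an optimal mixed strategy x of a two-player zero-sum game with payoff matrix A are given by the classical LP below, which interval fuzzy approaches typically take as their starting point (background only, not the thesis's exact model):

```latex
% Row player's LP for a zero-sum matrix game A = (a_{ij}) \in \mathbb{R}^{m \times n}:
\max_{v,\; x_1, \dots, x_m}\ v
\quad \text{s.t.} \quad
\sum_{i=1}^{m} a_{ij}\, x_i \ge v \quad (j = 1, \dots, n),
\qquad
\sum_{i=1}^{m} x_i = 1,
\qquad
x_i \ge 0.
```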

Relevance: 100.00%

Abstract:

Municipal management in any country requires planning and a balanced allocation of resources. In Brazil, the Law of Budgetary Guidelines (LDO) guides municipal managers towards that balance. This research develops a model that seeks a balanced allocation of public resources in Brazilian municipalities, taking the LDO as a parameter. As a first step, statistical techniques and multicriteria analysis are used to define allocation strategies based on technical input from the municipal manager. In a second step, a linear programming based optimization is presented in which the objective function is derived from the preferences of the manager and his staff. The statistical analysis supports the multicriteria stage in the definition of replacement rates through time series. The multicriteria analysis was structured by defining the criteria and alternatives and applying the UTASTAR method to calculate the replacement rates. After these initial settings, a linear programming application was developed to find the optimal allocation of municipal budget resources. Data from the budget of a municipality in southwestern Paraná were used to apply the model and analyse the results.
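
The abstract does not specify the LP; a minimal sketch of the shape such a preference-weighted allocation model can take is given below, with all symbols (weights w_i from the multicriteria stage, total budget B, bounds l_i, u_i) being illustrative assumptions rather than the model actually used:

```latex
% Illustrative budget-allocation LP: x_i is the amount allocated to item i.
\max_{x}\ \sum_{i=1}^{n} w_i\, x_i
\quad \text{s.t.} \quad
\sum_{i=1}^{n} x_i = B,
\qquad
l_i \le x_i \le u_i \quad (i = 1, \dots, n).
```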

Relevance: 100.00%

Abstract:

Multivariate orthogonal polynomials in D real dimensions are considered from the perspective of the Cholesky factorization of a moment matrix. The approach allows for the construction of the corresponding multivariate orthogonal polynomials, associated second kind functions, Jacobi type matrices with the associated three term relations, and Christoffel-Darboux formulae. The multivariate orthogonal polynomials, their second kind functions and the corresponding Christoffel-Darboux kernels are shown to be quasi-determinants as well as Schur complements of bordered truncations of the moment matrix; quasi-tau functions are introduced. It is proven that the second kind functions are multivariate Cauchy transforms of the multivariate orthogonal polynomials. Discrete and continuous deformations of the measure lead to a Toda type integrable hierarchy, with the corresponding flows described through Lax and Zakharov-Shabat equations; bilinear equations are found. Nonlinear partial difference and differential equations of the 2D Toda lattice type, involving matrices of varying size, are shown to be solved by matrix coefficients of the multivariate orthogonal polynomials. The discrete flows, which are shown to be connected with a Gauss-Borel factorization of the Jacobi type matrices and their quasi-determinants, lead to expressions for the multivariate orthogonal polynomials and their second kind functions in terms of shifted quasi-tau matrices, which generalize to the multidimensional realm those that relate the Baker and adjoint Baker functions to ratios of Miwa-shifted tau-functions in the 1D scenario. In this context, the multivariate extension of the elementary Darboux transformation is given in terms of quasi-determinants of matrices built up from the evaluation, at a poised set of nodes lying in an appropriate hyperplane in R^D, of the multivariate orthogonal polynomials. The multivariate Christoffel formula for the iteration of m elementary Darboux transformations is given as a quasi-determinant. It is shown, using congruences in the space of semi-infinite matrices, that the discrete and continuous flows are intimately connected and determine nonlinear partial difference-differential equations that involve only one site of the integrable lattice, behaving as a Kadomtsev-Petviashvili type system. Finally, a brief discussion of measures with a particular linear isometry invariance and some of its consequences for the corresponding multivariate polynomials is given. In particular, it is shown that the Toda times that preserve the invariance condition lie in a secant variety of the Veronese variety of the fixed point set of the linear isometry.
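
For readers unfamiliar with the construction, the Cholesky (Gauss-Borel) setup referred to is usually the following (a schematic statement of the standard approach, not a quotation from the paper): given the measure dμ on R^D and a graded vector of monomials χ(x), one factors the moment matrix and reads the orthogonal polynomials off the triangular factor,

```latex
% Moment matrix and its block Cholesky (Gauss-Borel) factorization, with
% S block lower unitriangular and H block diagonal:
G = \int_{\mathbb{R}^{D}} \chi(\mathbf{x})\, \chi(\mathbf{x})^{\mathsf{T}}\, \mathrm{d}\mu(\mathbf{x}),
\qquad
G = S^{-1} H\, S^{-\mathsf{T}},
% so that the entries of P collect the multivariate orthogonal polynomials:
P(\mathbf{x}) = S\, \chi(\mathbf{x}),
\qquad
\int_{\mathbb{R}^{D}} P(\mathbf{x})\, P(\mathbf{x})^{\mathsf{T}}\, \mathrm{d}\mu(\mathbf{x}) = H.
```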

Relevance: 100.00%

Abstract:

In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that uses solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
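
The winner determination problem mentioned has the standard set-packing formulation below, stated for context (the paper's exact mixed integer linear programming model may include additional refinements): B is the set of submitted bids, p_j the price of bid j, S_j the set of items it requests, and I the set of items.

```latex
% First-price winner determination: pick a conflict-free subset of bids.
\max_{x}\ \sum_{j \in B} p_j\, x_j
\quad \text{s.t.} \quad
\sum_{j \,:\, i \in S_j} x_j \le 1 \quad \forall\, i \in I,
\qquad
x_j \in \{0, 1\} \quad \forall\, j \in B.
```

The LP relaxation replaces x_j ∈ {0, 1} by 0 ≤ x_j ≤ 1; it is intermediate relaxations of this kind whose solutions the abstract describes using as initial chromosomes.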

Relevance: 100.00%

Abstract:

The implementation of confidential contracts between a container liner carrier and its customers, brought about by the Ocean Shipping Reform Act (OSRA) of 1998, demands a revision of the methodology applied in the carrier's marketing and sales planning. The planning process should be more scientific and make better use of operational research tools, since the selection of customers under contract, the duration of the contracts, the freight rates and the container imbalances of these contracts are basic factors for the carrier's yield. This work develops a decision support system based on a linear programming model to generate the business plan for a container liner carrier, maximizing the contribution margin of its freight.
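
The abstract does not detail the LP model; a minimal sketch of one plausible shape (all symbols are assumptions for illustration, not the system's actual formulation) is:

```latex
% Illustrative contract-selection LP: x_c = containers carried under
% contract c, m_c = contribution margin per container, d_c = contracted
% demand, u_l = capacity of service leg l, a_{lc} = 1 if contract c uses leg l.
\max_{x}\ \sum_{c} m_c\, x_c
\quad \text{s.t.} \quad
\sum_{c} a_{lc}\, x_c \le u_l \quad \forall\, l,
\qquad
0 \le x_c \le d_c \quad \forall\, c.
```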

Relevance: 100.00%

Abstract:

In this paper we use the Hermite-Biehler theorem to establish results on the design of proportional plus integral plus derivative (PID) controllers for a class of time delay systems. Using the property of interlacing at high frequencies of the class of systems considered, together with linear programming, we obtain the set of all stabilizing PID controllers. As far as we know, previous results on the synthesis of PID controllers rely on the solution of transcendental equations. This paper also extends previous results on the synthesis of proportional controllers for a class of delay systems of retarded type to a larger class of delay systems. (C) 2009 Elsevier Ltd. All rights reserved.
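
For context, the controller class considered in such Hermite-Biehler based designs is the standard PID transfer function below; for a fixed proportional gain k_p, the interlacing conditions typically reduce to linear inequalities in (k_i, k_d), which is where linear programming enters (a schematic remark, not a statement of the paper's exact characterization):

```latex
% PID controller transfer function:
C(s) = k_p + \frac{k_i}{s} + k_d\, s.
```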

Relevance: 100.00%

Abstract:

This paper investigates the validity of a simplified equivalent-reservoir representation of a multi-reservoir hydroelectric system for modelling its optimal operation for power maximization. This simplification, proposed by Arvanitidis and Rosing (IEEE Trans Power Appar Syst 89(2):319-325, 1970), represents the system by a potential-energy equivalent reservoir with energy inflows and outflows. The hydroelectric system is also modelled for power maximization considering individual reservoir characteristics, without simplifications. Both optimization models employed the MINOS package for the solution of the non-linear programming problems. A comparison of the total optimized power generation over the planning horizon obtained by the two methods shows that the equivalent reservoir is capable of producing satisfactory power estimates, with less than 6% underestimation. The generation and total reservoir storage trajectories along the planning horizon obtained with the equivalent-reservoir method, however, present significant discrepancies compared with those found in the detailed model. This study is motivated by the fact that Brazilian generation system operations are based on the equivalent-reservoir method as part of the power dispatch procedures. The potential-energy equivalent reservoir is an alternative that eliminates problems with the dimensionality of state variables in a dynamic programming model.
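
In outline (a schematic statement, not taken from the paper), the aggregation replaces the individual reservoir dynamics by a single energy balance of the form below, where E_t is the stored potential energy at stage t, I_t^E the energy inflow, G_t the energy generated and S_t^E the spilled (non-turbined) energy:

```latex
% Energy balance of the potential-energy equivalent reservoir:
E_{t+1} = E_{t} + I^{E}_{t} - G_{t} - S^{E}_{t},
\qquad
0 \le E_{t} \le \bar{E}.
```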

Relevance: 100.00%

Abstract:

Sensors and actuators based on piezoelectric plates have shown increasing demand in the field of smart structures, including the development of actuators for cooling and fluid-pumping applications and transducers for novel energy-harvesting devices. This project develops a topology optimization formulation for the dynamic design of piezoelectric laminated plates aimed at piezoelectric sensor, actuator and energy-harvesting applications. It distributes piezoelectric material over a metallic plate in order to achieve a desired dynamic behaviour with specified resonance frequencies and modes and an enhanced electromechanical coupling factor (EMCC). The finite element model employs a piezoelectric plate element based on the MITC formulation, which is reliable, efficient and avoids the shear-locking problem. The topology optimization formulation is based on the PEMAP-P model combined with the RAMP model, where the design variables are the pseudo-densities that describe the amount of piezoelectric material in each finite element and its polarization sign. The design problem aims to tailor an eigenshape, i.e., to maximize and minimize vibration amplitudes at certain points of the structure in a given eigenmode, while tuning the eigenvalue to a desired value and also maximizing its EMCC, so that the energy conversion is maximized for that mode. The optimization problem is solved using sequential linear programming. Through this formulation, a design with enhanced energy conversion in the low-frequency spectrum is obtained by minimizing a set of first eigenvalues and enhancing their corresponding eigenshapes while maximizing their EMCCs, which can be considered an approach to the design of energy-harvesting devices. The implementation of the topology optimization algorithm and some results are presented to illustrate the method.
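
As background (standard definitions, not reproduced from the paper), the RAMP interpolation used to relate a material property C to the pseudo-density ρ, and one common frequency-based measure of the modal EMCC, are:

```latex
% RAMP interpolation with penalization parameter q \ge 0:
C(\rho) = C_{\min} + \frac{\rho}{1 + q\,(1 - \rho)}\,\bigl(C_{0} - C_{\min}\bigr),
% Effective electromechanical coupling coefficient of a mode, from its
% open-circuit (\omega_o) and short-circuit (\omega_s) resonance frequencies:
\qquad
k_{\mathrm{eff}}^{2} = \frac{\omega_{o}^{2} - \omega_{s}^{2}}{\omega_{o}^{2}}.
```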

Relevance: 100.00%

Abstract:

Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
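
For context (standard EIT background, not taken from the paper), the forward problem that such iterative solvers solve repeatedly is the generalized Laplace equation for the potential u given the admittivity γ, with a prescribed boundary current density j:

```latex
% EIT forward problem in the body \Omega:
\nabla \cdot \bigl(\gamma\, \nabla u\bigr) = 0 \ \ \text{in } \Omega,
\qquad
\gamma\, \frac{\partial u}{\partial n} = j \ \ \text{on } \partial\Omega,
\qquad
\int_{\partial\Omega} j\, \mathrm{d}s = 0.
```

After finite element discretization this becomes a large sparse linear system that must be solved for every current pattern and every update of the admittivity estimate, which is where subspace recycling and preconditioning reduce the overall cost.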