930 results for Linear optimization approach


Relevance:

30.00%

Publisher:

Abstract:

The Fuzzy Waste Load Allocation Model (FWLAM), developed in an earlier study, derives the optimal fractional levels for the base flow conditions, considering the goals of the Pollution Control Agency (PCA) and the dischargers. The Modified Fuzzy Waste Load Allocation Model (MFWLAM), developed subsequently, is a stochastic model that considers the moments (mean, variance and skewness) of water quality indicators, incorporating uncertainty due to randomness of input variables along with uncertainty due to imprecision. The risk of low water quality is reduced significantly by using this modified model, but the inclusion of new constraints leads to a low value of the acceptability level, A, interpreted as the maximized minimum satisfaction in the system. To improve this value, a new model, which is a combination of FWLAM and MFWLAM, is presented, allowing some violations of the constraints of MFWLAM. This combined model is a multiobjective optimization model whose objectives are maximization of the acceptability level and minimization of constraint violations. Fuzzy multiobjective programming, goal programming and fuzzy goal programming are used to find the solutions. For the optimization model, Probabilistic Global Search Lausanne (PGSL) is used as a nonlinear optimization tool. The methodology is applied to a case study of the Tunga-Bhadra river system in south India. The model yields a compromise solution with a higher acceptability level than MFWLAM, with a satisfactory value of risk. Thus the goal of risk minimization is achieved with a comparatively better acceptability level.
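
A minimal sketch of the max-min (acceptability level) formulation that fuzzy multiobjective programming reduces to; the membership functions and bounds are invented for illustration, not taken from the paper's case study.

```python
# Max-min fuzzy formulation: maximize the acceptability level (the minimum
# satisfaction among conflicting goals). Bounds below are illustrative.
from scipy.optimize import linprog

# Decision variables: [lam, x], where x is a fractional level.
# PCA satisfaction rises as x -> x_hi; discharger satisfaction as x -> x_lo.
x_lo, x_hi = 0.3, 0.9

c = [-1.0, 0.0]  # maximize lam == minimize -lam
# lam <= (x - x_lo)/(x_hi - x_lo)   (PCA membership)
# lam <= (x_hi - x)/(x_hi - x_lo)   (discharger membership)
d = x_hi - x_lo
A_ub = [[1.0, -1.0 / d],
        [1.0,  1.0 / d]]
b_ub = [-x_lo / d, x_hi / d]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (x_lo, x_hi)])
lam, x = res.x
print(f"acceptability level = {lam:.3f} at x = {x:.3f}")
```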

Relevance:

30.00%

Publisher:

Abstract:

This article analyzes the effect, on the design of composite structures, of devising a new failure envelope that combines the most commonly used failure criteria for composite laminates. The failure criteria considered for the study are the maximum stress and Tsai-Wu criteria. In addition to these popular phenomenological failure criteria, a micromechanics-based criterion called the failure mechanism-based failure criterion is also considered. The failure envelopes obtained from these criteria are superimposed over one another, and a new failure envelope is constructed from the lowest absolute values of the strengths predicted by the individual criteria. The new failure envelope so obtained is therefore termed the most conservative failure envelope. A minimum weight design of composite laminates is performed using genetic algorithms. In addition, the effect of stacking sequence on the minimum weight of the laminate is studied. Results are compared for the different failure envelopes, and the conservative design is evaluated against the designs obtained using a single failure criterion. The design approach is recommended for structures where composites are the key load-carrying members, such as helicopter rotor blades.
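
A minimal sketch of the envelope construction: superimpose the criteria and keep the lowest absolute predicted strength at each loading direction. The strength curves below are illustrative placeholders for the values the three criteria would predict.

```python
import numpy as np

# Build the "most conservative" envelope as the pointwise minimum of the
# strengths predicted by each criterion over a grid of loading directions.
theta = np.linspace(0.0, 2.0 * np.pi, 361)          # loading direction
r_max_stress = 1.0 + 0.2 * np.abs(np.cos(theta))    # illustrative only
r_tsai_wu    = 1.1 - 0.1 * np.sin(2.0 * theta)
r_fmb        = 1.05 + 0.1 * np.cos(3.0 * theta)     # failure-mechanism-based

# Keep the lowest absolute strength at each direction: this is the
# most conservative failure envelope.
r_conservative = np.min(np.abs([r_max_stress, r_tsai_wu, r_fmb]), axis=0)
```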

Relevance:

30.00%

Publisher:

Abstract:

Hyperbranched polyethers having poly(ethylene glycol) (PEG) segments at their molecular periphery were prepared by a simple procedure wherein an AB2-type monomer was melt-polycondensed with an A-type monomer, namely heptaethylene glycol monomethyl ether. The presence of a large number of PEG units at the termini imparted a lower critical solution temperature (LCST) to these copolymers, above which they precipitated out of aqueous solution. In an effort to understand the effect of various molecular structural parameters on the LCST, the length of the hydrophobic spacer segment within the hyperbranched core and the extent of PEGylation were varied. Additionally, linear analogues that incorporate pendant PEG segments were also prepared; comparison of their LCST with that of the hyperbranched analogues clearly revealed that the hyperbranched topology leads to a substantial increase in the LCST, highlighting the importance of the peripheral placement of the PEG units.

Relevance:

30.00%

Publisher:

Abstract:

The problem of identifying the stiffness, mass and damping properties of linear structural systems, based on multiple sets of measurement data originating from static and dynamic tests, is considered. A strategy within the framework of Kalman filter based dynamic state estimation is proposed to tackle this problem. The static tests consist of measuring the response of the structure to slowly moving loads and to static loads whose magnitudes are varied incrementally; the dynamic tests involve measurement of a few elements of the frequency response function (FRF) matrix. These measurements are taken to be contaminated by additive Gaussian noise. An artificial independent variable τ, which simultaneously parameterizes the point of application of the moving load, the magnitude of the incrementally varied static load and the driving frequency in the FRFs, is introduced. The state vector is taken to consist of the system parameters to be identified. The fact that these parameters are independent of the variable τ constitutes the set of 'process' equations. The measurement equations are derived from the mechanics of the problem, with quantities such as displacements and/or strains taken to be measured. A recursive algorithm is developed that employs a linearization strategy based on Neumann's expansion of the structural static and dynamic stiffness matrices and provides posterior estimates of the mean and covariance of the unknown system parameters. The satisfactory performance of the proposed approach is illustrated by considering the identification of the dynamic properties of an inhomogeneous beam and of the axial rigidities of the members of a truss structure.
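
A minimal sketch of the core idea under simplifying assumptions: the parameter is constant across the artificial variable τ (the 'process' equation), and each measurement is a noisy, parameter-dependent response. A single spring stiffness identified from incrementally varied static loads stands in for the paper's beam and truss problems.

```python
import numpy as np

# Process equation: theta_{i+1} = theta_i (parameter is tau-independent).
# Measurement at pseudo-time tau: static displacement u = F(tau)/k + noise.
# An extended Kalman filter recursively updates the stiffness estimate.
rng = np.random.default_rng(0)
k_true, sigma = 2.0e3, 1.0e-4
taus = np.linspace(1.0, 10.0, 50)          # incrementally varied load levels
u_meas = taus / k_true + sigma * rng.standard_normal(taus.size)

k_hat, P = 1.0e3, 1.0e6                    # prior mean and variance
for F, u in zip(taus, u_meas):
    h = F / k_hat                          # predicted measurement
    H = -F / k_hat**2                      # dh/dk (linearization)
    S = H * P * H + sigma**2               # innovation variance
    K = P * H / S                          # Kalman gain
    k_hat += K * (u - h)                   # posterior mean update
    P *= 1.0 - K * H                       # posterior variance update
print(f"identified stiffness: {k_hat:.1f} (true {k_true})")
```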

Relevance:

30.00%

Publisher:

Abstract:

A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for the numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version requires that the linearized solution manifold transversally intersect the non-linear solution manifold at a chosen set of points or cross-sections in the state space. However, a major point of departure of the present method is its flexibility in treating the non-linear damping and stiffness terms of the original system as damping and stiffness terms of the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method requires construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus' characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math. VII (1954) 649-673], and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation can preserve certain intrinsic structural properties of the solution of the non-linear problem. Yet another advantage of transversal linearization lies in the non-unique representation of the linearized vector field, an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. A limited numerical exploration of the method is presently provided for two well-known non-linear oscillators, viz. the hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
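
A minimal sketch of the Magnus-type construction of the FSM for a time-varying linearized system; the two-term truncation and quadrature points are a generic textbook choice, not the paper's full scheme.

```python
import numpy as np
from scipy.linalg import expm

# Fundamental solution matrix over one step for dx/dt = A(t) x via a
# truncated Magnus expansion: integral of A plus the first nested commutator.
def fsm_magnus_step(A, t0, h):
    """FSM over [t0, t0 + h] from a 2-point Gauss quadrature of A(t)."""
    c = np.sqrt(3.0) / 6.0
    A1, A2 = A(t0 + (0.5 - c) * h), A(t0 + (0.5 + c) * h)
    omega1 = 0.5 * h * (A1 + A2)                                 # ~ int A dt
    omega2 = (np.sqrt(3.0) / 12.0) * h**2 * (A2 @ A1 - A1 @ A2)  # [A2, A1]
    return expm(omega1 + omega2)            # exponential (Lie) transformation

# Example: a time-varying linearized oscillator matrix.
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.5 * np.sin(t)), -0.1]])
Phi = fsm_magnus_step(A, 0.0, 0.01)
```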

Relevance:

30.00%

Publisher:

Abstract:

The paper focuses on the reliability-based design optimization of gravity wall bridge abutments subjected to active earth pressure conditions during earthquakes. An analytical study considering the effect of uncertainties in the seismic analysis of bridge abutments is presented. A planar failure surface is considered in conjunction with the pseudostatic limit equilibrium method for the calculation of the seismic active earth pressure. Analysis is conducted to evaluate the external stability of bridge abutments subjected to earthquake loads. Reliability analysis is used to estimate the probability of failure in three failure modes, viz. sliding of the wall on its base, overturning about its toe (or eccentricity failure of the resultant force), and bearing failure of the foundation soil below the base of the wall. The properties of the backfill and of the foundation soil below the base of the abutment are treated as random variables. In addition, the uncertainties associated with characteristics of earthquake ground motions, such as the horizontal seismic acceleration and the shear wave velocity propagating through the backfill soil, are considered. The optimum proportions of the abutment needed to maintain stability against the three failure modes are obtained by targeting various component and system reliability indices. Parametric studies are also carried out to examine the influence of various parameters on seismic stability.
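
A minimal Monte Carlo sketch of the sliding failure mode only; the limit state, load model, and distribution parameters are illustrative placeholders rather than the paper's pseudostatic formulation.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo estimate of sliding failure probability and reliability index.
rng = np.random.default_rng(1)
n = 200_000
phi = np.radians(rng.normal(34.0, 2.0, n))   # backfill friction angle (random)
kh  = rng.uniform(0.05, 0.20, n)             # horizontal seismic coefficient
W   = 700.0                                  # wall weight per unit length, kN/m

# Simplified pseudostatic thrust on a 7 m wall with unit weight 18 kN/m^3;
# the seismic earth pressure coefficient is a crude linear fit in kh.
P_ae = 0.5 * 18.0 * 7.0**2 * (0.35 + 1.5 * kh)
g = W * np.tan(phi) - (P_ae + kh * W)        # limit state (g < 0 => sliding)

p_f = np.mean(g < 0.0)
beta = -norm.ppf(p_f)                        # reliability index
print(f"P_f = {p_f:.4f}, beta = {beta:.2f}")
```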

Relevance:

30.00%

Publisher:

Abstract:

In this paper we consider a decentralized supply chain formation problem for linear multi-echelon supply chains whose individual echelon managers are autonomous, rational, and intelligent. At each echelon there is a choice of service providers, and the specific problem we solve is that of determining a cost-optimal mix of service providers so as to achieve a desired level of end-to-end delivery performance. Following a mechanism design approach, the problem can be broken into two sub-problems: (1) design of an incentive compatible mechanism to elicit the true cost functions from the echelon managers; (2) formulation and solution of an appropriate optimization problem using the true cost information. We propose a novel Bayesian incentive compatible mechanism for eliciting the true cost functions. This improves upon existing solutions in the literature, which are all based on classical Vickrey-Clarke-Groves mechanisms and require significant incentives to be paid to the echelon managers to achieve dominant strategy incentive compatibility. The proposed solution, which we call SCF-BIC (Supply Chain Formation with Bayesian Incentive Compatibility), significantly reduces the cost of supply chain formation. We illustrate the efficacy of the proposed methodology using the example of a three-echelon manufacturing supply chain.
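
A minimal sketch of the second sub-problem only (after truthful costs have been elicited), under the assumption that echelon lead times are independent normals; all provider names and numbers are invented for illustration.

```python
import itertools, math

# Pick one service provider per echelon so that total cost is minimal while
# the end-to-end on-time delivery probability meets a target.
providers = {                      # echelon -> [(cost, mean, variance), ...]
    "procurement":   [(10.0, 3.0, 1.0), (14.0, 2.0, 0.5)],
    "manufacturing": [(25.0, 5.0, 2.0), (32.0, 4.0, 1.0)],
    "distribution":  [(8.0, 2.0, 0.8), (11.0, 1.5, 0.3)],
}
due_date, target_prob = 10.0, 0.95

def on_time_prob(choice):
    # End-to-end lead time ~ Normal(sum of means, sum of variances).
    mu = sum(p[1] for p in choice)
    sd = math.sqrt(sum(p[2] for p in choice))
    z = (due_date - mu) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

best = min((c for c in itertools.product(*providers.values())
            if on_time_prob(c) >= target_prob),
           key=lambda c: sum(p[0] for p in c), default=None)
print(best)
```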

Relevance:

30.00%

Publisher:

Abstract:

Fluctuation of field emission in carbon nanotubes (CNTs) is not desirable in many applications, and the design of biomedical x-ray devices is one of them. In these applications it is of great importance to have precise control of electron beams over multiple spatio-temporal scales. In this paper, a new design is proposed in order to optimize the field emission performance of CNT arrays. A diode configuration is used for the analysis, where arrays of CNTs act as the cathode. The results indicate that the linear height distribution of CNTs proposed in this study shows more stable performance than the conventionally used uniform distribution.
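
For concreteness, the two cathode layouts being compared can be written down directly; the geometry numbers below are placeholders, not the paper's device parameters.

```python
import numpy as np

# Uniform (constant) CNT height distribution versus a linearly graded one.
n_tubes = 50
h_uniform = np.full(n_tubes, 5.0)            # all tubes 5 um tall
h_linear = np.linspace(3.0, 7.0, n_tubes)    # linearly graded heights

# Same total emitter material in both arrays (equal mean height).
assert np.isclose(h_uniform.mean(), h_linear.mean())
```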

Relevance:

30.00%

Publisher:

Abstract:

Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The data-size-dependent training time complexity of SVMs usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines to reduce the training time without compromising the generalization performance. Experiments with real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
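
A minimal sketch of the selective-sampling pipeline under stated assumptions: scikit-learn's KMeans stands in for the paper's kernel-induced incremental clustering, and the margin threshold is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Cluster each class, train a first SVM on cluster abstractions, then
# refine on the points that fall near the current decision boundary.
X, y = make_classification(n_samples=20_000, n_features=10, random_state=0)

reps_X, reps_y = [], []
for label in np.unique(y):
    Xc = X[y == label]
    km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(Xc)
    reps_X.append(km.cluster_centers_)
    reps_y.append(np.full(50, label))

svm = SVC(kernel="rbf").fit(np.vstack(reps_X), np.concatenate(reps_y))
margin = np.abs(svm.decision_function(X))
keep = margin < 1.0                          # candidate support vectors only
svm_final = SVC(kernel="rbf").fit(X[keep], y[keep])
```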

Relevance:

30.00%

Publisher:

Abstract:

Considering a general linear model of signal degradation, we derive the minimum mean square error (MMSE) estimator by modeling the probability density function (PDF) of the clean signal with a Gaussian mixture model (GMM) and the additive noise with a Gaussian PDF. The derived MMSE estimator is non-linear, and the linear MMSE estimator is shown to be a special case. For a speech signal corrupted by independent additive noise, we propose a speech enhancement method based on the derived MMSE estimator, modeling the joint PDF of the time-domain speech samples of a speech frame using a GMM. We also show that the same estimator can be used for transform-domain speech enhancement.
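
In the scalar case this estimator has a well-known closed form: the posterior mean is a responsibility-weighted sum of per-component linear (Wiener) estimates, which is non-linear in the observation overall. A minimal sketch with invented GMM parameters:

```python
import numpy as np
from scipy.stats import norm

# Clean signal x ~ GMM, noise n ~ N(0, sigma_n^2), observation y = x + n.
w  = np.array([0.6, 0.4])          # GMM weights
mu = np.array([-1.0, 2.0])         # component means
s2 = np.array([0.5, 1.5])          # component variances
sigma_n2 = 0.8

def mmse_estimate(y):
    # p(k|y): under component k, y ~ N(mu_k, s2_k + sigma_n2).
    lik = w * norm.pdf(y, mu, np.sqrt(s2 + sigma_n2))
    resp = lik / lik.sum()
    # Per-component posterior mean (a Wiener-type linear estimate).
    comp_means = mu + s2 / (s2 + sigma_n2) * (y - mu)
    return resp @ comp_means       # responsibility-weighted combination

print(mmse_estimate(0.3))
```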

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider the machining condition optimization models presented in earlier studies. Finding the optimal combination of machining conditions within the constraints is a difficult task; hence, earlier studies used standard optimization methods. However, the non-linear nature of the objective function and the constraints that need to be satisfied make it difficult to use standard optimization methods for the solution. In this paper, we present a real coded genetic algorithm (RCGA) to find the optimal combination of machining conditions. We discuss in detail various issues related to the real coded genetic algorithm, such as solution representation, crossover operators, and the repair algorithm. We also present the results obtained for these models using the real coded genetic algorithm and discuss the advantages of using it for these problems. From the results obtained, we conclude that the real coded genetic algorithm is reliable and accurate for solving machining condition optimization models.
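
A minimal sketch of the RCGA machinery described above (BLX-α blend crossover, Gaussian mutation, and repair by clipping into the feasible box); the objective and bounds are toy stand-ins for a machining-condition model.

```python
import numpy as np

# Real-coded GA over a box of machining conditions (e.g. speed, feed).
rng = np.random.default_rng(0)
lo, hi = np.array([50.0, 0.1]), np.array([400.0, 1.2])
cost = lambda x: (x[0] - 180.0)**2 / 1e4 + (x[1] - 0.45)**2  # toy objective

pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(200):
    fit = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]              # truncation selection
    i, j = rng.integers(0, 20, (2, 20))
    lo_p = np.minimum(parents[i], parents[j])
    hi_p = np.maximum(parents[i], parents[j])
    d = hi_p - lo_p
    children = rng.uniform(lo_p - 0.5 * d, hi_p + 0.5 * d)   # BLX-0.5 crossover
    children += rng.normal(0.0, 0.01, children.shape) * (hi - lo)  # mutation
    children = np.clip(children, lo, hi)             # repair: back into box
    pop = np.vstack([parents, children])
best = pop[np.argmin([cost(p) for p in pop])]
print(best)                                          # approaches (180, 0.45)
```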

Relevance:

30.00%

Publisher:

Abstract:

In this work, we explore simultaneous geometry design and material selection for statically determinate trusses by posing them together as a continuous optimization problem. The underlying principles of our approach are structural optimization and Ashby's procedure for material selection from a database. For simplicity and ease of initial implementation, only static loads are considered, with the objectives of maximum stiffness, minimum weight/cost, and safety against failure. The safety of tensile and compression members in the truss is treated differently, to prevent yield and buckling failures, respectively. Geometry variables such as the lengths and orientations of members are taken to be the design variables in an assumed layout. The cross-sectional areas of the members are determined so as to satisfy the failure constraints in each member. Along the lines of Ashby's material indices, a new design index is derived for trusses. The design index helps in choosing the most suitable material for any geometry of the truss. Using the design index, both the design space and the material database are searched simultaneously using gradient-based optimization algorithms. The important feature of our approach is that the formulated optimization problem is continuous, although material selection from a database is an inherently discrete problem. A few illustrative examples are included. It is observed that the method is capable of determining the optimal topology, in addition to the optimal geometry, when the assumed layout contains more links than are necessary for optimality.
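
The paper derives its own design index; as a generic illustration of how such an index ranks materials, the classical Ashby index for a light, strong tension member (sigma_f / rho) can be evaluated over a small database. The database rows below are invented.

```python
# Rank candidate materials for a tension member: at fixed load and length,
# member mass scales as rho / sigma_f, so larger sigma_f / rho is better.
materials = {
    # name: (yield strength sigma_f [MPa], density rho [kg/m^3], cost [$/kg])
    "steel":     (350.0, 7850.0, 0.8),
    "aluminium": (270.0, 2700.0, 2.5),
    "CFRP":      (600.0, 1600.0, 40.0),
    "wood":      ( 40.0,  600.0, 0.5),
}

def mass_index(props):
    sigma_f, rho, _ = props
    return sigma_f / rho        # larger is better for light, strong ties

ranked = sorted(materials, key=lambda m: mass_index(materials[m]), reverse=True)
print(ranked)                   # CFRP first by mass; cost would reorder this
```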

Relevance:

30.00%

Publisher:

Abstract:

Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address this problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others retain DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in the few remaining cases mostly "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
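
As a toy illustration of the IDP idea (not the evaluated algorithm or its cost model), the sketch below runs subset DP up to blocks of k relations, collapses the best block into a composite relation, and restarts; the crude cost model charges each join its output cardinality, and all names are invented for the example.

```python
import itertools

def dp_best(cards, k):
    """Best (cost, left-deep order) for every subset of up to k relations."""
    best = {frozenset([r]): (0.0, (r,)) for r in cards}
    for size in range(2, k + 1):
        for subset in itertools.combinations(cards, size):
            s = frozenset(subset)
            out_card = 1.0
            for r in s:
                out_card *= cards[r]          # crude result-size estimate
            cands = []
            for r in s:                        # r = last relation joined
                sub_cost, sub_order = best[s - {r}]
                cands.append((sub_cost + out_card, sub_order + (r,)))
            best[s] = min(cands)
    return best

def idp(cards, k=3):
    cards, blocks = dict(cards), []
    while len(cards) > 1:
        best = dp_best(cards, min(k, len(cards)))
        top = max(best, key=len)               # a largest solved subset
        cost, order = min((best[s] for s in best if len(s) == len(top)),
                          key=lambda t: t[0])
        blocks.append(order)
        card = 1.0
        for r in order:
            card *= cards.pop(r)
        cards["".join(order)] = card           # collapse block, then restart
    return blocks

print(idp({"A": 100, "B": 50, "C": 1000, "D": 10, "E": 200}))
```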

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider the design and bit-error performance analysis of linear parallel interference cancellers (LPIC) for multicarrier (MC) direct-sequence code division multiple access (DS-CDMA) systems. We propose an LPIC scheme in which we estimate and cancel the multiple access interference (MAI) based on the soft decision outputs on the individual subcarriers, and the interference-cancelled outputs on the different subcarriers are combined to form the final decision statistic. We scale the MAI estimate on each subcarrier by a weight before cancellation. In order to choose these weights optimally, we derive exact closed-form expressions for the bit-error rate (BER) at the output of the different stages of the LPIC, which we minimize to obtain the optimum weights for the different stages. In addition, using an alternate approach involving the characteristic function of the decision variable, we derive BER expressions for the weighted LPIC scheme, the matched filter (MF) detector, the decorrelating detector, and the minimum mean square error (MMSE) detector for the considered multicarrier DS-CDMA system. We show that the proposed BER-optimized weighted LPIC scheme performs better than the MF detector and the conventional LPIC scheme (where the weights are taken to be unity), and close to the decorrelating and MMSE detectors.
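
A minimal sketch of one weighted cancellation stage in matrix form; the weight, correlation matrix, and sizes are illustrative, whereas the paper derives the weights by minimizing exact BER expressions.

```python
import numpy as np

# Weighted parallel interference cancellation: subtract a scaled estimate of
# the off-diagonal (MAI) part of the correlation matrix applied to the
# previous stage's soft outputs.
rng = np.random.default_rng(0)
K = 8                                         # number of users
b = rng.choice([-1.0, 1.0], K)                # transmitted bits
R = 0.2 * np.ones((K, K)) + 0.8 * np.eye(K)   # cross-correlation matrix
y = R @ b + 0.1 * rng.standard_normal(K)      # matched filter outputs

w = 0.7                                       # per-stage cancellation weight
soft = y.copy()
for stage in range(3):
    mai_estimate = (R - np.eye(K)) @ soft     # estimated MAI per user
    soft = y - w * mai_estimate               # weighted cancellation
bits = np.sign(soft)
print(np.mean(bits == b))                     # fraction of correct decisions
```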

Relevance:

30.00%

Publisher:

Abstract:

Bottleneck operations in a complex coal rail system cost mining companies millions of dollars. To address this issue, this paper investigates a real-world coal rail system and aims to optimise the coal railing operations under constraints of limited resources (e.g., a limited number of locomotives and wagons). In the literature, most studies have considered the train scheduling problem on a single-track railway network to be strongly NP-hard and have thus developed metaheuristics as the main solution methods. In this paper, a new mathematical programming model is formulated and coded in an optimization programming language based on a constraint programming (CP) approach. A new depth-first-search technique is developed and embedded inside the CP model to obtain the optimised coal railing timetable efficiently. Computational experiments demonstrate that high-quality solutions are obtainable in industry-scale applications. To support decision making, sensitivity analysis is conducted over different scenarios and specific criteria.

Keywords: Train scheduling · Rail transportation · Coal mining · Constraint programming
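
A toy sketch of the CP formulation, using Google OR-Tools CP-SAT as a freely available stand-in for the solver used in the paper: trains become interval variables, the single-track section becomes a no-overlap resource, and the makespan is minimized. Durations are illustrative.

```python
from ortools.sat.python import cp_model

durations = [30, 45, 25, 40]                 # section running times (minutes)
model = cp_model.CpModel()
horizon = sum(durations)

starts, intervals = [], []
for i, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start_{i}")
    intervals.append(model.NewIntervalVar(s, d, s + d, f"train_{i}"))
    starts.append(s)

model.AddNoOverlap(intervals)                # single track: one train at a time
makespan = model.NewIntVar(0, horizon, "makespan")
for s, d in zip(starts, durations):
    model.Add(makespan >= s + d)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(s) for s in starts], solver.Value(makespan))
```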