880 results for Fuzzy Multi-Objective Linear Programming
Abstract:
Many testing methods are based on program paths. A well-known problem with them is that some paths are infeasible. To decide the feasibility of paths, we may solve a set of constraints. In this paper, we describe constraint-based tools that can be used for this purpose. They accept constraints expressed in a natural form, which may involve variables of different types such as integers, Booleans, reals and fixed-size arrays. The constraint solver is an extension of a Boolean satisfiability checker and makes use of a linear programming package. The solving algorithm is described, and examples are given to illustrate the use of the tools. The feasibility of many paths from the testing literature can be decided in a reasonable amount of time.
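As a toy illustration of the feasibility question (not the paper's SAT-plus-LP solver), a brute-force check over small finite domains can already decide simple path conditions; the variable names and domains below are invented for the example:

```python
from itertools import product

def path_feasible(constraints, domains):
    """Brute-force check: does any assignment satisfy all path constraints?
    constraints: list of predicates over a dict of variables.
    domains: dict mapping each variable to an iterable of candidate values."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        env = dict(zip(names, values))
        if all(c(env) for c in constraints):
            return env  # a witness assignment: the path is feasible
    return None  # infeasible over the given domains

# Hypothetical path condition: (x > 0) and flag and (x + y == 3)
cs = [lambda e: e["x"] > 0,
      lambda e: e["flag"],
      lambda e: e["x"] + e["y"] == 3]
witness = path_feasible(cs, {"x": range(-2, 5), "y": range(-2, 5), "flag": [True, False]})
```

The tools described in the abstract replace this enumeration with satisfiability checking backed by linear programming, which scales to mixed integer, Boolean, real and array constraints.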
Abstract:
To effectively characterize and solve the maintenance planning problem for military equipment systems, a constrained optimization model with maintenance cost and mission capability as objectives is established, and a multi-objective tabu search algorithm for solving the equipment maintenance planning problem is proposed. The model considers two cost measures, maintenance materials and labor hours, and evaluates the mission-capability measure of the equipment system hierarchically through a quadratic regression equation built on a quantity-and-quality assessment. The algorithm adopts a two-stage search strategy: the first stage starts from the lower bound on maintenance quantity and searches with mission capability as the evolution objective until a feasible solution is found; the second stage searches with the ratio of mission capability to maintenance cost as the evolution objective, continually improving the whole non-dominated solution set. Experiments show that the algorithm can solve large-scale problems with ≥500 equipment types and quantities ≥45,000, and the quality of the solutions produced by the model and algorithm has also been validated in practical applications.
Abstract:
To reduce the impact of uncertainty in raw meal composition on the quality control system for cement raw meal, a modulus-compensation control strategy is proposed. Objective functions are created for each of the three moduli, and a state-space search strategy is used to solve the multi-objective optimization problem. To address the problem that the initial sample space cannot cover all samples, a neural-network-based estimation model is proposed to extend the initial sample space. An evaluation function scores the states in the state space to obtain the optimal modulus state; the raw material proportions are then adjusted according to the moduli, so that the modulus deviations are finally compensated while the disturbance caused to the proportions is minimized. Industrial experiments show that the raw meal quality pass rate rose from 30% to 50% and that the system can effectively optimize control of the batching process, demonstrating that the neural-network-based state-space search strategy provides a feasible approach to the multi-objective optimization of cement raw meal proportioning.
Abstract:
This paper proposes a cluster-guided searching (CGS) method for path planning. A K-means clustering method based on maximum-minimum distance is used to cluster samples offline, and the learning results are stored as knowledge in the form of "similar environments, similar decisions." During path planning, the robot organizes environment information online to obtain input-space samples, matches them against the knowledge base to retrieve the nearest class, and then searches the output space within that class by alternating a speed-first strategy and a direction-first strategy. If retrieval fails because the knowledge is incomplete, a linear programming (LP) algorithm can be restarted for online path planning and the clustering knowledge base is updated. Simulation results show that the method is an effective learning approach to path planning.
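The maximum-minimum-distance seeding step mentioned above can be sketched generically; this is not the authors' implementation, and the sample points are invented:

```python
def dist(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def maxmin_centers(points, k):
    """Max-min distance seeding: each new centre is the point farthest
    from its nearest already-chosen centre."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    return centers

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
centers = maxmin_centers(pts, 2)
```

Seeds chosen this way spread the initial K-means centres across the sample space, which is what makes the offline clustering of environment samples stable.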
Abstract:
A mixed integer linear programming model of the minimax task assignment problem is established, a matrix operation solution method is proposed, and its computational complexity is compared with those of the exhaustive solution and the mixed integer linear programming solution. Theoretical analysis and numerical experiments show that the matrix operation method efficiently provides optimal solutions for two classes of task assignment problems: minimax and total-minimum task assignment.
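The exhaustive baseline against which the matrix operation method is compared can be sketched as follows (the cost matrix is invented for illustration):

```python
from itertools import permutations

def minimax_assignment(cost):
    """Exhaustively find the one-to-one assignment minimizing the
    largest single task cost (the minimax objective)."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        worst = max(cost[i][perm[i]] for i in range(n))  # bottleneck of this assignment
        if worst < best:
            best, best_perm = worst, perm
    return best, best_perm

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
value, perm = minimax_assignment(cost)
```

Enumeration costs O(n!), which is exactly the complexity gap the matrix operation method is designed to close.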
Abstract:
A mathematical model is established for mixed-model assembly line design, a constrained multi-objective optimization problem. Combining a Pareto-based solution ranking method with a nonlinear mechanism in Lp-norm form, a multi-objective optimization method based on a genetic-annealing algorithm is constructed. Key techniques such as individual encoding, chromosome repair, and the multi-objective handling mechanism are elaborated. An algorithm flowchart is designed and an optimization program is developed. The method overcomes the shortcomings of the weighted-sum approach and uses simulated annealing to improve the global search performance of the genetic algorithm. Computational examples show that, as the number of iterations increases, the non-dominated points of each generation gradually converge to the Pareto-optimal frontier, providing a new multi-objective optimization method for mixed-model assembly line design.
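The Pareto-based ranking at the heart of such methods reduces to a dominance test; a minimal sketch for a two-objective minimisation problem (sample points invented):

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse in every objective
    and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points not dominated by any other: the current Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = nondominated(pts)
```

Ranking the population by repeatedly peeling off such fronts is what lets the hybrid algorithm evolve toward the whole Pareto-optimal frontier rather than a single weighted-sum optimum.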
Abstract:
Six-axis force sensors based on the Stewart platform are compact, stiff, and wide-ranging, and have broad application prospects in fields such as industrial robotics and space station docking. A good calibration method is the basis for using the sensor correctly. Because a Stewart-platform-based six-axis force sensor is a complex nonlinear system, conventional linear calibration inevitably introduces large calibration errors and degrades performance. Calibration is, in essence, the determination of a mapping function from the space of measured values to the space of theoretical values. Function approximation theory shows that, when function values are given only on a known point set, the unknown function can be approximated by simpler functions such as polynomials or piecewise polynomials. Based on this idea, this paper divides the whole measurement space into several contiguous measurement subspaces and calibrates each subspace linearly, thereby improving the calibration accuracy of the whole measurement system. Experimental analysis confirms the effectiveness of the calibration method.
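A one-dimensional sketch of the subspace-wise linear calibration idea (the breakpoints and fitted data are hypothetical; the real sensor maps six-dimensional measurements):

```python
def fit_line(xs, ys):
    """Ordinary least-squares line y = a*x + b for one calibration subspace."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def piecewise_calibrate(x, breakpoints, segments):
    """Apply the linear map fitted for the subspace containing x.
    breakpoints[i] is the upper edge of subspace i; the last segment is open-ended."""
    for hi, (a, b) in zip(breakpoints, segments):
        if x <= hi:
            return a * x + b
    a, b = segments[-1]
    return a * x + b

# Hypothetical per-subspace fits of a nonlinear sensor response:
segments = [fit_line([0, 1], [0, 1]),   # subspace x <= 1: recovers y = x
            fit_line([1, 2], [1, 4])]   # subspace x > 1:  recovers y = 3x - 2
y = piecewise_calibrate(1.5, breakpoints=[1], segments=segments)
```

Each narrow subspace sees only a nearly linear slice of the sensor's response, which is why the piecewise fit beats a single global linear calibration.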
Abstract:
A new-concept amphibious robot with an integrated wheel-propeller-leg mechanism is proposed that can crawl on land, swim underwater at a certain depth, and crawl on the seabed; multiple locomotion modes and a compound locomotion mechanism are its distinguishing features. The kinematic principles of the compound wheel-propeller-leg drive mechanism are analyzed, and multi-objective optimal design theory and algorithms are applied to jointly optimize the mechanism's crawling and swimming performance, yielding optimized structural parameters for the amphibious robot's drive mechanism. Simulation of a virtual prototype shows that the integrated wheel-propeller-leg drive mechanism has good overall locomotion performance and a certain ability to adapt to unstructured environments.
Abstract:
Flexibility is a basic advantage of flexible manufacturing systems (FMS), yet this advantage is often overlooked: many FMSs now in operation either lack flexibility or fail to exploit the available flexibility to improve productivity. Load allocation and routing in an FMS are a major aspect of this flexibility, yet routing decisions are often neglected, largely because people have not freed themselves from traditional production-management concepts. After clarifying the conceptual distinctions, this paper proposes a linear programming model for FMS load allocation and routing whose main feature is the organic integration of the load allocation and routing problems; simulation experiments verify and analyze the method's effect on FMS performance.
Abstract:
Control of machines that exhibit flexibility becomes important when designers attempt to push the state of the art with faster, lighter machines. Three steps are necessary for the control of a flexible plant. First, a good model of the plant must exist. Second, a good controller must be designed. Third, inputs to the controller must be constructed using knowledge of the system dynamic response. There is a great deal of literature pertaining to modeling and control but little dealing with the shaping of system inputs. Chapter 2 examines two input shaping techniques based on frequency domain analysis. The first involves the use of the first derivative of a Gaussian exponential as a driving function template. The second, acausal filtering, involves removal of energy from the driving functions at the resonant frequencies of the system. Chapter 3 presents a linear programming technique for generating vibration-reducing driving functions for systems. Chapter 4 extends the results of the previous chapter by developing a direct solution for the new class of driving functions. A detailed analysis of the new technique is presented from five different perspectives, and several extensions are presented. Chapter 5 verifies the theories of the previous two chapters with hardware experiments. Because the new technique resembles common signal filtering, Chapter 6 compares the new approach to eleven standard filters. The new technique is shown to result in less residual vibration, have better robustness to system parameter uncertainty, and require less computation than other currently used shaping techniques.
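For context, the best-known closed-form input shaper is the two-impulse zero-vibration (ZV) shaper; it is shown here as a minimal sketch of the general idea, not as the thesis' LP-based or direct-solution technique:

```python
import math

def zv_shaper(freq_hz, zeta):
    """Classic two-impulse Zero-Vibration shaper for a flexible mode at
    freq_hz with damping ratio zeta. Convolving any command with these
    impulses cancels residual vibration at that mode (for an exact model)."""
    wd = 2 * math.pi * freq_hz * math.sqrt(1 - zeta ** 2)   # damped natural frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    amps = [1 / (1 + K), K / (1 + K)]   # impulse amplitudes, summing to 1
    times = [0.0, math.pi / wd]         # second impulse half a damped period later
    return amps, times

amps, times = zv_shaper(freq_hz=2.0, zeta=0.05)
```

The LP formulation in Chapter 3 generalizes this: instead of a closed-form two-impulse solution, it searches over driving functions subject to vibration and actuator constraints.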
Abstract:
In the framework of the iBench research project, our previous work created a domain-specific language, TRAFFIC [6], that facilitates specification, programming, and maintenance of distributed applications over a network. It allows safety properties to be formalized in terms of types and subtyping relations. Extending our previous work, we add Hindley-Milner style polymorphism [8] with constraints [9] to the type system of TRAFFIC. This allows a programmer to use the for-all quantifier to describe types of network components, raising the power and expressiveness of types to a level that was not possible before with propositional subtyping relations. Furthermore, we design our type system with a pluggable constraint system, so it can adapt to different application needs while maintaining soundness. In this paper, we show the soundness of the type system, which is not syntax-directed but makes typing derivations easier. We show that there is an equivalent syntax-directed type system, which is what a type checker program would implement to verify the safety of a network flow. This is followed by a discussion of several constraint systems: polymorphism with subtyping constraints, Linear Programming, and Constraint Handling Rules (CHR) [3]. Finally, we provide some examples to illustrate the workings of these constraint systems.
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency becomes a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics that are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power, and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values.
This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under the zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm for traversing or searching a weighted tree or graph. We have used Uniform Cost Search to find, within the AIG network, a specific order of AIG nodes for applying the reordering rules.
After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach is also implemented on a number of processors with combinational and sequential components, and significant savings are achieved.
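The constrained simulated-annealing loop described above can be sketched generically; the toy objective and constraint below stand in for switching power and path delay, and none of this is ABC's actual code:

```python
import math
import random

def simulated_annealing(state, energy, neighbor, feasible,
                        t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Generic SA loop: minimise energy() over feasible states, accepting
    uphill moves with probability exp(-delta/t) while the temperature cools."""
    rng = random.Random(seed)
    t, best = t0, state
    for _ in range(steps):
        cand = neighbor(state, rng)
        if feasible(cand):                      # constraint check (e.g. delay bound)
            delta = energy(cand) - energy(state)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                state = cand
                if energy(state) < energy(best):
                    best = state
        t *= cooling
    return best

# Toy stand-in: minimise x^2 ("power") subject to x >= 1 ("delay constraint").
best = simulated_annealing(
    state=5.0,
    energy=lambda x: x * x,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    feasible=lambda x: x >= 1.0)
```

In the thesis' setting the neighbour move is an AIG node reordering, the energy is estimated switching power (or delay), and the feasibility test enforces the conflicting objective's bound.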
Abstract:
We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a "penalty" that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality, and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values. © 2010 INFORMS.
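The zero-penalty special case of this dual approach is the classic perfect-information (hindsight) bound: averaging the optimal values obtained with full foresight upper-bounds the true optimal value of a maximisation problem. A toy stop-or-continue example (numbers invented):

```python
import random

def hindsight_bound(scenarios, solve_with_hindsight):
    """Zero-penalty information relaxation: solve each scenario with the
    nonanticipativity constraints dropped, then average the optimal values.
    The average is an upper bound on the true (nonanticipative) optimum."""
    vals = [solve_with_hindsight(s) for s in scenarios]
    return sum(vals) / len(vals)

# Toy problem: stop now for a reward of 1.0, or continue for a random payoff.
rng = random.Random(1)
scenarios = [rng.uniform(0.0, 2.0) for _ in range(10000)]
stop_value = 1.0

# Best nonanticipative policy: commit before seeing the payoff.
true_optimal = max(stop_value, sum(scenarios) / len(scenarios))
# Relaxed (clairvoyant) bound: pick the better option in hindsight, per scenario.
upper = hindsight_bound(scenarios, lambda s: max(stop_value, s))
```

The gap between `upper` and `true_optimal` is the value of foresight; the penalties studied in the paper are designed to shrink exactly this gap and tighten the bound.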
Abstract:
This paper presents a formalism for representing temporal knowledge in legal discourse that allows an explicit expression of time and event occurrences. The fundamental time structure is characterized as a well-ordered discrete set of primitive times (i.e. non-decomposable intervals with positive duration, or points with zero duration), from which decomposable intervals can be constructed. The formalism supports a full representation of both absolute and relative temporal knowledge, and a formal mechanism for checking the temporal consistency of a given set of legal statements is provided. The general consistency-checking algorithm, which addresses both absolute and relative temporal knowledge, turns out to be a linear programming problem, while in the special case where only relative temporal relations are involved it becomes a simple question of searching for cycles in the graphical representation of the corresponding legal text.
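The special case noted at the end, where only relative temporal relations are involved, can be sketched as cycle detection over the "before" graph (the event names below are invented):

```python
def consistent(relations):
    """Relative temporal statements 'a before b' are consistent iff the
    directed before-graph is acyclic; detect cycles with a colouring DFS."""
    graph = {}
    for a, b in relations:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def has_cycle(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY or (color[w] == WHITE and has_cycle(w)):
                return True  # back edge: a temporal cycle, hence inconsistency
        color[v] = BLACK
        return False

    return not any(color[v] == WHITE and has_cycle(v) for v in graph)

ok = consistent([("filing", "hearing"), ("hearing", "verdict")])
bad = consistent([("a", "b"), ("b", "c"), ("c", "a")])
```

Once absolute times (durations, dates) enter the statements, the before-graph alone is no longer enough, which is why the general algorithm in the paper becomes a linear program over interval endpoints.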