916 results for "Infeasible solution space search"
Abstract:
A robust numerical solution of the input voltage equations (IVEs) for the independent-double-gate metal-oxide-semiconductor field-effect transistor requires root-bracketing methods (RBMs) instead of the commonly used Newton-Raphson (NR) technique, owing to the presence of a nonremovable discontinuity and singularity. In this brief, we present an exhaustive study of the RBMs available in the literature and propose a single derivative-free RBM that can be applied to both the trigonometric and the hyperbolic IVE and offers faster convergence than the previously proposed hybrid NR-Ridders algorithm. We also propose adjustments to the solution space for the trigonometric IVE that lead to a further reduction in computation time. By implementing the proposed algorithm in a commercial circuit simulator through the Verilog-A interface and simulating a variety of circuit blocks, such as a ring oscillator, a ripple adder, and a twisted ring counter, the improvement in computational efficiency is shown to be about 60% for the trigonometric IVE and about 15% for the hyperbolic IVE.
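As a concrete illustration of the kind of derivative-free root-bracketing iteration this abstract refers to, here is a minimal textbook sketch of Ridders' method in Python. It is a generic implementation, not the authors' algorithm, and the example function stands in for an actual IVE:

```python
import math

def ridders(f, a, b, tol=1e-12, max_iter=60):
    """Derivative-free root bracketing via Ridders' method.

    Requires f(a) and f(b) to have opposite signs; the bracket
    [a, b] shrinks every iteration, so convergence is guaranteed
    even near singularities where Newton-Raphson can diverge.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    x = a
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        s = math.sqrt(fm * fm - fa * fb)
        if s == 0.0:
            return m
        # Exponentially weighted false-position step
        x = m + (m - a) * (math.copysign(1.0, fa - fb) * fm / s)
        fx = f(x)
        if abs(fx) < tol or abs(b - a) < tol:
            return x
        # Re-bracket: keep a sub-interval with a sign change
        if fm * fx < 0:
            a, fa, b, fb = m, fm, x, fx
        elif fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x
```

For example, `ridders(math.cos, 1.0, 2.0)` converges to pi/2 without ever evaluating a derivative, which is the property that matters when the residual has a nonremovable singularity.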
Abstract:
This paper presents a decentralized, peer-to-peer parallel version of the vector evaluated particle swarm optimization (VEPSO) algorithm for multi-objective design optimization of laminated composite plates using the message passing interface (MPI). The design optimization of laminated composite plates, being a combinatorially explosive constrained non-linear optimization problem (CNOP) with many design variables and a vast solution space, warrants the use of non-parametric, heuristic optimization algorithms such as PSO. The optimization requires minimizing the weight and the cost of the composite plates simultaneously, which renders the problem multi-objective; hence VEPSO, a multi-objective variant of PSO, is used. Despite the use of such a heuristic, the problem is computationally intensive and suffers from long execution times under sequential computation. A parallel version of the algorithm has therefore been developed to run on several nodes of an IBM P720 cluster. The proposed parallel algorithm, using MPI's collective communication directives, establishes a peer-to-peer relationship between the constituent parallel processes, deviating from the more common master-slave approach, and achieves a reduction in computation time by a factor of up to 10. Finally, we show the effectiveness of the proposed parallel algorithm by comparing it with a serial implementation of VEPSO and with a parallel implementation of the vector evaluated genetic algorithm (VEGA) on the same design problem. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Optimal switching angles for minimizing the total harmonic distortion of line current (I-THD) in a voltage source inverter are traditionally determined by imposing half-wave symmetry (HWS) and quarter-wave symmetry (QWS) conditions on the pulse-width-modulated waveform. This paper investigates optimal switching angles with the QWS condition relaxed. Relaxing QWS expands the solution space and opens the possibility of improved solutions. The optimal solutions without QWS are shown to outperform the optimal solutions with QWS over a range of modulation index (M) between 0.82 and 0.94, for a switching-frequency-to-fundamental-frequency ratio of 5. Theoretical and experimental results are presented for a 2.3 kW induction motor drive.
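The objective being optimized can be sketched as follows. Under quarter-wave symmetry (the baseline case the paper relaxes) and an inductive load, the n-th current harmonic scales as b_n/n. The sign convention for b_n below is an assumption for illustration, the classic selective-harmonic form, not necessarily the paper's:

```python
import math

def i_thd(angles, nmax=199):
    """I-THD of a two-level PWM waveform with quarter-wave
    symmetry and switching angles in (0, pi/2).

    Assumed convention: b_n = (4/(n*pi)) * sum_j (-1)^j cos(n*a_j).
    For an inductive load, I_n is proportional to b_n / n, so
    I-THD = sqrt(sum over odd n >= 3 of (b_n/n)^2) / b_1.
    """
    def b(n):
        return (4.0 / (n * math.pi)) * sum(
            (-1) ** j * math.cos(n * a) for j, a in enumerate(angles))
    num = math.sqrt(sum((b(n) / n) ** 2 for n in range(3, nmax + 1, 2)))
    return num / abs(b(1))
```

As a sanity check, a plain square wave (`angles=[0.0]`) gives the well-known I-THD of roughly 12.1% for a first-order inductive filter; an optimizer would search the angle vector (with or without QWS) to drive this value down.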
Abstract:
Internal analogies are analogies created when the knowledge of the source domain comes only from the designer's own cognition. In this paper, an understanding of the use of internal analogies in conceptual design is developed by studying: the types of internal analogies; their roles; the influence of the design problem on their creation; the role of designer experience in their use; the levels of abstraction at which internal analogies are searched for in the target domain, identified in the source domain, and realized in the target domain; and the effect of internal analogies from the natural and artificial domains on the solution space created using them. To support this study, empirical design sessions from earlier research are used, each involving a designer who identifies requirements and develops conceptual solutions to a design problem without using any support. The important findings are: designers use analogies from both the natural and the artificial domain; analogies are used for generating both requirements and solutions; the nature of the design problem influences the use of analogies; the role of designer experience could not be clearly ascertained; analogical transfer is observed at only a few levels of abstraction, while many levels remain unexplored; and analogies from the natural domain appear to have a more positive influence than those from the artificial domain on the number of ideas and the variety of the idea space.
Abstract:
The demand for product variety and shorter times to market is encouraging designers to adopt computer-aided concept generation techniques; one such technique is explored here. The present work attempts to synthesize sensor concepts using physical laws and effects as building blocks. A database of building blocks based on the SAPPhIRE-lite model of causality is maintained, and composition is used to explore the solution space. The algorithm has been implemented in a web-based tool that generates two types of sensor designs: direct sensing designs and feedback sensing designs. According to the literature, synthesis from building blocks often leads to vague solution principles; the current work avoids uninteresting solutions by applying several heuristics. A particularly novel outcome of the work described here is the generation of feedback-based solutions, which had not previously been generated automatically. A number of the generated concepts were found to overlap with existing patents, underscoring the degree of novelty in the designs.
Abstract:
The aim of this research is to provide a unified modelling-based method to help evaluate organization design and change decisions. Relevant literature on model-driven organization design and change is reviewed, which helps identify the requirements for a new modelling methodology; such a methodology is then developed and described. The developed method has three phases: first, CIMOSA-based multi-perspective enterprise modelling is used to understand and capture the most enduring characteristics of process-oriented organizations and to externalize various types of requirement knowledge about the target organization; second, causal loop diagrams are used to identify dynamic causal impacts and effects related to the issues and constraints on the organization under study; third, simulation modelling is used to quantify the effects of each issue in terms of organizational performance. The design and case-study application of this unified modelling method, based on CIMOSA (computer integrated manufacturing open systems architecture) enterprise modelling, causal loop diagrams, and simulation modelling, is explored to illustrate its potential to support systematic organization design and change. Further application of the methodology in various companies and industry sectors, especially manufacturing, would help illustrate its complementary uses and relative benefits and drawbacks in different types of organization. The proposed method provides a systematic way of enabling key aspects of organization design and change; the case company, its data, and the developed models help explore and validate it. The integrated application of the three modelling techniques within a single solution space constitutes an advance on previous best practice, and the purpose and application domain of the proposed method offer an addition to knowledge. © IMechE 2009.
Abstract:
To reduce the impact that uncertainty in raw material composition has on the quality control of cement raw meal, a modulus-compensation control strategy is proposed. Objective functions are created for each of the three moduli, and a state-space search strategy is used to solve the resulting multi-objective optimization problem. To address the problem that the initial sample space cannot cover all samples, a neural-network-based estimation model is proposed to extend the initial sample space. An evaluation function scores the states in the state space to obtain the optimal modulus state; the raw material proportions are then adjusted according to the moduli, so that the modulus deviations are compensated while the disturbance caused to the proportions is minimized. Industrial experiments show that the quality pass rate of the raw meal rose from 30% to 50% and that the system can effectively optimize the batching process, demonstrating that the neural-network-based state-space search strategy provides a feasible approach to the multi-objective optimization of cement raw meal batching.
Abstract:
Based on screw theory and spatial model theory, performance atlases of a Stewart-platform-type six-axis force sensor are plotted, and the influence of the structural parameters on each performance index of the sensor is summarized. On this basis, nonlinear single-objective and multi-objective optimization designs are carried out for the structure of a large Stewart-platform six-axis force sensor, providing a basis for the design and optimization of large Stewart-platform six-axis force sensors with ordinary spherical joints.
Abstract:
Seismic methods play a leading role in discovering oil and gas traps and searching for reserves throughout exploration. They require high-quality processed seismic data: not only accurate spatial positioning but also true amplitude, AVO attribute, and velocity information. The acquisition footprint degrades the precision and quality of imaging and of AVO attribute and velocity analysis. The acquisition footprint is a relatively new concept for describing seismic noise in 3-D exploration, and it is not easy to understand. This paper begins with forward modelling of seismic data from a simple acoustic-wave model, then processes the data and discusses the causes of the acquisition footprint. It is concluded that the recording geometry is the main cause, leading to an asymmetric distribution of fold, offset, and azimuth among grid cells. The paper summarizes the characteristics and description methods of the footprint and analyses its influence on geological interpretation and on seismic attribute and velocity analysis. Data reconstruction based on the Fourier transform is currently the main method for interpolating and extrapolating non-uniform data, but it is usually an ill-conditioned inverse problem. A Tikhonov regularization strategy, which incorporates a priori information on the class of solutions being sought, can reduce the computational difficulty caused by the poor conditioning of the discrete kernel and the scarcity of observations. The method is largely statistical and does not require the selection of a regularization parameter, and hence yields appropriate inversion coefficients. Programming and trial calculations verify that the acquisition footprint can be removed through prestack data reconstruction. This paper then applies a migration-based weighting scheme to remove the acquisition footprint: the fundamental principles and algorithms are surveyed, and seismic traces are weighted according to the area each trace occupies at different source-receiver distances. Adopting a grid method instead of computing the areas of a Voronoi map reduces the difficulty of calculating the weights. Results on both model data and field seismic data demonstrate that incorporating a weighting scheme based on the relative area associated with each input trace, with respect to its neighbours, minimizes the artifacts caused by irregular acquisition geometry.
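The grid shortcut for the area weights can be sketched very simply: bin the traces on a regular grid and weight each trace by the inverse of its cell's population, so oversampled cells are down-weighted. This is a minimal stand-in for true Voronoi-area weights, not the paper's actual implementation:

```python
from collections import Counter

def trace_weights(positions, cell=1.0):
    """Approximate area weights for irregularly sampled traces.

    Each trace gets weight 1 / (number of traces in its grid
    cell): a cheap proxy for the Voronoi-cell area, damping
    acquisition-footprint artifacts from uneven trace density.
    """
    cells = [(int(x // cell), int(y // cell)) for (x, y) in positions]
    counts = Counter(cells)
    return [1.0 / counts[c] for c in cells]
```

Two traces sharing a cell each receive weight 0.5, while an isolated trace keeps weight 1.0, mimicking how a Voronoi tessellation would shrink the area assigned to densely sampled locations.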
Abstract:
This paper presents a novel approach to epipolar geometry estimation based on evolutionary agents. In contrast to conventional nonlinear optimization methods, the proposed technique uses each agent to denote a minimal subset for computing the fundamental matrix, and treats the set of correspondences as a 1D cellular environment in which the agents inhabit and evolve. The agents execute evolutionary behaviours and evolve autonomously in a vast solution space to reach an optimal (or near-optimal) result. Three techniques are then proposed to improve the search ability and computational efficiency of the original agents: the subset template enables agents to collaborate more efficiently and to inherit accurate information from the whole agent set, while the competitive evolutionary agent (CEA) and the finite multiple evolutionary agent (FMEA) apply better evolutionary strategies or decision rules and focus on different aspects of the evolutionary process. Experimental results on both synthetic data and real images show that the proposed agent-based approaches outperform other typical methods in accuracy and speed, and are more robust to noise and outliers.
Abstract:
This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters: the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve network performance. This is achieved by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize the two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, removing the need for their explicit computation; the neural network (NN) learning is thus performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of computational complexity and through simulation results on four different examples.
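The dimension-reduction idea, solving the linear output weights in closed form so the search runs only over the hidden-node parameters, can be sketched with a separable least-squares objective. This is a generic variable-projection-style sketch with an assumed tanh hidden layer, not the authors' reduced Jacobian or their exact formulation:

```python
import numpy as np

def slfn_separable_loss(w_hidden, X, y):
    """Loss of an SLFN as a function of hidden parameters only.

    For fixed hidden weights/biases the optimal linear output
    weights have a closed form (a least-squares solve), so an
    outer optimizer (e.g. LM) only searches the nonlinear
    parameters: the solution space of reduced dimension.
    Returns (loss, implied output weights).
    """
    W = w_hidden.reshape(-1, X.shape[1] + 1)         # rows: [weights | bias]
    H = np.tanh(X @ W[:, :-1].T + W[:, -1])           # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # optimal output weights
    r = H @ beta - y                                  # residual at the optimum
    return 0.5 * float(r @ r), beta
```

Any gradient-based or second-order optimizer can now minimize this function over `w_hidden` alone; at hidden parameters that generated the data, the loss is (numerically) zero and the recovered output weights match the generating ones.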
Abstract:
Many scientific applications are programmed using hybrid programming models that combine message passing and shared memory, owing to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for a single programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the number of possible resource configurations grows exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, improved energy efficiency in hybrid parallel applications on large-scale systems is increasingly needed. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms, based on statistical analysis, that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and to selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
Abstract:
This paper considers the optimal design of fabricated steel beams for long-span portal frames. The design optimisation takes into account ultimate as well as serviceability limit states, adopting deflection limits recommended by the Steel Construction Institute (SCI). Results for three benchmark frames demonstrate the efficiency of the optimisation methodology. A genetic algorithm (GA) was used to optimise the dimensions of the plates used for the columns, rafters and haunches. Discrete decision variables were adopted for the thickness of the steel plates and continuous variables for the breadth and depth of the plates. Strategies were developed to enhance the performance of the GA including solution space reduction and a hybrid initial population half of which is derived using Latin hypercube sampling. The results show that the proposed GA-based optimisation model generates optimal and near-optimal solutions consistently. A parametric study is then conducted on frames of different spans. A significant variation in weight between fabricated and conventional hot-rolled steel portal frames is shown; for a 50 m span frame, a 14–19% saving in weight was achieved. Furthermore, since Universal Beam sections in the UK come from a discrete section library, the results could also provide overall dimensions of other beams that could be more efficient for portal frames. Eurocode 3 was used for illustrative purposes; any alternative code of practice may be used.
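The Latin-hypercube seeding mentioned in this abstract can be sketched as follows: each design variable's range is split into n equal strata and each stratum is used exactly once, so the initial GA population covers the design space more evenly than uniform random sampling. This is a generic implementation of the sampling idea only; the GA itself and the portal-frame model are omitted:

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Latin hypercube sample of n points over the given
    per-variable (lo, hi) bounds.

    For each dimension, the n strata are visited in a random
    order and one point is jittered inside each stratum, so
    every marginal is stratified.
    """
    rng = random.Random(seed)
    dims = len(bounds)
    pts = [[0.0] * dims for _ in range(n)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)                  # random stratum order per dimension
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n       # jitter within stratum s
            pts[i][d] = lo + u * (hi - lo)
    return pts
```

Seeding half of the initial population this way (as the abstract describes) gives the GA a spread-out starting set while leaving the other half free for random or heuristic individuals.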