322 results for Constrained evolutionary optimization


Relevance: 20.00%

Abstract:

A robust aeroelastic optimization is performed to minimize helicopter vibration with uncertainties in the design variables. Polynomial response surfaces and space-filling experimental designs are used to generate the surrogate model of the aeroelastic analysis code. Aeroelastic simulations are performed at the sample inputs generated by Latin hypercube sampling. The response values that do not satisfy the frequency constraints are eliminated from the data before model fitting. This step increased the accuracy of the response surface models in the feasible design space. It is found that the response surface models are able to capture the robust optimal regions of the design space. The optimal designs show a reduction of 10 percent in the objective function comprising six vibratory hub loads, and reductions of 1.5 to 80 percent for the individual vibratory forces and moments. This study demonstrates that second-order response surface models with space-filling designs can be a favorable choice for computationally intensive robust aeroelastic optimization.
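
For readers unfamiliar with the surrogate-modelling workflow this abstract describes, the following is a minimal Python sketch of its main steps, with hypothetical stand-ins (vibratory_objective, frequency_constraints_ok) replacing the actual aeroelastic analysis code and frequency constraints: Latin hypercube sampling of the design space, discarding samples that violate the constraints, and least-squares fitting of a second-order polynomial response surface.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_dims):
    """Simple Latin hypercube sample in the unit cube."""
    # one stratified point per interval and dimension, columns permuted independently
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

def quadratic_features(x):
    """Full second-order polynomial basis: 1, x_i, x_i * x_j (i <= j)."""
    n, d = x.shape
    cols = [np.ones(n)] + [x[:, i] for i in range(d)]
    cols += [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

# --- hypothetical stand-ins for the aeroelastic analysis code ---
def vibratory_objective(x):          # e.g. a combined vibratory hub load metric
    return np.sum((x - 0.3) ** 2, axis=1) + 0.05 * rng.standard_normal(len(x))

def frequency_constraints_ok(x):     # True where the frequency constraints hold
    return x.sum(axis=1) < 0.8 * x.shape[1]

X = latin_hypercube(200, 4)          # 200 samples, 4 design variables
y = vibratory_objective(X)

# drop samples violating the frequency constraints before fitting, as the
# abstract describes, to improve accuracy in the feasible design space
mask = frequency_constraints_ok(X)
A = quadratic_features(X[mask])
coeffs, *_ = np.linalg.lstsq(A, y[mask], rcond=None)

def surrogate(x):
    """Cheap second-order response surface standing in for the full analysis."""
    return quadratic_features(np.atleast_2d(x)) @ coeffs
```

The fitted surrogate can then be searched cheaply for robust optimal regions in place of the expensive aeroelastic simulations.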

Relevance: 20.00%

Abstract:

Trajectory optimization of a generic launch vehicle is considered in this paper. The trajectory from the launch point to the terminal injection point is divided into two segments. The first segment deals with launcher clearance and the vertical rise of the vehicle. During this phase, a nonlinear feedback guidance loop is incorporated to ensure vertical rise in the presence of thrust misalignment, centre-of-gravity offset, wind disturbance, etc., and possibly to clear obstacles as well. The second segment deals with the trajectory optimization, where the objective is to ensure the desired terminal conditions as well as minimum control effort and minimum structural loading in the high-dynamic-pressure region. The usefulness of this dynamic optimization formulation is demonstrated by solving it using the classical gradient method. Numerical results for both segments are presented, which clearly bring out the potential advantages of the proposed approach.
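
As a minimal Python sketch of the second-segment formulation under stand-in dynamics (a one-dimensional double integrator) and arbitrary weights, not the paper's launch-vehicle model: a discretized control history is improved by a plain gradient method, trading off control effort against a terminal-condition penalty, with finite differences standing in for the analytical gradient.

```python
import numpy as np

N, dt = 20, 1.0
target = np.array([100.0, 20.0])          # desired terminal (position, velocity)
w_terminal = 10.0                          # weight on the terminal-condition penalty

def simulate(u):
    """Integrate simple double-integrator dynamics under control history u."""
    pos = vel = 0.0
    for a in u:
        vel += a * dt
        pos += vel * dt
    return np.array([pos, vel])

def cost(u):
    miss = simulate(u) - target
    return dt * np.sum(u ** 2) + w_terminal * (miss @ miss)

u = np.zeros(N)                            # initial control guess
step, eps = 1e-5, 1e-6
for _ in range(3000):
    base = cost(u)
    grad = np.array([(cost(u + eps * np.eye(N)[k]) - base) / eps
                     for k in range(N)])   # finite-difference gradient
    u -= step * grad                       # steepest-descent update

print("terminal state:", simulate(u))
print("control effort:", dt * np.sum(u ** 2))
```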

Relevance: 20.00%

Abstract:

The focus of this paper is on designing useful high-aspect-ratio compliant micro-mechanisms that can be microfabricated by the cost-effective wet etching of (110)-orientation silicon (Si) wafers. Wet etching of (110) Si imposes constraints on the geometry of the realized mechanisms because it allows etch-through only in the form of slots parallel to the wafer flat with a certain minimum length. In this paper, we incorporate this constraint into the topology optimization and obtain compliant designs that meet the specifications on the desired motion for given input forces. Using this design technique and wet etching, we show that we can realize high-aspect-ratio compliant micro-mechanisms. For a (110) Si wafer of 250 µm thickness, the minimum length of the etch opening needed to obtain a slot is found to be 866 µm. The minimum achievable width of the slot is limited by the resolution of the lithography process and can therefore be very small; this is studied by conducting trials with different mask layouts on a (110) Si wafer. These constraints are handled by a suitable design parameterization rather than by imposing them explicitly. Topology optimization, as is well known, gives designs using only the essential design specifications. In this work, we show that our technique also gives manufacturable mechanism designs along with lithography mask layouts. Some of the designs obtained are transferred to lithography masks, and mechanisms are fabricated on (110) Si wafers.
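
To illustrate the idea of building the manufacturing constraint into the design parameterization (this is a hypothetical parameterization for illustration, not the paper's), the Python sketch below enumerates only admissible features: horizontal slots parallel to the wafer flat whose length is at least the 866 µm minimum. Any optimizer working on the resulting design vector then produces layouts that satisfy the wet-etching constraint by construction.

```python
import numpy as np

nx, ny = 40, 20
cell = 50.0                      # µm per grid cell (illustrative)
min_slot_um = 866.0              # minimum etch-through slot length for 250 µm (110) Si
min_slot_cells = int(np.ceil(min_slot_um / cell))

# enumerate candidate slots: (row, start column, length in cells), all parallel
# to the wafer flat and at least the minimum length
candidates = [(r, c, L)
              for r in range(ny)
              for L in range(min_slot_cells, nx + 1)
              for c in range(nx - L + 1)]

def render(design):
    """Turn a 0/1 vector over candidate slots into a material layout (1 = Si)."""
    layout = np.ones((ny, nx), dtype=int)
    for keep, (r, c, L) in zip(design, candidates):
        if keep:
            layout[r, c:c + L] = 0   # etched-through slot
    return layout

# example: select a single slot and render the corresponding mask layout
design = np.zeros(len(candidates), dtype=int)
design[0] = 1
print(render(design))
```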

Relevance: 20.00%

Abstract:

The topology optimization problem for the synthesis of compliant mechanisms has been formulated in many different ways over the last 15 years, but there is not yet a definitive formulation that is universally accepted. Furthermore, there are two unresolved issues in this problem. In this paper, we present a comparative study of five distinctly different formulations reported in the literature. Three benchmark examples are solved with these formulations using the same input and output specifications and the same numerical optimization algorithm, for a total of 35 different synthesis examples. The examples are limited to a desired instantaneous output direction for a prescribed input force direction; hence, this study is limited to linear elastic modeling with small deformations. Two design parameterizations, namely the frame-element-based ground structure and the density approach using continuum elements, are used. The obtained designs are evaluated with all the other objective functions and compared with each other. Checkerboard patterns, point flexures, the ability to converge from an unbiased uniform initial guess, and the computation time are analyzed. Some observations are noted based on the extensive implementation carried out in this study. Complete details of the benchmark problems and the results are included, and the computer codes related to this study are made available on the internet for ready access.

Relevance: 20.00%

Abstract:

Due to semiconductor technology development, fault tolerance has become important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, for general-purpose systems, instead of guaranteeing that deadlines are always met, it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for AET that include the bus communication overhead for both voting (active replication) and rollback-recovery with checkpointing (RRC). For a given multi-processor system-on-chip (MPSoC), we then define integer linear programming (ILP) models that minimize AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) selecting the fault-tolerance scheme (voting or RRC) for each job. Experiments demonstrate significant savings in AET.
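
As a rough illustration of the kind of model the abstract refers to (a generic textbook-style approximation, not the paper's formulas), the Python sketch below computes the expected execution time of a job under RRC, including a per-checkpoint bus communication overhead, and brute-forces the checkpoint count that minimizes it; the paper instead casts such decisions as ILP models.

```python
# p_unit is the assumed soft-error probability per time unit, tau is the
# per-checkpoint bus communication overhead, and each segment is retried
# until it completes without a detected error.

def aet_rrc(T, n, p_unit, tau):
    """Expected completion time of a job of length T split into n segments."""
    seg_time = T / n
    p_seg = 1.0 - (1.0 - p_unit) ** seg_time   # error probability per attempt
    expected_attempts = 1.0 / (1.0 - p_seg)    # geometric number of attempts
    return n * (seg_time + tau) * expected_attempts

def best_checkpoint_count(T, p_unit, tau, n_max=50):
    """Brute-force the checkpoint count minimizing the AET model above."""
    return min(range(1, n_max + 1), key=lambda n: aet_rrc(T, n, p_unit, tau))

T, p_unit, tau = 100.0, 0.01, 1.0              # illustrative numbers only
n_star = best_checkpoint_count(T, p_unit, tau)
print(n_star, aet_rrc(T, n_star, p_unit, tau))
```

With these illustrative numbers, too few checkpoints makes re-execution after an error expensive, while too many makes the checkpointing (bus) overhead dominate, which is exactly the trade-off the ILP models resolve.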

Relevance: 20.00%

Abstract:

Technology scaling has caused Negative Bias Temperature Instability (NBTI) to emerge as a major circuit reliability concern. Simultaneously, leakage power is becoming a greater fraction of the total power dissipated by logic circuits. As both NBTI and leakage power are highly dependent on the vectors applied at the circuit's inputs, they can be minimized by applying carefully chosen input vectors during periods when the circuit is in standby or idle mode. Unfortunately, the input vectors that minimize leakage power are not the ones that minimize NBTI degradation, so there is a need for a methodology to generate input vectors that minimize both. This paper proposes such a systematic methodology for generating input vectors that minimize leakage power under the constraint that NBTI degradation does not exceed a specified limit. These input vectors can be applied at the primary inputs of a circuit when it is in standby/idle mode; they are chosen such that the gates dissipate only a small amount of leakage power while a large majority of the transistors on critical paths are in the "recovery" phase of NBTI degradation. The advantage of this methodology is that allowing circuit designers to constrain NBTI degradation to below a specified limit enables tighter guardbanding, increasing performance. Our methodology guarantees that the generated input vector dissipates the least leakage power among all the input vectors that satisfy the degradation constraint. We formulate the problem as a zero-one integer linear program and show that this formulation produces input vectors whose leakage power is within 1% of a minimum-leakage vector selected by a search algorithm, while simultaneously reducing NBTI by about 5.75% of the maximum circuit delay compared to the worst-case NBTI degradation. This paper also proposes two new algorithms for identifying the circuit paths that are most affected by NBTI degradation. The number of such paths identified by our algorithms is an order of magnitude smaller than with previously proposed heuristics.
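
To make the trade-off concrete, here is a deliberately tiny toy Python sketch (an exhaustive search over a three-input netlist with made-up leakage numbers, not the paper's zero-one ILP): it picks the standby input vector with the least leakage among those whose count of NBTI-stressed PMOS devices (gate inputs held at logic 0) stays within a budget.

```python
from itertools import product

GATES = {                       # gate name -> (boolean function, input nets)
    "n1": (lambda a, b: 1 - (a & b), ("A", "B")),    # NAND
    "n2": (lambda a, b: 1 - (a | b), ("n1", "C")),   # NOR
    "out": (lambda a, b: 1 - (a & b), ("n2", "A")),  # NAND
}
LEAK = {0: 2.0, 1: 1.0, 2: 0.5}  # made-up leakage per gate vs. inputs at logic 1

def evaluate(vector):
    sig = dict(zip("ABC", vector))
    leakage = stress = 0
    for name, (fn, ins) in GATES.items():
        vals = [sig[i] for i in ins]
        sig[name] = fn(*vals)
        leakage += LEAK[sum(vals)]
        stress += vals.count(0)      # inputs at 0 put the PMOS under NBTI stress
    return leakage, stress

best = None
for vec in product((0, 1), repeat=3):
    leakage, stress = evaluate(vec)
    if stress <= 4 and (best is None or leakage < best[0]):  # NBTI budget
        best = (leakage, stress, vec)

print("chosen standby vector:", best)
```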

Relevance: 20.00%

Abstract:

As the gap between processor and memory speeds continues to grow, memory performance becomes a key bottleneck for many applications. Compilers therefore increasingly seek to modify an application's data layout to improve cache locality and cache reuse. Whole Program Structure Layout (WPSL) transformations can significantly increase the spatial locality of data and reduce the runtime of programs that use link-based data structures by increasing cache line utilization. However, in production compilers, WPSL transformations do not realize their full performance potential due to a number of factors. Structure layout decisions made on the basis of whole-program aggregated affinity/hotness of structure fields can be suboptimal for local code regions. WPSL is also restricted in applicability in production compilers for type-unsafe languages like C/C++ due to the extensive legality checks and field-sensitive pointer analysis required over the entire application. To overcome the issues associated with WPSL, we propose a Region Based Structure Layout (RBSL) optimization framework based on selective data copying. We describe our RBSL framework, implemented in the production compiler for C/C++ on HP-UX IA-64. We show that, acting in complement to the existing and mature WPSL transformation framework in our compiler, RBSL improves application performance on pointer-intensive SPEC benchmarks by 3% to 28% over WPSL.
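
Conceptually (and only conceptually; the paper's RBSL is a compiler transformation for C/C++, not library code), the selective data copying it relies on looks like the Python sketch below: on entry to a hot region, only the fields the region actually touches are copied out of a pointer-linked structure into contiguous storage, the region's loop runs on the dense copy, and the results are written back on exit.

```python
import numpy as np

class Node:                        # stand-in for a link-based data structure
    def __init__(self, key, payload):
        self.key = key             # hot field: the only one the region uses
        self.payload = payload     # cold fields, untouched in the region
        self.next = None

# build a linked list of 10,000 nodes
head = None
for k in range(10_000, 0, -1):
    node = Node(float(k), bytes(64))
    node.next = head
    head = node

# region entry: selectively copy the hot field into contiguous storage
nodes, keys = [], []
n = head
while n is not None:
    nodes.append(n)
    keys.append(n.key)
    n = n.next
hot = np.array(keys)               # dense, cache-friendly layout

hot *= 1.05                        # the hot loop operates on the dense copy

# region exit: write the updated values back into the linked nodes
for node, value in zip(nodes, hot):
    node.key = float(value)
```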

Relevance: 20.00%

Abstract:

A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing useability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPCH-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels incurring only marginal increases in the estimated query processing costs.