988 results for Parameter Optimization
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
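The core idea of the abstract above can be illustrated with a toy sketch: query the optimizer only at a coarse grid of points in a 2-D selectivity space, then infer the plan at every other point with a nearest-neighbor classifier. The `mock_optimizer` function, grid resolution, and plan labels below are illustrative assumptions, not the Picasso implementation or any real optimizer API.

```python
# Sketch of low-overhead plan-diagram approximation: grid-sample a small
# fraction of the parameter space, then classify the rest by nearest neighbor.

def mock_optimizer(x, y):
    """Stand-in for an optimizer API call: returns a plan label per region."""
    if x + y < 0.5:
        return "P1"
    return "P2" if x > y else "P3"

def approximate_plan_diagram(resolution=100, sample_step=10):
    # Exhaustive ground truth (what the brute-force approach would compute).
    truth = [[mock_optimizer(i / resolution, j / resolution)
              for j in range(resolution)] for i in range(resolution)]
    # Grid-sample a small fraction of the points.
    samples = [(i, j, truth[i][j])
               for i in range(0, resolution, sample_step)
               for j in range(0, resolution, sample_step)]
    # Nearest-neighbor inference for every unsampled point.
    approx = [[min(samples, key=lambda s: (s[0] - i) ** 2 + (s[1] - j) ** 2)[2]
               for j in range(resolution)] for i in range(resolution)]
    correct = sum(truth[i][j] == approx[i][j]
                  for i in range(resolution) for j in range(resolution))
    return correct / resolution ** 2, len(samples) / resolution ** 2

accuracy, overhead = approximate_plan_diagram()
print(f"accuracy={accuracy:.2f}, optimizer-call fraction={overhead:.2%}")
```

Because plan regions tend to be contiguous, errors concentrate in a thin band around plan boundaries, which is why sampling 1% of the points can still recover well over 90% of the diagram.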
Abstract:
Theoretical approaches are of fundamental importance in predicting the potential impact of waste disposal facilities on groundwater contamination. Appropriate design parameters are, in general, estimated by fitting theoretical models to field monitoring or laboratory experimental data. Double-reservoir diffusion (transient through-diffusion) experiments are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These design parameters are typically estimated by manual parameter-adjusting techniques (also called eye-fitting), as in tools such as Pollute. In this work, an automated inverse model is developed to estimate the mass transport parameters from transient through-diffusion experimental data. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behaviour of animals searching for food sources. A finite difference numerical solution of the transient through-diffusion mathematical model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated by estimating mass transport parameters from published transient through-diffusion experimental data. The estimated values are compared with those obtained by the existing procedure. The present technique is robust and efficient, and the mass transport parameters are obtained with very good precision in less time.
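The inverse-estimation idea can be sketched in a few lines: PSO searches for the parameter value whose forward-model output best fits observed data. The forward model below is a simple exponential-decay stand-in for the finite-difference through-diffusion solver; the true parameter, bounds, and PSO constants are all illustrative assumptions, not values from the paper.

```python
# Sketch of inverse parameter estimation with particle swarm optimization:
# find the diffusion coefficient D minimizing the misfit to observations.
import math
import random

random.seed(0)

def forward_model(D, times):
    """Stand-in forward model: exponential concentration decay."""
    return [math.exp(-D * t) for t in times]

TIMES = [1, 2, 5, 10, 20]
TRUE_D = 0.15
OBSERVED = forward_model(TRUE_D, TIMES)

def sse(D):
    """Sum-of-squares misfit between model output and observations."""
    return sum((m - o) ** 2 for m, o in zip(forward_model(D, TIMES), OBSERVED))

def pso(n_particles=20, iters=60, lo=0.0, hi=1.0, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=sse)
    for _ in range(iters):
        for i in range(n_particles):
            # Velocity update: inertia + pull toward personal and global bests.
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if sse(pos[i]) < sse(pbest[i]):
                pbest[i] = pos[i]
            if sse(pos[i]) < sse(gbest):
                gbest = pos[i]
    return gbest

estimate = pso()
print(f"estimated D = {estimate:.4f} (true D = {TRUE_D})")
```

In the real solver, `forward_model` would be the finite-difference solution of the through-diffusion equations, and the search would run over all mass transport parameters jointly.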
Abstract:
A robust aeroelastic optimization is performed to minimize helicopter vibration with uncertainties in the design variables. Polynomial response surfaces and space-filling experimental designs are used to generate a surrogate model of the aeroelastic analysis code. Aeroelastic simulations are performed at sample inputs generated by Latin hypercube sampling. Response values which do not satisfy the frequency constraints are eliminated from the data used for model fitting. This step increases the accuracy of the response surface models in the feasible design space. It is found that the response surface models are able to capture the robust optimal regions of the design space. The optimal designs show a reduction of 10 percent in the objective function comprising six vibratory hub loads, and reductions of 1.5 to 80 percent in the individual vibratory forces and moments. This study demonstrates that second-order response surface models with space-filling designs can be a favorable choice for computationally intensive robust aeroelastic optimization.
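The surrogate-modelling step described above can be sketched as follows: draw Latin hypercube samples, evaluate the (assumed expensive) analysis function at them, and fit a second-order polynomial response surface by least squares. The `analysis` function and sample counts are illustrative assumptions standing in for the aeroelastic code.

```python
# Sketch of second-order response-surface fitting over Latin hypercube samples.
import random

random.seed(1)

def latin_hypercube(n, dims):
    """One point per stratum in each dimension, strata randomly paired."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        random.shuffle(perm)
        cols.append([(p + random.random()) / n for p in perm])
    return list(zip(*cols))

def analysis(x, y):
    """Stand-in for the expensive analysis code (itself quadratic here)."""
    return 1.0 + 2.0 * x - 3.0 * y + 0.5 * x * y + x * x

def basis(x, y):
    return [1.0, x, y, x * y, x * x, y * y]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def fit_response_surface(samples):
    rows = [basis(x, y) for x, y in samples]
    vals = [analysis(x, y) for x, y in samples]
    m = len(rows[0])
    # Normal equations: (X^T X) beta = X^T v
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    Xtv = [sum(r[i] * v for r, v in zip(rows, vals)) for i in range(m)]
    return solve(XtX, Xtv)

beta = fit_response_surface(latin_hypercube(12, 2))
pred = sum(b * f for b, f in zip(beta, basis(0.3, 0.7)))
print(f"surrogate at (0.3, 0.7): {pred:.4f} vs true {analysis(0.3, 0.7):.4f}")
```

Here the surrogate reproduces the true function exactly because the stand-in is itself quadratic; with a real aeroelastic code, the fit quality would depend on sample count and the smoothness of the response.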
Abstract:
For high-performance aircraft, the flight control system needs to be quite effective in assuring accurate tracking of pilot commands while simultaneously assuring overall stability of the aircraft. In addition, the control system must be sufficiently robust to cater to possible parameter variations. The primary aim of this paper is to enhance the robustness of the controller for a high-performance aircraft (HPA) using a neuro-adaptive control design. The architecture employs a network of Gaussian radial basis functions to adaptively compensate for the ignored system dynamics. A stable weight-update mechanism is determined using Lyapunov theory. The network construction and the performance of the resulting controller are illustrated through simulations with a low-fidelity six-DOF model of the F-16 that is available in the open literature.
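The function-approximation idea behind such a controller can be sketched minimally: a network of Gaussian radial basis functions with fixed centers learns an unknown scalar mapping, standing in for the ignored dynamics, via a simple gradient (LMS) weight update. The centers, width, learning rate, and target function below are illustrative assumptions, not the paper's F-16 model or its Lyapunov-derived update law.

```python
# Sketch of a Gaussian RBF network learning an unknown mapping by LMS updates.
import math

CENTERS = [i / 10 for i in range(11)]   # fixed RBF centers on [0, 1]
WIDTH = 0.1

def phi(x):
    """Vector of Gaussian RBF activations at input x."""
    return [math.exp(-((x - c) ** 2) / (2 * WIDTH ** 2)) for c in CENTERS]

def unknown_dynamics(x):
    """Stand-in for the unmodelled dynamics to be compensated."""
    return math.sin(3 * x) + 0.5 * x

def train(epochs=200, lr=0.1):
    w = [0.0] * len(CENTERS)
    xs = [i / 50 for i in range(51)]
    for _ in range(epochs):
        for x in xs:
            a = phi(x)
            err = unknown_dynamics(x) - sum(wi * ai for wi, ai in zip(w, a))
            # LMS update: move weights along the instantaneous error gradient.
            w = [wi + lr * err * ai for wi, ai in zip(w, a)]
    return w

w = train()
approx = lambda x: sum(wi * ai for wi, ai in zip(w, phi(x)))
worst = max(abs(approx(x) - unknown_dynamics(x))
            for x in [i / 50 for i in range(51)])
print(f"max approximation error on training grid: {worst:.4f}")
```

In the adaptive-control setting, the update law would be derived from a Lyapunov function of the tracking error rather than this plain LMS rule, which is what guarantees closed-loop stability.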
Abstract:
Trajectory optimization of a generic launch vehicle is considered in this paper. The trajectory from the launch point to the terminal injection point is divided into two segments. The first segment deals with launcher clearance and the vertical rise of the vehicle. During this phase, a nonlinear feedback guidance loop is incorporated to assure vertical rise in the presence of thrust misalignment, centre-of-gravity offset, wind disturbance, etc., and possibly to clear obstacles as well. The second segment deals with the trajectory optimization proper, where the objective is to ensure the desired terminal conditions as well as minimum control effort and minimum structural loading in the high-dynamic-pressure region. The usefulness of this dynamic optimization formulation is demonstrated by solving it using the classical gradient method. Numerical results for both segments are presented, which clearly bring out the potential advantages of the proposed approach.
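The second-segment idea can be illustrated with a toy problem: discretize the control history and use the classical gradient method to minimize control effort while meeting a terminal condition, enforced here with a quadratic penalty. The single-integrator dynamics, penalty weight, and step size below are illustrative assumptions, far simpler than launch-vehicle dynamics.

```python
# Sketch of the classical gradient method on a discretized control problem:
# minimize control effort subject to a (penalized) terminal condition.

N, DT = 50, 0.1          # control intervals and step size
X0, X_TARGET = 0.0, 1.0  # initial and desired terminal state
RHO = 100.0              # terminal-condition penalty weight

def simulate(u):
    """Single-integrator dynamics x' = u, integrated by forward Euler."""
    x = X0
    for uk in u:
        x += uk * DT
    return x

def cost(u):
    xN = simulate(u)
    return sum(uk * uk for uk in u) * DT + RHO * (xN - X_TARGET) ** 2

def gradient(u):
    # Analytic gradient: effort term plus terminal penalty through x_N.
    g_term = 2 * RHO * (simulate(u) - X_TARGET) * DT
    return [2 * uk * DT + g_term for uk in u]

def gradient_method(iters=500, step=0.01):
    u = [0.0] * N
    for _ in range(iters):
        g = gradient(u)
        u = [uk - step * gk for uk, gk in zip(u, g)]
    return u

u = gradient_method()
print(f"terminal state: {simulate(u):.4f}, "
      f"effort: {sum(uk * uk for uk in u) * DT:.4f}")
```

The quadratic penalty drives the terminal state close to, but not exactly onto, the target; increasing `RHO` tightens the terminal condition at the cost of a stiffer optimization.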
Abstract:
The focus of this paper is on designing useful compliant micro-mechanisms of high-aspect-ratio which can be microfabricated by the cost-effective wet etching of (110) orientation silicon (Si) wafers. Wet etching of (110) Si imposes constraints on the geometry of the realized mechanisms because it allows only etch-through in the form of slots parallel to the wafer's flat with a certain minimum length. In this paper, we incorporate this constraint in the topology optimization and obtain compliant designs that meet the specifications on the desired motion for given input forces. Using this design technique and wet etching, we show that we can realize high-aspect-ratio compliant micro-mechanisms. For a (110) Si wafer of 250 µm thickness, the minimum length of the etch opening to get a slot is found to be 866 µm. The minimum achievable width of the slot is limited by the resolution of the lithography process and this can be a very small value. This is studied by conducting trials with different mask layouts on a (110) Si wafer. These constraints are taken care of by using a suitable design parameterization rather than by imposing the constraints explicitly. Topology optimization, as is well known, gives designs using only the essential design specifications. In this work, we show that our technique also gives manufacturable mechanism designs along with lithography mask layouts. Some designs obtained are transferred to lithography masks and mechanisms are fabricated on (110) Si wafers.
Abstract:
The topology optimization problem for the synthesis of compliant mechanisms has been formulated in many different ways in the last 15 years, but there is not yet a definitive formulation that is universally accepted. Furthermore, there are two unresolved issues in this problem. In this paper, we present a comparative study of five distinctly different formulations that are reported in the literature. Three benchmark examples are solved with these formulations using the same input and output specifications and the same numerical optimization algorithm. A total of 35 different synthesis examples are implemented. The examples are limited to desired instantaneous output direction for prescribed input force direction. Hence, this study is limited to linear elastic modeling with small deformations. Two design parameterizations, namely, the frame element based ground structure and the density approach using continuum elements, are used. The obtained designs are evaluated with all other objective functions and are compared with each other. The checkerboard patterns, point flexures, the ability to converge from an unbiased uniform initial guess, and the computation time are analyzed. Some observations are noted based on the extensive implementation done in this study. Complete details of the benchmark problems and the results are included. The computer codes related to this study are made available on the internet for ready access.
Abstract:
Based on an earlier CFD analysis of the performance of the gas-dynamically controlled laser cavity [1], it was found that the geometry of the diffuser could be optimized to reduce both the size and the cost of the system by examining the diffuser's critical dimensional requirements. Consequently, an extensive CFD analysis has been carried out for a range of diffuser configurations by simulating the supersonic flow through the arrangement, including the laser cavity driven by a bank of converging-diverging nozzles and the diffuser. Numerical investigations with a 3D-RANS code capture the flow patterns through the diffuser past the cavity, where multiple supersonic jet interactions with shocks lead to a complex flow field. The length of the diffuser plates is the basic parameter of the study. The analysis reveals that the pressure recovery through the diffuser, which is critical for the performance of the laser device, depends only weakly on the diffuser length beyond a critical lower limit; evaluating this limit provides a design guideline for a more efficient system configuration. The parametric study shows that the pressure recovery transients in the near vicinity of the cavity are unaffected when the diffuser plates are shortened to as little as 10% of their initial size, indicating that the first configuration, tested experimentally, has a large margin. The flow stability in the laser cavity is found to be unaffected, since a strong and stable shock is located at the leading edge of the diffuser plates, while the downstream shock and flow patterns change, as one would expect.
Results of the study for diffuser lengths ranging from 10% to the full length are presented, with the experimentally tested configuration of the earlier study [1] as the reference. The conclusions drawn from the analysis are significant because they provide new design considerations based on an understanding of the intricacies of the flow, allowing a hardware optimization that can lead to substantial size reduction of the device with no loss of performance.
Abstract:
Owing to semiconductor technology development, fault-tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault-tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for AET that include bus communication overhead for both voting (active replication) and rollback-recovery with checkpointing (RRC). And, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) selecting the fault-tolerance scheme (voting or RRC) per job and defining its usage for each job. Experiments demonstrate significant savings in AET.
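The checkpoint-selection trade-off at the heart of RRC can be sketched with a simplified closed-form AET model: more checkpoints mean less re-executed work per error but more checkpointing overhead. The per-time-unit error probability, fixed checkpoint cost (standing in for the bus communication overhead), and exhaustive search below are illustrative assumptions replacing the paper's ILP formulation.

```python
# Sketch of choosing the AET-minimizing checkpoint count for rollback-recovery
# with checkpointing (RRC), under a simplified retry-until-success model.

def aet_rrc(job_time, n_checkpoints, error_prob, checkpoint_cost):
    """AET with n segments, each re-executed until it completes error-free."""
    seg = job_time / n_checkpoints
    p_ok = (1.0 - error_prob) ** seg    # segment completes without error
    attempts = 1.0 / p_ok               # expected number of (geometric) tries
    return n_checkpoints * (seg + checkpoint_cost) * attempts

def best_checkpoint_count(job_time, error_prob, checkpoint_cost, max_n=100):
    return min(range(1, max_n + 1),
               key=lambda n: aet_rrc(job_time, n, error_prob, checkpoint_cost))

n = best_checkpoint_count(job_time=100.0, error_prob=0.01, checkpoint_cost=1.0)
print(f"optimal checkpoints: {n}, AET = {aet_rrc(100.0, n, 0.01, 1.0):.2f}")
```

With few checkpoints the retry term dominates; with many, the checkpoint cost dominates; the optimum sits in between, which is exactly the quantity the paper's ILP model selects while also accounting for bus contention.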