967 results for Optimal Design
Abstract:
This paper discusses the problem of optimal design of a jurisdiction structure from the viewpoint of a utilitarian social planner when individuals with identical utility functions for a non-rival public good and private consumption have private information about their contributive capacities. It shows that the superiority of a centralized provision of a non-rival public good over a federal one does not always hold. Specifically, when differences in individuals’ contributive capacities are large, it is better to provide the public good in several distinct jurisdictions rather than to pool these jurisdictions into a single one. In the specific situation where individuals have logarithmic utilities, the paper provides a complete characterization of the optimal jurisdiction structure in the two-type case.
Abstract:
This article describes an approach to optimal design of phase II clinical trials using Bayesian decision theory. The method proposed extends that suggested by Stallard (1998, Biometrics 54, 279–294) in which designs were obtained to maximize a gain function including the cost of drug development and the benefit from a successful therapy. Here, the approach is extended by the consideration of other potential therapies, the development of which is competing for the same limited resources. The resulting optimal designs are shown to have frequentist properties much more similar to those traditionally used in phase II trials.
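The structure of such a decision-theoretic design can be illustrated with a minimal sketch: choose a sample size n and success cutoff r that maximize an expected gain that rewards a "go" decision when the true response rate exceeds a relevant threshold and penalizes it otherwise, under a Beta prior. This is only a structural sketch with made-up numbers; it is not the paper's gain function, which also accounts for competing therapies.

```python
# Structural sketch only (hypothetical gain function and priors, not the paper's).
import numpy as np
from scipy.stats import binom, beta
from scipy.integrate import quad

a, b, p0 = 1.0, 1.0, 0.3                 # Beta prior and minimum relevant response rate
benefit, loss, cost_per_patient = 100.0, 60.0, 0.5

def expected_gain(n, r):
    # P(X >= r, theta > p0) and P(X >= r, theta <= p0) under the Beta prior
    go = lambda t: (1 - binom.cdf(r - 1, n, t)) * beta.pdf(t, a, b)
    p_go_good, _ = quad(go, p0, 1)
    p_go_bad, _ = quad(go, 0, p0)
    return benefit * p_go_good - loss * p_go_bad - cost_per_patient * n

best = max(((n, r) for n in range(10, 41, 5) for r in range(1, n + 1)),
           key=lambda nr: expected_gain(*nr))
print("illustrative optimal (n, cutoff):", best)
```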
Abstract:
It has been known for decades that the metabolic rate of animals scales with body mass with an exponent that is almost always <1, >2/3, and often very close to 3/4. The 3/4 exponent emerges naturally from two models of resource distribution networks, radial explosion and hierarchically branched, which incorporate a minimum of specific details. Both models show that the exponent is 2/3 if the velocity of flow remains constant, but can attain a maximum value of 3/4 if velocity scales with its maximum exponent, 1/12. Quarter-power scaling can arise even when there is no underlying fractality. The canonical “fourth dimension” in biological scaling relations can result from matching the velocity of flow through the network to the linear dimension of the terminal “service volume” where resources are consumed. These models have broad applicability for the optimal design of biological and engineered systems where energy, materials, or information are distributed from a single source.
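The arithmetic linking the two limiting exponents quoted above can be written compactly. The display below simply restates the abstract's numbers (metabolic rate $B$, body mass $M$, flow-velocity scaling exponent $\theta$); it is not the models' full derivation.

```latex
B \propto M^{\,2/3 + \theta}, \qquad 0 \le \theta \le \tfrac{1}{12},
\qquad \theta = 0 \;\Rightarrow\; B \propto M^{2/3},
\qquad \theta = \tfrac{1}{12} \;\Rightarrow\; B \propto M^{2/3 + 1/12} = M^{3/4}.
```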
Abstract:
We consider the linear equality-constrained least squares problem (LSE) of minimizing $\|c - Gx\|_2$, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using some practical structural analysis data.
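For orientation, the Kuhn–Tucker (KKT) system underlying the LSE problem can be assembled and solved directly. The sketch below uses random placeholder data and solves the indefinite KKT matrix with MINRES purely for illustration; the paper itself applies a preconditioned conjugate gradient method to this system.

```python
# Sketch: KKT system of  min ||c - G x||_2  s.t.  E x = p  (placeholder data).
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
m, n, k = 20, 10, 3                      # rows of G, unknowns, constraints
G, c = rng.standard_normal((m, n)), rng.standard_normal(m)
E, p = rng.standard_normal((k, n)), rng.standard_normal(k)

# KKT system:  [G^T G  E^T] [x]   [G^T c]
#              [E      0  ] [y] = [  p  ]
K = np.block([[G.T @ G, E.T], [E, np.zeros((k, k))]])
rhs = np.concatenate([G.T @ c, p])
sol, info = minres(K, rhs)               # illustrative solver, not the paper's PCG
x = sol[:n]
print("constraint residual:", np.linalg.norm(E @ x - p))
```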
Abstract:
In the global construction context, the best value or most economically advantageous tender is becoming a widespread approach to contractor selection, as an alternative to traditional awarding criteria such as the lowest price. In these multi-attribute tenders, the owner or auctioneer solicits proposals containing both a price bid and additional technical features. Once the proposals are received, each bidder’s price bid is given an economic score according to a scoring rule, generally called an economic scoring formula (ESF), and a technical score according to pre-specified criteria. Eventually, the contract is awarded to the bidder with the highest weighted overall score (economic + technical). In practice, however, the auctioneer’s selection of the economic scoring formula is invariably, and paradoxically, a highly intuitive process involving few theoretical or empirical considerations, despite traditionally, and mistakenly, being considered objective because of its mathematical nature. This paper provides a taxonomic classification of a wide variety of ESFs and abnormally low bid criteria (ALBC) gathered in several countries with different tendering approaches. Practical implications concern the optimal design of price scoring rules in construction contract tenders, as well as future analyses of the effects of the ESF and ALBC on competitive bidding behaviour.
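As a purely illustrative example of what an ESF looks like, one widely used family assigns the maximum economic score to the lowest bid and scales the other bids proportionally. The function below is a generic sketch with made-up weights; it is not a formula proposed or endorsed by the paper, which catalogues many alternatives.

```python
# Hypothetical proportional ESF: score = max_score * (lowest bid / bid price).
def proportional_esf(bids, max_score=60.0):
    lowest = min(bids.values())
    return {bidder: max_score * lowest / price for bidder, price in bids.items()}

bids = {"A": 1_000_000, "B": 1_150_000, "C": 1_300_000}   # hypothetical price bids
print(proportional_esf(bids))
```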
Abstract:
This thesis describes design methodologies for frequency selective surfaces (FSSs) composed of periodic arrays of pre-fractal metallic patches on single-layer dielectrics (FR4, RT/duroid). Shapes given by Sierpinski island and T fractal geometries are exploited for the simple design of efficient band-stop spatial filters with applications in the microwave range. Initial results are discussed in terms of the electromagnetic effect of varying parameters such as the fractal iteration number (or fractal level), the fractal iteration factor, and the periodicity of the FSS, depending on the pre-fractal element used (Sierpinski island or T fractal). The transmission properties of these proposed periodic arrays are investigated through simulations performed with the commercial full-wave solvers Ansoft Designer™ and Ansoft HFSS™. To validate the employed methodology, FSS prototypes are selected for fabrication and measurement. The obtained results point to interesting features of these FSS spatial filters: compactness, with high values of the frequency compression factor, as well as stable frequency responses at oblique incidence of plane waves. As its main focus, this thesis also addresses the application of an alternative electromagnetic (EM) optimization technique for the analysis and synthesis of FSSs with fractal motifs. In application examples of this technique, Vicsek and Sierpinski pre-fractal elements are used in the optimal design of FSS structures. Based on computational intelligence tools, the proposed technique overcomes the high computational cost associated with full-wave parametric analyses. To this end, fast and accurate multilayer perceptron (MLP) neural network models are developed using different parameters as design input variables. These neural network models are used to evaluate the cost function in the iterations of population-based search algorithms. A continuous genetic algorithm (GA), particle swarm optimization (PSO), and the bees algorithm (BA) are used for the optimization of FSSs with a specified resonant frequency and bandwidth. The performance of these algorithms is compared in terms of computational cost and numerical convergence. Consistent results are verified by the excellent agreement obtained between simulations and measurements of FSS prototypes built with a given fractal iteration.
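The general surrogate-assisted scheme described above can be sketched as follows: an MLP is trained to map FSS design parameters to (resonant frequency, bandwidth), and a simple continuous GA then minimizes a cost evaluated on the surrogate instead of a full-wave solver. All training data, parameter ranges and target specifications below are synthetic placeholders, not the thesis data or its exact algorithm settings.

```python
# Sketch of MLP-surrogate-assisted GA optimization (synthetic data throughout).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Placeholder "full-wave" training responses (would come from Designer/HFSS runs).
X_train = rng.uniform([5.0, 0.1], [20.0, 0.9], size=(200, 2))   # (period mm, iteration factor)
y_train = np.column_stack([30.0 / X_train[:, 0] * (1 - 0.3 * X_train[:, 1]),   # freq (GHz)
                           0.5 + 2.0 * X_train[:, 1]])                         # bandwidth (GHz)

surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

f_target, bw_target = 2.4, 1.0                                   # hypothetical specs

def cost(pop):
    pred = surrogate.predict(pop)
    return (pred[:, 0] - f_target) ** 2 + (pred[:, 1] - bw_target) ** 2

# Minimal continuous GA: tournament selection, blend crossover, Gaussian mutation.
lo, hi = np.array([5.0, 0.1]), np.array([20.0, 0.9])
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(100):
    fit = cost(pop)
    idx = np.array([min(rng.integers(0, 40, 2), key=lambda i: fit[i]) for _ in range(40)])
    parents = pop[idx]
    children = 0.5 * (parents + parents[::-1]) + rng.normal(0, 0.05, parents.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)
best = pop[np.argmin(cost(pop))]
print("surrogate-optimal (period, iteration factor):", best)
```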
Abstract:
Factorial experiments are widely used in industry to investigate the effects of process factors on quality response variables. Many food processes, for example, are not only subject to variation between days, but also between different times of the day. Removing this variation using blocking factors leads to row-column designs. In this paper, an algorithm is described for constructing factorial row-column designs when the factors are quantitative, and the data are to be analysed by fitting a polynomial model. The row-column designs are constructed using an iterative interchange search, where interchanges that result in an improvement in the weighted mean of the efficiency factors corresponding to the parameters of interest are accepted. Some examples illustrating the performance of the algorithm are given.
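The interchange idea can be sketched compactly: swap factor levels between cells of the row-column layout and keep a swap when a design criterion improves. For brevity the sketch below uses the D-criterion (log det of X'X) for a second-order polynomial in one quantitative factor plus row and column effects as a stand-in for the paper's weighted mean of efficiency factors; the layout, levels and iteration count are illustrative assumptions.

```python
# Simplified interchange search for a row-column design (stand-in criterion).
import numpy as np

rng = np.random.default_rng(2)
rows, cols = 4, 6
levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def model_matrix(design):
    r, c = design.shape
    x = design.ravel()                                    # row-major: row varies slowest
    row_id = np.repeat(np.eye(r), c, axis=0)[:, 1:]       # row (block) effects, drop one
    col_id = np.tile(np.eye(c), (r, 1))[:, 1:]            # column (block) effects, drop one
    return np.column_stack([np.ones(r * c), row_id, col_id, x, x ** 2])

def criterion(design):
    sign, logdet = np.linalg.slogdet(model_matrix(design).T @ model_matrix(design))
    return logdet if sign > 0 else -np.inf               # D-criterion stand-in

design = rng.choice(levels, size=(rows, cols))
best = criterion(design)
for _ in range(5000):
    i1, j1 = rng.integers(rows), rng.integers(cols)
    i2, j2 = rng.integers(rows), rng.integers(cols)
    trial = design.copy()
    trial[i1, j1], trial[i2, j2] = trial[i2, j2], trial[i1, j1]   # interchange two cells
    if (val := criterion(trial)) > best:
        design, best = trial, val
print("final log|X'X|:", best)
```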
Abstract:
Variance dispersion graphs have become a popular tool in aiding the choice of a response surface design. Often differences in response from some particular point, such as the expected position of the optimum or standard operating conditions, are more important than the response itself. We describe two examples from food technology. In the first, an experiment was conducted to find the levels of three factors which optimized the yield of valuable products enzymatically synthesized from sugars and to discover how the yield changed as the levels of the factors were changed from the optimum. In the second example, an experiment was conducted on a mixing process for pastry dough to discover how three factors affected a number of properties of the pastry, with a view to using these factors to control the process. We introduce the difference variance dispersion graph (DVDG) to help in the choice of a design in these circumstances. The DVDG for blocked designs is developed and the examples are used to show how the DVDG can be used in practice. In both examples a design was chosen by using the DVDG, as well as other properties, and the experiments were conducted and produced results that were useful to the experimenters. In both cases the conclusions were drawn partly by comparing responses at different points on the response surface.
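The quantity a DVDG summarizes can be sketched numerically: the scaled variance of the difference between predictions at a point x and at a reference point x0 (such as the expected optimum), Var[ŷ(x) − ŷ(x0)]/σ² = (f(x) − f(x0))ᵀ(XᵀX)⁻¹(f(x) − f(x0)), tracked over spheres of increasing radius. The design and model below (a two-factor central composite design with a full quadratic model and an arbitrary reference point) are illustrative assumptions, not the food-technology designs of the paper.

```python
# Min/mean/max of the difference prediction variance on circles of given radius.
import numpy as np

def f(x):                                  # full second-order model vector, 2 factors
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

axial = np.sqrt(2)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [axial, 0], [-axial, 0], [0, axial], [0, -axial],
                   [0, 0], [0, 0], [0, 0]])
XtX_inv = np.linalg.inv(np.array([f(x) for x in design]).T @ np.array([f(x) for x in design]))
x0 = np.array([0.5, 0.5])                  # hypothetical reference point (expected optimum)

rng = np.random.default_rng(3)
for radius in (0.5, 1.0, 1.5):
    angles = rng.uniform(0, 2 * np.pi, 500)
    pts = radius * np.column_stack([np.cos(angles), np.sin(angles)])
    d = np.array([f(x) - f(x0) for x in pts])
    v = np.einsum('ij,jk,ik->i', d, XtX_inv, d)
    print(f"r={radius}: min={v.min():.2f} mean={v.mean():.2f} max={v.max():.2f}")
```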
Abstract:
Thermoeconomic Functional Analysis is a method developed for the analysis and the optimal design or improvement of thermal systems (Frangopoulos, 1984). The purpose of this work is to discuss the optimization of a cogeneration system using a condensing steam turbine with two extractions. This cogeneration system is a rational alternative for pulp and paper plants under Brazilian conditions. The objective of the optimization is to minimize the global cost of system acquisition and operation, based on the parametrization of actual data from a cellulose plant with a daily production of 1000 tons. Among the several possible decision variables, the pressure and temperature of the live steam were selected, since these variables significantly affect the energy performance of the cogeneration system. The conditions that determine the lowest cost for the system are presented in conclusion.
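The structure of this kind of optimization, minimizing total acquisition plus operating cost over the two live-steam decision variables within bounds, can be sketched as below. The cost function is a made-up placeholder; a real study would evaluate the thermoeconomic model of the cogeneration plant.

```python
# Structural sketch only: placeholder cost minimized over live-steam pressure and temperature.
from scipy.optimize import minimize

def total_cost(x):
    p_bar, t_celsius = x                                            # live steam [bar], [C]
    capital = 2.0e6 + 1.5e4 * p_bar + 4.0e3 * t_celsius             # placeholder capital cost
    fuel = 8.0e6 * (1.0 - 0.002 * (t_celsius - 400) + 0.001 * (p_bar - 60) ** 2 / 60)
    return capital + fuel                                            # placeholder operating cost

res = minimize(total_cost, x0=[60.0, 450.0], bounds=[(40.0, 100.0), (400.0, 520.0)])
print("illustrative optimum (p [bar], T [C]):", res.x, "cost:", res.fun)
```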
Abstract:
The study of algorithms for active vibration control in flexible structures has become an area of enormous interest, mainly due to the countless demands for optimal performance of mechanical systems such as aircraft, aerospace and automotive structures. Smart structures, formed by a base structure coupled with piezoelectric actuators and sensors, are capable of guaranteeing the required conditions through the application of several types of controllers. The actuator/sensor materials are composed of piezoelectric ceramics (PZT, lead zirconate titanate), commonly used as distributed actuators, and piezoelectric plastic films (PVDF, polyvinylidene fluoride), well suited for distributed sensors. The design process of such systems encompasses three main phases: structural design; optimal placement of sensors/actuators (PVDF and PZT); and controller design. Consequently, for optimal design purposes, the structure, the sensor/actuator placement and the controller have to be considered simultaneously. This article addresses the optimal placement of actuators and sensors for the design of a controller for vibration attenuation in a flexible plate. Techniques involving linear matrix inequalities (LMIs) to solve the Riccati equation are used, and the controller gain is calculated using the linear quadratic regulator (LQR). The major advantage of LMI design is that it enables specifications such as stability degree requirements, decay rate, input force limitation in the actuators and output peak bounds. It is also possible to assume that the model parameters involve uncertainties. LMIs are a very useful tool for problems with constraints, where the parameters vary within a range of values. Once formulated in terms of LMIs, a problem can be solved efficiently by convex optimization algorithms.
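The LQR step can be illustrated with a minimal sketch: the state-feedback gain K = R⁻¹BᵀP is obtained from the continuous-time algebraic Riccati equation. The two-state model below is a single lightly damped mode standing in for the plate's modal model, with assumed frequency, damping and weights; the LMI formulation with parameter uncertainty is not shown.

```python
# Minimal LQR sketch for one assumed structural mode (illustrative values).
import numpy as np
from scipy.linalg import solve_continuous_are

wn, zeta = 2 * np.pi * 10.0, 0.01            # 10 Hz mode, 1% damping (assumed)
A = np.array([[0.0, 1.0], [-wn ** 2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])                  # piezoelectric actuator input (normalized)
Q = np.diag([wn ** 2, 1.0])                   # weights on modal displacement / velocity
R = np.array([[1e-3]])                        # weight on actuator effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)               # LQR state-feedback gain
print("LQR gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```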
Abstract:
Human evolution has always been linked to personal or group needs; this statement is based on everyday observation. Today we can choose from among many excellent techniques and materials for the construction of this part of the machinery that is so important to the functionality of machines and equipment. When we look at a machine, we see that it is usually designed by combining a set of pre-determined components. Among these many components, one is of fundamental importance: the gear. Gears are among the oldest mechanical devices used by man and are currently among the most important components in transmission technology, being responsible for transmitting rotary motion from one shaft to another. Gears are one of the best of the various means available for the transmission of motion, and the main purpose of a gear transmission is precisely to transmit torque and speed. Requirements have increased significantly due to pollution and energy conservation concerns; nowadays, gear transmissions are required to transmit high loads throughout their service life together with high demands on performance and noise properties. The optimal design of a gear requires a set of the most modern fabrication machines and cutting tools. The present work studies the manufacture of gears by following a case study of the try-out of the installation of a gear grinding machine.
Abstract:
This paper is part of an extensive work on the technological development, experimental analysis and numerical modeling of steel fibre reinforced concrete pipes. The first part ("Steel fibre reinforced concrete pipes. Part 1: technological analysis of the mechanical behavior") dealt with the technological development of the experimental campaign, the test procedure and the discussion of the structural behavior obtained for each of the fibre dosages used. This second part deals with the numerical modeling aspects. In this respect, a numerical model called MAP, which simulates the behavior of fibre reinforced concrete pipes of small to medium diameters, is introduced, and the bases of the numerical model are also presented. Subsequently, the experimental results are contrasted with those produced by the numerical model, obtaining excellent correlations. It can be concluded that the numerical model is a useful tool for the design of this type of pipe, which represents an important step towards establishing structural fibres as reinforcement for concrete pipes. Finally, the design of the optimal amount of fibres for a pipe with a diameter of 400 mm is presented as an illustrative example of strategic interest.
Abstract:
The use of composite materials such as fibre reinforced concrete is becoming ever more frequent and widespread. However, the choice of new materials requires a thorough analysis of their characteristics and behaviour. The advantages provided by adding steel fibres to a brittle material such as concrete lie in the improvement of ductility and the increase in energy absorption. The addition of fibres therefore improves the structural behaviour of the composite, giving rise to a new material capable of working not only in compression but also, to a small extent, in tension, and above all characterized by appreciable ductility and good plastic capacity. The aim of this thesis was the analysis of the characteristics of these fibre reinforced cementitious composites. Starting from classical experimental tests such as tension and compression tests, these materials were characterized by means of an experimental campaign based on the application of the UNI 11039/2003 standard. The main objective of this work is to analyse and compare concretes reinforced with fibres of two different lengths and at different dosages. By studying these concretes, an attempt was made to understand these materials better and to find practical confirmation of the behaviour described in theories that are by now widespread and consolidated. The comparison of the results of the tests carried out made it possible to highlight differences between the materials reinforced with short fibres and those with long fibres, but also to show and underline the analogies that characterize these fibre reinforced materials. The aspects related to the stages of making these materials were also addressed, from both a theoretical and a practical point of view. Finally, an analytical model based on the definition of specific stress-strain diagrams was developed; the results of this model were then compared with the experimental data obtained in the laboratory.
Abstract:
The control of a proton exchange membrane fuel cell (PEM FC) system for domestic heat and power supply requires extensive control measures to handle the complicated process. Highly dynamic and nonlinear behaviour drastically increases the difficulty of finding the optimal design and control strategies. The objective is to design, implement and commission a controller for the entire fuel cell system. The fuel cell process and the control system are engineered simultaneously, so there is no access to the process hardware during the control system development. The method of choice was therefore a model-based design approach, following the rapid control prototyping (RCP) methodology. The fuel cell system is simulated using a fuel cell library which allows thermodynamic calculations. In the course of the development the process model is continuously adapted to the real system. The controller application is designed and developed in parallel and thereby tested and verified against the process model. Furthermore, after the commissioning of the real system, the process model can also be better identified and parameterized using measurement data to perform optimization procedures. The process model and the controller application are implemented in Simulink using MathWorks' Real-Time Workshop (RTW) and the xPC development suite for MiL (model-in-the-loop) and HiL (hardware-in-the-loop) testing. It is possible to completely develop, verify and validate the controller application without depending on the real fuel cell system, which is not available for testing during the development process. The fuel cell system can be taken into operation immediately after connecting the controller to the process.
Abstract:
The optimization of water distribution networks is a challenging problem due to the dimension and complexity of these systems, and the field has been investigated by many authors since the second half of the twentieth century. Recently, to overcome the discrete nature of the variables and the nonlinearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity and linearity of the problem functions because they are linked to an external hydraulic simulator that solves the equations of mass continuity and energy conservation of the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II), a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature, has been used. Starting from an initial random set of solutions, called the population, it evolves them towards a front of solutions that minimize, separately and simultaneously, all the objectives. This can be very useful in practical problems where multiple and conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analysed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve the results by modifying the mathematical definition of the objective functions and the survival criterion, inserting good solutions created by a cellular automaton, and using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even if orienting the search can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results. Subsequently, thanks to CINECA's support, a version of NSGA-II has been implemented in the C language and parallelized: results on the global parallelization show the speed-up obtained, while results on the island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling have been carried out. In this case, good results are found for a small network, while the solutions for a large problem are affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
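The core ingredient of NSGA-II, non-dominated sorting based on Pareto dominance for minimization, can be sketched in a few lines. The two objectives below (a toy pipe cost and a toy surrogate of hydraulic deficit as functions of candidate pipe diameters) are placeholders; in the thesis they are evaluated through an external hydraulic simulator, and the full algorithm also uses crowding distance, crossover and mutation, which are not shown.

```python
# Minimal sketch: Pareto dominance and non-dominated sorting (minimization).
import numpy as np

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(F):
    """Return the index of the Pareto front each solution belongs to (0 = best)."""
    n = len(F)
    front = np.zeros(n, dtype=int)
    remaining = set(range(n))
    level = 0
    while remaining:
        current = {i for i in remaining
                   if not any(dominates(F[j], F[i]) for j in remaining if j != i)}
        for i in current:
            front[i] = level
        remaining -= current
        level += 1
    return front

rng = np.random.default_rng(4)
diameters = rng.uniform(0.1, 1.0, size=(50, 8))        # candidate pipe diameters [m]
cost = (diameters ** 1.5).sum(axis=1)                   # toy cost objective
deficit = (1.0 / diameters).sum(axis=1)                 # toy pressure-deficit objective
F = np.column_stack([cost, deficit])
print("solutions on the first Pareto front:", np.where(non_dominated_sort(F) == 0)[0])
```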