934 results for Robust design
Abstract:
Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring, and investigate the optimal stress levels and the times at which the stress levels change. We discuss the optimal designs under three optimality criteria: D-, A- and Q-optimality. We note that the classical designs are optimal only if the assumed model is correct. Because predictions from ALT experimental data, obtained at stress levels higher than the normal condition, involve extrapolation, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, a method for constructing robust designs is also explored.
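For reference, the three criteria named above have standard forms in optimal design theory. The sketch below uses assumed notation, not notation taken from the thesis: $I(\theta;\xi)$ is the Fisher information matrix of the PH model parameters under a design $\xi$, and $x_0$ is the normal use condition; the exact Q-criterion employed in the thesis is not specified in this abstract.

```latex
% Standard optimality criteria (notation assumed, not taken from the thesis):
% I(theta; xi) = Fisher information under design xi; x_0 = use condition.
\xi_D = \arg\max_{\xi} \; \det I(\theta;\xi)
        % D-optimality: overall parameter precision
\xi_A = \arg\min_{\xi} \; \operatorname{tr}\, I(\theta;\xi)^{-1}
        % A-optimality: average variance of the estimates
\xi_Q = \arg\min_{\xi} \; \operatorname{Var}\!\big[\hat{y}(x_0)\big]
        % Q-optimality: variance of the predicted reliability quantity at the
        % use condition, reached by extrapolation from the elevated stresses
```

D-optimality controls overall parameter precision, A-optimality the average variance of the estimates, and Q-optimality targets the prediction at the use stress, which is the quantity of practical interest in ALT.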
Abstract:
The study of robust design methodologies and techniques has become a topical area in design optimization across nearly all engineering and applied science disciplines over the last 10 years, owing to the inevitable and unavoidable imprecision or uncertainty that exists in real-world design problems. To develop a fast optimizer for robust designs, a methodology based on polynomial chaos and a tabu search algorithm is proposed. In this methodology, polynomial chaos is employed as a stochastic response surface model of the objective function to efficiently evaluate the robust performance parameter, while a mechanism that assigns expected fitness only to promising solutions is introduced into the tabu search algorithm to minimize the need to determine robust metrics for intermediate solutions. The proposed methodology is applied to the robust design of a practical inverse problem with satisfactory results.
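To make the mechanism concrete, here is a minimal, self-contained sketch of the idea, not the authors' code: a one-dimensional Hermite polynomial-chaos surrogate supplies a cheap mean-plus-k-sigma robust metric, and a toy tabu search computes that metric only for the most promising moves, screening the rest by nominal fitness. The objective function, basis order, and all constants are illustrative assumptions.

```python
# Minimal sketch (toy objective and all constants are assumptions): a Hermite
# polynomial-chaos surrogate provides a cheap robust metric, and tabu search
# assigns it only to promising candidates, as described in the abstract.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def objective(x, xi):
    """Toy objective: design variable x perturbed by an uncertain input xi."""
    return (x + xi - 1.5) ** 2 + 0.3 * np.sin(5.0 * (x + xi))

def pce_robust_metric(x, order=4, n_samples=64, k=3.0, sigma=0.1, rng=None):
    """Fit a 1-D Hermite PCE in xi ~ N(0,1); for the He_n basis the moments
    follow from the coefficients: mean = c_0, var = sum_{i>=1} c_i^2 * i!."""
    rng = np.random.default_rng(0) if rng is None else rng
    xi = rng.normal(0.0, 1.0, n_samples)
    y = objective(x, sigma * xi)
    basis = np.eye(order + 1)
    V = np.stack([hermeval(xi, basis[i]) for i in range(order + 1)], axis=1)
    c, *_ = np.linalg.lstsq(V, y, rcond=None)        # least-squares PCE fit
    var = sum(c[i] ** 2 * math.factorial(i) for i in range(1, order + 1))
    return c[0] + k * math.sqrt(max(var, 0.0))       # robust metric: mean + k*std

def tabu_search(candidates, n_iter=50, tabu_len=5, rng=None):
    """Toy tabu search: moves are screened by cheap nominal fitness, and the
    expensive robust metric is evaluated only for the few promising ones."""
    rng = np.random.default_rng(1) if rng is None else rng
    current = float(rng.choice(candidates))
    best, best_f = current, pce_robust_metric(current, rng=rng)
    tabu = []
    for _ in range(n_iter):
        moves = [m for m in candidates
                 if 0 < abs(m - current) <= 0.3 and m not in tabu]
        if not moves:
            break
        moves.sort(key=lambda m: objective(m, 0.0))  # cheap nominal screening
        scored = [(pce_robust_metric(m, rng=rng), m) for m in moves[:3]]
        f, current = min(scored)
        tabu = (tabu + [current])[-tabu_len:]        # short-term tabu memory
        if f < best_f:
            best, best_f = current, f
    return best, best_f

print(tabu_search(list(np.linspace(0.0, 3.0, 61))))
```

The key saving mirrors the paper's mechanism: most candidate moves are ranked by a single deterministic evaluation, and the many-sample robust metric is spent only on the shortlist.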
Abstract:
Interest in renewable energy has increased considerably in recent years due to the concerns raised over the environmental impact of conventional energy sources and their price volatility. In particular, wind power has enjoyed a dramatic global growth in installed capacity over the past few decades. Nowadays, the advancement of the wind turbine industry represents a challenge for several engineering areas, including materials science, computer science, aerodynamics, analytical design and analysis methods, testing and monitoring, and power electronics. In particular, the technological improvement of wind turbines is currently tied to the use of advanced design methodologies, allowing designers to develop new and more efficient design concepts. Integrating mathematical optimization techniques into the multidisciplinary design of wind turbines constitutes a promising way to enhance the profitability of these devices. In the literature, wind turbine design optimization is typically performed deterministically. Deterministic optimizations do not consider any degree of randomness affecting the inputs of the system under consideration, and therefore result in a unique set of outputs. However, given the stochastic nature of the wind and the uncertainties associated, for instance, with wind turbine operating conditions or geometric tolerances, deterministically optimized designs may be inefficient. Therefore, one of the ways to further improve the design of modern wind turbines is to take into account the aforementioned sources of uncertainty in the optimization process, achieving robust configurations with minimal performance sensitivity to factors causing variability. The research work presented in this thesis deals with the development of a novel integrated multidisciplinary design framework for the robust aeroservoelastic design optimization of multi-megawatt horizontal axis wind turbine (HAWT) rotors, accounting for the stochastic variability related to the input variables. The design system is based on a multidisciplinary analysis module integrating several simulation tools needed to characterize the aeroservoelastic behavior of wind turbines and determine their economic performance by means of the levelized cost of energy (LCOE). The reported design framework is portable and modular in that any of its analysis modules can be replaced with counterparts of user-selected fidelity. The presented technology is applied to the design of a 5-MW HAWT rotor to be used at sites of wind power density class from 3 to 7, where the mean wind speed at 50 m above the ground ranges from 6.4 to 11.9 m/s. Assuming the mean wind speed to vary stochastically in this range, the rotor design is optimized by minimizing the mean and standard deviation of the LCOE. Airfoil shapes, spanwise distributions of blade chord and twist, internal structural layup and rotor speed are optimized concurrently, subject to an extensive set of structural and aeroelastic constraints. The effectiveness of the multidisciplinary and robust design framework is demonstrated by showing that the probabilistically designed turbine achieves more favorable probabilistic performance than that of both the initial baseline turbine and a deterministically designed turbine.
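As a concrete illustration of the robust objective described here, the following is a sketch under stated assumptions, not the thesis framework: the site mean wind speed is treated as uniformly distributed over 6.4-11.9 m/s, propagated through a toy LCOE surrogate by Monte Carlo, and a weighted sum of the resulting mean and standard deviation is minimized. The surrogate, the uniform distribution, and the scalarization are illustrative assumptions; in the thesis the evaluations come from the aeroservoelastic analysis module and the two statistics are optimized as objectives.

```python
# Hedged sketch: Monte Carlo evaluation of a robust objective built from the
# mean and standard deviation of LCOE over a stochastic site mean wind speed.
# The LCOE surrogate and the wind-speed distribution are assumptions standing
# in for the thesis's multidisciplinary analysis module.
import numpy as np

def lcoe_surrogate(radius, v_mean):
    """Toy LCOE in $/MWh: cost grows ~radius^3, energy ~radius^2 * v^3."""
    cost = 1.0e6 + 30.0 * radius ** 3        # annualized cost proxy
    aep = 0.4 * radius ** 2 * v_mean ** 3    # annual energy production proxy, MWh
    return cost / aep

def robust_objective(radius, n_mc=5000, weight=0.5, seed=0):
    """Weighted sum of mean and std of LCOE over the uncertain wind speed."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(6.4, 11.9, n_mc)         # class 3-7 range from the abstract
    samples = lcoe_surrogate(radius, v)
    return weight * samples.mean() + (1.0 - weight) * samples.std()

radii = np.linspace(30.0, 60.0, 31)          # candidate rotor radii, m
best = min(radii, key=robust_objective)
print(f"robust-optimal radius: {best:.1f} m, "
      f"objective: {robust_objective(best):.3f}")
```

The same pattern scales to the thesis setting by replacing the surrogate with the full analysis chain and handling the mean and standard deviation as separate objectives rather than a weighted sum.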
Abstract:
CFD has been successfully used in the optimisation of aerodynamic surfaces for a given set of parameters such as Mach number and angle of attack. In multidisciplinary design optimisation, however, one deals with situations where the parameters have some uncertainty attached. Any optimisation carried out for fixed values of the input parameters yields a design which may be totally unacceptable under off-design conditions. The challenge is to develop a robust design procedure that takes into account the fluctuations in the input parameters. In this work, we attempt this using a modified Taguchi approach, incorporated into an evolutionary algorithm with many features developed in-house. The method is tested on a UCAV design that simultaneously handles aerodynamics, electromagnetics and maneuverability. Results demonstrate that the method has considerable potential.
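A minimal sketch of how a Taguchi-style robustness measure can sit inside an evolutionary loop follows. The in-house algorithm and the aerodynamic/electromagnetic models are not public, so the objective, noise array, and algorithm settings below are illustrative assumptions: each candidate is evaluated over a small outer array of noise levels and ranked by the smaller-the-better signal-to-noise (S/N) ratio rather than by nominal fitness.

```python
# Illustrative sketch only: a Taguchi-style S/N robustness measure embedded
# in a toy evolutionary loop. The drag-like objective, the noise levels, and
# the algorithm settings are assumptions, not the in-house code of the paper.
import numpy as np

rng = np.random.default_rng(42)
NOISE_LEVELS = np.array([-0.05, 0.0, 0.05])   # outer array: Mach/alpha offsets

def performance(x, noise):
    """Toy 'drag' objective of a 2-variable design, perturbed by noise."""
    m, a = x[0] + noise, x[1] + noise
    return (m - 0.8) ** 2 + 0.5 * (a - 2.0) ** 2 + 0.02

def sn_smaller_the_better(x):
    """Taguchi S/N ratio for smaller-the-better responses: -10 log10(mean y^2).
    Maximizing S/N favors designs that stay good across the noise array."""
    y = np.array([performance(x, n) for n in NOISE_LEVELS])
    return -10.0 * np.log10(np.mean(y ** 2))

# Minimal (mu + lambda) evolutionary loop ranked by S/N instead of nominal fitness
pop = rng.uniform([0.5, 0.0], [1.0, 4.0], size=(20, 2))
for gen in range(100):
    children = pop + rng.normal(0.0, 0.05, pop.shape)   # Gaussian mutation
    union = np.vstack([pop, children])
    fitness = np.array([sn_smaller_the_better(x) for x in union])
    pop = union[np.argsort(fitness)[::-1][:20]]         # keep highest S/N
best = pop[0]
print(f"robust design: Mach~{best[0]:.3f}, alpha~{best[1]:.2f} deg, "
      f"S/N={sn_smaller_the_better(best):.1f} dB")
```

Ranking by S/N rather than by a single nominal evaluation is what turns the evolutionary search into a robust one: designs that are good on average but fragile under perturbation score poorly.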
Abstract:
Urbanisation significantly changes the characteristics of a catchment as natural areas are transformed to impervious surfaces such as roads, roofs and parking lots. The increased fraction of impervious surfaces leads to changes in stormwater runoff characteristics, whilst a variety of anthropogenic activities common to urban areas generate a range of pollutants such as nutrients, solids and organic matter. These pollutants accumulate on catchment surfaces and are removed and transported by stormwater runoff, thereby contributing pollutant loads to receiving waters. In summary, urbanisation influences the stormwater characteristics of a catchment, including hydrology and water quality. Due to the growing recognition that stormwater pollution is a significant environmental problem, the implementation of mitigation strategies to improve the quality of stormwater runoff is becoming increasingly common in urban areas. A scientifically robust stormwater quality treatment strategy is an essential requirement for effective urban stormwater management. The efficient design of treatment systems is closely dependent on the state of knowledge of the primary factors influencing stormwater quality. In this regard, stormwater modelling outcomes provide designers with important guidance and datasets which significantly underpin the design of effective stormwater treatment systems. Therefore, the accuracy of modelling approaches and the reliability of modelling outcomes are of particular concern. This book discusses the inherent complexity and key characteristics of urban hydrology and stormwater quality, based on the influence exerted by a range of rainfall and catchment characteristics. A comprehensive field sampling and testing programme in relation to pollutant build-up, an urban catchment monitoring programme in relation to stormwater quality, and the outcomes of advanced statistical analyses provided the platform for the knowledge creation. Two case studies and two real-world applications are discussed to illustrate the translation of the knowledge created into practical use, in relation to the role of rainfall and catchment characteristics in urban stormwater quality. An innovative rainfall classification based on stormwater quality was developed to support the effective and scientifically robust design of stormwater treatment systems. Underpinned by the rainfall classification methodology, a reliable approach for design rainfall selection is proposed in order to optimise stormwater treatment based on both stormwater quality and quantity. This is a paradigm shift from the common approach, in which stormwater treatment systems are designed based solely on stormwater quantity data. Additionally, how pollutant build-up and stormwater runoff quality vary with a range of catchment characteristics was also investigated. Based on the study outcomes, it can be concluded that the use of only a limited number of catchment parameters, such as land use and impervious surface percentage, as is the case in current modelling approaches, could result in appreciable error in water quality estimation. Influential factors which should be incorporated into modelling in relation to catchment characteristics should also include urban form and impervious surface area distribution.
The knowledge created through the research investigations discussed in this monograph is expected to make a significant contribution to engineering practice, including hydrologic and stormwater quality modelling, stormwater treatment design and urban planning, as the study outcomes provide practical approaches and recommendations for urban stormwater quality enhancement. Furthermore, this monograph demonstrates how fundamental knowledge of stormwater quality processes can be translated into guidance on engineering practice, illustrates the comprehensive application of multivariate data analysis techniques, and offers a paradigm for the integrative use of computer models and mathematical models to derive practical outcomes.
Abstract:
An encroaching built environment with increased fault current levels demands a robust design approach and prolonged, improved performance of the earth grid. With this in mind, the aim of the project was to perform a sensitivity analysis of the earth grid and an earthing performance evaluation with graphene-coated conductors. Subsequent to these, a conceptual design to continuously monitor the performance of the earth grid was developed. In this study, earth grid design standards were compared to evaluate their appropriate use in determining the safety condition. A process to grow a thin film of graphene on the surface of cylindrical copper rods was developed to evaluate earthing performance in terms of conductivity and corrosion susceptibility.
Abstract:
In this paper, we consider robust joint designs of the relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The second model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes the total relay transmit power under constraints on the MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed designs through simulations.
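For orientation, the two error models have standard forms in the robust transceiver design literature; the notation below is assumed rather than taken verbatim from the paper, with H the true channel, Ĥ the relay's estimate, and Δ the error:

```latex
% Assumed notation, standard in robust transceiver design:
H = \hat{H} + \Delta
% SE model (channel-estimation errors): design for average performance
\operatorname{vec}(\Delta) \sim \mathcal{CN}\!\left(0, \sigma_e^{2} I\right)
% NBE model (quantization errors): worst-case design over an uncertainty set
\|\Delta\|_F \le \epsilon
```

Under the SE model the MSE objectives are averaged over the Gaussian error statistics, while under the NBE model the constraints must hold for every Δ in the ball, which leads to the worst-case formulations mentioned above.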
Abstract:
We demonstrate a non-contact technique to apply calibrated and localized forces in the micro-Newton to milli-Newton range using an air microjet. An electromagnetically actuated diaphragm controlled by a signal generator is used to generate the air microjet. With a nozzle diameter of 150 µm, the microjet diameter was maintained to a maximum of 1 mm at a distance of 5 mm from the nozzle. The force generated by the microjet was measured using a commercial force sensor to determine the velocity profile of the jet. Axial flow velocities of up to 25 m s⁻¹ were obtained at distances as long as 6 mm. The microjet exerted a force of up to 1 µN on a polydimethylsiloxane (PDMS) micropillar (50 µm in diameter, 157 µm in height) and 415 µN on a PDMS membrane (3 mm in diameter, 28 µm thick). We also demonstrate that, from a distance of 6 mm, our microjet can exert a peak pressure of 187 Pa with a total force of about 84 µN on a flat surface at an operating voltage of 8 V. Out-of-cleanroom fabrication and a robust design make this system cost-effective and durable.
Abstract:
The paper reports the results of a high-quality pulse source incorporating a gain-switched laser diode followed by a novel compact two-cascade fibre compression scheme. The pulse compression scheme incorporates a dispersive delay line and a nonlinear pulse compressor based on a dispersion-imbalanced fibre loop mirror (DILM). We analyse and demonstrate, for the first time, a significant improvement of the loop performance by means of chirped-pulse switching. As a result, the DILM provides high-quality nonlinear pulse compression as well as rejection of the nonsoliton component. In the experiment, 20 ps pulses from a gain-switched laser diode are compressed to a duration of 300 fs at repetition rates in the range 70 MHz to 10 GHz. The pulses are pedestal-free and transform-limited. Spectral filtering of the output signal by means of a bandpass filter results in the generation of wavelength-tuneable picosecond pulses with a duration defined by the filter bandwidth. Alternatively, signal filtering by an arrayed waveguide grating (AWG) results in multichannel picosecond pulse generation for WDM and OTDM applications. The pulse source is built from standard components and is compact and of potentially robust design.
Abstract:
Lean premixed prevaporized (LPP) technology has been widely used in the new generation of gas turbines, in which reduced emissions are a priority. However, such combustion systems are susceptible to damage from self-excited oscillations. Feedback control provides a way of preventing such dynamic instabilities. A flame dynamics assumption is proposed for a recently developed unsteady heat release model. The robust design technique ℋ∞ loop-shaping is applied for the controller design, and the performance of the controller is confirmed by simulations of the closed-loop system. The integral quadratic constraints (IQC) method is employed to prove the stability of the closed-loop system.
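For context, ℋ∞ loop-shaping is usually meant in the Glover-McFarlane sense; the sketch below uses standard (assumed) notation and one common sign convention, not expressions taken from the paper. The nominal plant G is shaped with weights W₁, W₂ into G_s = W₂GW₁, and a controller K is sought that robustly stabilizes G_s against normalized coprime-factor uncertainty:

```latex
% Standard Glover-McFarlane loop-shaping criterion (notation and feedback
% sign convention assumed): maximize the stability margin b(G_s, K).
b(G_s, K) \;=\;
\left\| \begin{bmatrix} I \\ K \end{bmatrix}
(I - G_s K)^{-1} \begin{bmatrix} I & G_s \end{bmatrix} \right\|_\infty^{-1},
\qquad K_{\mathrm{final}} = W_1 K W_2
```

A larger margin b(G_s, K) tolerates larger coprime-factor perturbations of the shaped plant, and a separate analysis step, here the IQC method, then certifies stability of the closed loop with the flame dynamics.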
Abstract:
Since their introduction in the 1950s, marine outfalls with diffusers have been prone to saline intrusion, a process in which seawater ingresses into the outfall. This can greatly reduce the dilution and subsequent dispersion of the wastewater discharged, sometimes resulting in serious deterioration of coastal water quality. Although long aware of the difficulties posed by saline intrusion, engineers still lack satisfactory methods for its prediction and robust design methods for its alleviation. However, with recent developments in numerical methods and computer power, it has been suggested that commercially available computational fluid dynamics (CFD) software may be a useful aid in combating this phenomenon by improving understanding through synthesising likely behaviour. This document reviews current knowledge on saline intrusion and its implications, and then outlines a model-scale investigation of the process undertaken at Queen's University Belfast, using both physical and CFD methods. Results are presented for a simple outfall configuration incorporating several outlets. The features observed agree with general observations from full-scale marine outfalls, and quantify the intricate internal flow mechanisms associated with saline intrusion. The two-dimensional numerical model was found to represent saline intrusion, but in a qualitative manner not yet adequate for design purposes. Specific areas requiring further development were identified. The ultimate aim is to provide a reliable, practical and cost-effective means by which engineers can minimise saline intrusion through optimised outfall design.
Abstract:
This paper describes a randomised controlled trial (RCT) investigation of the added value of systemic family therapy (SFT) over individually focused cognitive behavioural therapy (CBT) for families in which one or more members has suffered trauma and been referred to a community-based psychotherapy centre. The results illustrate how an apparently robust design can be confounded by high attrition rates, a low average number of therapeutic sessions and poor protocol adherence. The paper highlights a number of general and specific lessons regarding the resources and processes involved that can act as a model for those planning to undertake studies of this type and scope. A key message is that the challenges of conducting RCTs in ‘real world’ settings should not be underestimated. The wider implications in relation to the place of RCTs within the creation of the evidence base for complex psycho-social interventions are discussed, and the current movement towards a phased mixed-methods approach, including the appropriate use of RCTs, which some might argue is a return to the original vision of evidence-based practice (EBP), is affirmed.
Abstract:
The Taguchi method was applied to investigate the optimal operating conditions in the preparation of activated carbon from palm kernel shell, using four control factors: irradiation time, microwave power, concentration of phosphoric acid as the impregnation substance, and impregnation ratio between acid and palm kernel shell. The best combination of the control factors obtained by applying the Taguchi method was a microwave power of 800 W, an irradiation time of 17 min, an impregnation ratio of 2, and an acid concentration of 85%. The noise factor (particle size of the raw material) was considered in a separate outer array and had no effect on the quality of the activated carbon, as confirmed by a t-test. Activated carbon prepared at the optimum combination of control factors had a high BET surface area of 1,473.55 m² g⁻¹ and high porosity. The adsorption equilibrium and kinetic data can be satisfactorily described by the Langmuir isotherm and a pseudo-second-order kinetic model, respectively. The maximum adsorption capacity suggested by the Langmuir model was 1,000 mg g⁻¹.
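The two models named at the end are standard; in the usual (assumed) notation, with q_e and q_t the amounts adsorbed at equilibrium and at time t (mg g⁻¹), C_e the equilibrium concentration, q_m the Langmuir maximum capacity, K_L the Langmuir constant, and k₂ the pseudo-second-order rate constant:

```latex
% Langmuir isotherm and pseudo-second-order kinetics (standard forms;
% symbols assumed rather than taken from the paper)
q_e = \frac{q_m K_L C_e}{1 + K_L C_e}
\qquad\text{and}\qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
```

The reported maximum adsorption capacity of 1,000 mg g⁻¹ corresponds to the fitted q_m in the first equation.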
Abstract:
General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focused on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task-dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space, and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs, is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. Our focus MATRIX design point is based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single-chip MATRIX array can deliver over 10 Gop/s (8-bit ops). On sample image processing tasks, we show that MATRIX yields 10-20x the computational density of conventional processors. Understanding the cost structure of RP-space helps us identify these intermediate architectural points and may provide useful insight more broadly in guiding our continual search for robust and efficient general-purpose computing structures.
Abstract:
For a long time, we believed in the pattern that tropical and southern-hemisphere species have high survival. Recent results have begun to contradict this pattern, indicating the need for further studies. Despite the advanced state of the study of bird population parameters, little is known about their variation throughout the year and the factors affecting them. Reproduction, for example, is one factor that may alter adult survival rates, because during this process the breeding pair diverts resources from self-maintenance to the maintenance of offspring, making itself more susceptible to disease and predation. The aim of this study was to estimate the survival and population size of a Central and South American passerine, Tachyphonus rufus (Boddaert, 1783), testing hypotheses about the factors that define these parameters. We collected data between Nov/2010 and Aug/2012 in a 12 ha plot in a fragment of Atlantic Forest in northeastern Brazil. We used capture-mark-recapture methods to generate estimates using the Closed Robust Design model in the program MARK. We generated multi-state models to test some assumptions inherent to the Closed Robust Design. The influence of covariates (time, rain and reproductive cycle) and the effect of transient individuals were measured. The capture, recapture and apparent survival parameters were defined by the reproductive cycle, while temporary dispersal was influenced by rain. The estimates showed a higher apparent survival during the non-breeding period (92% ± 1%) than during breeding (40% ± 9%), revealing a cost of reproduction and suggesting a trade-off between surviving and reproducing. The low annual survival observed (34%) did not corroborate the pattern of high rates expected for a tropical bird. The largest population size, estimated at 56 individuals in Nov/11, is explained by high recruitment of juveniles, while the lowest, 10 individuals, was observed in May/12, probably as a result of a massive influx of competitor species. Results from this study add to the growing literature on the life history of Neotropical species. We encourage studies like this, especially in Brazil, where information is scarce, and suggest that covariates related to habitat quality and environmental changes should be tested, so that increasingly reliable models can be generated.