842 results for optimization algorithm
Abstract:
Most current research on mounting systems has focused on low/medium-power engines; little work has been reported for high-speed, heavy-duty engines, whose vibration characteristics exhibit significantly greater complexity and uncertainty. In this work, a general dynamics model was first established to describe the dynamic properties of a mounting system with an arbitrary number of mounts. This model was then employed for the optimization of the mounting system. A modified Powell conjugate direction method was developed to improve optimization efficiency. Based on the optimization results obtained from the theoretical model, a mounting system was constructed for a V6 diesel engine. Experimental measurements of the vibration intensity of the mounting system show excellent agreement with the theoretical calculations, indicating the validity of the model. This dynamics model opens a new avenue for assessing and designing mounting systems for high-speed, heavy-duty engines. Moreover, the delineated dynamics model and optimization algorithm should find wide application in other mounting systems, such as power transmission systems, which usually have various uncertain mounts.
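A minimal sketch of the kind of mount optimization this abstract describes, using SciPy's standard Powell conjugate direction method as a stand-in for the authors' modified variant; the six-mount stiffness objective below is a hypothetical placeholder, not the paper's dynamics model.

```python
# Sketch: optimizing six mount stiffnesses with Powell's conjugate
# direction method (SciPy's implementation). The objective is a made-up
# stand-in for the paper's vibration-decoupling criterion.
import numpy as np
from scipy.optimize import minimize

def decoupling_objective(k):
    # Hypothetical placeholder: penalize deviation of mount stiffnesses
    # from a target dynamic behaviour (the paper's actual objective is
    # built from its general mounting-system dynamics model).
    target = np.array([1.2e5, 0.9e5, 1.5e5, 1.1e5, 1.0e5, 1.3e5])
    return np.sum(((k - target) / target) ** 2)

k0 = np.full(6, 1.0e5)  # initial stiffness guess, N/m
res = minimize(decoupling_objective, k0, method="Powell",
               options={"xtol": 1e-8, "ftol": 1e-8})
print(res.x, res.fun)
```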
Abstract:
The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle to assist beef producers in meeting market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for energy needs for maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Huntsville, AL, USA: Xcellon). The DNA and fat deposition parameter coefficients were found to be important determinants of model function; the DNA parameter coefficients for days on feed >100 days and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients with 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient did indicate a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to assist the beef industry in meeting market specifications.
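A hedged sketch of the estimation step: a single first-order fat-deposition rate constant is fitted to an initial and a final observation, with SciPy's least_squares standing in for the generalized NL2SOL algorithm (the paper fits 4 depot coefficients). All numbers are illustrative, not Davis Growth Model values.

```python
# Sketch: fit a first-order fat-deposition rate constant k in
# df/dt = k*(f_max - f) from two observations, by nonlinear least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.array([0.0, 150.0])   # days on feed
f_obs = np.array([4.0, 28.0])    # observed depot fat, kg (illustrative)
f_max = 35.0                     # assumed asymptotic depot fat, kg

def residuals(theta):
    k = theta[0]
    sol = solve_ivp(lambda t, f: k * (f_max - f), (0.0, 150.0),
                    [f_obs[0]], t_eval=t_obs)
    return sol.y[0] - f_obs

fit = least_squares(residuals, x0=[0.01], bounds=(0.0, 1.0))
print("estimated rate constant:", fit.x[0])
```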
Abstract:
We describe a novel approach to treatment planning for focal brachytherapy that uses a biologically based inverse optimization algorithm and biological imaging to target an ablative dose at known regions of significant tumour burden and a lower, therapeutic dose at low-risk regions.
Abstract:
A diffusion/replacement model for new consumer durables, designed to be used as a long-term forecasting tool, is developed. The model simulates new demand as well as replacement demand over time. The model is called DEMSIM and is built upon a counteractive adoption model specifying the basic forces affecting the adoption behaviour of individual consumers. These forces are the promoting forces and the resisting forces. The promoting forces are further divided into internal and external influences. These influences are operationalized within a multi-segmental diffusion model generating the adoption behaviour of the consumers in each segment as an expected value. This diffusion model is combined with a replacement model built upon the same segmental structure as the diffusion model. This model generates, in turn, the expected replacement behaviour in each segment. To be able to use DEMSIM as a forecasting tool in the early stages of a diffusion process, estimates of the model parameters are needed as soon as possible after product launch. However, traditional statistical techniques are not very helpful for estimating such parameters in the early stages of a diffusion process. To enable early parameter calibration, an optimization algorithm is developed by which the main parameters of the diffusion model can be estimated on the basis of very few sales observations. The optimization is carried out in iterative simulation runs. Empirical validations using the optimization algorithm reveal that the diffusion model performs well in early long-term sales forecasts, especially when it comes to the timing of future sales peaks.
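For flavour, a sketch of early parameter calibration from very few sales observations: a simple Bass-type adoption step stands in for DEMSIM's multi-segmental model (which the abstract does not specify in closed form), with p and q echoing the external and internal promoting forces. All numbers are invented.

```python
# Sketch: calibrate external (p) and internal (q) influence parameters
# of a Bass-type diffusion simulation against three early sales points.
import numpy as np
from scipy.optimize import least_squares

sales = np.array([120.0, 310.0, 640.0])  # first three periods (made up)
m = 50_000.0                             # assumed market potential

def simulate(p, q, periods):
    cum, out = 0.0, []
    for _ in range(periods):
        adopters = (p + q * cum / m) * (m - cum)  # one simulation step
        cum += adopters
        out.append(adopters)
    return np.array(out)

fit = least_squares(lambda th: simulate(th[0], th[1], len(sales)) - sales,
                    x0=[0.003, 0.3], bounds=([1e-6, 1e-6], [0.5, 1.0]))
print("p, q =", fit.x)
```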
Abstract:
An energy-based variational approach is used for structural dynamic modeling of the IPMC (Ionic Polymer Metal Composite) flapping wing. Dynamic characteristics of the wing are analyzed using numerical simulations. Starting with the initial design, critical parameters that influence the performance of the wing are identified through parametric studies. An optimization study is performed to obtain improved flapping actuation of the IPMC wing. It is shown that the optimization algorithm leads to a flapping wing with dimensions similar to those of the wing of the dragonfly Aeshna multicolor. An unsteady aerodynamic model based on modified strip theory is used to obtain the aerodynamic forces. It is found that the IPMC wing generates sufficient lift to support its own weight and carry a small payload. It is therefore a potential candidate for the flapping wing of micro air vehicles.
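A very rough quasi-steady strip-theory toy (the paper uses a modified unsteady strip theory) showing how sectional lift might be integrated over the span of a flapping wing; all dimensions, speeds, and the fixed pitch angle are invented for illustration only.

```python
# Toy quasi-steady strip theory: integrate sectional lift over spanwise
# strips of a plunging wing across one flapping cycle.
import numpy as np

rho, U = 1.225, 2.0                    # air density (kg/m^3), forward speed (m/s)
span, chord = 0.08, 0.02               # semi-span and chord (m), invented
freq, amp = 10.0, np.deg2rad(20.0)     # flapping frequency (Hz) and amplitude
pitch = np.deg2rad(8.0)                # fixed geometric pitch, invented
omega = 2.0 * np.pi * freq

y = np.linspace(0.0, span, 60)         # spanwise strip positions
dy = y[1] - y[0]
t = np.linspace(0.0, 1.0 / freq, 200)  # one flapping cycle

mean_lift = 0.0
for ti in t:
    v = amp * omega * np.cos(omega * ti) * y   # plunge velocity of each strip
    alpha = pitch - np.arctan2(v, U)           # effective angle of attack
    cl = 2.0 * np.pi * np.sin(alpha)           # thin-airfoil sectional lift
    q = 0.5 * rho * (U**2 + v**2)              # local dynamic pressure
    mean_lift += np.sum(q * chord * cl * dy) / len(t)

print("cycle-averaged lift per wing (N):", mean_lift)
```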
Abstract:
In this paper, we consider robust joint linear precoder/receive filter design for multiuser multi-input multi-output (MIMO) downlink that minimizes the sum mean square error (SMSE) in the presence of imperfect channel state information (CSI). The base station is equipped with multiple transmit antennas, and each user terminal is equipped with multiple receive antennas. The CSI is assumed to be perturbed by estimation error. The proposed transceiver design is based on jointly minimizing a modified function of the MSE, taking into account the statistics of the estimation error under a total transmit power constraint. An alternating optimization algorithm, wherein the optimization is performed with respect to the transmit precoder and the receive filter in an alternating fashion, is proposed. The robustness of the proposed algorithm to imperfections in CSI is illustrated through simulations.
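A single-user, perfect-CSI sketch of the alternating update: the MMSE receive filter is computed for a fixed precoder, then the precoder for a fixed filter, with rescaling to satisfy the power constraint. The paper's robust multiuser design additionally averages over the CSI-error statistics; none of that is reproduced here.

```python
# Sketch: alternating MMSE transceiver design for a single MIMO link.
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Ns, P, sigma2 = 4, 4, 2, 1.0, 0.1
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
F = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))
F *= np.sqrt(P) / np.linalg.norm(F)           # enforce transmit power

for _ in range(50):
    HF = H @ F
    # MMSE (Wiener) receive filter for the fixed precoder
    W = np.linalg.solve(HF @ HF.conj().T + sigma2 * np.eye(Nr), HF)
    # unconstrained MMSE precoder for the fixed filter, then power rescaling
    G = H.conj().T @ W
    F = np.linalg.solve(G @ G.conj().T + 1e-9 * np.eye(Nt), G)
    F *= np.sqrt(P) / np.linalg.norm(F)

mse = np.linalg.norm(W.conj().T @ H @ F - np.eye(Ns)) ** 2
print("residual SMSE proxy:", mse)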
Abstract:
This paper discusses the design and experimental verification of a geometrically simple logarithmic weir. The weir consists of an inward trapezoidal weir of slope 1 horizontal to n vertical, or 1 in n, over two sectors of a circle of radius R and depth d, separated by a distance 2t. The weir parameters are optimized using a numerical optimization algorithm. The discharge through this weir is proportional to the logarithm of head measured above a fixed reference plane for all heads in the range 0.23R ≤ h ≤ 3.65R, within a maximum deviation of ±2% from the theoretical discharge. Experiments with two weirs show excellent agreement with the theory by giving a constant average coefficient of discharge of 0.62. The application of this weir to the field of irrigation, environmental, and chemical engineering is highlighted.
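Stated compactly, the rating law above takes the following hedged form, where the proportionality constant K and reference head h_0 are illustrative symbols fixed by the optimized weir geometry, not values taken from the paper:

```latex
% Hedged restatement of the logarithmic head-discharge law described
% above; K and h_0 are illustrative symbols, not values from the paper.
Q \;=\; C_d \, K \,\ln\!\left(\frac{h}{h_0}\right),
\qquad 0.23R \le h \le 3.65R, \qquad C_d \approx 0.62 .
```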
Abstract:
In this paper we propose a new algorithm for learning polyhedral classifiers. In contrast to existing methods for learning polyhedral classifiers, which solve a constrained optimization problem, our method solves an unconstrained one. Our method is based on a logistic-function model for the posterior probability. We propose an alternating optimization algorithm, namely SPLA1 (Single Polyhedral Learning Algorithm 1), which maximizes the log-likelihood of the training data to learn the parameters. In SPLA2, we extend our method to make it independent of any user-specified parameter (e.g., the number of hyperplanes required to form a polyhedral set). We show the effectiveness of our approach with experiments on various synthetic and real-world datasets and compare it with a standard decision tree method (OC1) and a constrained-optimization-based method for learning polyhedral sets.
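A rough sketch of the modelling idea: the posterior of the positive class is a product of logistic functions, one per hyperplane, trained here by plain gradient ascent on the log-likelihood. SPLA1's actual alternating updates differ in detail, and the data and three-hyperplane setup are invented.

```python
# Sketch: product-of-sigmoids model for a polyhedral (intersection of
# half-spaces) classifier, trained by maximizing the log-likelihood.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (400, 2))
# ground truth: intersection of 3 half-spaces (a triangle-like region)
y = ((X[:, 0] > -0.5) & (X[:, 1] > -0.5) & (X[:, 0] + X[:, 1] < 0.8)).astype(float)
Xb = np.hstack([X, np.ones((400, 1))])      # append bias term
W = 0.1 * rng.standard_normal((3, 3))       # 3 hyperplanes x 3 coefficients

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    S = sigma(Xb @ W)                                 # per-hyperplane probabilities
    p = np.clip(S.prod(axis=1), 1e-6, 1 - 1e-6)       # inside iff all agree
    coef = y - (1.0 - y) * p / (1.0 - p)              # d(loglik)/dz before (1-S)
    grad = Xb.T @ (coef[:, None] * (1.0 - S)) / len(y)
    W += 0.5 * grad                                   # gradient ascent step

p = sigma(Xb @ W).prod(axis=1)
print("train accuracy:", float(((p > 0.5) == (y == 1)).mean()))
```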
Abstract:
We present in this paper a new algorithm based on Particle Swarm Optimization (PSO) for solving Dynamic Single-Objective Constrained Optimization (DCOP) problems. We have modified several parameters of the original particle swarm optimization algorithm, introducing new types of particles for local search and for detecting changes in the search space. The algorithm is tested on a known benchmark set, and the results are compared with those of other contemporary works. We demonstrate the convergence properties using convergence graphs and also illustrate changes to the current benchmark problems that give a more realistic correspondence to practical real-world problems.
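A compact PSO skeleton with a sentry-style re-evaluation to detect changes in a dynamic objective, illustrating the kind of modification described; the paper's special local-search particle types and constraint handling are omitted, and the shifting sphere function is a made-up test problem.

```python
# Sketch: basic PSO on a dynamic objective, with one "sentry" memory
# re-evaluated each iteration to detect environment changes.
import numpy as np

rng = np.random.default_rng(2)
dim, n, w, c1, c2 = 2, 30, 0.7, 1.5, 1.5
shift = np.zeros(dim)                   # environment state (moves over time)

def f(x):                               # dynamic sphere function (illustrative)
    return np.sum((x - shift) ** 2, axis=-1)

X = rng.uniform(-5.0, 5.0, (n, dim)); V = np.zeros((n, dim))
pbest, pval = X.copy(), f(X)
g = pbest[pval.argmin()]

for t in range(200):
    if t % 50 == 0 and t > 0:
        shift += 1.0                    # the environment changes
    if f(pbest[0]) != pval[0]:          # sentry detects the change
        pval = f(pbest)                 # re-evaluate outdated memory
        g = pbest[pval.argmin()]
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X += V
    val = f(X)
    better = val < pval
    pbest[better], pval[better] = X[better], val[better]
    g = pbest[pval.argmin()]

print("best found:", g, "true optimum:", shift)
```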
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
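To convey the flavour of the relaxation idea (not REM's exact formulation), here is an annealed EM sketch for a two-component 1-D Gaussian mixture, in which responsibilities are tempered by a parameter beta raised gradually to 1, a standard device for escaping poor local likelihood maxima.

```python
# Sketch: deterministic-annealing-style EM for a 1-D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])
mu, s, pi = np.array([-0.1, 0.1]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

for beta in np.linspace(0.2, 1.0, 40):            # annealing schedule
    logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * s**2)
            - (x[:, None] - mu) ** 2 / (2 * s**2))
    logp = beta * logp                            # tempered E-step
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)             # responsibilities
    nk = r.sum(axis=0)                            # standard M-step
    mu = (r * x[:, None]).sum(axis=0) / nk
    s = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print("means:", mu)
```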
The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model lead to new principled algorithms for smoothing and clustering of spike data.
Abstract:
With continuing advances in CMOS technology, feature sizes of modern Silicon chip-sets have gone down drastically over the past decade. In addition to desktops and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels leading to billions of transistors co-existing on a single chip, it also makes these Silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly due to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF or millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high performance RF/mm-wave Silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting edge processes are geared towards digital system implementation and as such there is little model-to-hardware correlation at RF frequencies.
All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error Silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique that attempts to counter the detrimental effects of these variations, thereby improving both performance and yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate it back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To effectively demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.
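A conceptual sketch of such a sense-and-actuate loop: a shrinking coordinate search over two actuator knobs maximizes a sensed figure of merit. The sensor model is a made-up stand-in for on-chip measurements, and the dissertation's actual on-chip optimizer is not specified here.

```python
# Sketch: closed-loop "self-healing" as a coordinate search over
# actuator knobs, driven by a sensed performance metric.
import numpy as np

rng = np.random.default_rng(4)
process_shift = rng.normal(0, 0.3, 2)   # unknown variation to heal against

def sense(knobs):                       # sensed figure of merit (illustrative)
    return -np.sum((knobs - process_shift) ** 2)

knobs, step = np.zeros(2), 0.5
best = sense(knobs)
while step > 1e-3:                      # shrinking coordinate search
    improved = False
    for i in range(2):
        for d in (+step, -step):
            trial = knobs.copy(); trial[i] += d
            v = sense(trial)
            if v > best:
                knobs, best, improved = trial, v, True
    if not improved:
        step *= 0.5

print("healed knob settings:", knobs, "residual error:", -best)
```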
We demonstrate a high-power mm-wave segmented power mixer array based transmitter architecture that is capable of generating high-speed and non-constant envelope modulations at higher efficiencies compared to existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully-integrated self-healing in the context of another mm-wave power amplifier, where measurements were performed across several chips, showing significant improvements in performance as well as reduced variability in the presence of process variations and load impedance mismatch, as well as catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.
Abstract:
To improve the application performance of lithography machines, a high-performance algorithm for optimizing the distribution of exposure fields on a silicon wafer is proposed. The optimal exposure field size is calculated from the chip dimensions so that it comes as close as possible to the maximum exposure field size the lithography machine provides, improving the utilization of the exposure system. A staggered distribution of exposure fields is introduced, which reduces the overlap of exposure fields at the wafer edge and improves lithography throughput. Two optimization schemes, throughput-first and yield-first, are established to optimize throughput and yield jointly. Taking the parameters of an actual chip product as an example, the algorithm was applied to the exposure process. Under the throughput-first criterion, the number of exposure fields was reduced by 10% while the number of interior fields remained essentially unchanged, improving lithography throughput while ensuring yield; under the yield-first criterion, the number of interior fields increased by 10% and the total number of fields also decreased somewhat, improving the reliability of the lithography yield.
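The first step above (choosing an exposure field that packs an integer grid of dice while approaching the machine's maximum field) reduces to simple arithmetic; all dimensions below are made-up examples, not values from the paper.

```python
# Sketch: pick the exposure field as the largest integer grid of dice
# that fits inside the machine's maximum field, and report utilization.
die_w, die_h = 8.0, 6.0    # chip size, mm (illustrative)
max_w, max_h = 26.0, 33.0  # machine's maximum exposure field, mm (illustrative)

cols = int(max_w // die_w)  # dice per field, horizontally
rows = int(max_h // die_h)  # dice per field, vertically
field_w, field_h = cols * die_w, rows * die_h
utilization = (field_w * field_h) / (max_w * max_h)
print(f"{cols}x{rows} dice per field -> {field_w}x{field_h} mm, "
      f"field utilization {utilization:.0%}")
```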
Abstract:
Two-dimensional coded arrays are the key component of coded aperture imaging and directly determine the quality of the reconstructed tomographic images. At present there is no ideal two-dimensional array that offers both a high quantum collection efficiency and good tomographic imaging properties. A new approach, the DIviding RECTangles (DIRECT) global optimization algorithm, is adopted to design two-dimensional arrays; this algorithm is suited to solving multivariable "black-box" problems and converges faster than other optimization algorithms. The aim is to design a class of two-dimensional coded arrays whose autocorrelation function has a maximum sidelobe value of 1 while achieving the maximum fill rate. Theoretical analysis and experimental results show that the two-dimensional arrays found by this algorithm offer both a high quantum collection efficiency and good tomographic imaging properties.
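The design criterion above is straightforward to evaluate for a candidate array: the periodic autocorrelation (via the Wiener-Khinchin relation) gives the peak, the worst off-peak sidelobe, and the fill. A toy check:

```python
# Sketch: evaluate the periodic-autocorrelation peak and worst sidelobe
# of a candidate 2-D binary coded array.
import numpy as np

def periodic_autocorr(a):
    # Wiener-Khinchin: periodic autocorrelation via the 2-D FFT
    A = np.fft.fft2(a)
    return np.round(np.real(np.fft.ifft2(A * np.conj(A)))).astype(int)

a = np.array([[1, 0, 1],   # toy 3x3 candidate array
              [0, 1, 0],
              [1, 0, 0]])
c = periodic_autocorr(a)
peak, sidelobe = c[0, 0], np.delete(c.flatten(), 0).max()
print("fill:", a.sum(), "peak:", peak, "max sidelobe:", sidelobe)
```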
Abstract:
Aperture patterns play a vital role in coded aperture imaging (CAI) applications. In recent years, many approaches have been presented to design optimum or near-optimum aperture patterns. Uniformly redundant arrays (URAs) are undoubtedly the most successful, owing to the constant sidelobes of their periodic autocorrelation function. Unfortunately, the existing methods can only be used to design URAs with a limited number of array sizes and fixed autocorrelation sidelobe-to-peak ratios. In this paper, we present a novel method to design more flexible URAs. Our approach is based on a searching program driven by DIRECT, a global optimization algorithm. We transform the design question into a mathematical model, based on the DIRECT algorithm, which is advantageous for computer implementation. By changing the determinative conditions, we obtain two types of URAs: the filled URAs, which can be constructed by existing methods, and the sparse URAs, which, as far as we know, have never been reported by other authors. Finally, we carry out an experiment to demonstrate the imaging performance of the sparse URAs.
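A hedged sketch of driving such a search with SciPy's DIRECT implementation as a stand-in for the authors' program: continuous variables in [0, 1] are thresholded to a binary array and scored by its worst periodic-autocorrelation sidelobe and its fill rate. The encoding and the cost weights are invented for illustration.

```python
# Sketch: DIRECT-driven search for a small coded array with flat
# autocorrelation sidelobes and a high fill rate.
import numpy as np
from scipy.optimize import direct

SIZE = 5  # toy array size

def cost(x):
    a = (x.reshape(SIZE, SIZE) > 0.5).astype(float)
    A = np.fft.fft2(a)
    c = np.real(np.fft.ifft2(A * np.conj(A)))
    sidelobe = np.delete(c.flatten(), 0).max()
    # favour flat sidelobes (target 1) and a high fill rate
    return abs(sidelobe - 1.0) - 0.1 * a.mean()

res = direct(cost, bounds=[(0.0, 1.0)] * SIZE * SIZE, maxfun=2000)
print("best cost found:", res.fun)
```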
Abstract:
In this paper, we propose a novel three-dimensional imaging method in which the object is captured by a coded cameras array (CCA) and computationally reconstructed as a series of longitudinal layered surface images of the object. The distribution of the cameras in the array, named the code pattern, is crucial to the fidelity of the reconstructed images when correlation decoding is used. We use the DIRECT global optimization algorithm to design code patterns that possess the proper imaging properties. We have conducted preliminary experiments to verify and test the performance of the proposed method with a simple discontinuous object and a small-scale CCA comprising nine cameras. After procedures such as capturing, photograph integration, computational reconstruction, and filtering, we obtained reconstructed longitudinal layered surface images of the object with a high signal-to-noise ratio. The results of the experiments show that the proposed method is feasible and promising for use in fields such as remote sensing and machine vision.
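A minimal FFT-based toy of the correlation-decoding step: a point-like layer convolved with a stand-in code pattern is recovered by cross-correlating the integrated image with the same pattern; the peak of the correlation locates the point.

```python
# Sketch: correlation decoding of a coded-aperture-style integrated image.
import numpy as np

rng = np.random.default_rng(5)
code = (rng.random((9, 9)) > 0.6).astype(float)  # stand-in code pattern
layer = np.zeros((9, 9)); layer[4, 4] = 1.0      # point-like object layer

# forward model: integrated image = object layer circularly convolved with code
img = np.real(np.fft.ifft2(np.fft.fft2(layer) * np.fft.fft2(code)))
# correlation decoding: cross-correlate the image with the code pattern
rec = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(code))))
print("recovered peak at:", np.unravel_index(rec.argmax(), rec.shape))
```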