931 results for Optimal
Abstract:
The conditional nonlinear optimal perturbation (CNOP), a nonlinear generalization of the linear singular vector (LSV), is applied to important problems in the atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities in computational error and cost are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradient (SPG2) algorithm, using a theoretical grassland ecosystem model and the classical Lorenz model as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms, and that the cost of obtaining the CNOP is reduced by using the SQP algorithm. The experiments also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. These results suggest a new approach and algorithm for obtaining the CNOP in large-scale optimization problems.
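The constrained maximization behind the CNOP can be illustrated with a minimal sketch: find the initial perturbation of norm at most delta that maximizes nonlinear growth under a forward-integrated Lorenz-63 model, using a simple projected-gradient ascent with finite-difference gradients. This is a simplified stand-in for the SPG2/SQP/L-BFGS solvers the abstract compares, and every numerical choice below (Euler integration, step sizes, delta) is an illustrative assumption, not the paper's setup.

```python
import math

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def propagate(s, steps=200):
    for _ in range(steps):
        s = lorenz_step(s)
    return s

def growth(x0, dx):
    """Squared norm of the nonlinear growth of perturbation dx."""
    a = propagate(tuple(x0[i] + dx[i] for i in range(3)))
    b = propagate(x0)
    return sum((a[i] - b[i]) ** 2 for i in range(3))

def project(dx, delta):
    """Project dx back onto the constraint ball ||dx|| <= delta."""
    n = math.sqrt(sum(v * v for v in dx))
    return list(dx) if n <= delta else [v * delta / n for v in dx]

def cnop(x0, delta, iters=100, step=1e-3, eps=1e-6):
    dx = [delta / math.sqrt(3.0)] * 3        # start on the constraint ball
    for _ in range(iters):
        base = growth(x0, dx)
        g = []
        for i in range(3):                   # finite-difference gradient
            d = list(dx)
            d[i] += eps
            g.append((growth(x0, d) - base) / eps)
        dx = project([dx[i] + step * g[i] for i in range(3)], delta)
    return dx

dx = cnop((1.0, 1.0, 1.0), delta=0.1)        # CNOP candidate perturbation
```

The projection step is what distinguishes the CNOP computation from an unconstrained optimization: the perturbation is confined to a ball whose radius encodes the admissible initial-error amplitude.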
Abstract:
A model is developed to investigate the trade-offs between the benefits and costs of zooplanktonic diel vertical migration (DVM) strategies. The 'venturous revenue' (VR), a function of environmental factors and the age of the zooplankter, is used as the criterion for optimal trade-offs. During vertical migration, animals are assumed to assess variations in environmental parameters instantaneously and thereby select the optimal behavioral strategy to maximize VR, i.e. taking up as much food as possible at a given risk of mortality. The model is run on a diel time scale (24 h) in four possible scenarios during the animal's life history. The results show that zooplankton can perform normal DVM by balancing optimal food intake against predation risk, with the DVM profile largely modified by the age of the zooplankter.
Abstract:
Based on the hypothesis of self-optimization, we derive four models of biomass spectra and abundance spectra in communities with size-dependent metabolic rates. In Models 1 and 2, the maximum diversity of population abundance across size classes, subject to the constraints of constant mean body mass and constant mean respiration rate, is assumed to be the strategy by which ecosystems organize their size structure. In Models 3 and 4, the organizing strategy is defined as the maximum diversity of biomass across size classes without constraints on mean body mass, subject to a constant mean specific respiration rate of all individuals, i.e. the average specific respiration rate over all individuals of a community or group, which characterizes the mean rate of energy consumption in a community. Models 1 and 2 generate peaked distributions of biomass spectral density, whereas Model 3 generates a flat distribution. In Model 4, the distributions of biomass spectral density and abundance spectral density depend on the Lagrange multiplier lambda_2. When lambda_2 tends to zero or equals zero, the distributions of biomass spectral density and abundance spectral density correspond to those from Model 3. When lambda_2 has a large negative value, the biomass spectrum is similar to the empirical flat biomass spectrum organized in logarithmic size intervals. When lambda_2 > 0, the biomass spectral density increases with body mass and the distribution of abundance spectral density is a unimodal curve. (C) 2001 Elsevier Science B.V. All rights reserved.
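The role of the multiplier lambda_2 can be seen in a standard maximum-diversity formulation. The following is a hedged reconstruction with assumed notation (b_i for biomass shares, r(m_i) for specific respiration as a function of body mass), not the paper's exact equations:

```latex
\max_{\{b_i\}} \; -\sum_i b_i \ln b_i
\quad \text{subject to} \quad
\sum_i b_i = 1, \qquad \sum_i b_i \, r(m_i) = \bar r .
% Stationarity of the Lagrangian
%   L = -\sum_i b_i \ln b_i
%       + \lambda_1 \Big(\sum_i b_i - 1\Big)
%       + \lambda_2 \Big(\sum_i b_i \, r(m_i) - \bar r\Big)
% gives  b_i \propto \exp\!\big(\lambda_2 \, r(m_i)\big),
% so the sign and magnitude of \lambda_2 set the shape of the spectrum.
```

In this form, lambda_2 = 0 removes the respiration constraint and recovers the flat (Model 3) spectrum, matching the limiting behavior described in the abstract.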
Abstract:
A new method is presented for time-optimal trajectory planning and trajectory control of industrial robots. It ensures that the robot hand traverses a prescribed path in Cartesian space in minimum time, subject to bounds on joint displacement, velocity, acceleration, and jerk (second-order acceleration). In this method, each planned joint trajectory takes the form of a quadratic polynomial plus a cosine function, which guarantees continuity not only of each joint's displacement, velocity, and acceleration but also of its jerk. The method can thus both improve the robot's working efficiency and extend its service life. Computer simulations and experiments on a PUMA 560 robot show that the method is correct and effective, providing a good solution to the problem of time-optimal trajectory planning and control of industrial robots under nonlinear kinematic constraints.
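The trajectory form described above, a quadratic polynomial plus a cosine term, can be sketched for a single joint. The coefficients below are illustrative placeholders, not values from the paper; the point is that this form has smooth derivatives up through jerk:

```python
import math

def joint_traj(t, a0, a1, a2, b, w):
    """q(t) = a0 + a1*t + a2*t^2 + b*cos(w*t) and its first three
    derivatives (velocity, acceleration, jerk), all smooth in t."""
    q    = a0 + a1 * t + a2 * t * t + b * math.cos(w * t)
    qd   = a1 + 2.0 * a2 * t - b * w * math.sin(w * t)
    qdd  = 2.0 * a2 - b * w * w * math.cos(w * t)
    qddd = b * w ** 3 * math.sin(w * t)
    return q, qd, qdd, qddd

# Evaluate at one instant with illustrative coefficients.
q, qd, qdd, qddd = joint_traj(0.5, a0=0.0, a1=1.0, a2=-0.5, b=0.2, w=math.pi)
```

In the paper's scheme the coefficients would be chosen per path segment so that these derivatives also match at segment boundaries, which is where the jerk-continuity guarantee comes from.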
Abstract:
A new optimal fuzzy PID controller is presented. It consists of two parts: an online fuzzy inference mechanism and a conventional PID controller with incomplete derivative action. Three adjustable factors, xp, xi, and xd, are introduced into the fuzzy inference mechanism to further modify and optimize the inference results so that the controller achieves optimal control of a given plant. The optimal values of the adjustable factors are determined using the ITAE criterion together with the flexible polyhedron (Nelder-Mead) search algorithm. The controller is applied to a DC motor in an intelligent artificial leg designed by the authors. Simulation results show that the controller design is highly effective and can be used to control a wide variety of plants and processes.
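The "incomplete derivative" part of the controller means the derivative term is passed through a first-order low-pass filter, which tames the noise spikes of a pure differentiator. A minimal sketch, with the fuzzy tuning mechanism omitted and all gains, the plant, and the filter coefficient chosen purely for illustration:

```python
class IncompleteDerivativePID:
    """PID controller whose derivative term is low-pass filtered
    ("incomplete derivative"), so measurement noise is not amplified."""
    def __init__(self, kp, ki, kd, alpha, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = alpha          # derivative filter coefficient in [0, 1)
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.d_filt = 0.0

    def update(self, err):
        self.integral += err * self.dt
        raw_d = (err - self.prev_err) / self.dt
        # Incomplete derivative: exponential smoothing of the raw derivative.
        self.d_filt = self.alpha * self.d_filt + (1.0 - self.alpha) * raw_d
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * self.d_filt

# Illustrative closed loop: first-order plant  y' = (-y + u) / tau.
pid = IncompleteDerivativePID(kp=2.0, ki=1.0, kd=0.1, alpha=0.9, dt=0.01)
y, tau, setpoint = 0.0, 0.5, 1.0
for _ in range(2000):
    u = pid.update(setpoint - y)
    y += (-y + u) / tau * 0.01
```

In the paper, the factors xp, xi, and xd would rescale the fuzzy inference output that adjusts such a controller; here the gains are simply fixed.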
Abstract:
Many problems in early vision are ill posed; edge detection is a typical example. This paper applies regularization techniques to the problem of edge detection. We derive an optimal filter for edge detection whose size is controlled by the regularization parameter $\lambda$ and compare it to the Gaussian filter. A formula relating the signal-to-noise ratio to the parameter $\lambda$ is derived from regularization analysis for the case of small values of $\lambda$. We also discuss the method of Generalized Cross Validation for obtaining the optimal filter scale. Finally, we use our framework to explain two perceptual phenomena: coarsely quantized images become recognizable either by blurring or by adding noise.
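The regularization idea can be sketched in one dimension: smooth noisy data g by minimizing a data term plus $\lambda$ times a first-difference penalty, which reduces to a tridiagonal linear system. This is a generic membrane-regularizer sketch, with Neumann-style boundaries, not the paper's derived filter; $\lambda$ plays the filter-size role the abstract describes.

```python
def regularized_smooth(g, lam):
    """Minimize sum((f - g)^2) + lam * sum((f[i+1] - f[i])^2).
    Setting the gradient to zero yields the tridiagonal system
    (I + lam * L) f = g, with L the 1-D discrete Laplacian,
    solved here by the Thomas algorithm."""
    n = len(g)
    # Tridiagonal coefficients of I + lam * L.
    a = [-lam] * n                    # sub-diagonal
    c = [-lam] * n                    # super-diagonal
    b = [1.0 + 2.0 * lam] * n         # main diagonal
    b[0] = b[-1] = 1.0 + lam          # Neumann-style boundary rows
    # Thomas algorithm: forward sweep ...
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = g[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (g[i] - a[i] * dp[i - 1]) / denom
    # ... then back substitution.
    f = [0.0] * n
    f[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):
        f[i] = dp[i] - cp[i] * f[i + 1]
    return f
```

With lam = 0 the filter is the identity; increasing lam widens the effective smoothing kernel, which is the sense in which $\lambda$ controls filter size.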
Abstract:
We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.
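The core OED heuristic, query where the model is most uncertain, can be sketched for a toy linear learner in two dimensions; this is an illustration of variance-maximizing query selection in the spirit of the cited work, not the paper's neural-network procedure, and all matrices and candidates below are made up:

```python
def predictive_variance(A, x):
    """x^T A^{-1} x for a 2x2 information matrix A: the learner's
    predictive variance at candidate input x (up to the noise scale)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    v0 = inv[0][0] * x[0] + inv[0][1] * x[1]
    v1 = inv[1][0] * x[0] + inv[1][1] * x[1]
    return x[0] * v0 + x[1] * v1

def select_query(A, candidates):
    """OED-style selection: query where predictive variance is largest."""
    return max(candidates, key=lambda x: predictive_variance(A, x))

def observe(A, x):
    """Rank-one update of the information matrix after querying x."""
    return ((A[0][0] + x[0] * x[0], A[0][1] + x[0] * x[1]),
            (A[1][0] + x[1] * x[0], A[1][1] + x[1] * x[1]))

# Having seen only inputs along (1, 0), uncertainty lies along (0, 1).
A = ((5.0, 0.0), (0.0, 1e-3))
q = select_query(A, [(1.0, 0.0), (0.0, 1.0)])
A = observe(A, q)
```

One query along the uncertain direction collapses the variance there, which is the "exploring efficiently and completely" behavior the abstract claims in miniature.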
Abstract:
Small failures should disrupt only a small part of a network. One way to ensure this is to mark the surrounding area as untrustworthy --- circumscribing the failure. This can be done with a distributed algorithm using hierarchical clustering and neighbor relations, and the resulting circumscription is near-optimal for convex failures.
Abstract:
We give a one-pass, O~(m^{1-2/k})-space algorithm for estimating the k-th frequency moment of a data stream for any real k > 2. Together with known lower bounds, this resolves the main problem left open by Alon, Matias, and Szegedy (STOC'96). Our algorithm supports deletions as well as insertions of stream elements.
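For context, the classic Alon-Matias-Szegedy estimator that this result improves upon is simple to sketch: sample a uniformly random stream position j, count the occurrences r of that element from position j onward, and output m * (r^k - (r-1)^k). The sketch below (a baseline illustration, not the new O~(m^{1-2/k})-space algorithm) verifies the key property that averaging the estimator over all positions recovers F_k exactly, which is why a random position gives an unbiased estimate:

```python
from collections import Counter

def ams_estimate(stream, j, k):
    """AMS basic estimator: r = occurrences of stream[j] from j onward,
    estimate = m * (r^k - (r-1)^k)."""
    m = len(stream)
    r = sum(1 for x in stream[j:] if x == stream[j])
    return m * (r ** k - (r - 1) ** k)

def exact_fk(stream, k):
    """Exact k-th frequency moment F_k = sum of f_i^k."""
    return sum(f ** k for f in Counter(stream).values())

stream = [1, 2, 1, 3, 1, 2]
k = 3
# Average over every start position: the telescoping sums per element
# collapse to f_i^k, so the average equals F_k exactly.
avg = sum(ams_estimate(stream, j, k) for j in range(len(stream))) / len(stream)
```

The catch is variance: for k > 2 this basic estimator needs many repetitions, which is where the improved space bound of the paper comes in.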
Abstract:
Fluctuating light intensity had a more significant impact on the growth of gametophytes of transgenic Laminaria japonica in a 2500 ml bubble-column bioreactor than constant light intensity. A fluctuating light intensity between 10 and 110 uE m^-2 s^-1, with a 14 h:10 h light:dark photoperiod, was the best regime for growth, yielding 1430 mg biomass l^-1.
Abstract:
In the present study, a method based on the transmission-line model for a porous electrode was used to measure the ionic resistance of the anode catalyst layer under in situ fuel cell operating conditions. The influence of Nafion content and catalyst loading in the anode catalyst layer on methanol electro-oxidation and direct methanol fuel cell (DMFC) performance based on unsupported Pt-Ru black was investigated using the AC impedance method. The optimal Nafion content was found to be 15 wt% at 75 degrees C. The optimal Pt-Ru loading depends on the operating temperature: about 2.0 mg/cm^2 for 75-90 degrees C and 3.0 mg/cm^2 for 50 degrees C. Above these values, cell performance decreased due to increases in ohmic and mass-transfer resistances. The peak power density obtained was 217 mW/cm^2 with optimal catalyst and Nafion loadings at 75 degrees C using oxygen. (c) 2005 International Association for Hydrogen Energy. Published by Elsevier Ltd. All rights reserved.
Abstract:
Gough, John; Belavkin, V.P.; Smolianov, O.G. (2005) 'Hamilton-Jacobi-Bellman equations for quantum optimal feedback control', Journal of Optics B: Quantum and Semiclassical Optics 7, pp. S237-S244.
Abstract:
Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time and is optimal with respect to total bandwidth usage over the merging interval. We present constraints on content insertion that keep the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results that quantify the gains from content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement by almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
Abstract:
Hidden State Shape Models (HSSMs) [2], a variant of Hidden Markov Models (HMMs) [9], were proposed to detect shape classes of variable structure in cluttered images. In this paper, we formulate a probabilistic framework for HSSMs which provides two major improvements over the previous method [2]. First, while the method in [2] required the scale of the object to be passed as an input, the method proposed here estimates the scale of the object automatically. This is achieved by introducing a new term for the observation probability that is based on an object-clutter feature model. Second, a segmental HMM [6, 8] is applied to model the "duration probability" of each HMM state, which is learned from the shape statistics in a training set and helps obtain meaningful registration results. Using a segmental HMM provides a principled way to model dependencies between the scales of different parts of the object. In object localization experiments on a dataset of real hand images, the proposed method significantly outperforms the method of [2], reducing the incorrect localization rate from 40% to 15%. The improvement in accuracy becomes more significant if we consider that the method proposed here is scale-independent, whereas the method of [2] takes as input the scale of the object to be localized.
Abstract:
It is a neural network truth universally acknowledged that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level, and weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission (a product rule, a capacity rule, and a threshold rule) are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur, and only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.