12 results for Linear semi-infinite optimization
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic part generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the estimation error of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
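A minimal sketch of the idea in Python, under simplifying assumptions not taken from the paper: a single Laguerre pole, a second-order kernel, and scipy's finite-difference Jacobian standing in for the exact back-propagation-through-time gradients derived by the authors. All data and names are illustrative.

```python
# Minimal sketch: Wiener/OBF model with one Laguerre pole and a second-order
# static polynomial; the pole is fitted by Levenberg-Marquardt from
# input-output data only.
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import least_squares

def laguerre_outputs(u, pole, n_filters):
    """Outputs of the first n_filters discrete-time Laguerre filters."""
    gain = np.sqrt(1.0 - pole**2)
    psi = [lfilter([0.0, gain], [1.0, -pole], u)]   # L1(z) = gain z^-1 / (1 - pole z^-1)
    for _ in range(n_filters - 1):                  # all-pass cascade for L2, L3, ...
        psi.append(lfilter([-pole, 1.0], [1.0, -pole], psi[-1]))
    return np.column_stack(psi)

def regressors(psi):
    """Constant, linear and quadratic (second-order Volterra) terms."""
    n = psi.shape[1]
    quad = [psi[:, i] * psi[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack([np.ones(psi.shape[0]), psi] + quad)

def residuals(theta, u, y, n_filters):
    pole = np.tanh(theta[0])                        # keeps |pole| < 1
    X = regressors(laguerre_outputs(u, pole, n_filters))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # linear coefficients separate out
    return X @ coef - y

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
psi_true = laguerre_outputs(u, 0.7, 2)              # synthetic Wiener system, pole 0.7
y = 0.5 * psi_true[:, 0] + 0.3 * psi_true[:, 1]**2

sol = least_squares(residuals, x0=[0.0], method="lm", args=(u, y, 2))
print("estimated pole:", np.tanh(sol.x[0]))         # should approach 0.7
```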
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. The setups typically present in lot sizing problems are relaxed, together with the integer frequencies of cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
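For the column generation step alone, a minimal Gilmore-Gomory-style sketch follows (classical cutting stock only, not the combined lot-sizing/cutting model of the paper; the data are illustrative and the dual values assume scipy's HiGHS backend):

```python
# Minimal column generation for cutting stock: the restricted master LP
# chooses pattern frequencies; a knapsack subproblem prices out a new
# pattern until none has negative reduced cost.
import numpy as np
from scipy.optimize import linprog

L = 100                                   # stock length (illustrative)
sizes = np.array([45, 36, 31, 14])        # ordered part sizes
demand = np.array([97, 610, 395, 211])    # part demands

def price_pattern(pi):
    """Knapsack by dynamic programming: most valuable pattern under prices pi."""
    best = np.zeros(L + 1)
    take = -np.ones(L + 1, dtype=int)
    for cap in range(1, L + 1):
        for i, s in enumerate(sizes):
            if s <= cap and best[cap - s] + pi[i] > best[cap]:
                best[cap], take[cap] = best[cap - s] + pi[i], i
    pattern, cap = np.zeros(len(sizes)), L
    while cap > 0 and take[cap] >= 0:
        pattern[take[cap]] += 1
        cap -= sizes[take[cap]]
    return best[L], pattern

patterns = np.diag(L // sizes).astype(float)     # start with homogeneous patterns
while True:
    res = linprog(np.ones(patterns.shape[1]), A_ub=-patterns, b_ub=-demand,
                  bounds=(0, None), method="highs")
    pi = -res.ineqlin.marginals                  # dual prices of the demand rows
    value, pattern = price_pattern(pi)
    if value <= 1.0 + 1e-9:                      # no negative reduced cost remains
        break
    patterns = np.column_stack([patterns, pattern])

print("LP bound on the number of rolls:", res.fun)
```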
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., with few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to quantify the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
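A minimal sketch of the cross-entropy model-fitting loop on a toy image, reduced to one circular Gaussian with four parameters (the paper fits N(s) elliptical Gaussians with six parameters each); the data and tolerances are illustrative:

```python
# Minimal cross-entropy (CE) fit: sample parameter vectors, score them by
# the summed squared image residual, refit the sampling distribution to the
# elite samples, repeat.
import numpy as np

rng = np.random.default_rng(1)
Y, X = np.mgrid[0:64, 0:64]

def model(p):                             # p = (x0, y0, amplitude, width)
    return p[2] * np.exp(-((X - p[0])**2 + (Y - p[1])**2) / (2.0 * p[3]**2))

truth = np.array([30.0, 22.0, 5.0, 4.0])
image = model(truth) + 0.05 * rng.standard_normal(X.shape)   # finite-SNR "map"

mu = np.array([32.0, 32.0, 1.0, 6.0])     # initial mean of the sampling density
sigma = np.array([10.0, 10.0, 3.0, 3.0])  # initial spread
n_samples, n_elite = 200, 20

for _ in range(50):
    samples = mu + sigma * rng.standard_normal((n_samples, 4))
    scores = [np.sum((model(p) - image)**2) for p in samples]
    elite = samples[np.argsort(scores)[:n_elite]]            # best candidates
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print("recovered parameters:", mu)        # should approach `truth`
```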
Abstract:
This paper deals with the semi-global $C^k$-solvability of complex vector fields of the form $L = \partial/\partial t + x^r(a(x) + ib(x))\,\partial/\partial x$, $r \geq 1$, defined on $\Omega_\epsilon = (-\epsilon, \epsilon) \times S^1$, $\epsilon > 0$, where $a$ and $b$ are $C^\infty$ real-valued functions on $(-\epsilon, \epsilon)$. It is shown that the interplay between the orders of vanishing of the functions $a$ and $b$ at $x = 0$ influences the $C^k$-solvability at $\Sigma = \{0\} \times S^1$. When $r = 1$, the functions $a$ and $b$ may also depend on the $t$ variable, that is, $L = \partial/\partial t + x(a(x, t) + ib(x, t))\,\partial/\partial x$, where $(x, t) \in \Omega_\epsilon$.
Abstract:
We study the Gevrey solvability of a class of complex vector fields, defined on $\Omega_\epsilon = (-\epsilon, \epsilon) \times S^1$ and given by $L = \partial/\partial t + (a(x) + ib(x))\,\partial/\partial x$, $b \not\equiv 0$, near the characteristic set $\Sigma = \{0\} \times S^1$. We show that the interplay between the orders of vanishing of the functions $a$ and $b$ at $x = 0$ plays a role in the Gevrey solvability. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
Conventional procedures employed in the modeling of the viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced here constitutes a simulation-based computational optimization technique based on a non-deterministic search method arising from the field of evolutionary computation. Instead of comparing numerical results, the purpose of this paper is to highlight some subtle differences between both strategies and to focus on the properties of the exploited technique that emerge as new possibilities for the field. To illustrate this, the essayed cases show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with far fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
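As a hedged illustration of the simulation-based, non-deterministic strategy (not the authors' specific algorithm), a discrete relaxation spectrum G(t) = sum_i g_i exp(-t/tau_i) can be fitted by an evolutionary search such as scipy's differential evolution:

```python
# Illustrative evolutionary fit of a two-mode Prony series
# G(t) = g1 exp(-t/tau1) + g2 exp(-t/tau2) to synthetic relaxation data:
# no gradients, no explicit regression equation.
import numpy as np
from scipy.optimize import differential_evolution

t = np.logspace(-2, 2, 80)
data = 10.0 * np.exp(-t / 0.1) + 3.0 * np.exp(-t / 10.0)    # synthetic G(t)

def misfit(p):
    """p = (g1, g2, log10 tau1, log10 tau2); squared error of the fit."""
    fit = p[0] * np.exp(-t / 10.0**p[2]) + p[1] * np.exp(-t / 10.0**p[3])
    return np.sum((fit - data)**2)

bounds = [(0.0, 20.0), (0.0, 20.0), (-3.0, 3.0), (-3.0, 3.0)]
sol = differential_evolution(misfit, bounds, seed=0)
print(sol.x)   # should recover (10, 3, -1, 1) up to the ordering of the modes
```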
Abstract:
We report the partitioning of the interaction-induced static electronic dipole (hyper)polarizabilities for linear hydrogen cyanide complexes into contributions arising from various interaction energy terms. We analyzed the nonadditivities of the studied properties and used these data to predict the electric properties of an infinite chain. The interaction-induced static electric dipole properties and their nonadditivities were analyzed using an approach based on numerical differentiation of the interaction energy components estimated in an external electric field. These were obtained using the hybrid variational-perturbational interaction energy decomposition scheme, augmented with coupled-cluster calculations, with singles, doubles, and noniterative triples. Our results indicate that the interaction-induced dipole moments and polarizabilities are primarily electrostatic in nature; however, the composition of the interaction hyperpolarizabilities is much more complex. The overlap effects substantially quench the contributions due to electrostatic interactions, and therefore, the major components are due to the induction and exchange induction terms, as well as the intramolecular electron-correlation corrections. A particularly intriguing observation is that the interaction first hyperpolarizability in the studied systems not only is much larger than the corresponding sum of monomer properties, but also has the opposite sign. We show that this effect can be viewed as a direct consequence of hydrogen-bonding interactions that lead to a decrease of the hyperpolarizability of the proton acceptor and an increase of the hyperpolarizability of the proton donor. In the case of the first hyperpolarizability, we also observed the largest nonadditivity of interaction properties (nearly 17%) which further enhances the effects of pairwise interactions.
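The numerical-differentiation step follows the standard finite-field recipe; in our notation, for a single field component $F$ and the expansion $E(F) = E(0) - \mu F - \tfrac{1}{2}\alpha F^2 - \tfrac{1}{6}\beta F^3 - \cdots$, the lowest-order central-difference estimates read:

```latex
\mu \approx -\frac{E(F)-E(-F)}{2F}, \qquad
\alpha \approx -\frac{E(F)-2E(0)+E(-F)}{F^{2}}, \qquad
\beta \approx -\frac{E(2F)-2E(F)+2E(-F)-E(-2F)}{2F^{3}}
```

Applying the same stencils to each interaction-energy component of the decomposition yields the corresponding component of the interaction-induced property.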
Abstract:
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
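For reference, the PHR Augmented Lagrangian for $\min f(x)$ subject to $h(x) = 0$ and $g(x) \le 0$ has the standard form (our notation, not reproduced from the paper):

```latex
L_{\rho}(x,\lambda,\mu) \;=\; f(x) + \frac{\rho}{2}\left(
  \Bigl\| h(x) + \tfrac{\lambda}{\rho} \Bigr\|^{2}
  + \Bigl\| \max\!\bigl(0,\; g(x) + \tfrac{\mu}{\rho}\bigr) \Bigr\|^{2}
\right)
```

The componentwise $\max(0,\cdot)^2$ terms have Lipschitz first derivatives but no continuous second derivatives where $g_i(x) = -\mu_i/\rho$, which is precisely the class of functions the box-constrained negative-curvature method is designed to handle.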
Abstract:
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set methods for box-constrained minimization employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm, which uses the quasi-Newton formula or a truncated-Newton procedure depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
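As a hedged sketch of what a minimal-memory quasi-Newton direction looks like (the generic L-BFGS two-loop recursion with a single secant pair; the paper's structure-exploiting secant preconditioners are not reproduced here):

```python
# Generic memory-one quasi-Newton direction: L-BFGS two-loop recursion with
# a single stored secant pair (s, y) and Barzilai-Borwein-type scaling of
# the initial inverse-Hessian approximation.
import numpy as np

def memory_one_direction(g, s, y):
    """Direction -H g, H = BFGS update of (gamma I) from one pair (s, y)."""
    rho = 1.0 / (y @ s)                  # requires curvature y @ s > 0
    alpha = rho * (s @ g)
    q = g - alpha * y                    # first loop (single pair)
    gamma = (s @ y) / (y @ y)            # initial inverse-Hessian scaling
    r = gamma * q
    beta = rho * (y @ r)
    return -(r + (alpha - beta) * s)     # second loop

# Tiny check on a convex quadratic f(x) = 0.5 x'Ax, where y = A s exactly.
A = np.diag([1.0, 10.0])
x0, x1 = np.array([1.0, 1.0]), np.array([0.9, 0.1])
s, y = x1 - x0, A @ (x1 - x0)
print(memory_one_direction(A @ x1, s, y))   # descent direction at x1
```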
Abstract:
Two Augmented Lagrangian algorithms for solving KKT systems are introduced. The algorithms differ in the way in which penalty parameters are updated. Possibly infeasible accumulation points are characterized. It is proved that feasible limit points that satisfy the Constant Positive Linear Dependence constraint qualification are KKT solutions. Boundedness of the penalty parameters is proved under suitable assumptions. Numerical experiments are presented.
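A minimal sketch of the outer Augmented Lagrangian iteration with one common penalty-update rule (increase the penalty only when infeasibility fails to shrink enough); equality constraints only, with scipy handling the subproblems, and none of it taken from the paper's implementation:

```python
# Outer Augmented Lagrangian loop: update multipliers after each inner
# solve; enlarge the penalty parameter only on insufficient progress.
# Toy problem and update constants are illustrative.
import numpy as np
from scipy.optimize import minimize

def f(x): return (x[0] - 2.0)**2 + (x[1] - 1.0)**2   # objective
def h(x): return np.array([x[0] + x[1] - 1.0])       # equality constraint h(x) = 0

lam, rho, x = np.zeros(1), 10.0, np.zeros(2)
feas_prev = np.inf
for _ in range(20):
    aug = lambda z: f(z) + lam @ h(z) + 0.5 * rho * (h(z) @ h(z))
    x = minimize(aug, x).x                           # inner subproblem
    feas = np.linalg.norm(h(x))
    lam = lam + rho * h(x)                           # first-order multiplier update
    if feas > 0.5 * feas_prev:                       # not enough progress:
        rho *= 10.0                                  # increase the penalty
    feas_prev = feas

print(x, lam)   # x -> (1, 0), lam -> 2 (the KKT multiplier)
```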
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time, when the interfacial force is linear. However, this linear system is large and dense and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix and we obtain a rigorous estimate for this approximation. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then we use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.
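As a hedged illustration of the spreading and interpolation operators that are lagged in this discretization (1D, with Peskin's standard 4-point regularized delta; the paper works in 2D):

```python
# 1D illustration: spreading S takes Lagrangian forces to the grid, and
# interpolation is its (scaled) transpose, both built from the 4-point
# regularized delta function.
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta (argument measured in grid units)."""
    r = np.abs(r)
    phi = np.zeros_like(r)
    m = r < 1.0
    phi[m] = (3.0 - 2.0 * r[m] + np.sqrt(1.0 + 4.0 * r[m] - 4.0 * r[m]**2)) / 8.0
    m = (r >= 1.0) & (r < 2.0)
    phi[m] = (5.0 - 2.0 * r[m] - np.sqrt(-7.0 + 12.0 * r[m] - 4.0 * r[m]**2)) / 8.0
    return phi

h, n = 0.1, 64
grid = h * np.arange(n)                   # Eulerian grid points
Xm = np.array([1.23, 3.17])               # Lagrangian marker positions

S = peskin_delta((grid[:, None] - Xm[None, :]) / h) / h   # spreading matrix
F = np.array([1.0, -0.5])                 # forces on the markers
f_grid = S @ F                            # force density spread onto the grid
U = h * (S.T @ f_grid)                    # interpolation back: h * S^T (adjoint)
print(U)
```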
Abstract:
This study aimed to optimize the rheological properties of probiotic yoghurts supplemented with skimmed milk powder (SMP), whey protein concentrate (WPC), and sodium caseinate (Na-Cn) by using a simplex-centroid experimental design for mixture modeling. It included seven batches/trials: three supplemented with each type of dairy protein used, three corresponding to the binary mixtures, and one to the ternary mixture, in order to increase the protein concentration of the final product by 1 g 100 g(-1). A control experiment was prepared without supplementing the milk base. Processed milk bases were fermented at 42 °C until pH 4.5 by using a starter culture blend consisting of Streptococcus thermophilus, Lactobacillus delbrueckii subsp. bulgaricus, and Bifidobacterium animalis subsp. lactis. The kinetics of acidification was followed during the fermentation period, as were the physico-chemical analyses, the enumeration of viable bacteria, and the rheological characteristics of the yoghurts. Models were adjusted to the results (kinetic responses, counts of viable bacteria, and rheological parameters) through three regression models (linear, quadratic, and special cubic) applied to the mixtures. The results showed that the addition of milk proteins affected the acidification profile and the counts of S. thermophilus and B. animalis subsp. lactis only slightly, but the effect was significant for L. delbrueckii subsp. bulgaricus. Partially replacing SMP (45 g/100 g) with WPC or Na-Cn simultaneously enhanced the rheological properties of the probiotic yoghurts, taking into account the kinetics of acidification and the enumeration of viable bacteria. (C) 2010 Elsevier Ltd. All rights reserved.
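A minimal sketch of the design and fit described above, with hypothetical response values: the seven simplex-centroid blends and a least-squares fit of the special cubic (Scheffe) mixture model:

```python
# Seven simplex-centroid blends of (SMP, WPC, Na-Cn) and a saturated fit of
# the special cubic mixture model; the response values are hypothetical
# placeholders, not data from the study.
import numpy as np

design = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],              # pure components
                   [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],  # binary blends
                   [1/3, 1/3, 1/3]])                              # ternary centroid
response = np.array([12.0, 18.5, 15.2, 17.8, 14.9, 19.3, 18.1])   # hypothetical

x1, x2, x3 = design.T
# Special cubic model: b1 x1 + b2 x2 + b3 x3 + b12 x1 x2 + b13 x1 x3
#                      + b23 x2 x3 + b123 x1 x2 x3   (no intercept, since sum = 1)
M = np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3])
coef, *_ = np.linalg.lstsq(M, response, rcond=None)
print(dict(zip(["b1", "b2", "b3", "b12", "b13", "b23", "b123"], coef)))
```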