986 results for penalty function
Abstract:
In recent years, the cross-entropy method has been successfully applied to a wide range of discrete optimization tasks. In this paper we consider the cross-entropy method in the context of continuous optimization. We demonstrate the effectiveness of the cross-entropy method for solving difficult continuous multi-extremal optimization problems, including those with non-linear constraints.
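The combination described above, the cross-entropy method applied to a continuous problem with a nonlinear constraint handled by penalization, can be sketched as follows. This is a minimal illustration, assuming independent Gaussian sampling and a quadratic penalty with coefficient `RHO`; both are illustrative choices, not necessarily the paper's exact setup.

```python
import random

RHO = 100.0  # penalty coefficient (illustrative)

def penalized(x, y):
    # minimize (x-1)^2 + (y-2)^2  subject to  x + y <= 2,
    # with the constraint folded into the objective via a quadratic penalty
    violation = max(0.0, x + y - 2.0)
    return (x - 1.0)**2 + (y - 2.0)**2 + RHO * violation**2

def ce_minimize(f, n_samples=200, n_elite=20, n_iters=60, seed=1):
    # Cross-entropy method: sample from independent Gaussians, keep the
    # elite fraction, refit mean/std to the elites, repeat.
    rng = random.Random(seed)
    mu, sigma = [0.0, 0.0], [5.0, 5.0]
    for _ in range(n_iters):
        samples = [[rng.gauss(mu[d], sigma[d]) for d in range(2)]
                   for _ in range(n_samples)]
        samples.sort(key=lambda s: f(*s))
        elite = samples[:n_elite]
        for d in range(2):
            mu[d] = sum(e[d] for e in elite) / n_elite
            var = sum((e[d] - mu[d])**2 for e in elite) / n_elite
            sigma[d] = var**0.5 + 1e-9  # avoid collapse to zero spread
    return mu

x_best, y_best = ce_minimize(penalized)
```

The unconstrained minimizer (1, 2) violates x + y <= 2, so the penalized solution lands near (0.5, 1.5); a larger `RHO` enforces the constraint more tightly.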
Abstract:
The finite element process is now used almost routinely as a tool of engineering analysis. From the early days, significant effort has been devoted to developing simple, cost-effective elements that adequately fulfill accuracy requirements. In this thesis we describe the development and application of one of the simplest elements available for the statics and dynamics of axisymmetric shells. A semi-analytic truncated-cone stiffness element has been formulated and implemented in a computer code: it has two nodes with five degrees of freedom at each node, circumferential variations in the displacement field are described in terms of trigonometric series, transverse shear is accommodated by means of a penalty function, and rotary inertia is allowed for. The element has been tested in a variety of applications in the statics and dynamics of axisymmetric shells subjected to a variety of boundary conditions. Good results have been obtained for both thin and thick shell cases.
Abstract:
2000 Mathematics Subject Classification: 60B10, 60G17, 60G51, 62P05.
Abstract:
This research develops a methodology and model formulation that suggests locations for rapid chargers, to assist infrastructure development and enable greater battery electric vehicle (BEV) usage. The model considers the likely travel patterns of BEVs and their subsequent charging demands across a large road network, where no prior candidate site information is required. Using a GIS-based methodology, polygons are constructed that represent the charging demand zones for particular routes across a real-world road network. The use of polygons allows the maximum number of charging combinations to be considered whilst limiting the input intensity needed for the model. Further polygons are added to represent deviation possibilities, meaning that placement of charge points away from the shortest path is possible, subject to a penalty function. The model is validated by assessing the expected demand at current rapid charging locations and comparing it to recorded empirical usage data. Results suggest that the developed model provides a good approximation to real-world observations and that, for the provision of charging, location matters. Because no prior candidate site information is required, locations are chosen based on the weighted overlay between several different routes on which BEV journeys may be expected. In doing so, many locations, or types of locations, can be compared against one another and then analysed in relation to siting practicalities such as cost, land permission and infrastructure availability. Results show that efficient facility location, given numerous siting possibilities across a large road network, can be achieved. Slight improvements to the standard greedy adding technique are made by adding combination weightings that reward important long-distance routes requiring more than one charge to complete.
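The "greedy adding" heuristic the abstract builds on can be sketched as follows: repeatedly pick the candidate site that covers the most remaining weighted demand. The site names, coverage sets, and demand weights below are hypothetical, and the paper's combination weightings for multi-charge routes are not reproduced.

```python
def greedy_adding(candidates, demand_weights, n_sites):
    """candidates: dict site -> set of demand ids it covers;
    demand_weights: dict demand id -> weight."""
    chosen, covered = [], set()
    for _ in range(n_sites):
        # pick the site with the largest weighted gain over what is covered
        best = max(candidates,
                   key=lambda s: sum(demand_weights[d]
                                     for d in candidates[s] - covered))
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# Hypothetical example: three candidate sites, five weighted demand routes.
candidates = {"A": {1, 2}, "B": {2, 3, 4}, "C": {4, 5}}
weights = {1: 1.0, 2: 1.0, 3: 2.0, 4: 1.0, 5: 1.0}
sites, covered = greedy_adding(candidates, weights, n_sites=2)
```

Site "B" is picked first (weighted gain 4.0), after which the remaining sites are re-scored against only the uncovered demand, which is what makes the procedure greedy rather than exhaustive.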
Abstract:
Purpose: Curve fitting from unordered noisy point samples is needed for surface reconstruction in many applications. In the literature, several approaches have been proposed to solve this problem. However, previous works lack a formal characterization of the curve fitting problem and an assessment of the effect of several parameters (i.e. scalars that remain constant in the optimization problem), such as the number of control points (m), curve degree (b), knot vector composition (U), norm degree (k), and point sample size (r), on the optimized curve reconstruction as measured by a penalty function (f). The paper aims to discuss these issues. Design/methodology/approach: A numerical sensitivity analysis of the effect of m, b, k and r on f, and a characterization of the fitting procedure from the mathematical viewpoint, are performed. Also, spectral (frequency) analysis of the derivative of the angle of the fitted curve with respect to u is explored as a means to detect spurious curls and peaks. Findings: It is more effective to find optimum values for m than for k or b, because the topological faithfulness of the resulting curve depends strongly on m. Furthermore, when an exaggerated number of control points is used, the resulting curve presents spurious curls and peaks; the authors were able to detect such spurious features with spectral analysis. The method for curve fitting was also found to be robust to significant decimation of the point sample. Research limitations/implications: The authors have addressed important gaps in previous works in this field: they determined which of the curve fitting parameters m, b and k influenced the results the most and how, characterized the curve fitting problem from the optimization perspective, and devised a method to detect spurious features in the fitted curve. Practical implications: This paper provides a methodology to select the important tuning parameters in a formal manner. Originality/value: To the best of the authors' knowledge, no previous work has formally evaluated the sensitivity of the goodness of the curve fit with respect to the different possible tuning parameters (curve degree, number of control points, norm degree, etc.).
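The trade-off the abstract studies can be illustrated with a deliberately simplified analogue: polynomial least squares in place of the paper's B-splines, with the fitting penalty f taken as the sum of squared residuals (k = 2). Adding degrees of freedom always lowers f on the sample, which is exactly why too many control points invite spurious wiggles. All names here are illustrative, not the paper's.

```python
import random

def polyfit_residual(points, degree):
    """Least-squares polynomial fit; returns the residual penalty f."""
    n = degree + 1
    # Normal equations A c = b for min sum (y - sum_j c_j x^j)^2
    A = [[sum(x**(i + j) for x, _ in points) for j in range(n)] for i in range(n)]
    b = [sum(y * x**i for x, y in points) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return sum((y - sum(c * x**i for i, c in enumerate(coef)))**2
               for x, y in points)

# Noisy samples of a quadratic on [0, 1]
rng = random.Random(0)
pts = [(x / 10.0, (x / 10.0)**2 + rng.gauss(0, 0.05)) for x in range(11)]
f_low, f_high = polyfit_residual(pts, 2), polyfit_residual(pts, 6)
```

Because the degree-6 model nests the degree-2 one, f_high never exceeds f_low on the training sample, even though the higher-degree fit is the one prone to spurious oscillation between the samples.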
Abstract:
Optimization methods are used in many areas of knowledge, such as Engineering, Statistics and Chemistry, to solve optimization problems. In many cases it is not possible to use derivative-based methods, owing to the characteristics of the problem and/or its constraints, for example when the functions involved are non-smooth and/or their derivatives are not known. To solve this type of problem, a Java-based API has been implemented that includes only derivative-free optimization methods and can be used to solve both constrained and unconstrained problems. For solving constrained problems, the classic Penalty and Barrier functions were included in the API. In this paper a new approach to Penalty and Barrier functions, based on Fuzzy Logic, is proposed. Two penalty functions that impose a progressive penalization on solutions violating the constraints are discussed: they impose a low penalization when the violation of the constraints is low and a heavy penalty when the violation is high. Numerical results obtained on twenty-eight test problems, comparing the proposed Fuzzy Logic based functions to six of the classic Penalty and Barrier functions, are presented. The results indicate that the proposed penalty functions are not only very robust but also perform very well.
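The "progressive penalization" idea, mild for small violations and steep for large ones, can be sketched as below. This is a generic illustration; the paper's actual Fuzzy Logic membership functions are not reproduced here, and the cubic growth term is an assumption.

```python
def progressive_penalty(g_values, scale=1.0):
    """Progressive penalty for constraints g_i(x) <= 0.

    Small violations incur a nearly linear cost; large violations are
    penalized sharply via the cubic term (an illustrative choice).
    """
    total = 0.0
    for g in g_values:
        v = max(0.0, g)               # violation magnitude, 0 if feasible
        total += scale * (v + v**3)   # grows slowly, then steeply
    return total

def penalized_objective(f, constraints, x, scale=10.0):
    """Unconstrained surrogate: objective plus progressive penalty."""
    return f(x) + progressive_penalty([g(x) for g in constraints], scale)
```

A feasible point pays nothing, a mild violation pays little, and a large violation dominates the objective, which is the qualitative behavior the abstract describes.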
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection in longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite-sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
The process cascade leading to the final accommodation of the carbohydrate ligand in the lectin's binding site comprises enthalpic and entropic contributions of the binding partners and solvent molecules. With emphasis on lactose, N-acetyllactosamine, and thiodigalactoside as potent inhibitors of binding of galactoside-specific lectins, the question was addressed of the extent to which these parameters are affected as a function of the protein. A microcalorimetric study of carbohydrate association to the galectin from chicken liver (CG-16) and the agglutinin from Viscum album (VAA) revealed enthalpy–entropy compensation, with evident protein type-dependent changes for N-acetyllactosamine. Reduction of the entropic penalty by differential flexibility of loops or side chains and/or solvation properties of the protein will have to be reckoned with in assigning a molecular cause to protein type-dependent changes in the thermodynamic parameters of lectins sharing the same monosaccharide specificity.
Abstract:
This article presents a new approach to minimizing losses in electrical power systems. The approach applies the primal-dual logarithmic barrier method to the voltage-magnitude and tap-changing transformer variables, while the other inequality constraints are treated by an augmented Lagrangian method. The Lagrangian function aggregates all the constraints. The first-order necessary conditions are solved by Newton's method, with the dual variables and penalty factors updated at each iteration. Test results are presented to show the good performance of this approach.
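The logarithmic barrier ingredient above can be sketched for a single bound constraint of the kind a voltage-magnitude limit imposes. This is purely illustrative, not the paper's formulation; the barrier keeps iterates strictly inside the bounds, weighted by a parameter mu that is driven toward zero as the method converges.

```python
import math

def log_barrier(x, lo, hi, mu):
    """Logarithmic barrier for lo <= x <= hi, weighted by mu > 0.

    Finite and small well inside the interval; grows without bound as x
    approaches either limit; infinite outside the feasible interval.
    """
    if not (lo < x < hi):
        return float("inf")
    return -mu * (math.log(x - lo) + math.log(hi - x))
```

Adding such terms to the objective replaces hard inequality constraints with a smooth function Newton's method can handle; shrinking mu over the outer iterations recovers the original constrained problem in the limit.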
The boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems
Abstract:
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large, solving the subproblem becomes difficult; therefore, the effectiveness of this approach is associated with the boundedness of the penalty parameters. In this paper, it is proved that under more natural assumptions than the ones employed until now, penalty parameters are bounded. For proving the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
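The outer loop the abstract refers to, with its multiplier and penalty-parameter updates, can be sketched on a toy equality-constrained problem. The closed-form inner solve and the specific update rules (first-order multiplier step, tenfold penalty increase on insufficient progress) are illustrative assumptions; in practice the subproblem is solved only approximately.

```python
def augmented_lagrangian(lam=0.0, rho=1.0, tol=1e-8, max_outer=50):
    """Toy augmented Lagrangian for min (x-3)^2  s.t.  h(x) = x - 1 = 0.

    The solution is x = 1 with multiplier lam = 4. Each outer iteration
    minimizes L(x) = (x-3)^2 + lam*(x-1) + (rho/2)*(x-1)^2, here in
    closed form, then updates lam and, if progress stalls, rho.
    """
    x = 0.0
    prev_viol = float("inf")
    for _ in range(max_outer):
        # argmin_x of the augmented Lagrangian (set its derivative to 0)
        x = (6.0 + rho - lam) / (2.0 + rho)
        viol = abs(x - 1.0)
        if viol < tol:
            break
        lam += rho * (x - 1.0)          # first-order multiplier update
        if viol > 0.5 * prev_viol:      # constraint violation not halved:
            rho *= 10.0                 # increase the penalty parameter
        prev_viol = viol
    return x, lam, rho

x, lam, rho = augmented_lagrangian()
```

Note how the multiplier update does most of the work once rho is moderately large: the violation shrinks geometrically without rho growing further, which is the bounded-penalty behavior the paper is concerned with.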
Abstract:
Objectives: The current study investigated to what extent task-specific practice can help reduce the adverse effects of high pressure on performance in a simulated penalty kick task. Based on the assumption that practice attenuates the required attentional resources, it was hypothesized that task-specific practice would enhance resilience against high pressure. Method: Participants practiced a simulated penalty kick in which they had to move a lever to the side opposite the goalkeeper's dive; the goalkeeper moved at different times before ball contact. Design: Before and after task-specific practice, participants were tested on the same task under both low- and high-pressure conditions. Results: Before practice, performance of all participants worsened under high pressure. However, whereas one group of participants merely required more time to respond correctly to the goalkeeper's movement, showing the typical logistic relation between the percentage of correct responses and the time available to respond, a second group showed a linear relationship between these variables, implying that they tended to make systematic errors at the shortest available times. Practice eliminated the debilitating effects of high pressure in the former group, whereas in the latter group high pressure continued to affect performance negatively. Conclusions: Task-specific practice increased resilience to high pressure. However, the effect was a function of how participants responded initially to high pressure, that is, prior to practice. The results are discussed within the framework of attentional control theory (ACT).
Abstract:
We experimentally demonstrate ∼2 dB quality (Q)-factor enhancement from fiber nonlinearity compensation of 40 Gb/s 16 quadrature amplitude modulation coherent optical orthogonal frequency-division multiplexing at 2000 km, using a nonlinear equalizer (NLE) based on artificial neural networks (ANNs). Nonlinearity alleviation improves with increasing ANN training overhead and signal bit rate, reaching ∼4 dB Q-factor enhancement at 70 Gb/s, whereas reducing the number of ANN neurons severely degrades the NLE performance. The performance gain of up to ∼2 dB in Q-factor over the inverse Volterra-series transfer function NLE marks a breakthrough in the efficiency of ANN-based equalization.
Abstract:
Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) developed foil rolling models that allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an Inverse Hilbert Transform to obtain an analytic expression for the pressure. The method described in this paper is applicable whether or not a flattened region is present.