876 results for Nonlinear Constraints


Relevância:

20.00%

Publicador:

Resumo:

This paper addresses the self-scheduling problem of a thermal power producer that takes part in a pool-based electricity market as a price-taker, holds bilateral contracts and is emission-constrained. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding the electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp-up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and proficiency of the proposed approach in supporting bidding strategies. (C) 2014 Elsevier Ltd. All rights reserved.
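To make the scenario-based formulation concrete, the sketch below builds a toy stochastic self-scheduling MILP for a single price-taker unit with PuLP. The price scenarios, unit limits, ramp bound and emission cap are invented placeholder values, and the emission requirement is simplified to a cap on expected emissions; the paper's full model (forbidden zones, start-up costs, minimum up/down times, bilateral contracts) is not reproduced here.

import pulp

T = 6                                    # hours in the horizon
scenarios = {                            # price scenario -> (probability, prices in EUR/MWh)
    "low":  (0.3, [28, 30, 27, 25, 26, 29]),
    "mid":  (0.5, [40, 42, 38, 35, 37, 41]),
    "high": (0.2, [55, 60, 52, 48, 50, 58]),
}
p_min, p_max = 50.0, 200.0               # MW limits when the unit is on (illustrative)
ramp = 60.0                              # MW/h ramp up/down limit
var_cost = 32.0                          # EUR/MWh variable cost
emis_rate = 0.9                          # tCO2/MWh
emis_cap = 700.0                         # tCO2 allowed in expectation (illustrative)

prob = pulp.LpProblem("self_scheduling", pulp.LpMaximize)
u = pulp.LpVariable.dicts("on", range(T), cat="Binary")               # commitment decision
p = {(s, t): pulp.LpVariable(f"p_{s}_{t}", lowBound=0)                # scenario dispatch
     for s in scenarios for t in range(T)}

# Objective: expected profit, i.e. revenue at scenario prices minus variable cost.
prob += pulp.lpSum(prob_s * (price[t] - var_cost) * p[s, t]
                   for s, (prob_s, price) in scenarios.items() for t in range(T))

for s, (prob_s, price) in scenarios.items():
    for t in range(T):
        prob += p[s, t] <= p_max * u[t]                               # capacity when committed
        prob += p[s, t] >= p_min * u[t]                               # technical minimum
        if t > 0:
            prob += p[s, t] - p[s, t - 1] <= ramp                     # ramp-up limit
            prob += p[s, t - 1] - p[s, t] <= ramp                     # ramp-down limit

# Simplified stochastic emission constraint: cap on expected emissions.
prob += pulp.lpSum(prob_s * emis_rate * p[s, t]
                   for s, (prob_s, _) in scenarios.items() for t in range(T)) <= emis_cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(u[t].value()) for t in range(T)])                          # commitment schedule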

Relevância:

20.00%

Publicador:

Resumo:

An improved class of nonlinear bidirectional Boussinesq equations of sixth order using a wave surface elevation formulation is derived. Exact travelling wave solutions for the proposed class of nonlinear evolution equations are deduced. A new exact travelling wave solution is found which is the uniform limit of a geometric series. The ratio of this series is proportional to a classical soliton-type solution of the form of the square of a hyperbolic secant function. This happens for some values of the wave propagation velocity; for other values of this velocity the new type of soliton still appears, but the classical soliton structure vanishes in some regions of the domain. Exact solutions of the form of the square of the classical soliton are also deduced. In some cases, we find that the ratio between the amplitude of this wave and the amplitude of the classical soliton is equal to 35/36. It is shown that different families of travelling wave solutions are associated with different values of the parameters introduced in the improved equations.
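As a purely illustrative sketch of the series construction (the symbols a, \alpha, k and c below are generic placeholders, not the parameters of the improved equations): if the ratio of the geometric series is a classical sech^2-type profile with amplitude strictly smaller than one, the series converges uniformly and sums in closed form,

r(x,t) = \alpha\,\operatorname{sech}^2\!\bigl(k(x - ct)\bigr), \quad |\alpha| < 1,
\qquad
\eta(x,t) = a \sum_{n=1}^{\infty} r(x,t)^{\,n} = \frac{a\, r(x,t)}{1 - r(x,t)},

so the limit behaves like the classical sech^2 soliton where r is small and departs from it near the crest, which is in the spirit of the new solution described above.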

Relevância:

20.00%

Publicador:

Resumo:

In today’s healthcare paradigm, optimal sedation during anesthesia plays an important role both in patient welfare and in the socio-economic context. For the closed-loop control of general anesthesia, two drugs have proven to have stable, rapid onset times: propofol and remifentanil. The effect of these drugs is quantified by the bispectral index, a measure derived from the EEG signal. In this paper, wavelet time–frequency analysis is used to extract useful information from the clinical signals, since they are time-varying and mark important changes in the patient’s response to the drug dose. Model-based predictive control algorithms are employed to regulate the depth of sedation by manipulating these two drugs. The results of identification from real data and the simulation of the closed-loop control performance suggest that the proposed approach can bring an improvement of 9% in overall robustness and may be suitable for clinical practice.
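As a small illustration of the wavelet time-frequency step, the sketch below decomposes a synthetic BIS-like signal with PyWavelets and computes sub-band energies. The wavelet family, decomposition level and the synthetic signal are assumptions made for illustration, not the paper's clinical settings.

import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(0, 600)                                   # ten minutes, one sample per second (assumed)
bis = 50 + 10 * np.exp(-t / 200.0) + rng.normal(0, 2, t.size)   # synthetic BIS-like signal

# Multilevel discrete wavelet decomposition: one approximation + four detail bands.
coeffs = pywt.wavedec(bis, "db4", level=4)
cA4, cD4, cD3, cD2, cD1 = coeffs

# Energy per sub-band: a simple time-frequency feature that tracks changes in the
# patient's response to the drug dose.
energies = [float(np.sum(c ** 2)) for c in coeffs]
print(energies)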

Relevância:

20.00%

Publicador:

Resumo:

In the framework of multibody dynamics, the path motion constraint enforces that a body follows a predefined curve, with its rotations with respect to the curve moving frame also prescribed. The kinematic constraint formulation requires the evaluation of the fourth derivative of the curve with respect to its arc length. Even though higher-order polynomials lead to unwanted curve oscillations, at least a fifth-order polynomial is required to formulate this constraint analytically. From the point of view of geometric control, lower-order polynomials are preferred. This work shows that for multibody dynamic formulations with dependent coordinates the use of cubic polynomials is possible, the dynamic response being similar to that obtained with higher-order polynomials. The stabilization of the equations of motion, always required to control the constraint violations during long analysis periods due to the inherent numerical errors of the integration process, is enough to correct the error introduced by the lower-order polynomial interpolation, thus forfeiting the analytical requirement for higher-order polynomials.
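A minimal numerical illustration of the polynomial-order issue, assuming a planar curve interpolated with SciPy's CubicSpline (illustrative data, not the paper's multibody formulation): inside each segment the third derivative of a cubic is constant, so the fourth derivative required by the analytical constraint formulation is identically zero there, which is precisely the gap that the constraint stabilization compensates for.

import numpy as np
from scipy.interpolate import CubicSpline

s = np.linspace(0.0, 10.0, 11)                  # arc-length samples (illustrative)
xy = np.column_stack([s, np.sin(0.5 * s)])      # sampled points of the predefined curve

path = CubicSpline(s, xy, axis=0)               # one cubic polynomial per segment
d3 = path.derivative(3)                         # third derivative, piecewise constant
print(d3(5.2), d3(5.7))                         # same value inside the segment [5, 6]
# One further differentiation gives exactly zero inside every segment, so the
# fourth arc-length derivative demanded analytically is not available from cubics.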

Relevância:

20.00%

Publicador:

Resumo:

Work presented within the scope of the European Master in Computational Logics, as a partial requirement for obtaining the degree of Master in Computational Logics.

Relevância:

20.00%

Publicador:

Resumo:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other approaches, using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27], have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on the performance of ICA and IFA. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
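The toy sketch below, with invented dimensions and parameters, illustrates the generative view used in the chapter: abundance fractions drawn from a Dirichlet density, mixed linearly with known signatures and corrupted by noise. It also makes visible the sum-to-one (full additivity) dependence among abundances that undermines the independence assumption behind ICA and IFA.

import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands, p = 5_000, 100, 3

A = rng.dirichlet(alpha=[2.0, 3.0, 5.0], size=n_pixels)    # abundances; each row sums to one
M = np.abs(rng.normal(size=(n_bands, p)))                  # endmember signatures (synthetic)
Y = A @ M.T + 0.01 * rng.normal(size=(n_pixels, n_bands))  # observed linear mixtures plus noise

print(A.sum(axis=1)[:5])              # full additivity: every row sums to 1
print(np.corrcoef(A, rowvar=False))   # negative off-diagonal correlations -> dependent sources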

Relevância:

20.00%

Publicador:

Resumo:

In today’s healthcare paradigm, optimal sedation during anesthesia plays an important role both in patient welfare and in the socio-economic context. For the closed-loop control of general anesthesia, two drugs have proven to have stable, rapid onset times: propofol and remifentanil. The effect of these drugs is quantified by the bispectral index, a measure derived from the EEG signal. In this paper, wavelet time–frequency analysis is used to extract useful information from the clinical signals, since they are time-varying and mark important changes in the patient’s response to the drug dose. Model-based predictive control algorithms are employed to regulate the depth of sedation by manipulating these two drugs. The results of identification from real data and the simulation of the closed-loop control performance suggest that the proposed approach can bring an improvement of 9% in overall robustness and may be suitable for clinical practice.
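To illustrate the predictive-control idea in the simplest possible terms, the sketch below runs a receding-horizon controller on an assumed first-order discrete effect model of a single sedation index driven by a single infusion rate. The model, its coefficients, the target and the bounds are invented for illustration and are not identified from clinical data; the real controller manipulates both propofol and remifentanil.

import numpy as np

a, E0, g = 0.9, 95.0, 10.0    # assumed dynamics: index decays toward a drug-modulated level
target, horizon = 50.0, 10    # BIS target and prediction horizon
u_max, lam = 5.0, 0.1         # infusion-rate bound and control-effort weight

def step(bis, u):
    """Assumed first-order effect model."""
    return a * bis + (1 - a) * (E0 - g * u)

def mpc_move(bis0):
    """Pick the constant dose over the horizon minimising tracking error plus effort."""
    best_u, best_cost = 0.0, np.inf
    for u in np.linspace(0.0, u_max, 51):        # coarse grid search, for brevity
        bis, cost = bis0, 0.0
        for _ in range(horizon):
            bis = step(bis, u)
            cost += (bis - target) ** 2 + lam * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

bis = 90.0                                        # awake patient
for _ in range(40):                               # closed loop: apply only the first move
    bis = step(bis, mpc_move(bis))
print(round(bis, 1))                              # settles near the target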

Relevância:

20.00%

Publicador:

Resumo:

The local fractional Burgers’ equation (LFBE) is investigated from the point of view of local fractional conservation laws envisaging a nonlinear local fractional transport equation with a linear non-differentiable diffusion term. The local fractional derivative transformations and the LFBE conversion to a linear local fractional diffusion equation are analyzed.
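For orientation, the classical (integer-order) counterpart of that conversion is the Cole-Hopf substitution, which turns Burgers' equation into the linear heat equation; the paper carries out the analogous step with local fractional derivatives acting on non-differentiable functions.

u_t + u\,u_x = \nu\,u_{xx},
\qquad
u = -2\nu\,\frac{\phi_x}{\phi}
\;\Longrightarrow\;
\phi_t = \nu\,\phi_{xx}.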

Relevância:

20.00%

Publicador:

Resumo:

This study proposes a new methodology to increase the power delivered to any load point in a radial distribution network through the identification of new investments that improve the repair time. The research work is innovative in proposing a full optimisation model based on mixed-integer non-linear programming combined with the Pareto front technique. The goal is to reduce the repair times of the distribution network components while minimising both the cost of that reduction and the cost of non-supplied energy. The optimisation model considers the distribution network technical constraints and the substation transformer taps, and it is able to choose the capacitor bank sizes. A case study based on a 33-bus distribution network is presented in order to illustrate in detail the application of the proposed methodology.
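The sketch below illustrates only the Pareto front idea behind the methodology: sweeping the trade-off between investment cost and non-supplied energy cost for a toy set of components with an assumed convex repair-time-reduction cost. All numbers and the cost model are invented; the paper itself solves a full mixed-integer non-linear model on a 33-bus network with technical constraints, transformer taps and capacitor banks.

import numpy as np
from scipy.optimize import minimize

load = np.array([2.0, 1.5, 3.0])         # MW interrupted when each component fails (illustrative)
fail = np.array([0.4, 0.6, 0.2])         # failures per year
r0 = np.array([8.0, 6.0, 10.0])          # current repair times (h)
c_ens = 1_000.0                          # EUR per MWh of energy not supplied

def ens_cost(dr):                        # expected non-supplied energy cost
    return c_ens * np.sum(load * fail * (r0 - dr))

def inv_cost(dr):                        # assumed convex cost of shortening repair times
    return np.sum(500.0 * dr ** 2)

front = []
for w in np.linspace(0.05, 0.95, 10):    # sweep the trade-off weight
    res = minimize(lambda dr: w * inv_cost(dr) + (1 - w) * ens_cost(dr),
                   x0=np.zeros(3), bounds=[(0.0, r) for r in r0])
    front.append((inv_cost(res.x), ens_cost(res.x)))
print(front)                             # (investment cost, non-supplied energy cost) pairs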

Relevância:

20.00%

Publicador:

Resumo:

This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
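A minimal sketch of the repulsion idea, assuming an erf-shaped penalty bump around each root already found (the test system, bump height and radius are illustrative, and the merit function is not the paper's exact one): repeated Nelder-Mead searches on the penalised sum of squares recover the multiple roots of a toy system.

import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def F(x):                                     # toy system with four real roots
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])

roots, rng = [], np.random.default_rng(3)

def merit(x):
    val = np.sum(F(x) ** 2)                   # zero exactly at a root of the system
    for r in roots:                           # erf-shaped bump repelling already-found roots
        val += 5.0 * (1.0 - erf(np.linalg.norm(x - r) / 0.5))
    return val

for _ in range(40):                           # multistart Nelder-Mead with repulsion
    res = minimize(merit, rng.uniform(-3.0, 3.0, size=2), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 2000})
    if np.sum(F(res.x) ** 2) < 1e-8 and all(np.linalg.norm(res.x - r) > 1e-3 for r in roots):
        roots.append(res.x)

print(np.round(np.array(roots), 4))           # typically recovers all four roots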

Relevância:

20.00%

Publicador:

Resumo:

Previously, we presented a model for generating human-like arm and hand movements on a unimanual anthropomorphic robot involved in human-robot collaboration tasks. The present paper extends that model to address the generation of human-like bimanual movement sequences in scenarios cluttered with obstacles. Movement planning involves large-scale nonlinear constrained optimization problems, which are solved using the IPOPT solver. Simulation studies show that the model generates feasible and realistic hand trajectories for action sequences involving the two hands. The computational costs involved in the planning allow for real-time human-robot interaction. A qualitative analysis reveals that the movements of the robot exhibit basic characteristics of human movements.
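As a toy illustration of posing movement planning as nonlinear constrained optimisation (the paper solves much larger bimanual problems with IPOPT; this sketch uses SciPy's SLSQP, and the geometry, obstacle and waypoint count are invented): intermediate waypoints of a single hand path are chosen to keep the path short while clearing a circular obstacle.

import numpy as np
from scipy.optimize import minimize

start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
obstacle, radius = np.array([0.5, 0.0]), 0.2
n_way = 5                                        # free intermediate waypoints

def unpack(z):
    pts = z.reshape(n_way, 2)
    return np.vstack([start, pts, goal])

def path_length(z):                              # objective: total length of the polyline
    pts = unpack(z)
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

def clearance(z):                                # >= 0 when each waypoint lies outside the obstacle
    pts = unpack(z)
    return np.linalg.norm(pts - obstacle, axis=1) - radius

z0 = np.linspace(start, goal, n_way + 2)[1:-1].ravel() + 0.01   # slightly perturbed straight line
res = minimize(path_length, z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": clearance}])
print(unpack(res.x).round(3))                    # waypoints bending around the obstacle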

Relevância:

20.00%

Publicador:

Resumo:

The purpose of this work is to present an algorithm for solving nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process towards the feasible region, i.e. it finds a point with a smaller constraint violation. The optimality phase starts from this point and aims to optimize the objective function within the space of satisfied constraints. To evaluate the solution approximations in each iteration, a scheme based on the filter method is used in both phases of the algorithm. This method replaces the merit functions that are based on penalty schemes, avoiding the related difficulties such as penalty parameter estimation and the non-differentiability of some of those functions. The filter method is implemented in the context of the line-search globalization technique. A set of more than two hundred AMPL test problems is solved, and the developed algorithm is compared with the LOQO and NPSOL software packages.
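A minimal sketch of the filter acceptance rule that replaces a penalty merit function (the margin value and the simplified dominance test are illustrative; the actual algorithm couples this test with the line search in both phases): a trial point, described by its constraint violation and objective value, is accepted only if it is not dominated by any pair already stored.

class Filter:
    def __init__(self, gamma=1e-5):
        self.entries = []                 # list of (violation, objective) pairs
        self.gamma = gamma                # sufficient-decrease margin

    def acceptable(self, theta, f):
        """Accept if the trial point improves either measure w.r.t. every stored entry."""
        return all(theta <= (1 - self.gamma) * th or f <= fe - self.gamma * th
                   for th, fe in self.entries)

    def add(self, theta, f):
        """Store the pair and drop entries it dominates."""
        self.entries = [(th, fe) for th, fe in self.entries
                        if th < theta or fe < f]
        self.entries.append((theta, f))

flt = Filter()
flt.add(1.0, 10.0)                                           # pair of the first iterate
print(flt.acceptable(0.5, 9.0), flt.acceptable(2.0, 12.0))   # True, False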

Relevância:

20.00%

Publicador:

Resumo:

A new iterative algorithm based on the inexact-restoration (IR) approach combined with the filter strategy for solving nonlinear constrained optimization problems is presented. The high-level algorithm was suggested by Gonzaga et al. (SIAM J. Optim. 14:646–669, 2003) but had not yet been implemented, as the internal algorithms were not proposed. The filter, a concept introduced by Fletcher and Leyffer (Math. Program. Ser. A 91:239–269, 2002), replaces the merit function, avoiding the penalty parameter estimation and the difficulties related to non-differentiability. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The line-search filter is combined with the first phase to generate a “more feasible” point, and it is then used in the optimality phase to reach an “optimal” point. Numerical experiments with a collection of AMPL problems and a performance comparison with IPOPT are provided.
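The sketch below shows, on a toy equality-constrained problem, how the two phases and a filter test can fit together in one iteration loop: a restoration step toward feasibility followed by a tangential step on the objective, with step backtracking until the filter accepts the trial point. The toy problem, the fixed steps and the simplified filter are assumptions made for illustration; the implemented algorithm's line searches and safeguards are omitted.

import numpy as np

f  = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2          # objective
g  = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 1)]) # its gradient
c  = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0                # equality constraint
dc = lambda x: np.array([2 * x[0], 2 * x[1]])             # its gradient

def acceptable(entries, theta, fval, gamma=1e-5):
    """Simplified filter test on (violation, objective) pairs."""
    return all(theta <= (1 - gamma) * th or fval <= fe - gamma * th
               for th, fe in entries)

x, filt = np.array([0.5, 0.5]), []
for _ in range(200):
    # Feasibility phase: Gauss-Newton step toward the feasible set ("more feasible" point).
    n = dc(x)
    z = x - c(x) * n / np.dot(n, n)
    # Optimality phase: step along the gradient projected on the constraint tangent.
    n = dc(z)
    gt = g(z) - (np.dot(g(z), n) / np.dot(n, n)) * n
    alpha = 0.1
    while not acceptable(filt, abs(c(z - alpha * gt)), f(z - alpha * gt)) and alpha > 1e-8:
        alpha *= 0.5                                      # backtrack until filter-acceptable
    x = z - alpha * gt
    filt.append((abs(c(x)), f(x)))

print(x.round(4))                                         # close to (2, 1)/sqrt(5)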

Relevância:

20.00%

Publicador:

Resumo:

Dissertation presented to obtain the degree of Master in Computational Logic.

Relevância:

20.00%

Publicador:

Resumo:

Dissertation presented to obtain the degree of Master in Electrical and Computer Engineering.