46 results for Constrained Minimization
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.
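For reference, a sketch of the general two-Higgs-doublet scalar potential in standard notation (not spelled out in the abstract); the symmetry classes discussed above amount to constraints on these couplings, and soft breaking is typically implemented through the m_{12}^2 term:

```latex
V(\Phi_1,\Phi_2) =
   m_{11}^2\,\Phi_1^\dagger\Phi_1 + m_{22}^2\,\Phi_2^\dagger\Phi_2
   - \left( m_{12}^2\,\Phi_1^\dagger\Phi_2 + \mathrm{h.c.} \right)
   + \tfrac{\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^2
   + \tfrac{\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2
   + \lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
   + \lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
   + \left[ \tfrac{\lambda_5}{2}\left(\Phi_1^\dagger\Phi_2\right)^2
   + \lambda_6\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_1^\dagger\Phi_2\right)
   + \lambda_7\left(\Phi_2^\dagger\Phi_2\right)\left(\Phi_1^\dagger\Phi_2\right)
   + \mathrm{h.c.} \right]
```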
Abstract:
This paper is on the unit commitment problem, considering not only the economic perspective, but also the environmental perspective. We propose a bi-objective approach to handle the problem with conflicting profit and emission objectives. Numerical results based on the standard IEEE 30-bus test system illustrate the proficiency of the proposed approach.
Abstract:
One of the most effective ways of controlling vibrations in plate or beam structures is by means of constrained viscoelastic damping treatments. Contrary to the unconstrained configuration, the design of constrained and integrated layer damping treatments is multifaceted because the thickness of the viscoelastic layer acts distinctly on the two main counterparts of the strain energy: the volume of viscoelastic material and the shear strain field. In this work, a parametric study is performed exploring the effect that the design parameters, namely the thickness/length ratio, constraining layer thickness, material modulus, natural mode and boundary conditions, have on these two counterparts and, subsequently, on the treatment efficiency. This paper presents five parametric studies, namely on the thickness/length ratio, the constraining layer thickness, the material properties, the natural mode and the boundary conditions. The results obtained evidence an interesting effect when dealing with very thin viscoelastic layers that contradicts the standard treatment efficiency vs. layer thickness relation; hence, the potential optimisation of constrained and integrated viscoelastic treatments through the use of properly designed thin multilayer configurations is justified. This work presents a dimensionless analysis and provides useful general guidelines for the efficient design of constrained and integrated damping treatments based on single or multi-layer configurations.
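As a point of reference (the estimator is not named in the abstract), the modal strain energy approximation is a common way to express how the share of strain energy stored in the viscoelastic layer drives treatment efficiency:

```latex
\eta_r \;\approx\; \eta_v \,\frac{U_v^{(r)}}{U^{(r)}}
```

where \eta_v is the material loss factor of the viscoelastic layer and U_v^{(r)}/U^{(r)} is the fraction of the r-th modal strain energy stored in that layer, the quantity shaped by the two counterparts discussed above.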
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON) - NOV 10-14, 2013
Abstract:
This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by purely geometrically based algorithms, is far away from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to a statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely nonnegativity and sum-to-one. A cyclic minimization algorithm is developed in which: 1) the number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometrically based state-of-the-art competitors.
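A minimal generative sketch of the linear mixing model with abundances drawn from a mixture of Dirichlet densities, as assumed above; the endmember signatures, mode weights and concentration parameters below are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_bands, n_endmembers = 5000, 200, 3

# Illustrative endmember signatures (columns of the mixing matrix M).
M = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))

# Mixture of Dirichlet densities for the abundance fractions:
# each mode enforces nonnegativity and sum-to-one by construction.
mode_weights = np.array([0.6, 0.4])                    # illustrative
concentrations = np.array([[8.0, 2.0, 2.0],            # illustrative
                           [2.0, 2.0, 8.0]])

modes = rng.choice(len(mode_weights), size=n_pixels, p=mode_weights)
S = np.vstack([rng.dirichlet(concentrations[k]) for k in modes])  # (n_pixels, p)

# Linear observation model with additive noise.
noise_std = 0.01
Y = S @ M.T + noise_std * rng.standard_normal((n_pixels, n_bands))

assert np.allclose(S.sum(axis=1), 1.0) and (S >= 0).all()
print("simulated data cube:", Y.shape)
```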
Abstract:
We directly visualize the response of nematic liquid crystal drops of toroidal topology, threaded on cellulosic fibers and suspended in air, to an AC electric field and at different temperatures across the N-I transition. This new liquid crystal system can exhibit non-trivial point defects, which can be energetically unstable against expanding into ring defects depending on the constraining geometry of the fiber. The director, anchored tangentially near the fiber surface and homeotropically at the air interface, forms a hybrid shell distribution that in turn causes a ring disclination line around the main axis of the fiber at the center of the droplet. Upon application of an electric field, E, the disclination ring first expands and moves along the fiber main axis, followed by the appearance of a stable "spherical particle" object orbiting around the fiber at the center of the liquid crystal drop. The rotation speed of this particle was found to vary linearly with the applied voltage. This constrained liquid crystal geometry seems to meet the essential requirements under which soliton-like deformations can develop and exhibit stable orbiting in three dimensions upon application of an external electric field. On changing the temperature, the system remains stable and allows the study of the defect evolution near the nematic-isotropic transition, showing qualitatively different behaviour on cooling and on heating. The necklaces of such liquid crystal drops constitute excellent systems for the study of topological defects and their evolution, and open new perspectives for applications in microelectronics and photonics.
Abstract:
A multiobjective approach for the optimization of passive damping for vibration reduction in sandwich structures is presented in this paper. Constrained optimization is conducted to maximize the modal loss factors and minimize the weight of sandwich beams and plates with elastic laminated constraining layers and a viscoelastic core, with layer thicknesses, materials and laminate ply orientation angles as design variables. The problem is solved using the Direct MultiSearch (DMS) solver for derivative-free multiobjective optimization, and the solutions are compared with alternative ones obtained using genetic algorithms.
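The abstract does not give implementation details; purely as an illustration of the bi-objective setting (maximize modal loss factors, minimize weight), a small generic nondominated-filter sketch over candidate designs might look as follows (objective values are placeholders):

```python
import numpy as np

def nondominated(points: np.ndarray) -> np.ndarray:
    """Return a boolean mask of Pareto-nondominated rows.

    Each row holds objectives to be minimized, e.g.
    (-modal_loss_factor, weight), so that both are minimized.
    """
    n = points.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # A row dominates i if it is no worse in every objective and better in at least one.
        dominates_i = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return keep

# Illustrative candidate designs: (-loss factor, weight in kg).
candidates = np.array([[-0.12, 1.8], [-0.10, 1.2], [-0.15, 2.5], [-0.11, 1.9]])
print(candidates[nondominated(candidates)])
```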
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
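To make the constrained least-squares formulation mentioned in this chapter abstract concrete, here is a small illustrative sketch (not code from the chapter): with known endmember signatures, nonnegativity and an approximate sum-to-one constraint can be enforced by augmenting a nonnegative least-squares problem. The signatures and pixel below are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_bands, p = 200, 4

# Synthetic endmember signatures (columns of M) and one mixed pixel.
M = rng.uniform(0.0, 1.0, size=(n_bands, p))
true_abund = rng.dirichlet(np.ones(p))                  # nonnegative, sums to one
y = M @ true_abund + 0.005 * rng.standard_normal(n_bands)

def fully_constrained_ls(M, y, delta=1e3):
    """Nonnegative least squares with a soft sum-to-one constraint:
    a heavily weighted extra equation sum(abundances) == 1 is appended."""
    M_aug = np.vstack([M, delta * np.ones((1, M.shape[1]))])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

a_hat = fully_constrained_ls(M, y)
print("estimated:", np.round(a_hat, 3), "sum:", round(a_hat.sum(), 3))
print("true     :", np.round(true_abund, 3))
```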
Abstract:
Fluorescent protein microscopy imaging is nowadays one of the most important tools in biomedical research. However, the resulting images present a low signal-to-noise ratio and a time intensity decay due to the photobleaching effect. This phenomenon is a consequence of the decrease in the radiation emission efficiency of the tagging protein, which occurs because the fluorophore permanently loses its ability to fluoresce due to photochemical reactions induced by the incident light. The Poisson multiplicative noise that corrupts these images, together with the quality degradation due to photobleaching, makes long-term biological observation very difficult. In this paper a denoising algorithm for Poisson data, where the photobleaching effect is explicitly taken into account, is described. The algorithm is designed in a Bayesian framework where the data fidelity term models the Poisson noise generation process as well as the exponential intensity decay caused by the photobleaching. The prior term is conceived with Gibbs priors and log-Euclidean potential functions, suitable to cope with the positivity-constrained nature of the parameters to be estimated. Monte Carlo tests with synthetic data are presented to characterize the performance of the algorithm. One example with real data is included to illustrate its application.
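As an illustration of the observation model described above (not the paper's implementation; the decay rate and intensities are made up), the fluorescence at each pixel over time can be simulated as a Poisson count whose mean decays exponentially due to photobleaching:

```python
import numpy as np

rng = np.random.default_rng(2)

n_pixels, n_frames = 1000, 50
x_true = rng.uniform(20.0, 100.0, size=n_pixels)   # underlying intensities (illustrative)
lam = 0.05                                          # photobleaching decay rate (illustrative)
t = np.arange(n_frames)

# Poisson observations with exponentially decaying mean: y[i, t] ~ Poisson(x_i * exp(-lam * t)).
mean = x_true[:, None] * np.exp(-lam * t)[None, :]
y = rng.poisson(mean)

# A crude check: under this model, the log of the frame-averaged counts
# decays roughly linearly in time with slope -lam.
slope = np.polyfit(t, np.log(y.mean(axis=0) + 1e-9), 1)[0]
print("estimated decay rate:", -slope)
```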
Abstract:
We characterize the elastic contribution to the surface free energy of a nematic liquid crystal in the presence of a sawtooth substrate. Our findings are based on numerical minimization of the Landau-de Gennes model and analytical calculations with the Frank-Oseen theory. The nucleation of disclination lines (characterized by non-half-integer winding numbers) in the wedges and apexes of the substrate induces a leading-order term proportional to q ln q in the elastic contribution to the surface free-energy density, with q being the wave number associated with the substrate periodicity.
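For reference (a standard expression, not restated in the abstract), the Frank-Oseen elastic free energy whose minimization underlies the analytical calculations is

```latex
F_{\mathrm{el}} = \frac{1}{2}\int
   \left[ K_{11}\,(\nabla\cdot\mathbf{n})^2
        + K_{22}\,(\mathbf{n}\cdot\nabla\times\mathbf{n})^2
        + K_{33}\,\left|\mathbf{n}\times(\nabla\times\mathbf{n})\right|^2 \right] \mathrm{d}V
```

with n the nematic director and K11, K22, K33 the splay, twist and bend elastic constants.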
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data sets in which the geometric-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
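The abstract does not give the model-order selection rule in closed form; as a generic illustration of MDL-style order selection (shown here with scikit-learn Gaussian mixtures and the closely related BIC score, not the paper's Dirichlet-mode estimator), one fits candidate orders and keeps the one with the smallest description-length criterion:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Illustrative 2-D data drawn from three clusters.
X = np.vstack([rng.normal(loc, 0.3, size=(300, 2)) for loc in ([0, 0], [3, 0], [0, 3])])

scores = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    scores[k] = gm.bic(X)   # BIC ~ -2 log L + (#params) log N, an MDL-style penalty

best_k = min(scores, key=scores.get)
print("selected number of modes:", best_k)
```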
Abstract:
In a competitive electricity market with an uncertain environment, generation companies adopt strategies aimed at maximizing profit and minimizing risk. In this context, in order to develop an adequate risk management strategy, it is extremely important to take into account the different energy trading options in a liberalized market, so as to support decision-making in risk management. This work presents a model that evaluates the best strategy of an electricity producer trading in a competitive market, where there are two possible markets for energy transactions: the organized market (power exchange) and the bilateral contracts market. The producer tries to maximize its profits and minimize the corresponding risks by selecting the best balance between the two possible markets (exchange and bilateral). The bilateral contracts market aims to adequately manage the risks inherent in the operation of short-term markets (the organized market) and to give the seller/buyer a real ability to choose the supplier with whom it wants to trade. The model presented in this work makes an explicit characterization of risk with respect to the market agent's attitude towards risk, measured by the Value at Risk (VaR), described in this work as Profit-at-Risk (PaR). The price and volume risk factors are characterized by a mean value and a standard deviation, and are modelled by normal distributions. The numerical results are obtained using Monte Carlo simulation implemented in Matlab, applied to a producer holding a diversified portfolio of generation technologies over a one-year time horizon. This dissertation is organized as follows: chapters 1, 2 and 3 describe the state of the art related to risk management in electricity trading. Chapter 4 describes the model developed and implemented, and also presents a case study applying the model to evaluate the trading risk of a producer. Chapter 5 presents the main conclusions.
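The dissertation abstract above describes Monte Carlo simulation of normally distributed price and volume risk factors with Profit-at-Risk as the risk measure (implemented in Matlab). The following is a hedged Python re-sketch of that idea; the means, standard deviations and market split are made up, not the dissertation's data:

```python
import numpy as np

rng = np.random.default_rng(4)
n_scenarios = 100_000

# Illustrative risk factors (normal, as in the abstract): spot price and volume.
spot_price = rng.normal(50.0, 12.0, n_scenarios)   # EUR/MWh
bilateral_price = 48.0                              # fixed contract price, EUR/MWh
volume = rng.normal(1000.0, 100.0, n_scenarios)     # MWh
cost_per_mwh = 30.0

share_bilateral = 0.6                               # split between the two markets (illustrative)
profit = volume * (share_bilateral * bilateral_price
                   + (1.0 - share_bilateral) * spot_price
                   - cost_per_mwh)

# Profit-at-Risk at 95%: the profit level exceeded in 95% of the scenarios.
par_95 = np.percentile(profit, 5)
print(f"expected profit: {profit.mean():,.0f}  PaR(95%): {par_95:,.0f}")
```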
Abstract:
This paper proposes a practical approach for profit-based unit commitment (PBUC) with emission limitations. Under deregulation, unit commitment has evolved from a minimum-cost optimisation problem into a profit-based optimisation problem. However, as a consequence of growing environmental concern, the impact of fossil-fuelled power plants must be considered, giving rise to emission limitations. Our practical approach addresses profit and emission simultaneously as a multiobjective optimisation (MO) problem. Hence, trade-off curves between profit and emission are obtained for different energy price profiles, in a way that aids decision-makers concerning emission allowance trading. Moreover, a new parameter, the ratio of change, and the corresponding gradient angle are presented, enabling the proper selection of a compromise commitment for the units. A case study based on the standard IEEE 30-bus system is presented to illustrate the proficiency of our practical approach for the new competitive and environmentally constrained electricity supply industry.
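The paper's PBUC solution method is not detailed in the abstract; purely as an illustration of how a profit-emission trade-off curve can be traced, a weighted-sum scan over a toy two-unit commitment problem might look like this (all unit data, prices and the emission weighting are placeholders):

```python
import itertools
import numpy as np

# Toy units: (capacity MW, cost EUR/MWh, emission tCO2/MWh) -- illustrative placeholders.
units = [(100.0, 20.0, 0.9), (80.0, 35.0, 0.4)]
price = 40.0          # energy price, EUR/MWh
demand_cap = 150.0    # maximum energy that can be sold, MWh

def evaluate(commitment):
    """Greedy dispatch of committed units up to demand_cap; returns (profit, emission)."""
    profit = emission = 0.0
    remaining = demand_cap
    # Dispatch cheaper committed units first.
    for on, (cap, cost, emis) in sorted(zip(commitment, units), key=lambda z: z[1][1]):
        if not on or remaining <= 0:
            continue
        q = min(cap, remaining)
        profit += q * (price - cost)
        emission += q * emis
        remaining -= q
    return profit, emission

points = [evaluate(c) for c in itertools.product([0, 1], repeat=len(units))]

# Weighted-sum scan: maximize w*profit - (1-w)*scale*emission for several weights.
scale = 40.0  # rough EUR per tCO2 to make the objectives comparable (illustrative)
for w in np.linspace(0.0, 1.0, 5):
    best = max(points, key=lambda pe: w * pe[0] - (1 - w) * scale * pe[1])
    print(f"w={w:.2f}  profit={best[0]:8.1f}  emission={best[1]:7.1f}")
```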
Abstract:
New K/Ar dating and geochemical analyses have been carried out on the WNW-ESE elongated oceanic island of S. Jorge to reconstruct the volcanic evolution of a linear ridge developed close to the Azores triple junction. We show that the sub-aerial construction of S. Jorge encompasses the last 1.3 Myr, a time interval far longer than previously reported. The early development of the ridge involved a sub-aerial building phase exposed in the southeast end of the island and now constrained between 1.32 +/- 0.02 and 1.21 +/- 0.02 Ma. Basic lavas from this older stage are alkaline and enriched in incompatible elements, reflecting partial melting of an enriched mantle source. At least three differentiation cycles from alkaline basalts to mugearites are documented within this stage. The successive episodes of magma rising, storage and evolution suggest an intermittent reopening of the magma feeding system, possibly due to recurrent tensional or trans-tensional tectonic events. Present data show a gap in sub-aerial volcanism before a second, ongoing main building phase starting at about 750 ka. Sub-aerial construction of the S. Jorge ridge migrated progressively towards the west, but involved several overlapping volcanic episodes constrained along the main WNW-ESE structural axis of the island. Mafic magmas erupted during the second phase have also been generated by partial melting of an enriched mantle source. Trace element data suggest, however, variable and lower degrees of partial melting of a shallower mantle domain, which is interpreted as an increasing control of lithospheric deformation on the genesis and extraction of primitive melts during the last 750 kyr. The multi-stage development of the S. Jorge volcanic ridge over the last 1.3 Myr has most likely been greatly influenced by regional tectonics, controlled by deformation along the diffuse boundary between the Nubian and Eurasian plates, and by the increasing effect of sea-floor spreading at the Mid-Atlantic Ridge.
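For reference (a textbook relation, not from the paper), K/Ar ages such as the 1.32 +/- 0.02 Ma value quoted above follow from the radiogenic argon accumulated in the rock:

```latex
t = \frac{1}{\lambda}\,
    \ln\!\left( 1 + \frac{\lambda}{\lambda_e}\,
    \frac{^{40}\mathrm{Ar}^{*}}{^{40}\mathrm{K}} \right)
```

where \lambda is the total decay constant of \(^{40}\mathrm{K}\) and \lambda_e the partial constant for the branch decaying to \(^{40}\mathrm{Ar}\).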
Abstract:
This paper studies all-equity firms and identifies the main drivers of the zero-debt policy among US firms. I analyze 6763 U.S. listed companies over the years 1987-2009, a total of 77442 firm-years. I find that financially constrained firms show a higher probability of becoming unlevered. On the opposite side, firms producing high cash flows are also likely to become unlevered by paying down their debt. Some firms create economies of scale in the use of funds, increasing the probability of becoming unlevered. Industry characteristics are also important in explaining the zero-debt policy. However, it is the high perception of risk that is the most important factor influencing this extreme behavior, which is consistent with the trade-off theory.