911 results for MINIMIZING EARLINESS


Relevance: 10.00%

Abstract:

Clustering ensemble methods produce a consensus partition of a set of data points by combining the results of a collection of base clustering algorithms. In the evidence accumulation clustering (EAC) paradigm, the clustering ensemble is transformed into a pairwise co-association matrix, thus avoiding the label correspondence problem, which is intrinsic to other clustering ensemble schemes. In this paper, we propose a consensus clustering approach based on the EAC paradigm, which is not limited to crisp partitions and fully exploits the nature of the co-association matrix. Our solution determines probabilistic assignments of data points to clusters by minimizing a Bregman divergence between the observed co-association frequencies and the corresponding co-occurrence probabilities expressed as functions of the unknown assignments. We additionally propose an optimization algorithm to find a solution under any double-convex Bregman divergence. Experiments on both synthetic and real benchmark data show the effectiveness of the proposed approach.
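As an illustration only, and not the paper's algorithm, the following sketch implements the core idea for the simplest double-convex Bregman divergence, the squared Euclidean distance: build the co-association matrix C from the base partitions, then find row-stochastic soft assignments Y whose co-occurrence probabilities Y Yᵀ match C. All function names are hypothetical.

```python
import numpy as np

def coassociation(partitions):
    """Fraction of base clusterings that place each pair of points together."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)

def project_rows_to_simplex(Y):
    """Project each row onto the probability simplex (standard sorting method)."""
    n, k = Y.shape
    U = np.sort(Y, axis=1)[:, ::-1]
    css = np.cumsum(U, axis=1) - 1.0
    idx = np.arange(1, k + 1)
    rho = (U - css / idx > 0).sum(axis=1)       # size of the active prefix
    theta = css[np.arange(n), rho - 1] / rho
    return np.maximum(Y - theta[:, None], 0.0)

def consensus(C, k, iters=1000, lr=0.01, seed=0):
    """Minimize ||C - Y Y^T||_F^2 over row-stochastic soft assignments Y
    by projected gradient descent (squared Euclidean Bregman divergence)."""
    rng = np.random.default_rng(seed)
    Y = project_rows_to_simplex(rng.random((C.shape[0], k)))
    for _ in range(iters):
        grad = 4.0 * (Y @ Y.T - C) @ Y          # gradient of the Frobenius loss
        Y = project_rows_to_simplex(Y - lr * grad)
    return Y

if __name__ == "__main__":
    base = [[0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2], [0, 0, 0, 0, 1, 1]]
    Y = consensus(coassociation(base), k=2)
    print(Y.round(2))           # probabilistic assignments
    print(Y.argmax(axis=1))     # crisp consensus labels
```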

Relevance: 10.00%

Abstract:

Within the large set of renewable energy sources being explored to tackle energy sourcing problems, bioenergy can be an attractive solution if effectively managed. Supply chain design supported by mathematical programming can serve as a decision support tool for the successful establishment of bioenergy production systems. This strategic decision problem is addressed in this paper, where we study the design of a residual forestry biomass to bioelectricity production chain in the Portuguese context. To help attain better solutions, a mixed integer linear programming (MILP) model is developed and applied to optimize the design and planning of the bioenergy supply chain. While minimizing the total supply chain cost, the capacity and location of the energy production facilities are defined. The model also includes the optimal selection of biomass amounts and sources, the selection of transportation modes, and the links that must be established for biomass transportation and product delivery to markets. Results illustrate the positive contribution of the mathematical programming approach to achieving economically viable solutions. Sensitivity analysis on the most uncertain parameters was performed: biomass availability, transportation costs, fixed operating costs, and investment costs.
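The abstract does not give the model's equations, but the flavor of such a design MILP can be sketched with a toy facility-location problem: binary variables open conversion facilities, continuous variables ship biomass, and the objective trades fixed opening costs against transport costs. All data below are invented, and the open-source PuLP library stands in for whatever solver the authors used.

```python
import pulp

# Illustrative data: 2 biomass sources, 2 candidate facility sites (all values made up).
supply = {"s1": 120.0, "s2": 80.0}            # available biomass (t)
fixed_cost = {"f1": 500.0, "f2": 650.0}       # cost of opening each facility
capacity = {"f1": 100.0, "f2": 150.0}         # max biomass a facility can process (t)
transport = {("s1", "f1"): 2.0, ("s1", "f2"): 4.0,
             ("s2", "f1"): 3.0, ("s2", "f2"): 1.5}  # cost per tonne shipped
demand = 150.0                                 # biomass that must be processed (t)

prob = pulp.LpProblem("biomass_supply_chain", pulp.LpMinimize)
open_f = pulp.LpVariable.dicts("open", fixed_cost, cat="Binary")
ship = pulp.LpVariable.dicts("ship", transport, lowBound=0)

# Objective: fixed opening costs plus transportation costs.
prob += (pulp.lpSum(fixed_cost[f] * open_f[f] for f in fixed_cost)
         + pulp.lpSum(transport[a] * ship[a] for a in transport))

for s in supply:       # cannot ship more than each source holds
    prob += pulp.lpSum(ship[(s, f)] for f in fixed_cost) <= supply[s]
for f in fixed_cost:   # facility capacity is available only if the facility is opened
    prob += pulp.lpSum(ship[(s, f)] for s in supply) <= capacity[f] * open_f[f]
prob += pulp.lpSum(ship[a] for a in transport) >= demand  # meet demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f: open_f[f].value() for f in fixed_cost})
print({a: ship[a].value() for a in transport})
```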

Relevance: 10.00%

Abstract:

Internship report submitted for the degree of Master in Engineering, specialization in Buildings.

Relevance: 10.00%

Abstract:

The optimal design of laminated sandwich panels with viscoelastic core is addressed in this paper, with the objective of simultaneously minimizing weight and material cost and maximizing modal damping. The design variables are the number of layers in the laminated sandwich panel, the layer constituent materials and orientation angles, and the viscoelastic layer thickness. The problem is solved using the Direct MultiSearch (DMS) solver for multiobjective optimization problems, which does not use any derivatives of the objective functions. A finite element model for sandwich plates with a transversely compressible viscoelastic core and anisotropic laminated face layers is used. Trade-off Pareto optimal fronts are obtained and the results are analyzed and discussed.
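DMS itself is beyond a short sketch, but the Pareto-dominance filter at the heart of any such multiobjective method is simple to state: keep the designs whose objective vectors (say weight, material cost, and negated modal damping, all minimized) are not dominated by another design. A minimal, generic version with invented numbers:

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    F = np.asarray(F, dtype=float)
    keep = []
    for i, fi in enumerate(F):
        # fi is dominated if some other point is <= everywhere and < somewhere.
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (weight, material cost, -damping) triples for candidate panels.
designs = [(3.2, 10.0, -0.05), (2.8, 12.0, -0.04),
           (3.5, 9.0, -0.06), (3.6, 13.0, -0.03)]
print(pareto_front(designs))  # indices of the trade-off (Pareto optimal) designs
```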

Relevance: 10.00%

Abstract:

Master's degree in Civil Engineering, Structures branch.

Relevance: 10.00%

Abstract:

Master's degree in Mechanical Engineering, Industrial Management specialization.

Relevance: 10.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
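As a concrete numerical illustration of the linear mixing model and of the constrained least-squares approach mentioned above, the sketch below estimates nonnegative, sum-to-one abundances for a toy pixel. The signatures are invented, and the heavily weighted extra row is a common implementation trick for the sum-to-one constraint, not necessarily the chapter's method.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(M, y, delta=1e3):
    """Approximate fully constrained least squares: min ||y - M a||, a >= 0, sum(a) = 1.
    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones to the endmember matrix."""
    _, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)   # nonnegative least squares
    return a

# Toy example: a 4-band pixel mixed from two made-up endmember signatures.
M = np.array([[0.1, 0.8], [0.3, 0.6], [0.7, 0.2], [0.9, 0.1]])  # bands x endmembers
true_a = np.array([0.3, 0.7])
y = M @ true_a + 0.01 * np.random.default_rng(0).standard_normal(4)  # noisy pixel
print(fcls_unmix(M, y).round(3))  # approximately [0.3, 0.7]
```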
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and the noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] also find a minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data; the MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
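As a generic illustration of the dimensionality-reduction step discussed above, here is plain PCA computed via SVD (not the nonnegativity-exploiting method of reference 49), projecting spectral vectors onto a few principal components before unmixing:

```python
import numpy as np

def pca_reduce(X, k):
    """Project spectral vectors (rows of X) onto the top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mu

# Hypothetical cube: 100 pixels x 50 bands, intrinsically ~3-dimensional plus noise.
rng = np.random.default_rng(0)
X = rng.random((100, 3)) @ rng.random((3, 50)) + 0.01 * rng.standard_normal((100, 50))
Z, components, mean = pca_reduce(X, k=3)
print(Z.shape)  # (100, 3): reduced representation used before unmixing
```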
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) source constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.

Relevance: 10.00%

Abstract:

This document reports on the internship work carried out between 17 February and 31 July 2014 at Faurecia's Metal Structures Plant in São João da Madeira, as a final project on the implementation of Lean tools. The proposed objective was to participate in the search for and implementation of solutions aimed at the continuous improvement of the production system. To this end, a broad set of tools was used, including 5S, QRCI, and Standardized Work, among others widely employed in the automotive industry (and in this company in particular) through the Faurecia Excellence System (FES), applied to the line of business in which this multinational of French origin is solidly established. The internship period was a unique opportunity for the intern to come into contact with the problems of the production department, in a market as competitive as the automotive components industry. The internship work has two distinct strands: one internal to the company and another concerning suppliers and customers. Internally, the struggle to reduce the variability arising in the production plan was evident, absorbing a large part of the effort of those working on process optimization. Externally, there was a clear difficulty in finding suppliers able to meet Faurecia's procurement needs in both quantity and quality, together with the high level of demands imposed by the various customers. Finally, this project made it possible to apply knowledge acquired both during the course and during the internship itself, to become acquainted with the industrial reality, and to grow technically and personally.

Relevance: 10.00%

Abstract:

Waste landfills comprise a series of complementary systems and infrastructures that ensure their correct operation while minimizing environmental hazards. The production of leachate inside the confinement cells of this type of landfill is inevitable, and given its hazardousness it is necessary to design a containment and drainage system that conveys it to treatment areas. The present work rested on three main strands: deepening and presenting knowledge on waste landfills and on the application of geosynthetics in this type of infrastructure; following the construction of Suldouro's new non-hazardous waste landfill, the Aterro do Giestal, which replaces its predecessor, the Sermonde landfill, already in its closure phase; and critically analyzing the construction solution presented in the landfill's detailed design. In this context, an extensive literature review was carried out, focused essentially on waste management, on the use of geosynthetics in waste landfills, and on their design and construction. The site monitoring focused mainly on the execution of the lining and leachate drainage systems of the confinement cell, with emphasis on the materials used, namely the geosynthetics, on the design calculations, on the construction processes employed, and on quality assurance during execution. Given the environmental risks associated with this type of infrastructure, the elements of the groundwater monitoring system are also analyzed, covering materials and construction processes. Finally, an alternative solution to the lining and leachate drainage systems foreseen in the detailed design is proposed. The solution developed is innovative, lower in cost, and offers added advantages in terms of its construction processes and storage capacity.

Relevance: 10.00%

Abstract:

This paper presents a methodology for multi-objective day-ahead energy resource scheduling in smart grids considering intensive use of distributed generation and Vehicle-to-Grid (V2G). The main focus is the application of weighted Pareto to a multi-objective parallel particle swarm approach aiming to solve the dual-objective V2G scheduling problem: minimizing total operation costs and maximizing V2G income. A realistic mathematical formulation, considering the network constraints and the V2G charging and discharging efficiencies, is presented, and parallel computing is applied across the Pareto weights. AC power flow calculation is included in the metaheuristic approach so that the network constraints can be taken into account. A case study with a 33-bus distribution network and 1800 V2G resources is used to illustrate the performance of the proposed method.
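The paper's formulation is not reproduced here, but the weighted Pareto idea can be sketched generically: each weight w turns the two objectives into one scalar fitness, w * cost - (1 - w) * income, and each such scalar problem can be handed to its own (parallel) swarm instance. The candidate schedules and their costs below are invented.

```python
import numpy as np

def scalarize(cost, income, w):
    """Weighted-sum of the two objectives: cost is minimized, income is
    maximized, so income enters with a negative sign."""
    return w * cost - (1.0 - w) * income

# Hypothetical (cost, income) evaluations of candidate V2G schedules.
candidates = np.array([(1250.0, 310.0), (1180.0, 240.0),
                       (1400.0, 420.0), (1300.0, 300.0)])

# Each Pareto weight defines one scalar problem (solved in parallel in the
# paper); here we simply pick the best candidate per weight.
for w in np.linspace(0.1, 0.9, 5):
    scores = [scalarize(c, i, w) for c, i in candidates]
    best = int(np.argmin(scores))
    print(f"w={w:.2f} -> schedule {best}, "
          f"cost={candidates[best, 0]:.0f}, income={candidates[best, 1]:.0f}")
```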

Relevance: 10.00%

Abstract:

This paper proposes a methodology to increase the probability of delivering power to any load point through the identification of new investments. The methodology uses a fuzzy set approach to model the uncertainty of outage parameters, load, and generation. A DC fuzzy multicriteria optimization model, considering the Pareto front and based on mixed integer non-linear programming, is developed to identify the adequate investments in distribution network components. These investments increase the probability of delivering power to all customers in the distribution network at the minimum possible cost for the system operator, while minimizing the non-supplied energy cost. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 33-bus distribution network.
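Purely as an illustration of the fuzzy-set modeling of uncertain parameters mentioned above (not the paper's formulation), a triangular fuzzy number and its alpha-cuts, which is one common way such uncertain loads are represented:

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """Fuzzy value with membership 1 at `peak`, falling linearly to 0 at `low`/`high`."""
    low: float
    peak: float
    high: float

    def alpha_cut(self, alpha: float):
        """Interval of values whose membership is at least alpha (0 < alpha <= 1)."""
        return (self.low + alpha * (self.peak - self.low),
                self.high - alpha * (self.high - self.peak))

# Hypothetical uncertain load: "about 10 MW, somewhere between 8 and 13 MW".
load = TriangularFuzzyNumber(8.0, 10.0, 13.0)
for a in (0.2, 0.5, 1.0):
    print(a, load.alpha_cut(a))
```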

Relevance: 10.00%

Abstract:

This paper presents a decision support methodology to help virtual power players (VPPs) in the Smart Grid (SG) context solve the day-ahead energy resource scheduling problem considering the intensive use of Distributed Generation (DG) and Vehicle-to-Grid (V2G). The main focus is the application of a new hybrid method combining a particle swarm approach and a deterministic technique based on mixed-integer linear programming (MILP) to solve the day-ahead scheduling problem, minimizing total operation costs from the aggregator's point of view. A realistic mathematical formulation, considering the electric network constraints and the V2G charging and discharging efficiencies, is presented. Full AC power flow calculation is included in the hybrid method so that the network constraints can be taken into account. A case study with a 33-bus distribution network and 1800 V2G resources is used to illustrate the performance of the proposed method.
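The MILP half of the hybrid resembles the models sketched earlier in this listing; the particle swarm half can be illustrated generically as below. This is a textbook global-best PSO with invented parameters, not the authors' implementation; in their hybrid, the swarm's candidate solutions would be refined by the MILP-based deterministic technique.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer for a cost function f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                            # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for an operation-cost function of a 5-dimensional schedule.
sol, cost = pso_minimize(lambda z: float(np.sum((z - 0.3) ** 2)), dim=5)
print(sol.round(3), round(cost, 6))
```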

Relevance: 10.00%

Abstract:

A methodology to increase the probability of delivering power to any load point through the identification of new investments in distribution network components is proposed in this paper. The method minimizes the investment cost as well as the cost of energy not supplied in the network. A DC optimization model based on mixed integer non-linear programming is developed, considering the Pareto front technique, in order to identify the adequate investments in distribution network components which increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator, while minimizing the cost of energy not supplied. Thus, a multi-objective problem is formulated. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.

Relevance: 10.00%

Abstract:

Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Doctor in Civil Engineering.

Relevance: 10.00%

Abstract:

The concept of demand response has been drawing attention to active participation in the economic operation of power systems, namely in the context of recent electricity markets and smart grid models and implementations. In these competitive contexts, aggregators are necessary to make the participation of small-size consumers and generation units possible. The methodology proposed in the present paper addresses demand shifting between periods, considering multi-period demand response events; the focus is on the impact in the subsequent periods. A Virtual Power Player operates the network, aggregating the available resources and minimizing the operation costs. The illustrative case study included is based on a scenario of 218 consumers, including generation sources.
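A toy numerical sketch of the multi-period effect the paper focuses on (all values invented): demand reduced during a response event does not vanish but is recovered, in chosen shares, over the subsequent periods.

```python
# Toy illustration: shifting reduced demand from an event period into later ones.
baseline = [100.0, 110.0, 120.0, 115.0, 105.0]  # MW per period (invented)
event_period, reduction = 2, 30.0                # 30 MW shifted out of period 2
recovery = [0.6, 0.4]                            # shares recovered in the next periods

shifted = baseline.copy()
shifted[event_period] -= reduction
for k, share in enumerate(recovery, start=1):
    shifted[event_period + k] += reduction * share  # impact on subsequent periods

print("baseline:", baseline)
print("shifted: ", shifted)  # total energy is preserved across the horizon
```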