932 results for ENERGY-PARTITIONING ANALYSIS


Relevance:

30.00%

Publisher:

Abstract:

The increasing integration of wind energy in power systems can lead to over-generation, especially during off-peak periods. This paper presents a dedicated methodology to identify and quantify this over-generation and to evaluate some of the solutions that can be adopted to mitigate the problem. The methodology is applied to the Portuguese power system, in which wind energy is expected to represent more than 25% of the installed capacity in the near future. The results show that the pumped-hydro units will not provide enough energy storage capacity and, therefore, wind curtailments are expected to occur in the Portuguese system. Additional energy storage devices could be deployed to offset the wind energy curtailments; however, the investment analysis performed shows that they are not economically viable, due to the high capital costs currently involved.
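The off-peak balance the paper quantifies can be illustrated with a minimal sketch. All figures below are hypothetical; the paper's actual Portuguese system data and unit-commitment constraints are not reproduced here.

```python
# Minimal sketch of the over-generation balance described above.
# All numbers are illustrative, not the paper's data.

def overgeneration(load_mw, wind_mw, must_run_mw, pump_capacity_mw):
    """Surplus power exceeding demand after must-run thermal output,
    split into the share absorbed by pumped-hydro and the curtailed rest."""
    surplus = max(0.0, wind_mw + must_run_mw - load_mw)
    pumped = min(surplus, pump_capacity_mw)
    curtailed = surplus - pumped
    return surplus, pumped, curtailed

# Off-peak hour: low demand, high wind, limited pumping capacity.
surplus, pumped, curtailed = overgeneration(
    load_mw=4500, wind_mw=3000, must_run_mw=2500, pump_capacity_mw=800)
# 1000 MW surplus, of which 800 MW is pumped and 200 MW curtailed
```

When the pumping capacity term saturates, the remainder is exactly the curtailment the paper's methodology accumulates over the off-peak hours of the year.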

Relevance:

30.00%

Publisher:

Abstract:

This study builds on a previous experimental work in which embedded cylindrical heaters were applied to a pultrusion machine die and the resulting energetic performance was compared with that achieved with the former heating system, based on planar resistances. The previous work led to the conclusion that the use of embedded resistances significantly enhances the energetic performance of the pultrusion process, yielding a 57% decrease in energy consumption. However, that study was based on an existing pultrusion die, which allowed only a single relative position for the heaters. In the present work, new relative positions for the heaters were investigated in order to optimise the heat distribution and the energy consumption. Finite Element Analysis was applied as an efficient tool to identify the best relative position of the heaters within the die, taking into account the usual process parameters and the control system already tested in the previous study. The analysis was first developed for eight cylindrical heaters arranged in four different location plans. In a second phase, in order to refine the results, a new approach was adopted using sixteen heaters with the same total power. The final results show that the correct positioning of the heaters can contribute an energy consumption reduction of about 10%, decreasing production costs and leading to a better eco-efficiency of the pultrusion process.

Relevance:

30.00%

Publisher:

Abstract:

This study addresses the optimization of the pultrusion manufacturing process from the energy-consumption point of view. The die heating system of external platen heaters commonly used in pultrusion machines is one of the components that contributes most to the high energy consumption of the pultrusion process. Hence, instead of the conventional multi-planar heaters, a new internal die heating system that leads to lower heat losses is proposed. The effect of the number and relative position of the embedded heaters along the die is also analysed in order to establish the optimum arrangement that minimizes both the power rating and the energy consumption. The simulation and optimization processes were largely supported by Finite Element Analysis (FEA) and calibrated against the temperature profile obtained through thermographic imaging. The main outputs of this study show that the use of embedded cylindrical resistances instead of external planar heaters leads to drastic reductions in both the power consumption and the warm-up period of the die heating system. For the analysed die tool and process, savings in energy consumption of up to 60% and warm-up stages of less than half an hour were attained with the new internal heating system. These improvements reduce the power requirements of the pultrusion process, and thus minimize industrial costs and contribute to a more sustainable pultrusion manufacturing industry.

Relevance:

30.00%

Publisher:

Abstract:

Global warming due to high CO2 emissions in recent years has made energy saving a worldwide concern. However, manufacturing processes such as pultrusion necessarily require heat for curing the resin, so the only option available is to make the process as efficient as possible. Different heating systems have been used in pultrusion, the most widely used being planar resistances. The main objective of this study is to develop an alternative heating system and compare it with the former one. Thermography was used to define the temperature profile along the die, and FEA (finite element analysis) made it possible to estimate how much energy is spent with the initial heating system. After this first approach, the die was modified in order to test the new heating system and to check for possible quality problems in the product. This work shows that with the new heating system a significant reduction in setup time is possible, and an energy reduction of about 57% was achieved.

Relevance:

30.00%

Publisher:

Abstract:

The recast Directive 2010/31/EU, which introduces the nearly zero energy building (NZEB) concept for new buildings, is to be fully transposed by all Member States, and stands out as an important measure to promote the reduction of the energy consumption of buildings and to encourage the use of renewable energy. In this study, the applicability of the nearly zero energy building concept to a large office building was tested, along with its impact over a 50-year life cycle.

Relevance:

30.00%

Publisher:

Abstract:

Four Cynara cardunculus clones, two from Portugal and two from Spain, were studied for biomass production and their lignin was characterized. The clones differed in biomass partitioning: the Spanish clones produced more capitula (54.5% vs. 43.9%) and the Portuguese clones more stalks (37.2% vs. 25.6%). The higher heating values (HHV0) of the stalks were similar, ranging from 17.1 to 18.4 MJ/kg. Lignin was studied by analytical pyrolysis (Py-GC/MS(FID)), separately in depithed stalks (stalksDP) and pith. StalksDP had on average higher relative proportions of lignin-derived compounds than pith (23.9% vs. 21.8%), with slightly different lignin monomeric composition: pith samples were richer in syringyl units than stalksDP (64% vs. 53%), with S/G ratios of 2.1 and 1.3, respectively. The H:G:S composition was 7:40:53 in stalksDP and 7:29:64 in pith. The lignin content ranged from 18.8% to 25.5%, enabling a differentiation between clones and provenances.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – This paper analyzes how different European countries are coping with the European Energy Policy, which proposes a set of measures (free energy market, smart meters, energy certificates) to improve energy utilization and management in Europe.
Design/methodology/approach – The paper first reports the general vision, regulations and goals set by Europe to implement the European Energy Policy. It then analyses how some European countries are pursuing those goals through financial, legal, economic and regulatory measures. Finally, the paper draws a comparison between the countries to present a view of how Europe is responding to the emerging energy emergency of the modern world.
Findings – Our analysis of different use cases (countries) showed that European countries are converging towards a common energy policy, even though some countries appear to be further behind than others. In particular, Southern European countries were slowed down by the world financial and economic crisis. Still, it appears that contingency plans were put into action, and Europe as a whole is proceeding steadily towards the common vision.
Research limitations/implications – European countries are applying further cuts to the financing of green technologies, and it is not possible to predict clearly how each country will evolve its support for the European energy policy.
Practical implications – Different countries applied the concepts and measures in different ways. The implementation of the European energy policy has to cope with the resulting plethora of regulations, and a company proposing enhancements to energy management still has to possess robust knowledge of the individual country before being able to export experience and know-how between European countries.
Originality/value – Even though a few surveys on energy measures in Europe already exist in the state of the art, an organic analysis cutting across the different topics of the European Energy Policy is missing. Moreover, this paper highlights how European countries are converging on a common view and provides some details on the differences between the countries, thus helping parties interested in the cross-country transfer of experience and technology for energy management.

Relevance:

30.00%

Publisher:

Abstract:

Thesis submitted to obtain the Master's degree in Electronics and Telecommunications Engineering

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Mechanical Engineering, specialization in Design and Production

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptative learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
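The linear mixing model and the PPI-style pure-pixel search summarized above can be sketched in a few lines. The endmember signatures and abundances below are synthetic, the skewer count is arbitrary, and this is not the VCA algorithm itself, only an illustration of the simplex geometry it exploits.

```python
import numpy as np

# Synthetic linear mixing model X = M @ A, followed by a PPI-style
# random-projection ("skewer") search for the purest pixels.
rng = np.random.default_rng(0)

bands, endmembers, pixels = 50, 3, 200
M = rng.uniform(0.1, 0.9, size=(bands, endmembers))    # endmember signatures

# Abundance fractions: nonnegative and summing to one per pixel,
# the sum-to-one constraint that makes them statistically dependent.
A = rng.dirichlet(np.ones(endmembers), size=pixels).T  # (endmembers, pixels)

X = M @ A                                              # observed mixed spectra

# Project every spectral vector onto random skewers and count how
# often each pixel is the extreme of a projection.
scores = np.zeros(pixels, dtype=int)
for _ in range(500):
    skewer = rng.standard_normal(bands)
    proj = skewer @ X
    scores[np.argmax(proj)] += 1
    scores[np.argmin(proj)] += 1

purest = np.argsort(scores)[-endmembers:]   # candidate pure-pixel indices
```

Because the Dirichlet abundances contain no exactly pure pixel, the search returns the pixels closest to the simplex vertices, which is precisely the failure mode the chapter notes for PPI and N-FINDR when the pure-pixel assumption does not hold.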

Relevance:

30.00%

Publisher:

Abstract:

This document presents a tool able to automatically gather data provided by real energy markets and to generate scenarios, and to capture and improve market players' profiles and strategies by using knowledge discovery processes in databases, supported by artificial intelligence techniques, data mining algorithms and machine learning methods. It provides the means for generating scenarios with different dimensions and characteristics, ensuring the representation of real and adapted markets and of their participating entities. The scenario generator module enhances the MASCEM (Multi-Agent Simulator of Competitive Electricity Markets) simulator, providing a more effective tool for decision support. The achievements from the implementation of the proposed module enable researchers and electricity market participants to analyze data, create realistic scenarios and experiment with them. On the other hand, applying knowledge discovery techniques to real data also allows the improvement of MASCEM agents' profiles and strategies, resulting in a better representation of real market players' behavior. This work aims to improve the understanding of electricity markets and of the interactions among the involved entities through adequate multi-agent simulation.

Relevance:

30.00%

Publisher:

Abstract:

Enterprise and Work Innovation Studies, 6, IET, pp. 9-51

Relevance:

30.00%

Publisher:

Abstract:

Currently, the building stock is responsible for 40% of the total energy consumed in the European Union. Forecasts point to growth in the construction sector, particularly in building construction, which suggests an increase in energy consumption in this area. Important measures, such as Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings, pave the way for reducing energy needs and greenhouse gas emissions. The Directive sets objectives to increase the energy efficiency of the building stock, with the goal that, from 2020 onwards, all new buildings be energy efficient and have a nearly zero energy balance, with particular emphasis on compensation through on-site energy production from renewable sources. This new requirement, known as the nearly zero energy building, is a new incentive on the path to energy sustainability. The techniques and technologies used in building design have a positive impact on the life cycle analysis, namely by minimizing the environmental impact and rationalizing energy consumption. Accordingly, this work analyses the applicability of the nearly zero energy building concept to a large office building and its impact over a 50-year life cycle. Starting from the analysis of studies on energy consumption and on nearly zero energy buildings already built in Portugal, a life cycle analysis was developed for an office building, resulting in a set of proposals for optimizing its energy efficiency and harvesting renewable energy.
The proposed measures were evaluated with the aid of applications such as DIALux, IES VE and PVsyst, in order to verify their impact by comparison with the building's initial energy consumption. Under the initial conditions, the 50-year life cycle analysis of the building yielded, for the operation phase, an energy consumption of 6 MWh/m2 and corresponding CO2 emissions of 1.62 t/m2. With the proposed optimization measures, consumption and the corresponding CO2 emissions were reduced to 5.2 MWh/m2 and 1.37 t/m2, respectively. Although consumption was reduced by the proposed energy optimization measures, it was concluded that the photovoltaic system sized to supply the building cannot meet the building's energy needs at the end of the 50 years.
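A back-of-the-envelope check of the reported life-cycle figures (6 vs. 5.2 MWh/m2 and 1.62 vs. 1.37 t CO2/m2 over 50 years), annualized and expressed as relative savings; only the operation-phase totals stated above are used:

```python
# Annualize the 50-year operation-phase totals reported in the abstract
# and compute the relative savings of the optimization measures.

LIFETIME_Y = 50

baseline_mwh_m2, optimized_mwh_m2 = 6.0, 5.2      # energy, MWh/m2
baseline_co2_t_m2, optimized_co2_t_m2 = 1.62, 1.37  # emissions, t CO2/m2

annual_kwh_m2 = baseline_mwh_m2 * 1000 / LIFETIME_Y    # baseline, kWh/m2/year
energy_saving = 1 - optimized_mwh_m2 / baseline_mwh_m2
co2_saving = 1 - optimized_co2_t_m2 / baseline_co2_t_m2

print(round(annual_kwh_m2), round(energy_saving * 100), round(co2_saving * 100))
# prints: 120 13 15
```

So the measures correspond to roughly a 13% energy and 15% CO2 reduction on a 120 kWh/m2/year baseline, consistent with the conclusion that they alone do not close the gap to a nearly zero balance.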

Relevance:

30.00%

Publisher:

Abstract:

Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with a limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms is optimising the energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states, trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and the sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are reached over the first-fit algorithm, which has been shown, in previous works, to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
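The first (dynamic-energy) phase of such a two-phase approach could be sketched as follows. The power model, the task parameters (worst-case cycles and period) and the core set are hypothetical simplifications, and the leakage-aware refinement of the second phase is omitted entirely; this is not the paper's algorithm, only an illustration of greedy energy-aware allocation under a utilization constraint.

```python
# Sketch of a greedy dynamic-energy allocation phase on a heterogeneous
# platform. Cores differ in frequency and active power; tasks are
# (cycles, period) pairs. All numbers are illustrative.

def allocate(tasks, cores):
    """Assign each task to the feasible core (utilization <= 1) that
    adds the least dynamic energy per job."""
    assignment = {}
    load = [0.0] * len(cores)                  # utilization per core
    for t, (cycles, period) in enumerate(tasks):
        best, best_energy = None, None
        for c, core in enumerate(cores):
            util = cycles / (core['freq'] * period)
            if load[c] + util > 1.0:           # core would be overloaded
                continue
            exec_time = cycles / core['freq']
            energy = core['power'] * exec_time  # dynamic energy per job
            if best is None or energy < best_energy:
                best, best_energy = c, energy
        if best is None:
            raise ValueError(f"task {t} is not schedulable")
        assignment[t] = best
        load[best] += cycles / (cores[best]['freq'] * period)
    return assignment

cores = [{'freq': 2e9, 'power': 3.0},   # fast, power-hungry core
         {'freq': 1e9, 'power': 1.0}]   # slow, efficient core
tasks = [(2e8, 0.5), (1e8, 0.5)]
print(allocate(tasks, cores))
# Both tasks land on the efficient core, which minimizes dynamic energy.
```

A second phase in the spirit of the paper would then migrate tasks between cores to lengthen idle intervals and reach deeper sleep states, accepting a small dynamic-energy penalty in exchange for a larger leakage-energy reduction.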