873 results for Energy Efficient Algorithms
Abstract:
Spiking Neural Networks (SNNs) are bio-inspired Artificial Neural Networks (ANNs) that use discrete spiking signals, akin to neuron communication in the brain, making them well suited to real-time, energy-efficient Cyber-Physical Systems (CPSs). This thesis explores their potential in Structural Health Monitoring (SHM), leveraging low-cost MEMS accelerometers for early damage detection in motorway bridges. The study focuses on Long Short-Term SNNs (LSNNs), whose complex learning processes pose challenges. A comparison of LSNNs with other ANN models and training algorithms for SHM indicates that LSNNs identify damage as effectively as ANNs trained with traditional methods. Additionally, an optimized embedded LSNN implementation demonstrates a 54% reduction in execution time, albeit with longer pre-processing due to spike-based encoding. SNNs are also applied to UAV obstacle avoidance, trained directly with a Reinforcement Learning (RL) algorithm on event-based input from a Dynamic Vision Sensor (DVS). Performance evaluation against Convolutional Neural Networks (CNNs) highlights the SNNs' superior energy efficiency, with a 6x decrease in energy consumption. The study also investigates the latency and throughput of embedded SNN implementations in real-world deployments, emphasizing their potential for energy-efficient monitoring systems. This research advances SHM and UAV obstacle avoidance through the efficient information processing and decision-making capabilities of SNNs within CPS domains.
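The spike-based encoding pre-processing mentioned above can be illustrated with a minimal delta-modulation sketch, a common way to turn an analog accelerometer trace into ON/OFF spike trains. The threshold value and toy signal below are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Delta-modulation spike encoding: emit a +1 (ON) or -1 (OFF) spike
    whenever the signal moves more than `threshold` away from the last
    reconstruction level, similar in spirit to DVS event generation."""
    spikes = np.zeros(len(signal), dtype=int)
    level = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if x - level >= threshold:
            spikes[i] = 1       # ON spike
            level += threshold
        elif level - x >= threshold:
            spikes[i] = -1      # OFF spike
            level -= threshold
    return spikes

t = np.linspace(0, 1, 200)
accel = 0.5 * np.sin(2 * np.pi * 3 * t)   # toy accelerometer trace
events = delta_encode(accel, threshold=0.05)
print(int(np.sum(events != 0)), "spikes from", len(accel), "samples")
```

Note that the per-sample loop over the whole signal is exactly the extra pre-processing cost the abstract refers to: the encoding must run before the SNN ever sees the data.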
Abstract:
An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| ≤ 2n − 2 we have |Γ_G(X)| ≥ (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the embedding of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning the embedding of nearly spanning bounded-degree trees, whose proof makes use of the Friedman-Pippenger theorem. We also show a construction, inspired by Wigderson-Zuckerman expander graphs, for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree-embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
Abstract:
In this work, a wide-ranging analysis of local-search multiuser detection (LS-MUD) for direct-sequence/code-division multiple access (DS/CDMA) systems under multipath channels is carried out, considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, Eb/N0, the near-far effect, the number of fingers of the Rake receiver, and errors in the channel-coefficient estimates is verified. A comparative analysis of the bit-error-rate (BER) and complexity trade-off is carried out among LS, the genetic algorithm (GA), and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost-function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and opening new perspectives for MUD implementation. Computational complexity is expressed in terms of the number of operations needed to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. Together with this, its deterministic strategy and absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
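The local-search idea behind LS-MUD can be sketched as a greedy bit-flip search over the users' hard decisions, minimizing the standard maximum-likelihood MUD cost b^T R b − 2 b^T y. This is a generic sketch under that textbook cost function, not the paper's simplified s-LS update; the correlation matrix and noise level below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_mud(y, R, b0):
    """Greedy bit-flip local search for CDMA multiuser detection.

    Minimizes the standard MUD log-likelihood cost
        Omega(b) = b^T R b - 2 b^T y
    by flipping one user's bit at a time until no flip improves the cost.
    y: matched-filter outputs, R: code cross-correlation matrix,
    b0: initial hard decisions in {-1, +1}.
    """
    b = b0.copy()
    cost = b @ R @ b - 2 * b @ y
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            trial = b.copy()
            trial[k] = -trial[k]
            c = trial @ R @ trial - 2 * trial @ y
            if c < cost:        # keep the flip only if it lowers the cost
                b, cost = trial, c
                improved = True
    return b

# toy 3-user system with mild cross-correlation between spreading codes
R = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
b_true = np.array([1, -1, 1])
y = R @ b_true + 0.05 * rng.standard_normal(3)
b_hat = ls_mud(y, R, np.sign(y).astype(int))
print(b_hat)
```

Because each iteration only ever accepts cost-decreasing flips, the search is deterministic given its starting point, which mirrors the abstract's point that the deterministic, parameter-free behavior of LS is an advantage over GA and PSO.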
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small-timestep updates are performed by interpolating the displacement at neighbouring large-timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy-conserving algorithms that avoid the first problem of statistical stability; however, these sacrifice accuracy to achieve stability. An approach to conserving momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized-alpha method, it is possible to gain stability by dissipating the high-frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The increasing compactness of information-technology equipment and the simultaneous rise in processor power consumption make it necessary to ensure proper distribution of cold air, removal of hot air, adequate cooling capacity, and lower energy consumption. Taking cogeneration as an energy-efficient alternative to other methods of energy production, this work analyses the profitability of integrating a cogeneration system into a data centre.
Abstract:
In a world where telecommunication networks are constantly evolving and growing, their energy consumption also increases. With the evolution of both the networks and their equipment, the cost of deploying a network has fallen to the point where the main obstacle to network growth is now the cost of maintenance and operation. In recent decades, efforts have been made to make networks increasingly energy-efficient, thereby reducing both their operating costs and the problems related to the energy sources that power them. In this context, the main goal of this work is to study the energy consumption of IP-over-WDM networks, in particular routing methods that are energy-efficient. We formalized an optimization model that was evaluated on different network topologies. The analysis showed that in most cases a consumption reduction of around 25% is achievable.
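The core idea of energy-efficient routing in this setting, consolidating traffic onto already-active links so that unused ones can be put to sleep, can be sketched with a toy greedy heuristic. The power costs, topology, and consolidation rule below are illustrative assumptions, not the abstract's actual optimization model.

```python
import heapq

def dijkstra(adj, cost, src, dst):
    """Shortest path by per-link cost; returns the node list src..dst."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + cost[frozenset((u, v))]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1]

def route_demands(adj, demands, p_active=1.0, p_extra=0.1):
    """Greedy energy-aware routing: links already switched on are cheap
    (p_extra), while dark links pay the activation power p_active, so
    traffic is consolidated and unused links can sleep."""
    active = set()
    for src, dst in demands:
        cost = {frozenset((u, v)): (p_extra if frozenset((u, v)) in active
                                    else p_active)
                for u in adj for v in adj[u]}
        path = dijkstra(adj, cost, src, dst)
        active |= {frozenset(e) for e in zip(path, path[1:])}
    return active

# toy 4-node ring topology with 4 undirected links
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
links_on = route_demands(adj, [(0, 2), (0, 1), (1, 2)])
print(len(links_on), "of 4 links active")
```

In this toy run, all three demands end up sharing the 0-1-2 path, leaving two of the four links dark; a real model would instead solve this jointly as an ILP, but the cost asymmetry is the same lever.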
Abstract:
Industry has been undergoing changes in its production processes. Today, more than ever, production facilities must be as efficient as possible, rationalizing energy use while cutting costs. The objective of this dissertation is therefore an energy audit of the rubber-sheet plant and the optimization of the painting sector at the company Monteiro Ribas. Energy audits are widely used to detect energy waste; the optimization surveys potential changes and the application of energy-efficiency technologies. The aim is to curb energy consumption without affecting production, since the company is classified as an intensive energy consumer. Monteiro Ribas consumes natural gas, steam, and electricity, with steam the most consumed form of energy, followed by electricity and natural gas, in the proportions of 55%, 41%, and 4%, respectively. The optimization made it possible to study the influence of several variables on annual energy consumption and thus to put forward improvement proposals. One proposal analysed was the thermal insulation of certain valves, which would save 79,263.4 kWh/year. The installation of electronic ballasts was also proposed, which would reduce electricity consumption by 29,509.92 kWh/year. Among the machines used in the painting sector, the IRK 6 oven proved to be one of the largest energy consumers, so the influence of the speed at which the rubber sheets travel through this machine was analysed, as well as a change in its power by reducing the number of cassette heaters in the oven.
Abstract:
Given the growing concern with energy rationalization, it is important to adapt buildings to their future use by choosing the right materials and techniques for construction and/or renovation. Today, with technological development, the professional services and materials available to designers and builders allow the effective implementation of solutions with a high impact on building energy efficiency, in an accessible and affordable way. In this area, regulation is essential to control and rate systems energetically, mitigating oversizing and the resulting waste, so as to contribute effectively to the intended environmental and economic improvements. The concern is, without doubt, to turn the investment into added savings in the medium to long term while providing the same comfort levels. Air-conditioning techniques and all the associated equipment weigh heavily on costs and on operation over time. Technical building-management systems can only take full advantage of the whole structure, making it reliable, if they are correctly designed. This work aims to make the reader aware of the practical issues involved in correctly sizing solutions that contribute to the energy efficiency of buildings, illustrated with a case study: a school-centre building constructed to meet the requirements of the government-promoted school-infrastructure renovation programme. This awareness takes the form of concrete proposals for alternative solutions that could have been adopted at the design stage of the case study, taking into account the costs and operability of the systems and their location, and that could have improved the energy efficiency of the whole building, as well as cross-cutting solutions applicable in other situations.
All the suggestions aim at simplification, with the goal of contributing to a better short- and long-term rationalization of the available resources.
Abstract:
Final master's project submitted for the degree of Master in Mechanical Engineering, specialization in Energy, Air Conditioning and Refrigeration
Abstract:
Sleep states are emerging as a first-class design choice in energy minimization. A side effect of this is that the release behavior of the system is affected, and subsequently the preemption relations between tasks. As a first step, we have investigated how the number of preemptions of tasks in the system changes at runtime, using an existing procrastination approach that utilizes sleep states for energy-saving purposes. Our solution resulted in substantial savings in preemptions, and we expect even higher yields from alternative energy-saving algorithms. This work is intended to form the basis of future research that aims to bound the number of preemptions at analysis time and, subsequently, to employ this bound in the analysis to reduce the amount of system utilization reserved to account for preemption delay.
Abstract:
The resource-constrained project scheduling problem (RCPSP) is a difficult problem in combinatorial optimization to which extensive investigation has been devoted in the pursuit of efficient algorithms. Over the last few years many heuristic procedures have been developed for this problem, but these procedures still often fail to find near-optimal solutions. This paper proposes a genetic algorithm for the resource-constrained project scheduling problem. The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities and delay times of the activities are defined by the genetic algorithm. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
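The random-key representation can be sketched as follows: each activity gets a real-valued gene, and a decoder turns the key vector into a precedence-feasible schedule by always starting the eligible activity with the highest key. This is a deliberately simplified serial decoder ignoring resource limits and the paper's delay times; the toy project data are assumptions.

```python
import random

def decode(keys, durations, precedences):
    """Decode a random-key chromosome into a schedule.

    keys[i] is activity i's priority (higher = scheduled earlier among
    eligible activities); precedences maps activity -> set of predecessors.
    Serial schedule generation: repeatedly pick the eligible activity with
    the highest key and start it as soon as all its predecessors finish
    (resource constraints are ignored in this sketch).
    """
    n = len(keys)
    finish = {}                       # activity -> finish time
    while len(finish) < n:
        eligible = [i for i in range(n)
                    if i not in finish and precedences[i].issubset(finish)]
        i = max(eligible, key=lambda a: keys[a])
        start = max((finish[p] for p in precedences[i]), default=0)
        finish[i] = start + durations[i]
    return finish

# toy 4-activity project: 0 -> {1, 2} -> 3
durations = [2, 3, 1, 2]
precedences = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}
keys = [random.random() for _ in range(4)]   # one random-key chromosome
finish = decode(keys, durations, precedences)
print(finish[3])  # project makespan for this chromosome
```

A GA then evolves only the key vectors with standard real-valued crossover and mutation; every chromosome decodes to a feasible schedule, which is the representation's main appeal.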
Abstract:
This paper presents a methodology for applying scheduling algorithms using Monte Carlo simulation. The methodology is based on a decision support system (DSS) and combines a genetic algorithm with a new local search based on the Monte Carlo method. The methodology is applied to the job shop scheduling problem (JSSP), a difficult problem in combinatorial optimization to which extensive investigation has been devoted in the pursuit of efficient algorithms. The methodology is tested on a set of standard instances taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed methodology. The DSS developed can be utilized in a common industrial or construction environment.
Abstract:
This paper presents an optimization approach for the job shop scheduling problem (JSSP), a difficult problem in combinatorial optimization to which extensive investigation has been devoted in the pursuit of efficient algorithms. The proposed approach is based on a genetic algorithm technique. Scheduling rules such as SPT and MWKR are integrated into the process of genetic evolution. The chromosome representation of the problem is based on random keys. Schedules are constructed using a priority rule in which the priorities and delay times of the operations are defined by the genetic algorithm, via a procedure that generates parameterized active schedules. After a schedule is obtained, a local search heuristic is applied to improve it. The approach is tested on a set of standard instances taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed approach.
Abstract:
Master's degree in Electrical Engineering – Electrical Power Systems