978 results for EFFICIENCY OPTIMIZATION


Relevance: 30.00%

Abstract:

Wireless sensor networks (WSNs) are becoming widely adopted for many applications, including complicated tasks like building energy management. One major concern for WSN technologies, however, is their short lifetime and high maintenance cost due to limited battery energy. One solution is to scavenge ambient energy, which is then rectified to power the WSN. The objective of this thesis was to investigate the feasibility of an ultra-low-energy-consumption power management system suitable for harvesting sub-mW photovoltaic and thermoelectric energy to power WSNs. To achieve this goal, energy harvesting system architectures were analyzed. Detailed analysis of energy storage units (ESUs) led to an innovative ESU solution for the target applications. A battery-less, long-lifetime ESU and its associated power management circuitry, including a fast-charge circuit, a self-start circuit, an output voltage regulation circuit and a hybrid ESU using a combination of a super-capacitor and a thin-film battery, were developed to achieve continuous operation of the energy harvester. Low start-up voltage DC/DC converters were developed for 1 mW-level thermoelectric energy harvesting. The novel method of altering the thermoelectric generator (TEG) configuration in order to match impedance has been verified in this work. Novel maximum power point tracking (MPPT) circuits, exploiting the fractional open-circuit voltage method, were developed specifically for sub-1 mW photovoltaic energy harvesting applications. An MPPT energy model was developed and verified against both SPICE simulations and implemented prototypes. Both the indoor-light and the thermoelectric energy harvesting methods proposed in this thesis were implemented in prototype devices. The improved indoor-light energy harvester prototype demonstrates 81% MPPT conversion efficiency at 0.5 mW input power. This improvement makes light energy harvesting from small energy sources (e.g., a credit-card-size solar panel under 500 lux indoor lighting) a feasible approach. The 50 mm × 54 mm thermoelectric energy harvester prototype generates 0.95 mW when placed on a 60 °C heat source, with 28% conversion efficiency. Both prototypes can continuously power a WSN for building energy management applications in a typical office building environment. In addition to the hardware development, a comprehensive system energy model was developed. This system energy model can be used not only to predict the available and consumed energy under real-world ambient conditions, but also to optimize the system design and configuration. The model has been verified against indoor photovoltaic energy harvesting system prototypes in long-term deployment experiments.
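For readers unfamiliar with the fractional open-circuit voltage (FOCV) technique mentioned above, the following minimal Python sketch illustrates the idea; the k factor, voltages and duty-cycle step are illustrative assumptions, not values from the thesis.

    def focv_setpoint(v_oc, k=0.76):
        """Estimate the maximum-power-point voltage as a fixed fraction of the
        open-circuit voltage; k ~ 0.71-0.78 is typical for silicon PV cells."""
        return k * v_oc

    def regulate(v_panel, v_ref, duty, step=0.01):
        """Nudge the converter duty cycle so the panel voltage tracks v_ref.
        Raising the duty cycle is assumed to draw more current (boost stage),
        pulling the panel voltage down."""
        if v_panel > v_ref:
            return min(duty + step, 0.95)
        if v_panel < v_ref:
            return max(duty - step, 0.05)
        return duty

    # Periodically unload the panel, sample V_oc, then regulate toward k*V_oc.
    v_ref = focv_setpoint(4.1)               # V_oc sampled while unloaded (assumed)
    duty = 0.5
    for v_panel in (3.6, 3.3, 3.1, 3.15):    # measured panel voltages (assumed)
        duty = regulate(v_panel, v_ref, duty)
    print(f"target V_mpp ~ {v_ref:.2f} V, duty ~ {duty:.2f}")

The appeal of FOCV for sub-mW harvesting is that it needs no multiplication or power measurement, only a periodic open-circuit sample and a comparator-style adjustment.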

Relevance: 30.00%

Abstract:

In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks: a vector field formulation that models communication in the network, with routing performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that, in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of our vector-field model, we offer a scheme for energy-efficient routing: the permittivity coefficient is set to a higher value in regions of the network where nodes have high residual energy, and to a low value where the nodes do not have much energy left. Our simulations show that this method yields a significant increase in network lifetime compared to the shortest-path and weighted-shortest-path schemes. Our initial focus is on the case of a single destination in the network; we later extend the approach to multiple destinations. With multiple destinations, the network must be partitioned into several areas known as the regions of attraction of the destinations, each destination being responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in defining the regions of attraction and deciding how much communication load to assign to each destination so as to optimize network performance. We use our vector-field model to solve this optimization problem: we define a vector field which is conservative, and hence can be written as the gradient of a scalar field (a potential field), and we show that in the optimal assignment of the network's communication load to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of the vector-field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the destination locations; based on this fact, we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of the proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
We introduce the notion of responsiveness for TCP aggregates, defined as the degree to which a TCP aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which a few packets are intentionally dropped from the aggregate and the resulting rate decrease of that aggregate is measured. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make it robust by borrowing ideas from the CDMA approach to multiple-access channels in communication theory, and call the resulting responsiveness test the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to obtain methods for estimating the proportion of a TCP traffic aggregate that does not conform to protocol specifications and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the aggregate's response. We offer two methods for conformance testing. In the first, the perturbation tests are applied to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, the perturbation tests are applied to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find these signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
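To make the electrostatics analogy concrete, a generic instance of the formulation reads as follows (the authors' exact quadratic form is not given in the abstract, so this is a representative sketch). With D the information-flux vector field, rho the net source density (positive at sensors, negative at destinations) and epsilon the permittivity-like weight,

    \min_{\mathbf{D}} \; J \;=\; \int_{A} \frac{|\mathbf{D}|^{2}}{2\,\epsilon(x,y)}\, dA
    \qquad \text{subject to} \qquad \nabla \cdot \mathbf{D} = \rho .

The minimizer satisfies \mathbf{D} = -\epsilon \nabla \phi for a scalar potential \phi, so \nabla \cdot (\epsilon \nabla \phi) = -\rho, which is Poisson's equation for a non-homogeneous dielectric; raising \epsilon where residual energy is high makes those regions carry more of the traffic flux.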

Relevance: 30.00%

Abstract:

This paper presents an analysis of biofluid behavior in a T-shaped microchannel device and a design optimization for improved particle-liquid separation performance. The biofluid is modeled as a single-phase, shear-rate-dependent non-Newtonian flow with blood properties. The separation of red blood cells from plasma is evident from the biofluid distribution in the microchannels, assessed against various relevant effects and findings, including the Zweifach-Fung bifurcation law, the Fåhræus effect, the Fåhræus-Lindqvist effect and the cell-free layer phenomenon. Modeling of the initial device shows that the T-microchannel device can separate red blood cells from plasma, but the separation efficiency varies widely among the bifurcations. To address this imbalanced performance, a design optimization is conducted, comprising a series of simulations to investigate the effect of the lengths of the main and branch channels on biofluid behavior and a search for an improved design with optimal separation performance. It is found that changing the relative lengths of the branch channels improves both the uniformity of the flow-rate ratio among bifurcations and the reduction of the flow-velocity difference between branch channels, whereas extending the length of the main channel beyond the bifurcation region only improves the uniformity of the flow-rate ratio.
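As a back-of-the-envelope illustration of why branch lengths control the flow-rate ratio at a bifurcation, consider the Hagen-Poiseuille resistance of each branch; the dimensions and viscosity in this Python sketch are assumptions for illustration, not the paper's device values.

    import math

    def hydraulic_resistance(mu, length, radius):
        """R = 8*mu*L / (pi * r^4) for laminar flow in a circular channel."""
        return 8.0 * mu * length / (math.pi * radius**4)

    mu = 3.5e-3                      # Pa*s, effective blood viscosity (assumed)
    r = 20e-6                        # m, channel radius (assumed)
    L_plasma, L_cell = 4e-3, 1e-3    # m, branch lengths (assumed)

    R_plasma = hydraulic_resistance(mu, L_plasma, r)
    R_cell = hydraulic_resistance(mu, L_cell, r)

    # Both branches see the same pressure drop, so flow splits inversely with R.
    print(f"Q_cell / Q_plasma ~ {R_plasma / R_cell:.1f}")

Once the flow-rate ratio exceeds roughly 2.5:1, the Zweifach-Fung effect draws the cells preferentially into the faster branch, leaving nearly cell-free plasma in the slower one.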

Relevance: 30.00%

Abstract:

This paper reviews recent experimental activity in the optimization, control, and application of laser-accelerated proton beams, carried out at the Rutherford Appleton Laboratory and at the Laboratoire pour l'Utilisation des Lasers Intenses 100 TW facility in France. In particular, experiments have investigated the role of the plasma scale length at the rear of the target in reducing target normal sheath acceleration (TNSA) efficiency. The results match recent theoretical predictions and inform the feasibility of proton fast-ignition applications. Experiments aimed at controlling the divergence of the proton beams have investigated the use of a laser-triggered microlens, which employs laser-driven transient electric fields in cylindrical geometry, enabling the emitted protons to be focused and monochromatic beamlets to be selected from the broad-spectrum beam. This approach could be advantageous for a variety of applications. The use of laser-driven protons as a particle probe for transient field detection has been developed and applied under a number of experimental conditions. Recent work in this area has focused on the detection of large-scale self-generated magnetic fields in laser-produced plasmas and on the investigation of fields associated with the propagation of relativistic electrons, both at the surface and in the bulk of targets irradiated by high-power laser pulses.

Relevance: 30.00%

Abstract:

Coxian phase-type distributions are becoming a popular means of representing survival times within a health-care environment. They are favoured because they represent a distribution as a system of phases and allow an easy visual representation of the rate of flow of patients through a system. Difficulties arise, however, in determining the parameter estimates of the Coxian phase-type distribution. This paper examines ways of making the fitting of the Coxian phase-type distribution less cumbersome by outlining the different software packages and algorithms available to perform the fit and assessing their capabilities through a number of performance measures. The performance measures rate each of the methods and help identify the most efficient. Conclusions drawn from these measures suggest that SAS is the most robust package: it has a high rate of convergence in each of the four example model fits considered, short computational times, detailed output, convergence-criteria options, and the ability to switch between different algorithms.
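For readers unfamiliar with the model being fitted, the following Python sketch builds an n-phase Coxian sub-generator and evaluates its survival function; the phase count and rates are illustrative and do not come from the paper (requires numpy and scipy).

    import numpy as np
    from scipy.linalg import expm

    def coxian_subgenerator(lam, mu):
        """Sub-generator S of an n-phase Coxian distribution.
        lam[i]: rate of moving from phase i to phase i+1 (length n-1).
        mu[i]:  rate of absorption (leaving the system) from phase i (length n)."""
        n = len(mu)
        S = np.zeros((n, n))
        for i in range(n):
            onward = lam[i] if i < n - 1 else 0.0
            S[i, i] = -(onward + mu[i])
            if i < n - 1:
                S[i, i + 1] = lam[i]
        return S

    def survival(S, t):
        """P(T > t) = alpha * expm(S*t) * 1, starting in phase 1."""
        n = S.shape[0]
        alpha = np.zeros(n)
        alpha[0] = 1.0
        return float(alpha @ expm(S * t) @ np.ones(n))

    # Example: three phases with illustrative rates (per day).
    S = coxian_subgenerator(lam=[0.8, 0.3], mu=[0.1, 0.2, 0.5])
    for t in (1.0, 5.0, 10.0):
        print(f"P(stay > {t} days) = {survival(S, t):.3f}")

Fitting then amounts to maximizing the likelihood of observed survival times over lam and mu, which is the step where the packages and algorithms compared in the paper differ.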

Relevance: 30.00%

Abstract:

Traditionally, the optimization of a turbomachinery engine casing for tip clearance has involved either two-dimensional transient thermomechanical simulations or three-dimensional mechanical simulations. This paper illustrates that three-dimensional transient whole-engine thermomechanical simulations can be used within tip-clearance optimizations, and that the efficiency of such optimizations can be improved when a multifidelity surrogate modeling approach is employed. These simulations are employed in conjunction with a rotor suboptimization using surrogate models of rotor-dynamics performance, stress, mass and transient displacements, and an engine parameterization.
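The multifidelity idea can be sketched in a few lines of Python: fit a cheap surrogate to many low-fidelity evaluations and correct it with a handful of high-fidelity ones. The model forms and stand-in functions below are illustrative assumptions, not the paper's surrogates.

    import numpy as np

    def lo_fi(x):   # stand-in for a cheap 2-D thermomechanical simulation
        return np.sin(3 * x) + 0.5 * x

    def hi_fi(x):   # stand-in for an expensive 3-D whole-engine simulation
        return lo_fi(x) + 0.3 * x**2 - 0.1

    x_lo = np.linspace(0.0, 1.0, 50)        # many cheap samples
    x_hi = np.array([0.0, 0.5, 1.0])        # few expensive samples

    # Cheap surrogate fitted to low-fidelity data, plus an additive correction
    # fitted to the sparse high-fidelity residuals.
    p_lo = np.polynomial.Polynomial.fit(x_lo, lo_fi(x_lo), deg=5)
    p_corr = np.polynomial.Polynomial.fit(x_hi, hi_fi(x_hi) - p_lo(x_hi), deg=2)

    def surrogate(x):
        return p_lo(x) + p_corr(x)

    print(f"surrogate {surrogate(0.25):.3f} vs hi-fi {hi_fi(0.25):.3f}")

The optimizer then queries the corrected surrogate, reserving the expensive whole-engine transient runs for a few calibration points.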

Relevance: 30.00%

Abstract:

Non-Volatile Memory (NVM) technology holds promise to replace SRAM and DRAM at various levels of the memory hierarchy. The interest in NVM is motivated by the difficulty of scaling DRAM beyond 22 nm and, in the long term, by a lower cost per bit. While offering higher density and negligible static power (leakage and refresh), NVM suffers from increased latency and energy per memory access. This paper develops energy and performance models of memory systems and applies them to understand the energy efficiency of replacing or complementing DRAM with NVM. Our analysis focuses on the application of NVM in main memory. We demonstrate that NVM such as STT-RAM and RRAM is energy-efficient for memory sizes commonly employed in servers and high-end workstations, but PCM is not. Furthermore, the model is well suited to quickly evaluating the impact of changes to the model parameters, which may be achieved through optimization of the memory architecture, and to determining the key parameters that impact system-level energy and performance.
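A first-order version of such an energy model is easy to write down; the Python sketch below compares an assumed DRAM against an assumed STT-RAM, with rough illustrative parameters rather than the paper's measured values.

    def memory_energy(accesses, seconds, e_read, e_write, p_static,
                      write_frac=0.3):
        """Total energy (J): per-access dynamic energy plus static power over time."""
        reads = accesses * (1.0 - write_frac)
        writes = accesses * write_frac
        return reads * e_read + writes * e_write + p_static * seconds

    accesses, seconds = 5e7, 1.0
    dram = memory_energy(accesses, seconds, e_read=15e-9, e_write=15e-9,
                         p_static=1.5)      # refresh + leakage dominate (assumed)
    sttram = memory_energy(accesses, seconds, e_read=20e-9, e_write=60e-9,
                           p_static=0.05)   # negligible static power (assumed)
    print(f"DRAM ~ {dram:.2f} J, STT-RAM ~ {sttram:.2f} J")

With these assumed numbers, the NVM's higher per-access energy is more than offset by the absence of refresh and leakage; at much higher access rates the balance tips back toward DRAM, which is exactly the kind of parameter sensitivity the model is designed to expose.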

Relevance: 30.00%

Abstract:

Hydrous cerium oxide (HCO) was synthesized by intercalation of solutions of cerium(III) nitrate and sodium hydroxide and evaluated as an adsorbent for the removal of hexavalent chromium from aqueous solutions. Simple batch experiments and a 2⁵ factorial experimental design were employed to screen the variables affecting Cr(VI) removal efficiency. The effects of the process variables (solution pH, initial Cr(VI) concentration, temperature, adsorbent dose and ionic strength) were examined. Using the experimental results, a linear mathematical model representing the influence of the different variables and their interactions was obtained. Analysis of variance (ANOVA) demonstrated that Cr(VI) adsorption increases significantly with decreasing solution pH, initial concentration and adsorbent dose, but decreases slightly with increasing temperature and ionic strength. The optimization study indicates a maximum removal of 99% at pH 2, 20 °C, 1.923 mM initial metal concentration and a sorbent dose of 4 g/dm³. At these optimal conditions, Langmuir, Freundlich and Redlich-Peterson isotherm models were fitted. The maximum adsorption capacity of Cr(VI) on HCO, calculated from the Langmuir isotherm model, was 0.828 mmol/g. Desorption experiments indicated that the HCO adsorbent can be regenerated (up to 85%) using 0.1 M NaOH solution. The adsorption interactions between the surface sites of HCO and the Cr(VI) ions were found to be a combined effect of anion exchange and surface complexation with the formation of an inner-sphere complex.
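For reference, the Langmuir model behind the capacity figure above has the standard form below; the affinity constant b is not quoted in the abstract, so the numeric value in the note that follows is an assumption for illustration only.

    q_e \;=\; \frac{q_{\max}\, b\, C_e}{1 + b\, C_e}, \qquad q_{\max} = 0.828\ \text{mmol/g}

Here C_e is the equilibrium Cr(VI) concentration and q_e the equilibrium uptake; for example, with an assumed b = 5 dm³/mmol the model predicts half-saturation (q_e = q_max / 2) at C_e = 1/b = 0.2 mM.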

Relevance: 30.00%

Abstract:

Here we consider the numerical optimization of active surface plasmon polariton (SPP) trench waveguides suited for integration with luminescent polymers, for use as highly localized SPP source devices in short-range communication integrated circuits. Numerical analysis of the SPP modes within trench waveguide systems provides detailed information on the mode field components, effective indices, propagation lengths and mode areas. Such trench waveguide systems offer extremely high confinement with propagation on length scales appropriate to local interconnects, along with coupling efficiencies of dipolar emitters to the guided plasmonic modes that can approach 80%. The large Purcell factor exhibited by these structures will further lead to faster modulation capabilities and an increased quantum yield, beneficial for the proposed plasmon-emitting diode, a plasmonic analog of the light-emitting diode. The confinement of the studied guided modes is on the order of 50 nm, and the delay over the shorter 5 μm length scales will be on the order of 0.1 ps for the slowest propagating modes of the system, and significantly less for the faster modes.
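The quoted delay figure is consistent with a simple group-velocity estimate:

    v_g \;=\; \frac{L}{\tau} \;=\; \frac{5\ \mu\text{m}}{0.1\ \text{ps}} \;=\; 5\times 10^{7}\ \text{m/s} \;\approx\; \frac{c}{6}

i.e. the slowest guided modes propagate at roughly one sixth of the vacuum speed of light, with the faster modes correspondingly quicker.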

Relevance: 30.00%

Abstract:

This paper presents a surrogate-model-based optimization of a doubly-fed induction generator (DFIG) machine winding design for maximizing power yield. Based on site-specific wind-profile data and the machine's previous operational performance, the DFIG's stator and rotor windings are optimized for rewinding purposes so as to match maximum efficiency with the operating conditions. Particle swarm optimization (PSO)-based surrogate optimization techniques are used in conjunction with the finite element method (FEM) to optimize the machine design using the limited available information on the site-specific wind profile and generator operating conditions. A response-surface method in the surrogate model is developed to formulate the design objectives and constraints. In addition, the machine tests and efficiency calculations follow IEEE Standard 112-B. Numerical and experimental results validate the effectiveness of the proposed techniques.
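The PSO loop at the heart of such a surrogate-based search is compact; the Python sketch below optimizes a stand-in response surface in a normalized two-variable design space. The objective, bounds and PSO coefficients are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(1)

    def surrogate_loss(x):
        """Stand-in response surface: distance from a fictitious optimum."""
        return np.sum((x - np.array([0.3, -0.2]))**2, axis=-1)

    n, dim, iters = 20, 2, 100
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social weights
    x = rng.uniform(-1.0, 1.0, (n, dim))      # particle positions
    v = np.zeros_like(x)                      # particle velocities
    pbest, pbest_f = x.copy(), surrogate_loss(x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -1.0, 1.0)
        f = surrogate_loss(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print("best normalized design ~", np.round(gbest, 3))

In the real workflow each surrogate_loss evaluation would be a response-surface prediction calibrated against FEM runs, so the swarm never calls the expensive solver directly.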

Relevance: 30.00%

Abstract:

Modern control methods like optimal control and model predictive control (MPC) provide a framework for simultaneously regulating tracking performance and limiting control energy, and have thus been widely deployed in industrial applications. Yet, due to their simplicity and robustness, conventional P (proportional) and PI (proportional-integral) controllers are still the most common methods used in many engineering systems, such as electric power systems, automotive applications, and heating, ventilation and air conditioning (HVAC) for buildings, where energy efficiency and energy saving are critical issues. Little has been done so far, however, to explore the effect of their parameter tuning on both system performance and control energy consumption, and how these two objectives are correlated within the P and PI control framework. In this paper, P and PI controllers are designed with simultaneous consideration of these two aspects. Two case studies are investigated in detail: the control of voltage source converters (VSCs) for transmitting offshore wind power to an onshore AC grid through high-voltage DC links, and the control of HVAC systems. The results reveal that a better trade-off between tracking performance and control energy can be achieved through a proper choice of the P and PI controller parameters.
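The trade-off is easy to reproduce on a toy first-order plant dx/dt = -a*x + b*u under PI control; the plant and gain values in this Python sketch are illustrative, not those of the VSC or HVAC case studies.

    def run_pi(kp, ki, a=1.0, b=1.0, dt=0.01, t_end=10.0, r=1.0):
        """Forward-Euler simulation returning (tracking cost, control energy)."""
        x = integ = err_cost = energy = 0.0
        for _ in range(int(t_end / dt)):
            e = r - x
            integ += e * dt
            u = kp * e + ki * integ           # PI control law
            x += (-a * x + b * u) * dt        # first-order plant
            err_cost += e * e * dt            # integral of e^2
            energy += u * u * dt              # integral of u^2
        return err_cost, energy

    for kp, ki in [(0.5, 0.2), (2.0, 1.0), (8.0, 4.0)]:
        j_e, j_u = run_pi(kp, ki)
        print(f"kp={kp}, ki={ki}: tracking {j_e:.3f}, energy {j_u:.3f}")

Larger gains shrink the tracking cost while inflating the control energy, which is precisely the correlation the paper tunes the P and PI parameters against.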

Relevance: 30.00%

Abstract:

Environmental problems, especially climate change, have become a serious global issue. In the construction industry, the concept of sustainable building is being developed to reduce greenhouse gas emissions. In this study, a building information modeling (BIM)-based building design optimization method is proposed to help designers optimize their designs and improve buildings' sustainability. A revised particle swarm optimization (PSO) algorithm is applied to search for the trade-off between the life-cycle costs (LCC) and life-cycle carbon emissions (LCCE) of building designs. To validate the effectiveness and efficiency of this method, a case study of an office building in Hong Kong is conducted. The results show that the method enlarges the search space for optimal design solutions and shortens the processing time for optimal design results, helping designers deliver an economical and environmentally friendly design scheme.
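A common way to expose such a two-objective trade-off to a single-objective PSO run is a normalized weighted sum; whether the authors use exactly this scalarization is not stated in the abstract, so the form below is a generic sketch:

    \min_{x}\; F(x) \;=\; w\,\frac{\mathrm{LCC}(x)}{\mathrm{LCC}_{\mathrm{ref}}} \;+\; (1-w)\,\frac{\mathrm{LCCE}(x)}{\mathrm{LCCE}_{\mathrm{ref}}}, \qquad w \in [0,1] .

Sweeping w between 0 and 1 traces out an approximation of the Pareto front between life-cycle cost and life-cycle carbon.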

Relevance: 30.00%

Abstract:

The stamping industry has shown growing interest in numerical simulation of sheet-metal forming processes, including inverse engineering methods, mainly because the trial-and-error techniques widely used in the past are no longer economically competitive. The use of simulation codes is now common practice in industry, since the results typically obtained with codes based on the Finite Element Method (FEM) are well accepted by the industrial and scientific communities. To obtain accurate stress and strain fields, an efficient FEM analysis requires correct input data, such as geometries, meshes, non-linear constitutive laws, loads, friction laws, etc. Inverse problems can be considered in order to overcome these difficulties. In the present work, the following inverse problems in computational mechanics are presented and analysed: (i) parameter identification problems, which concern the determination of input parameters to be used in constitutive models in numerical simulations, and (ii) initial geometry definition problems for blanks and tools, in which the goal is to determine the initial shape of a blank or a tool that yields a prescribed geometry after a forming process. New optimization strategies are introduced and implemented, leading to more accurate constitutive-model parameters. The aim of these strategies is to exploit the strengths of each algorithm and to improve the overall efficiency of classical optimization methods, which are based on single-stage processes. Deterministic algorithms, algorithms inspired by evolutionary processes, or a combination of the two are used in the proposed strategies. Cascade, parallel and hybrid strategies are presented in detail, the hybrid strategies consisting of combinations of cascade and parallel strategies. Two distinct methods for evaluating the objective function in parameter identification processes are presented and analysed: a single-point analysis and a finite element analysis. The single-point evaluation characterizes an infinitesimal amount of material subjected to a given deformation history, whereas in the finite element analysis the constitutive model is implemented and evaluated at every integration point. Inverse problems of geometric definition of blanks and tools are then presented and described. For blank shape optimization, the definition of the initial shape of a blank for forming an oil-pan component is taken as the case study. Within this scope, a study of the influence of the initial geometric definition of the blank on the optimization process is carried out, using a NURBS formulation to define the upper surface of the metal sheet, whose geometry changes during the plastic forming process. For tool optimization, a two-stage forging process is presented. With the aim of obtaining a perfect cylinder after forging, two distinct methods are considered: in the first, the initial shape of the cylinder is optimized; in the other, the shape of the first-stage forging tool is optimized. Different methods are used to parameterize the free surface of the cylinder, and different parameterizations are likewise used to define the tool. The optimization strategies proposed in this work efficiently solve optimization problems for the metal-forming industry.
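Parameter identification problems of the type described above are typically posed as a least-squares gap between simulated and experimental responses; the exact objective used in the thesis is not reproduced here, so take the following as a generic form:

    \min_{\mathbf{A}}\; F(\mathbf{A}) \;=\; \sum_{i=1}^{N} w_i \left[ s_i(\mathbf{A}) - e_i \right]^{2}

where A collects the constitutive parameters, e_i are experimental measurements (e.g. points on a stress-strain curve), s_i(A) are the corresponding simulated values obtained either from the single-point model or from a full finite element analysis, and w_i are weights. The cascade, parallel and hybrid strategies then differ in how the deterministic and evolutionary optimizers are chained over this functional.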

Relevance: 30.00%

Abstract:

This thesis presents a study on the economic optimization of wind farms, with the objective of obtaining an algorithm for economic optimization based on the cost of the energy produced. A multidisciplinary approach was used. First, the main technologies and the different architectures used in wind farms are presented, together with schemes for farm operation and management. The necessary variables are identified and a sizing model for computing the cost of the energy produced is presented, with emphasis on onshore installations connected to the electricity distribution grid. A rigorous analysis of the characteristics of the wind turbine topologies available on the market is carried out, and the operation of a wind farm is simulated to test the validity of the developed models. An algorithm is also implemented to obtain an optimized response for the economic life cycle of the wind farm under study. The proposed approach involves algorithms for optimizing the production cost with multiple objective functions, based on a mathematical description of electricity production. Linear optimization models were developed that link the economic cost to electricity production, also taking into account CO2 emissions within energy policy instruments for wind energy. Expressions are proposed for computing the cost of energy with non-conventional variables, namely the variable production of the wind farm, the operating factor and the overall system efficiency coefficient. For the latter two, the impact of the prevailing wind distribution on the wind energy conversion system is also analysed. The results obtained by the proposed algorithms are similar to those obtained by other numerical methods already published in the scientific community, and the economic optimization algorithm is significantly influenced by the values obtained for the coefficients in question. Finally, it is shown that the proposed algorithm (LCOEwso) is useful for sizing wind farms and computing their capital and O&M costs when information is incomplete or the project is still at the design stage. In this sense, the contribution of this thesis is a decision-support tool for a manager, an investor or a public agent considering the deployment of a wind farm.
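For orientation, the textbook levelized-cost form that algorithms of this kind build on is shown below, with an illustrative worked example; the figures are assumptions, not results from the thesis, and LCOEwso's exact expression may differ.

    \mathrm{LCOE} \;=\; \frac{\mathrm{CAPEX}\cdot \mathrm{CRF} + C_{\mathrm{O\&M}}}{\mathrm{AEP}},
    \qquad \mathrm{CRF} \;=\; \frac{r(1+r)^{n}}{(1+r)^{n}-1}

For example, with an assumed discount rate r = 7% over n = 20 years (CRF ≈ 0.094), CAPEX = 39 M€ for a 30 MW farm, annual O&M of 1 M€ and a 30% capacity factor (AEP = 30 MW × 8760 h × 0.30 ≈ 78.8 GWh), LCOE ≈ (39 × 0.094 + 1) M€ / 78.8 GWh ≈ 59 €/MWh.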

Relevance: 30.00%

Abstract:

Supercritical Rankine cycles are not currently applied in combined-cycle thermoelectric power stations. Nevertheless, recent developments in gas turbines, which allow high efficiencies and high exhaust gas temperatures, together with improvements in high-pressure, high-temperature alloys, make this cycle possible. The intent of this study is to demonstrate the viability of this combined cycle, since it can break the 60% efficiency barrier, which is the ceiling in current power stations. To this end, several configurations of the cycle have been simulated, optimized and analyzed [1]. The simulations were done with the computational program IPSEpro [2], and the optimizations were carried out with software developed for the purpose, using the DFP method [3]. In parallel with the optimization, which targets maximization of the cycle efficiency, an exergetic analysis of all the cycle components was also performed [4]. In contrast to what happens in subcritical combined cycles, it was demonstrated that in supercritical combined cycles the highest efficiency occurs with a single steam pressure in the heat recovery steam generator (HRSG).
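A quick bound shows why breaking the 60% barrier is plausible; the component efficiencies below are illustrative assumptions, not the paper's simulated values:

    \eta_{cc} \;=\; \eta_{GT} + (1-\eta_{GT})\,\eta_{HRSG}\,\eta_{ST}
    \;=\; 0.40 + 0.60 \times 0.85 \times 0.42 \;\approx\; 0.61

That is, a 40%-efficient gas turbine whose exhaust feeds an 85%-effective HRSG driving a 42%-efficient supercritical steam cycle already clears 60%, which is why higher exhaust gas temperatures and supercritical steam conditions matter.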