926 results for Power flow algorithm


Relevance: 30.00%

Abstract:

Current space exploration has relied on chemical rockets, and they have served us well, but they have their limitations. Exploration of the outer solar system, Jupiter and beyond, will most likely require a new generation of propulsion system. One potential technology class for spacecraft propulsion and power systems involves thermonuclear fusion plasma systems. In this class it is well accepted that d-3He fusion is the most promising of the fuel candidates for spacecraft applications, as the 14.7 MeV protons carry up to 80% of the total fusion power while the alpha particles have energies less than 4 MeV. The other minor fusion products from secondary d-d reactions, consisting of 3He, n, p, and 3H, also have energies less than 4 MeV. Furthermore, there are two main fusion subsets, namely Magnetic Confinement Fusion devices and Inertial Electrostatic Confinement (IEC) Fusion devices. Magnetic Confinement Fusion devices are characterized by complex geometries and prohibitive structural mass, compromising spacecraft use at this stage of exploration. While generating energy from a lightweight and reliable fusion source is important, another critical issue is harnessing this energy into usable power and/or propulsion. IEC fusion is a method of fusion plasma confinement that uses a series of biased electrodes to accelerate a uniform spherical beam of ions into a hollow cathode, typically a gridded structure with high transparency. The inertia of the imploding ion beam compresses the ions at the center of the cathode, increasing the density to the point where fusion occurs. Since the velocity distributions of fusion particles in an IEC are essentially isotropic and carry no net momentum, a means of redirecting the velocity of the particles is necessary to efficiently extract energy and provide power or create thrust.
There are classes of advanced-fuel fusion reactions for which direct energy conversion based on electrostatically biased collector plates is impossible due to potential limits, material structure limitations, and IEC geometry. Thermal conversion systems are also inefficient for this application. A method of converting the isotropic IEC output into a collimated flow of fusion products solves these issues and allows direct energy conversion. An efficient traveling wave direct energy converter has been proposed and studied by Momota and Shu, and further evaluated with numerical simulations by Ishikawa and others. One of the conventional methods of collimating charged particles is to surround the particle source with an applied magnetic channel. Charged particles are trapped and move along the lines of flux. By introducing gradually expanding lines of force along the magnetic channel, the velocity component perpendicular to the lines of force is transferred to the parallel one. However, efficient operation of the IEC requires a null magnetic field at the core of the device. In order to achieve this, Momota and Miley have proposed a pair of magnetic coils anti-parallel to the magnetic channel, creating the null hexapole magnetic field region necessary for the IEC fusion core. Numerically, collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 95% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A, while collimation of electrons with the stabilization coil present was demonstrated to reach 69% at a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A.
Experimentally, collimation of electrons with the stabilization coil present was demonstrated to be 35% at 100 eV, reaching a peak of 39.6% at 50 eV with a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A, and collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 49% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A. 6.4% of the 300 eV electrons' initial velocity is directed to the collector plates. The remaining electrons are trapped by the collimator's magnetic field. These particles oscillate around the null field region several hundred times and eventually escape to the collector plates. At a solenoid voltage profile of 7 V, 100 eV electrons are collimated with wall and perpendicular-component losses of 31%. Increasing the electron energy beyond 100 eV increases the wall losses, by 25% at 300 eV. Ultimately it was determined that a field strength deriving from 9.5 MAT/m would be required to collimate 14.7 MeV fusion protons from a d-3He fueled IEC fusion core. The concept of the proton collimator has thus been proven effective at transforming an isotropic source into a collimated flow of particles ready for direct energy conversion.
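The velocity transfer in the expanding magnetic channel described above follows from the adiabatic invariance of the magnetic moment; a minimal sketch under that assumption (the function name and numbers are illustrative, not taken from the experiments):

```python
import math

def collimate(v_perp0, v_par0, B0, B1):
    """Adiabatic collimation in an expanding magnetic channel.

    Conservation of the magnetic moment mu = m*v_perp^2 / (2*B) together
    with conservation of kinetic energy implies that as the field drops
    from B0 to B1, perpendicular velocity is transferred to the parallel
    component.  Returns (v_perp1, v_par1) in the weaker field B1.
    """
    v_perp1_sq = v_perp0**2 * (B1 / B0)               # mu conservation
    v_par1_sq = v_par0**2 + v_perp0**2 - v_perp1_sq   # energy conservation
    return math.sqrt(v_perp1_sq), math.sqrt(v_par1_sq)

# An isotropic particle (equal perpendicular/parallel speed) entering a
# channel whose field expands by a factor of 100:
v_perp, v_par = collimate(v_perp0=1.0, v_par0=1.0, B0=1.0, B1=0.01)
```

With a hundredfold field expansion, nearly all of the perpendicular velocity is redirected along the channel, which is the mechanism the collimator exploits.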

Relevance: 30.00%

Abstract:

The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
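The underlying power argument can be made concrete under the common cube-law assumption for dynamic CMOS power (P ∝ f³ when supply voltage scales with frequency); the exponent in a real device is an assumption here, not a result from the paper:

```python
def energy_ratio(p: int) -> float:
    """Energy of p cores at frequency f/p relative to one core at f, for
    the same workload and the same execution time.

    Assumes dynamic power P ~ f^3 (voltage scaled with frequency), so
    E_parallel / E_serial = p * (f/p)^3 / f^3 = 1 / p^2.
    """
    f = 1.0                       # normalized serial frequency
    t = 1.0                       # normalized execution time
    e_serial = f**3 * t
    e_parallel = p * (f / p)**3 * t   # p cores, each at f/p, same time
    return e_parallel / e_serial

# Fractional energy savings for a few processor counts:
savings = {p: 1 - energy_ratio(p) for p in (2, 4, 8)}
```

Under this model the savings grow quadratically with the number of processors, which is why parallelizing the cyclostationary detector pays off even on power-constrained devices.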

Relevance: 30.00%

Abstract:

The limited information and scarce scientific work in the area of medium-power biomass combustion systems make the objectives proposed in this work important. The scientific work presented here provides the basis for developing appropriate operating conditions for biomass combustion systems, increasing the efficiency and economic viability of this type of energy system. The main objective of this work was to apply monitoring methodologies that characterize and improve the efficiency of the combustion system, to implement the chosen methods, and to monitor the operating conditions of an industrial biomass combustion boiler, namely: (i) monitoring of the biomass feed rates to the boiler, delivered by screw-feeder systems; (ii) analysis and monitoring of temperatures and pressure; (iii) monitoring of the combustion air flow rate; (iv) monitoring of the exhaust gas flow rate; (v) monitoring of the thermal power; (vi) monitoring of the flue gas composition. Physical characterization of biomass samples, testing of different biomass types under different operating conditions, and collection of combustion ash samples for physico-chemical characterization were other monitoring and characterization methods applied. A control test of the feeding system in manual operation mode was also developed and applied, and compared with the feeding-system control in automatic operation mode. The study leads to the conclusion that a furnace control and operation algorithm should be developed and implemented to allow more adequate dosing of the fuel and combustion air flow rates, in order to improve the performance of the combustion system.
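Item (v), thermal power monitoring, typically reduces to a water-side energy balance; a minimal sketch assuming standard liquid-water properties (the boiler in the study may be instrumented differently):

```python
def thermal_power_kw(m_dot_kg_s, t_in_c, t_out_c, cp_kj_kg_k=4.186):
    """Boiler thermal power from the water-side energy balance:
    Q = m_dot * cp * (T_out - T_in).
    cp defaults to liquid water; flow in kg/s, temperatures in Celsius,
    result in kW."""
    return m_dot_kg_s * cp_kj_kg_k * (t_out_c - t_in_c)

# Example: 2 kg/s of water heated from 60 C to 80 C.
q = thermal_power_kw(2.0, 60.0, 80.0)
```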

Relevance: 30.00%

Abstract:

The search for patterns or motifs in data represents a problem area of key interest to finance and economics researchers. In this paper we introduce the Motif Tracking Algorithm, a novel immune-inspired pattern identification tool that is able to identify unknown motifs of unspecified length which repeat within time series data. The power of the algorithm comes from the fact that it uses a small number of parameters with minimal assumptions regarding the data being examined or the underlying motifs. Our interest lies in applying the algorithm to financial time series data to identify unknown patterns. The algorithm is tested using three separate data sets; particular suitability to financial data is shown by applying it to oil price data. In all cases the algorithm identifies the motif population in a fast and efficient manner, thanks to an intuitive symbolic representation. The resulting population of motifs is shown to have considerable potential value for other applications such as forecasting and algorithm seeding.
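The intuitive symbolic representation mentioned above can be sketched as a simple discretization of the series' successive changes; this generic encoding is an illustration, not necessarily the exact scheme used by the Motif Tracking Algorithm:

```python
def to_symbols(series, alphabet="abcd"):
    """Discretize a time series into symbols by binning the *changes*
    between consecutive points into equal-width intervals.

    Equal shapes in the series then map to equal substrings of symbols,
    so repeated motifs become repeated substrings.
    """
    diffs = [b - a for a, b in zip(series, series[1:])]
    lo, hi = min(diffs), max(diffs)
    width = (hi - lo) / len(alphabet) or 1.0   # 1.0 guards a flat series
    out = []
    for d in diffs:
        idx = min(int((d - lo) / width), len(alphabet) - 1)
        out.append(alphabet[idx])
    return "".join(out)

# A repeated shape in the series yields a repeated substring of symbols:
sym = to_symbols([0, 1, 3, 2, 0, 1, 3, 2])
```

Here the rise-rise-fall motif appears twice in the raw data and twice as the substring "ddb" in the encoding, which is what makes substring matching a cheap motif detector.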

Relevance: 30.00%

Abstract:

This paper presents the results of the implementation of a self-consumption maximization strategy tested in a real-scale Vanadium Redox Flow Battery (VRFB) (5 kW, 60 kWh) and Building Integrated Photovoltaics (BIPV) demonstrator (6.74 kWp). The tested energy management strategy aims to maximize the consumption of energy generated by a BIPV system through the use of a battery. Whenever possible, surplus PV generation is stored in the battery to be used later, and residual load is supplied by the energy stored previously. The strategy was tested over seven days on the real-scale VRF battery to assess its suitability for implementing BIPV-focused energy management strategies. The results show that it was possible to obtain a self-consumption ratio of 100.0%, and that 75.6% of the energy consumed was provided by PV power. The VRFB was able to perform the strategy, although it was noticed that the available power (either to charge or discharge) varied with the state of charge.
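One step of such a self-consumption strategy can be sketched as follows; the power and capacity limits are illustrative parameters, not the demonstrator's actual control code:

```python
def dispatch(pv_kw, load_kw, soc_kwh, cap_kwh, p_max_kw, dt_h=1.0):
    """One time step of a self-consumption maximization strategy.

    Surplus PV charges the battery; residual load discharges it; the
    grid covers whatever the battery cannot (subject to power rating
    and state-of-charge limits).
    Returns (new_soc_kwh, grid_import_kw, grid_export_kw).
    """
    residual = load_kw - pv_kw
    if residual < 0:                           # PV surplus -> charge
        charge = min(-residual, p_max_kw, (cap_kwh - soc_kwh) / dt_h)
        return soc_kwh + charge * dt_h, 0.0, -residual - charge
    discharge = min(residual, p_max_kw, soc_kwh / dt_h)  # deficit -> discharge
    return soc_kwh - discharge * dt_h, residual - discharge, 0.0

# 4 kW of PV against a 1 kW load: the 3 kW surplus charges the battery.
soc, imp, exp = dispatch(pv_kw=4.0, load_kw=1.0, soc_kwh=30.0,
                         cap_kwh=60.0, p_max_kw=5.0)
```

A 100% self-consumption ratio corresponds to the export term staying at zero over the test period.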

Relevance: 30.00%

Abstract:

This thesis deals with a mathematical description of flow in polymeric pipes and in a specific peristaltic pump. The study involves fluid-structure interaction analysis in the presence of complex turbulent flows, treated in an arbitrary Lagrangian-Eulerian (ALE) framework. The flow simulations are performed in COMSOL 4.4, as a 2D axisymmetric model, and in ABAQUS 6.14.1, as a 3D model with symmetric boundary conditions. In COMSOL, the fluid and structure problems are coupled by a monolithic algorithm, while the ABAQUS code links the ABAQUS/CFD and ABAQUS/Standard solvers with a single-block iterative partitioned algorithm. For the turbulent features of the flow, the fluid model in both codes is the RNG k-ϵ model. The structural model is described, depending on the pipe material, by elastic models or hyperelastic neo-Hookean models with Rayleigh damping properties. When describing the pulsatile fluid flow downstream of the pumping process, the available data are often insufficient to specify the fluid problem: engineering measurements normally provide only the average pressure or velocity at a cross-section. This problem has been analyzed since the 1950s in the work of McDonald and Womersley, who applied Fourier analysis to the average pressure at a fixed cross-section, while nowadays sophisticated techniques including Finite Elements and Finite Volumes exist to study the flow. Finally, we set up peristaltic pipe simulations in the ABAQUS code, using the same models previously tested for the fluid and the structure.
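The pulsatile-flow regime studied by Womersley is usually characterized by a single dimensionless group; a sketch with illustrative pipe and fluid values (not those of the pump in the thesis):

```python
import math

def womersley(radius_m, omega_rad_s, density_kg_m3, viscosity_pa_s):
    """Womersley number alpha = R * sqrt(omega * rho / mu), the
    dimensionless frequency governing pulsatile pipe flow.
    alpha << 1: quasi-steady Poiseuille profile at every instant;
    alpha >> 1: inertia-dominated flow with a flat core and a thin
    oscillating boundary layer."""
    return radius_m * math.sqrt(omega_rad_s * density_kg_m3 / viscosity_pa_s)

# A water-like fluid in a 5 mm radius pipe pulsed at 1 Hz:
alpha = womersley(0.005, 2 * math.pi * 1.0, 1000.0, 1e-3)
```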

Relevance: 30.00%

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path set by Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs, with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. In addition, we investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. Unlike the common assumption in 2D ICs that shutdown gates are cheap and thus can be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies to produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for available TSV placement area. Vertical integration also brings new reliability challenges, including the TSVs' electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past.
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transients of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and the reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers couple into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
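The clock-power objective discussed above is commonly modeled as switched-capacitance power summed over the tree nodes, with gating zeroing a node's activity; a minimal sketch with illustrative values, not the dissertation's cost function:

```python
def clock_power(nodes, vdd, freq):
    """Dynamic clock power P = sum(alpha * C * Vdd^2 * f) over tree
    nodes.  Each node is (activity, capacitance_farads); a node behind
    a shutdown gate has activity 0 and contributes nothing."""
    return sum(a * c * vdd**2 * freq for a, c in nodes)

ungated = [(1.0, 10e-15)] * 1000                         # 1000 sinks, 10 fF each
gated = [(1.0, 10e-15)] * 400 + [(0.0, 10e-15)] * 600    # 600 sinks gated off
p_on = clock_power(ungated, vdd=1.0, freq=1e9)
p_gated = clock_power(gated, vdd=1.0, freq=1e9)
```

In a real 3D tree the gating decision would be weighed against the control-TSV placement cost, which is exactly the trade-off the synthesis flow optimizes.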

Relevance: 30.00%

Abstract:

Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reconfiguration overhead reduction with respect to other state-of-the-art techniques for reconfigurable platforms.
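The extra information a TCDFD carries relative to a plain DFG can be sketched as a task record; the field names here are hypothetical illustrations, not the paper's notation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node of a task graph in the spirit of the TCDFD model:
    besides data dependencies (as in a DFG), it records the pipeline
    stage in which the task may run and its communication volume, the
    two pieces of information the paper argues are missing from DFDs,
    DFGs, CDFGs and Petri nets."""
    name: str
    stage: int                 # pipeline stage; same-stage tasks coexist
    comm_bytes: int            # data exchanged with successor tasks
    deps: list = field(default_factory=list)

def coexisting(tasks):
    """Group tasks by pipeline stage: tasks in one group may overlap."""
    groups = {}
    for t in tasks:
        groups.setdefault(t.stage, []).append(t.name)
    return groups

g = coexisting([Task("decode", 0, 4096), Task("filter", 0, 4096),
                Task("encode", 1, 2048, deps=["decode", "filter"])])
```

A mapper-scheduler can then use the stage groups to decide which accelerators must be resident simultaneously, and the communication volumes to budget the interconnect.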

Relevance: 30.00%

Abstract:

Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process where the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and they are also dependent on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of basis vectors and coefficients, where the coefficients are obtained by minimizing an error function. A mathematical summary of the method is given. This algorithm is implemented in the existing finite element code ConnectFlow developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large scale test case.
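The update step described above can be sketched in its linearized form; the basis vectors and misfit values below are illustrative, and a real implementation would iterate with the groundwater flow solver:

```python
def condition_log_transmissivity(log_t, basis, p_calc, p_meas):
    """One linearized conditioning step for fracture transmissivities.

    basis[i] is the change to the fracture log-transmissivities that
    raises the pressure at measurement point i by one unit while keeping
    the other measurement points fixed.  In the linear approximation the
    coefficient for basis[i] is therefore simply the pressure misfit at
    point i; the update adds the resulting linear combination.
    """
    coeffs = [pm - pc for pm, pc in zip(p_meas, p_calc)]
    new_log_t = list(log_t)
    for c, w in zip(coeffs, basis):
        for j, wj in enumerate(w):
            new_log_t[j] += c * wj
    return new_log_t

# Two fractures, one measurement point with a +0.5 pressure misfit:
updated = condition_log_transmissivity(
    log_t=[0.0, 0.0], basis=[[0.2, -0.1]], p_calc=[1.0], p_meas=[1.5])
```

Because the pressures depend nonlinearly on all transmissivities, the step is repeated, recomputing pressures and basis vectors, until the misfit (the error function in the paper) is acceptably small.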

Relevance: 30.00%

Abstract:

This study mainly aims to provide an inter-industry analysis through the subdivision of various industries in flow-of-funds (FOF) accounts. Combined with Financial Statement Analysis data from 2004 and 2005, the Korean FOF accounts are reconstructed to form "from-whom-to-whom" FOF tables, which are composed of 115 institutional sectors and correspond to the tables and techniques of input–output (I–O) analysis. First, power-of-dispersion indices are obtained by applying the I–O analysis method. Most service and IT industries, construction, and the light industries within manufacturing fall into the first-quadrant group, whereas the heavy and chemical industries are placed in the fourth quadrant, since their power indices in the asset-oriented system are comparatively smaller than those of other institutional sectors. Second, the investments and savings induced by the central bank are calculated for monetary policy evaluation. Industries are bifurcated into two groups to compare their features: the first group comprises industries whose power of dispersion in the asset-oriented system is greater than 1, whereas for the second group the index is less than 1. We found that the net induced investments (NII)-to-total-liabilities ratios of the first group are roughly half those of the second group, since the former's induced savings are markedly greater than the latter's.
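The power-of-dispersion index used above comes from the column sums of the Leontief inverse of an I–O coefficient matrix; a minimal sketch on a toy 2-sector matrix (not the 115-sector Korean data):

```python
def power_of_dispersion(a):
    """Power-of-dispersion indices from an input coefficient matrix A:
    form the Leontief inverse B = (I - A)^-1, then
    index_j = n * colsum_j(B) / total(B).
    An index > 1 marks a sector whose demand pulls more than average on
    the whole system."""
    n = len(a)
    # Leontief inverse via the Neumann series I + A + A^2 + ...,
    # which converges for a productive coefficient matrix.
    b = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in b]
    for _ in range(200):
        term = [[sum(term[i][k] * a[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
        b = [[b[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    col = [sum(b[i][j] for i in range(n)) for j in range(n)]
    total = sum(col)
    return [n * c / total for c in col]

idx = power_of_dispersion([[0.1, 0.4], [0.2, 0.1]])
```

By construction the indices average to 1, so sectors split cleanly into above-average (index > 1) and below-average (index < 1) groups, the same bifurcation the study applies.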

Relevance: 30.00%

Abstract:

The proton exchange membrane (PEM) fuel cell has been known as a promising power source for applications such as automotive, residential, and stationary systems. During the operation of a PEM fuel cell, hydrogen is oxidized at the anode and oxygen is reduced at the cathode to produce the intended power. Water and heat are inevitable byproducts of these reactions. The water produced at the cathode must be properly removed from inside the cell; otherwise, it may block the path of reactants passing through the gas channels and/or the gas diffusion layer (GDL). This deteriorates the performance of the cell and can eventually halt its operation. Water transport in PEM fuel cells has been the subject of this PhD study. Water transport on the surface of the GDL, through the gas flow channels, and through the GDL has been studied in detail. For water transport on the surface of the GDL, droplet detachment has been measured for different GDL conditions and for the anode and cathode gas flow channels. Water transport through the gas flow channels has been investigated by measuring the two-phase flow pressure drop along the channels. As accumulated liquid water within the gas flow channels resists the gas flow, the pressure drop increases along the flow channels; the two-phase flow pressure drop can therefore reveal useful information about the amount of liquid water accumulated within the channels. Liquid water transport through the GDL has also been investigated by measuring the liquid water breakthrough pressure in the region between capillary fingering and stable displacement on the drainage phase diagram. The breakthrough pressure has been measured for different variables such as GDL thickness, PTFE/Nafion content within the GDL, GDL compression, the inclusion of a micro-porous layer (MPL), and different water flow rates through the GDL. Prior to all these studies, the GDL microstructural properties were studied.
GDL microstructural properties such as mean pore diameter, pore diameter distribution, and pore roundness distribution have been investigated by analyzing SEM images of GDL samples.
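The breakthrough pressure measured above is often interpreted through the Young-Laplace capillary relation, which ties it to the pore size extracted from the SEM analysis; a sketch assuming generic water-on-PTFE values rather than the thesis measurements:

```python
import math

def breakthrough_pressure_pa(pore_radius_m, surface_tension_n_m=0.072,
                             contact_angle_deg=120.0):
    """Young-Laplace estimate of the capillary pressure needed for water
    to break through a hydrophobic pore: P = -2*gamma*cos(theta)/r.
    For theta > 90 deg (hydrophobic), cos(theta) < 0 and P is positive,
    i.e. pressure must be applied to force water in.  The default angle
    and tension are generic values for water on PTFE-treated carbon
    fiber, not measurements from the thesis."""
    theta = math.radians(contact_angle_deg)
    return -2 * surface_tension_n_m * math.cos(theta) / pore_radius_m

# A 20 micrometer pore:
p = breakthrough_pressure_pa(20e-6)
```

The inverse dependence on pore radius is why compression and MPL inclusion, which shrink the controlling pores, shift the measured breakthrough pressure.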

Relevance: 30.00%

Abstract:

The frequency, time, and places of charging and discharging have a critical impact on the Quality of Experience (QoE) of using Electric Vehicles (EVs). EV charging and discharging scheduling schemes should consider both the QoE of using the EV and the load capacity of the power grid. In this paper, we design a traveling plan-aware scheduling scheme for EV charging in driving pattern and a cooperative EV charging and discharging scheme in parking pattern to improve the QoE of using EVs and enhance the reliability of the power grid. For traveling plan-aware scheduling, the assignment of EVs to Charging Stations (CSs) is modeled as a many-to-one matching game and the Stable Matching Algorithm (SMA) is proposed. For cooperative EV charging and discharging in parking pattern, the electricity exchange between charging EVs and discharging EVs in the same parking lot is formulated as a many-to-many matching model with ties, and we develop the Pareto Optimal Matching Algorithm (POMA). Simulation results indicate that the SMA significantly improves the average system utility for EV charging in driving pattern, and that the POMA increases the amount of electricity offloaded from the grid, which helps enhance the reliability of the power grid.
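The many-to-one matching at the heart of the SMA can be sketched with the classic deferred-acceptance scheme; the paper's actual preference construction (travel plans, utilities) may differ from this generic version:

```python
def stable_match(ev_prefs, cs_prefs, capacity):
    """Many-to-one stable matching of EVs to charging stations via
    deferred acceptance: each EV proposes down its preference list, and
    a full station keeps its most preferred proposers and rejects the
    rest.

    ev_prefs: {ev: [cs, ...]} ordered best-first.
    cs_prefs: {cs: [ev, ...]} ordered best-first.
    capacity: {cs: int} charging slots.
    Returns {cs: [ev, ...]}.
    """
    rank = {cs: {ev: i for i, ev in enumerate(p)} for cs, p in cs_prefs.items()}
    matched = {cs: [] for cs in cs_prefs}
    nxt = {ev: 0 for ev in ev_prefs}       # next CS each EV will propose to
    free = list(ev_prefs)
    while free:
        ev = free.pop()
        if nxt[ev] >= len(ev_prefs[ev]):
            continue                        # EV has exhausted its list
        cs = ev_prefs[ev][nxt[ev]]
        nxt[ev] += 1
        matched[cs].append(ev)
        matched[cs].sort(key=lambda e: rank[cs][e])
        if len(matched[cs]) > capacity[cs]:
            free.append(matched[cs].pop())  # reject the least preferred

    return matched

m = stable_match(
    ev_prefs={"ev1": ["cs1", "cs2"], "ev2": ["cs1", "cs2"], "ev3": ["cs1"]},
    cs_prefs={"cs1": ["ev3", "ev1", "ev2"], "cs2": ["ev1", "ev2"]},
    capacity={"cs1": 1, "cs2": 1})
```

The result is stable in the game-theoretic sense: no EV and station both prefer each other over their assigned partners, which is the property the SMA relies on.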

Relevance: 30.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific software application on the processor is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm, based on the number of comparisons that take place during the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
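The shape of such an average-case model can be sketched using the well-known average inversion count n(n-1)/4 of a random permutation as the comparison estimate; the per-operation energies below are placeholder values, not the LEON3 measurements, and the thesis derives the exact count with MOQA:

```python
def avg_energy_nj(n, e_per_cmp_nj=2.5, e_fixed_nj=100.0):
    """Average-case energy model E = E_fixed + E_cmp * C_avg for
    insertion sort on n random elements, using the average number of
    inversions n*(n-1)/4 as the comparison estimate.  The per-comparison
    and fixed energies are illustrative placeholders to be fitted from
    measurements on a real core."""
    c_avg = n * (n - 1) / 4
    return e_fixed_nj + e_per_cmp_nj * c_avg

e100 = avg_energy_nj(100)
```

Once the two energy parameters are fitted from a handful of measurements, the model predicts energy for any input size without running a simulation, which is the source of the reported speedup.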

Relevance: 30.00%

Abstract:

Performance testing methods for boilers in transient operating conditions (start, stop, and combustion power modulation sequences) require the combustion rate to be quantified so that the emissions can be quantified. One way of quantifying the combustion rate of a boiler during transient operating conditions is by measuring the flue gas flow rate. The flow conditions in the chimneys of single-family house boilers pose a challenge, however, mainly because of the low flow velocity. The main objectives of the work were to characterize the flow conditions in residential chimneys, to evaluate the use of the Pitot-static method and the averaging Pitot method, and to develop and test a calibration method for averaging Pitot probes for low
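The Pitot-static method under evaluation converts a dynamic pressure reading to velocity; a sketch showing why low chimney velocities are difficult (a 1 Pa reading, near the resolution of typical pressure transducers, corresponds to only about 1.5 m/s here; the gas property assumptions are illustrative):

```python
import math

def pitot_velocity(dp_pa, temp_k, pressure_pa=101325.0, m_gas=0.028965):
    """Flow velocity from a Pitot-static dynamic pressure reading:
    v = sqrt(2 * dp / rho), with the flue gas density taken from the
    ideal gas law using the molar mass of air as an approximation
    (real flue gas composition shifts the density by a few percent)."""
    r = 8.314  # universal gas constant, J/(mol K)
    rho = pressure_pa * m_gas / (r * temp_k)
    return math.sqrt(2 * dp_pa / rho)

# A 1 Pa dynamic pressure in 400 K flue gas:
v = pitot_velocity(1.0, 400.0)
```

Because the dynamic pressure scales with velocity squared, halving the velocity quarters the signal, which is what makes probe calibration at low velocities the central difficulty.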