8 results for ION ENERGY-DISTRIBUTION

in Digital Commons - Michigan Tech


Relevance:

100.00%

Publisher:

Abstract:

Hall-effect thruster (HET) cathodes generate the free electrons needed to initiate and sustain the main plasma discharge and to neutralize the ion beam. The position of the cathode relative to the thruster strongly affects the efficiency of thrust generation, but the mechanisms by which position affects efficiency are not well understood. This dissertation explores the effect of cathode position on HET efficiency. Magnetic field topology is shown to play an important role in the coupling between the cathode plasma and the main discharge plasma. The position of the cathode within the magnetic field affects the ion beam and the plasma properties of the near-field plume, which explains the changes in thruster efficiency. Several experiments were conducted that explored the changes in efficiency arising from changes in cathode coupling. In each experiment, the thrust, discharge current, and cathode coupling voltage were monitored while the independent variables of cathode position, cathode mass flow rate, and magnetic field topology were varied. From the telemetry data, the efficiency of HET thrust generation was calculated. Furthermore, several ion beam and plasma properties were measured, including ion energy distribution, beam current density profile, near-field plasma potential, electron temperature, and electron density. The ion beam data show how the independent variables affected the quality of the ion beam and therefore the efficiency of thrust generation, while the measurements of near-field plasma properties partially explain how the changes in beam quality arise. The results show that cathode position, mass flow rate, and field topology affect several aspects of HET operation, especially the beam divergence and voltage utilization efficiencies. Furthermore, the experiments show that magnetic field topology is important in the cathode coupling process.
In particular, the magnetic field separatrix plays a critical role in impeding the coupling between the cathode and the HET. Suggested changes to HET designs are provided, including ways to reposition the separatrix to accommodate the cathode.
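The efficiency figure that the telemetry yields can be illustrated with the standard anode-efficiency definition, eta_a = T^2 / (2 * mdot * P_d). The sketch below is a minimal illustration; the operating point is hypothetical and not taken from the dissertation.

```python
# Standard Hall-thruster anode efficiency: eta_a = T^2 / (2 * mdot * P_d).
# The operating point below is hypothetical, for illustration only.

def anode_efficiency(thrust_N, mdot_kg_s, discharge_power_W):
    """Anode efficiency from thrust, anode mass flow rate, and discharge power."""
    return thrust_N ** 2 / (2.0 * mdot_kg_s * discharge_power_W)

# Hypothetical operating point: 80 mN thrust, 5 mg/s flow, 1.35 kW discharge.
eta = anode_efficiency(0.080, 5e-6, 1350.0)
print(f"anode efficiency = {eta:.3f}")  # ~0.474
```

Because thrust enters squared, the definition makes clear why small changes in beam divergence or voltage utilization show up strongly in the computed efficiency.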

Relevance:

90.00%

Publisher:

Abstract:

In this study, the use of magnesium as a Hall thruster propellant was evaluated. A xenon Hall thruster was modified so that magnesium propellant could be loaded into the anode and waste heat from the thruster discharge used to drive propellant vaporization. A control scheme was developed that allowed precise control of the mass flow rate while still using plasma heating as the main mechanism for evaporation. The thruster anode, which also served as the propellant reservoir, was designed such that its open area was too small for sufficient vapor flow at normal operating temperatures (i.e., with plasma heating alone). The remaining heat needed to achieve enough vapor flow to sustain the thruster discharge came from a counter-wound resistive heater located behind the anode. The control system can arrest thermal runaway in a direct-evaporation feed system and stabilize the discharge current during voltage-limited operation. A proportional-integral-derivative (PID) control algorithm was implemented to enable automated operation of the mass flow control system, using the discharge current as the measured variable and the anode heater current as the controlled parameter. Steady-state operation at constant voltage with discharge current excursions of less than 0.35 A was demonstrated for 70 min. Using this long-duration method, stable operation was achieved with heater powers as low as 6% of the total discharge power. With the thermal mass flow control system, the thruster operated stably for long enough that performance measurements could be obtained and compared with the thruster's performance on xenon propellant. When operated with magnesium, the thruster produced thrust ranging from 34 mN at 200 V to 39 mN at 300 V with 1.7 mg/s of propellant, and 27 mN at 300 V with 1.0 mg/s of propellant. The thrust-to-power ratio ranged from 24 mN/kW at 200 V to 18 mN/kW at 300 V.
The specific impulse was 2000 s at 200 V and upwards of 2700 s at 300 V. The anode efficiency was found to be ~23% using magnesium, substantially lower than the 40% anode efficiency of xenon at approximately equivalent molar flow rates. Measurements in the plasma plume of the thruster, operated on both magnesium and xenon propellants, were obtained using a Faraday probe to measure the off-axis current distribution, a retarding potential analyzer to measure ion energy, and a double Langmuir probe to measure plasma density, electron temperature, and plasma potential. Additionally, the off-axis current distributions and ion energy distributions were compared with measurements made in krypton and bismuth plasmas obtained in previous studies of the same thruster. The comparisons showed that magnesium had the largest beam divergence of the four propellants, while the others had similar divergence, and that magnesium and krypton both had very low voltage utilization compared with xenon and bismuth. The differences in plume structure are likely due to atomic differences between the propellants: the ionization mean free path decreases with increasing atomic mass. Magnesium and krypton have long ionization mean free paths and therefore require physically larger thruster dimensions for efficient operation, and would benefit from magnetic shielding.
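The PID loop described above, with discharge current as the measured variable and anode heater current as the actuated one, can be sketched as follows. The plant model, time constant, and gains are invented for illustration; the real system's thermal dynamics are far more complex.

```python
# Sketch of the PID mass-flow control loop: discharge current is measured,
# anode heater current is actuated. Plant model and gains are hypothetical.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def toy_plant(heater_A, discharge_A, dt):
    """Crude first-order lag: more heater current -> more vapor -> more discharge current."""
    tau, gain = 5.0, 1.2  # hypothetical thermal time constant (s) and plant gain (A/A)
    return discharge_A + dt / tau * (gain * heater_A - discharge_A)

pid = PID(kp=0.8, ki=0.3, kd=0.05, dt=0.1)
discharge, heater, target = 0.0, 0.0, 4.0  # target discharge current, A
for _ in range(2000):  # 200 s of simulated operation
    heater = max(0.0, pid.update(target, discharge))  # heater current cannot go negative
    discharge = toy_plant(heater, discharge, 0.1)
```

The integral term is what removes steady-state error, which is what allows the discharge current to be held within a fixed band (the 0.35 A excursions reported) during voltage-limited operation.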

Relevance:

80.00%

Publisher:

Abstract:

Development of alternative propellants for Hall thruster operation is an active area of research. Xenon is the current propellant of choice for Hall thrusters, but it can be costly in large thrusters and over extended test periods. Condensible propellants may offer an alternative to xenon, as they do not require costly active pumping to remove from a test facility and may be less expensive to purchase. A method has been developed that uses segmented electrodes in the discharge channel of a Hall thruster to divert discharge current to and from the main anode and thus control the anode temperature. By placing a propellant reservoir in the anode, the evaporation rate, and hence the mass flow rate of propellant, can be controlled. Segmented electrodes for thermal control represent a unique strategy of thruster design, so the performance of the thruster must be measured to determine the effect the electrodes have on it. Furthermore, the source of any changes in thruster performance due to the adjustment of discharge current between the shims and the main anode must be characterized. A Hall thruster was designed and constructed with segmented electrodes. It was then tested at anode voltages between 300 and 400 V and mass flow rates between 4 and 6 mg/s, with 100%, 75%, 50%, 25%, and <5% of the discharge current on the shim electrodes. The level of current on the shims was adjusted by changing the shim voltage. At each operating point, the thruster performance, plume divergence, ion energy, and multiply charged ion fraction were measured. Performance exhibited a small change with the level of discharge current on the shim electrodes: thrust and specific impulse increased by as much as 6% and 7.7%, respectively, as discharge current was shifted from the main anode to the shims at constant anode voltage, while thruster efficiency did not change.
Plume divergence was reduced by approximately 4 degrees of half-angle at high levels of current on the shims, at all combinations of mass flow rate and anode voltage. The fraction of singly charged xenon in the thruster plume varied between approximately 80% and 95% as the anode voltage and mass flow rate were changed, but did not change significantly with shim current; doubly and triply charged xenon made up the remainder of the ions detected. Ion energy exhibited mixed behavior: the highest voltage present in the thruster, whether shim or anode, largely dictated the most probable energy. The overall change in most probable ion energy was 20-30 eV, most of which took place while the shim voltage was higher than the anode voltage. The thrust, specific impulse, plume divergence, and ion energy all indicate that the thruster is capable of higher performance at high levels of discharge current on the shims. The lack of change in efficiency and in the multiply charged ion fraction indicates that the thruster can be operated at any level of shim current without detrimental effect, and thus a condensible-propellant thruster can control the anode temperature without a decrease in efficiency or a change in the multiply charged ion fraction.

Relevance:

80.00%

Publisher:

Abstract:

This report is a dissertation proposal that focuses on the energy balance within an internal combustion engine equipped with a unique coolant-based waste heat recovery system. The U.S. Energy Information Administration predicts that the transportation sector in the United States will consume approximately 15 million barrels of liquid fuels per day by the year 2025. The proposed coolant-based waste heat recovery technique has the potential to reduce the yearly usage of those liquid fuels by nearly 50 million barrels by recovering even a modest 1% of the energy wasted in the coolant system. The technique implements thermoelectric generators (TEGs) on the outside cylinder walls of an internal combustion engine. For this research, one outside cylinder wall of a twin-cylinder, 26 hp, water-cooled gasoline engine will be fitted with a TEG surrogate material. The vertical location of these TEG surrogates along the water jacket will be varied, along with the surrogate's thermal conductivity. The aim of this proposed dissertation is to obtain empirical evidence of the impact of installing TEGs in the water jacket area, including the effects on energy distribution and cylinder wall temperatures. The results can be used in future research on larger engines and will also assist with proper TEG selection to maximize energy recovery efficiency.
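The fuel-savings arithmetic above can be checked directly; the sketch simply accumulates 1% of the stated daily consumption over a year.

```python
# Back-of-envelope check of the fuel-savings claim: recovering 1% of
# ~15 million barrels/day of liquid-fuel use over a 365-day year.
barrels_per_day = 15e6
recovered_fraction = 0.01
yearly_savings_barrels = barrels_per_day * recovered_fraction * 365
print(f"{yearly_savings_barrels / 1e6:.1f} million barrels/year")  # 54.8
```

The result is on the order of the "nearly 50 million barrels" figure quoted in the proposal.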

Relevance:

80.00%

Publisher:

Abstract:

This report is a PhD dissertation proposal to study the in-cylinder temperature and heat flux distributions within a gasoline turbocharged direct injection (GTDI) engine. Recent regulations requiring automotive manufacturers to increase the fuel efficiency of their vehicles have led to great technological achievements in internal combustion engines, dramatically increasing the power density of gasoline engines over the last two decades. Engine technologies such as variable valve timing (VVT), direct injection (DI), and turbocharging have significantly improved engine power-to-weight and power-to-displacement ratios. A popular trend for increasing vehicle fuel economy in recent years has been to downsize the engine and add VVT, DI, and turbocharging so that a lighter, more efficient engine can replace a larger, heavier one. With the added power density, thermal management of the engine becomes more important, and engine components are being pushed to their temperature limits. Therefore, it has become increasingly important to better understand the parameters that affect in-cylinder temperatures and heat transfer. The proposed research will analyze the effects of engine speed, load, relative air-fuel ratio (AFR), and exhaust gas recirculation (EGR) on both in-cylinder and global temperature and heat transfer distributions. Additionally, the effects of knocking combustion and fuel spray impingement will be investigated. The research will be conducted on a 3.5 L, six-cylinder GTDI engine instrumented with a large number of sensors to measure in-cylinder temperatures and pressures, as well as the temperature, pressure, and flow rates of energy streams into and out of the engine. One goal of this research is to create a model that predicts the energy distribution to the crankshaft, exhaust, and cooling system based on normalized values of engine speed, load, AFR, and EGR.
The results could be used to aid the engine design phase for turbocharger and cooling system sizing. Additionally, the data collected can be used to validate engine simulation models, since in-cylinder temperature and heat flux data are not readily available in the literature.
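The kind of first-law bookkeeping the proposed model targets can be sketched as a simple stream split; all stream values below are hypothetical placeholders, not measurements from the proposal.

```python
# First-law energy balance: fuel energy = brake work + exhaust + coolant + residual.
# All stream values are hypothetical placeholders.

def energy_split(fuel_kW, brake_kW, exhaust_kW, coolant_kW):
    """Return each stream as a fraction of fuel energy; the residual closes the balance."""
    residual_kW = fuel_kW - (brake_kW + exhaust_kW + coolant_kW)
    return {
        "brake": brake_kW / fuel_kW,
        "exhaust": exhaust_kW / fuel_kW,
        "coolant": coolant_kW / fuel_kW,
        "residual": residual_kW / fuel_kW,  # friction, radiation, unaccounted losses
    }

split = energy_split(fuel_kW=100.0, brake_kW=32.0, exhaust_kW=30.0, coolant_kW=28.0)
```

Fitting these fractions against normalized speed, load, AFR, and EGR is the regression step the proposal describes; the residual term is what keeps the balance closed when some paths are not instrumented.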

Relevance:

80.00%

Publisher:

Abstract:

The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control, dramatically increasing the efficiency and power density of gasoline engines over the last two decades. With the added power density, thermal management of the engine has become increasingly important, so it is critical to have accurate temperature and heat transfer models as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes, along with waste heat recovery systems, are being explored as options for improving efficiency. To understand how these technologies will impact engine performance and each other, this research analyzed the engine from both a first-law energy-balance perspective and a second-law exergy perspective. The research also provided insights into the effects of various parameters on in-cylinder temperatures and heat transfer, as well as data for validating other models. It was found that engine load was the dominant factor in the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential due to its significantly higher temperatures compared with the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates produced a net increase in exhaust energy. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions.
The effects of dilution from lean operation and EGR were compared using a dilution ratio, and the results showed that lean operation resulted in a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
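The exhaust-versus-coolant conclusion can be illustrated with the exergy fraction of sensible heat relative to a dead state, 1 - T0*ln(T/T0)/(T - T0). The temperatures below are assumed for illustration, not taken from the study.

```python
import math

T0 = 298.0  # assumed dead-state (ambient) temperature, K

def exergy_fraction(T_K):
    """Fraction of sensible heat (relative to T0) that is available as work."""
    return 1.0 - T0 * math.log(T_K / T0) / (T_K - T0)

exhaust_quality = exergy_fraction(900.0)  # hot exhaust gas (assumed temperature)
coolant_quality = exergy_fraction(368.0)  # ~95 C engine coolant (assumed)
# The exhaust's much higher temperature makes each joule of its heat worth
# several times more recoverable work than a joule of coolant heat.
```

This is why the exhaust can carry the better recovery potential even when, as noted above, EGR and lean operation shift where the energy and exergy actually end up.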

Relevance:

80.00%

Publisher:

Abstract:

The United States transportation industry is predicted to consume approximately 13 million barrels of liquid fuel per day by 2025. If one percent of that fuel energy were salvaged through waste heat recovery, liquid fuel use would drop by 130 thousand barrels per day. This dissertation focuses on automotive waste heat recovery techniques, with an emphasis on two novel techniques. The first was a combined coolant- and exhaust-based Rankine cycle system that utilized a patented piston-in-piston engine technology. The research scope included simulating the maximum mass flow rate of steam (700 K and 5.5 MPa) from two heat exchangers, the potential power generation from the secondary piston steam chambers, and the resulting steam quality within the steam chamber. The secondary piston chamber provided supplemental steam power strokes during the engine's compression and exhaust strokes to reduce the engine's pumping work. For a Class-8 diesel engine operating at 1,500 RPM at full load, the maximum increase in brake fuel conversion efficiency was 3.1%. The second technique investigated was the implementation of thermoelectric generators (TEGs) on the outer cylinder walls of a liquid-cooled internal combustion engine, focusing on energy generation, fuel energy distribution, and cylinder wall temperatures. The analysis was conducted over a range of engine speeds and loads in a two-cylinder, 19.4 kW, liquid-cooled, spark-ignition engine. Cylinder wall temperatures increased by 17% to 44%, which correlated well with the 4.3% to 9.5% decrease in coolant heat transfer. Only 23.3% to 28.2% of the heat transfer to the coolant passed through the TEG and TEG surrogate material. Gross indicated work decreased by 0.4% to 1.0%, and exhaust gas energy decreased by 0.8% to 5.9%. Due to coolant contamination, the TEG output could not be measured directly.
Instead, TEG output was predicted from cylinder wall temperatures and manufacturer documentation, and was less than 0.1% of the cumulative heat release. Higher TEG conversion efficiencies, combined with greater control of heat transfer paths, would be needed to improve energy output and make this a viable waste heat recovery technique.
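The sub-0.1% output figure follows from chaining three fractions. The sketch below uses assumed values (the coolant's share of heat release and the TEG module efficiency are illustrative; only the fraction crossing the TEG echoes the 23.3-28.2% reported).

```python
# Chained-fraction estimate of TEG electrical output as a share of total
# heat release. The factor values are assumptions for illustration.

def teg_output_fraction(coolant_share, through_teg_share, module_efficiency):
    """Electrical output / total heat release."""
    return coolant_share * through_teg_share * module_efficiency

# ~25% of heat release to coolant, ~25% of that through the TEG (cf. the
# 23.3-28.2% reported), ~1% module efficiency at a low temperature difference.
frac = teg_output_fraction(0.25, 0.25, 0.01)
```

With these assumptions the output comes out just above 0.06% of heat release, consistent with the sub-0.1% prediction; the chain also shows why both a better module and a larger heat-path fraction are needed.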

Relevance:

40.00%

Publisher:

Abstract:

Two of the indicators of the UN Millennium Development Goal of ensuring environmental sustainability are energy use and per capita carbon dioxide emissions. Increasing urbanization and world population may require increased energy use in order to transport enough safe drinking water to communities. In addition, increased water use would raise energy consumption, and hence greenhouse-gas emissions, which promote global climate change. A study of multiple Municipal Drinking Water Distribution Systems (MDWDSs) that relates various MDWDS aspects (system components and their properties) to energy use is therefore strongly desirable, since understanding the relationship between system aspects and energy use aids energy-efficient design. In this study, the components of an MDWDS and/or the characteristics associated with those components are termed MDWDS aspects (hereafter, system aspects). Many aspects of MDWDSs affect energy usage; three were analyzed in this study: (1) system-wide water demand, (2) storage tank parameters, and (3) pumping stations. Seven MDWDSs were modeled with EPANET 2.0 to understand the relationship between these system aspects and energy use. Six of the systems were real and one was hypothetical. The study presented here is unique in its statistical approach across seven municipal water distribution systems. The first system aspect studied was system-wide water demand. The analysis examined the variation of water demand in the seven systems and its impact on energy use: to quantify the effect of water use reduction on energy use, the seven systems were modeled and the energy usage quantified for various amounts of water conservation.
It was found that the effect of water conservation on energy use was linear for all seven systems, and the average values of all the systems' energy use fell on the same line with a high R² value. From this relationship, a 20% reduction in water demand results in approximately a 13% savings in energy use for all seven systems analyzed. This figure might hold true for many similar systems that are dominated by pumping and are not gravity driven. The second system aspect analyzed was storage tank parameters: (1) tank maximum water level, (2) tank elevation, and (3) tank diameter. MDWDSs use a significant amount of electrical energy to pump water from low elevations (usually a source) to higher ones (usually storage tanks), and this electrical energy use affects pollution emissions and, therefore, potential global climate change as well. Various values of these tank parameters were modeled in the seven MDWDSs using a network solver and the energy usage recorded. Averaged over all seven analyzed systems, (1) reducing the maximum tank water level by 50% results in a 2% energy reduction, (2) the energy-use change from a change in tank elevation is system specific, and (3) reducing the tank diameter by 50% results in approximately a 7% energy savings. The third system aspect analyzed was pumping station parameters. A pumping station consists of one or more pumps. The seven systems were analyzed to understand the effect of varying pump horsepower and the number of booster stations on energy use. It was found that adding booster stations could save energy, depending on the system characteristics; for systems with flat topography, a single main pumping station used less energy.
In systems with a higher-elevation neighborhood, however, one or more booster pumps with a reduced main pumping station capacity used less energy. The energy savings depended on the number of boosters and ranged from 5% to 66% for the five analyzed systems with higher-elevation neighborhoods (S3, S4, S5, S6, and S7); no energy savings were realized for the remaining two flat-topography systems, S1 and S2. The present study analyzed and established the relationship between various system aspects and energy use in seven MDWDSs, which aids in estimating potential energy savings in MDWDSs. These energy savings would ultimately help reduce greenhouse-gas (GHG) emissions, including per capita CO2 emissions, thereby potentially lessening global climate change and contributing to the MDG of ensuring environmental sustainability.
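The 20%-demand/13%-energy relationship reported above can be encoded as a simple linear interpolation for estimating savings at other conservation levels. This is only a restatement of the reported slope for pumping-dominated systems, not a general model.

```python
# Linear demand-energy relationship reported for the seven systems:
# a 20% demand reduction gave ~13% energy savings, i.e. slope = 0.65.

def energy_savings_fraction(demand_reduction):
    """Interpolate energy savings through (0, 0) and (0.20, 0.13)."""
    slope = 0.13 / 0.20  # 0.65 units of energy savings per unit demand cut
    return slope * demand_reduction

savings_10pct = energy_savings_fraction(0.10)  # ~6.5% energy savings
```

The sub-unity slope reflects that part of a pumping system's energy use (static lift, fixed losses) does not scale down with demand, so energy savings lag demand savings.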