8 results for method development
in Digital Commons - Michigan Tech
Abstract:
Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically-based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. 
In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development, provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
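The discordancy screening described above can be illustrated with the classic Hosking-Wallis discordancy statistic; the dissertation proposes a new metric, so the sketch below is only the standard L-moment-era analogue applied to arbitrary site descriptors, under assumed inputs:

```python
import numpy as np

def discordancy(u):
    """Hosking-Wallis-style discordancy statistic D_i for each site.

    u: (n_sites, n_features) array of site descriptors (classically
       L-moment ratios; here any physically based similarity measures).
    Sites with D_i above roughly 3 (for 15+ sites) are commonly flagged
    as discordant with the rest of the proposed region.
    """
    u = np.asarray(u, dtype=float)
    n = u.shape[0]
    dev = u - u.mean(axis=0)          # deviations from the group mean
    A = dev.T @ dev                   # sum-of-squares/cross-products matrix
    # D_i = (n/3) * (u_i - ubar)^T A^{-1} (u_i - ubar)
    return np.array([n / 3.0 * d @ np.linalg.solve(A, d) for d in dev])
```

A useful sanity check is the identity that the D_i sum to n*p/3 for p descriptor columns, which follows from the trace of A^{-1}A.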
Abstract:
Since the introduction of the rope-pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial constraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach of rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify the changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping test method is presented, which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. Data were used to measure seasonal variations in specific well capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best data-analysis method. One conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed while in the field with only a calculator and is the most technologically appropriate method for analyzing data. Results from this method overestimate specific capacity by 41% when compared to results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase) and 133% (late-time recovery) of the conventional test productivity value.
The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations on the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually developing a database for planning future groundwater development projects. Furthermore, the data could improve well users' ability to self-regulate well usage without expensive aquifer characterization.
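The equilibrium approximation above reduces to a simple specific-capacity calculation: sustained pumping rate divided by the stabilized drawdown. A minimal sketch, with illustrative units and parameter names (the field procedure in the thesis is more involved):

```python
def specific_capacity(pump_rate_lpm, static_level_m, pumping_level_m):
    """Specific capacity from the equilibrium approximation.

    pump_rate_lpm   : sustained pumping rate (L/min)
    static_level_m  : depth to water before pumping (m below datum)
    pumping_level_m : equilibrated depth to water while pumping (m)
    Returns specific capacity in L/min per metre of drawdown.
    """
    drawdown = pumping_level_m - static_level_m
    if drawdown <= 0:
        raise ValueError("pumping level must lie below the static level")
    return pump_rate_lpm / drawdown
```

For example, pumping 30 L/min while the water level drops from 5 m to 8 m gives a specific capacity of 10 L/min per metre; tracking this number across seasons is what the proposed monitoring database would do.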
Abstract:
In this project we developed conductive thermoplastic resins by adding varying amounts of three different carbon fillers: carbon black (CB), synthetic graphite (SG) and multi-walled carbon nanotubes (CNT) to a polypropylene matrix for application as fuel cell bipolar plates. This component of fuel cells provides mechanical support to the stack, circulates the gases that participate in the electrochemical reaction within the fuel cell and allows for removal of the excess heat from the system. The materials fabricated in this work were tested to determine their mechanical and thermal properties. These materials were produced by adding varying amounts of single carbon fillers to a polypropylene matrix (2.5 to 15 wt.% Ketjenblack EC-600 JD carbon black, 10 to 80 wt.% Asbury Carbons' Thermocarb TC-300 synthetic graphite, and 2.5 to 15 wt.% of Hyperion Catalysis International's FIBRIL™ multi-walled carbon nanotubes). In addition, composite materials containing combinations of these three fillers were produced. The thermal conductivity results showed an increase in both through-plane and in-plane thermal conductivities, with the largest increase observed for synthetic graphite. The Department of Energy (DOE) had previously set a thermal conductivity goal of 20 W/m·K, which was surpassed by formulations containing 75 wt.% and 80 wt.% SG, yielding in-plane thermal conductivity values of 24.4 W/m·K and 33.6 W/m·K, respectively. In addition, composites containing 2.5 wt.% CB, 65 wt.% SG, and 6 wt.% CNT in PP had an in-plane thermal conductivity of 37 W/m·K. Flexural and tensile tests were conducted. All composite formulations exceeded the flexural strength target of 25 MPa set by DOE. The tensile and flexural modulus of the composites increased with higher concentration of carbon fillers. Carbon black and synthetic graphite caused a decrease in the tensile and flexural strengths of the composites.
However, carbon nanotubes increased the composite tensile and flexural strengths. Mathematical models were applied to estimate through-plane and in-plane thermal conductivities of single and multiple filler formulations, and the tensile modulus of single-filler formulations. For thermal conductivity, Nielsen's model yielded accurate values when compared to experimental results obtained through the Flash method. For prediction of tensile modulus, Nielsen's model yielded the smallest error between the predicted and experimental values. The second part of this project consisted of the development of a curriculum in Fuel Cell and Hydrogen Technologies to address different educational barriers identified by the Department of Energy. Through the creation of new courses and enterprise programs in the areas of fuel cells and the use of hydrogen as an energy carrier, we introduced engineering students to the new technologies, policies and challenges present with this alternative energy. Feedback provided by students participating in these courses and enterprise programs indicates positive acceptance of the different educational tools. Results obtained from a survey applied to students after participating in these courses showed an increase in the knowledge and awareness of energy fundamentals, which indicates the modules developed in this project are effective in introducing students to alternative energy sources.
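Nielsen's model mentioned above has a standard (Lewis-Nielsen) closed form; a minimal sketch follows, where the shape factor and maximum packing fraction are textbook defaults for spherical fillers, not the values fitted in this work:

```python
def nielsen_k(k_matrix, k_filler, phi, A=1.5, phi_max=0.637):
    """Lewis-Nielsen estimate of composite thermal conductivity.

    k_matrix, k_filler : conductivities of matrix and filler (W/m-K)
    phi     : filler volume fraction (0..phi_max)
    A       : shape factor (1.5 for spheres; larger for fibers/platelets)
    phi_max : maximum packing fraction of the filler
    """
    ratio = k_filler / k_matrix
    B = (ratio - 1.0) / (ratio + A)                    # contrast term
    psi = 1.0 + ((1.0 - phi_max) / phi_max**2) * phi   # packing correction
    return k_matrix * (1.0 + A * B * phi) / (1.0 - B * psi * phi)
```

Two built-in sanity checks: with zero filler the model returns the matrix conductivity, and the prediction rises monotonically with filler loading when the filler conducts better than the matrix.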
Abstract:
Traditionally, asphalt mixtures have been produced at high temperatures (between 150°C and 180°C) and are therefore often referred to as Hot Mix Asphalt (HMA). Recently, a new technology named Warm Mix Asphalt (WMA) was developed in Europe that allows HMA to be produced at lower temperatures. Over years of research effort, several WMA technologies were introduced, including foaming methods using Aspha-min® and Advera® WMA; organic additives such as Sasobit® and Asphaltan B®; and chemical packages such as Evotherm® and Cecabase RT®. Benefits were found when lower temperatures were used to produce asphalt mixtures, especially in terms of environmental impact and energy savings. Even though WMA has shown promising results in energy savings and emission reduction, only limited studies and laboratory tests have been conducted to date. The objectives of this project are to 1) develop a mix design framework for WMA by evaluating its mechanical properties; 2) evaluate the performance of WMA containing high percentages of recycled asphalt material; and 3) evaluate moisture sensitivity in WMA. The test results show that most of the WMA mixtures have higher fatigue life and TSR, which indicates better fatigue cracking and moisture damage resistance; however, the rutting potential of most of the WMA mixtures tested was higher than that of the control HMA. A recommended WMA mix design framework was developed as well and is presented in this study to help contractors and government agencies successfully design WMA. Mixtures containing high RAP and RAS contents were also studied, and the overall results show that WMA technology allows mixtures containing high RAP content and RAS to be produced at lower temperatures (up to 35°C lower) without significantly affecting the performance of the asphalt mixture in terms of rutting, fatigue and moisture susceptibility.
Lastly, the study also found that when hydrated lime was introduced into the WMA, all lime-modified mixtures passed the minimum TSR requirement of 0.80. This indicates that the moisture susceptibility of WMA can be improved by adding hydrated lime.
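The 0.80 criterion above is a tensile strength ratio (TSR) check: conditioned (wet) indirect tensile strength divided by unconditioned (dry) strength. A minimal sketch, with illustrative names and units:

```python
def tensile_strength_ratio(wet_strength_kpa, dry_strength_kpa, minimum=0.80):
    """Moisture-susceptibility check via the tensile strength ratio.

    TSR = moisture-conditioned indirect tensile strength / dry strength;
    mixtures at or above the minimum (0.80 here) pass the criterion.
    Returns (tsr, passed).
    """
    tsr = wet_strength_kpa / dry_strength_kpa
    return tsr, tsr >= minimum
```

For instance, a mixture retaining 850 kPa of an original 1000 kPa after moisture conditioning has TSR = 0.85 and passes; one retaining only 700 kPa fails.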
Abstract:
As an important Civil Engineering material, asphalt concrete (AC) is commonly used to build road surfaces, airports, and parking lots. With traditional laboratory tests and theoretical equations, it is a challenge to fully understand such a random composite material. Based on the discrete element method (DEM), this research seeks to develop and implement computer models as research approaches for improving understanding of AC microstructure-based mechanics. In this research, three categories of approaches were developed or employed to simulate microstructures of AC materials, namely randomly-generated models, idealized models, and image-based models. The image-based models were recommended for accurately predicting AC performance, while the other models were recommended as research tools to obtain deep insight into AC microstructure-based mechanics. A viscoelastic micromechanical model was developed to capture viscoelastic interactions within the AC microstructure. Four types of constitutive models were built to address the four categories of interactions within an AC specimen. Each of the constitutive models consists of three parts which represent three different interaction behaviors: a stiffness model (force-displacement relation), a bonding model (shear and tensile strengths), and a slip model (frictional property). Three techniques were developed to reduce the computational time for AC viscoelastic simulations. It was found that the computational time was significantly reduced from years or months to days or hours for typical three-dimensional models. Dynamic modulus and creep stiffness tests were simulated and methodologies were developed to determine the viscoelastic parameters. It was found that the DE models could successfully predict dynamic modulus, phase angles, and creep stiffness over a wide range of frequencies, temperatures, and time spans.
Mineral aggregate morphology characteristics (sphericity, orientation, and angularity) were studied to investigate their impacts on AC creep stiffness. It was found that aggregate characteristics significantly impact creep stiffness. Pavement responses and pavement-vehicle interactions were investigated by simulating pavement sections under a rolling wheel. It was found that wheel acceleration, steady rolling, and deceleration significantly impact contact forces. Additionally, a summary and recommendations were provided in the last chapter, and portions of the computer code were provided in the appendices.
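The dynamic modulus test that the DE models reproduce reduces to an amplitude ratio plus a phase lag between sinusoidal stress and strain; a minimal sketch with hypothetical input values:

```python
def dynamic_modulus(stress_amp_kpa, strain_amp, freq_hz, time_lag_s):
    """Dynamic modulus |E*| and phase angle from a sinusoidal test.

    |E*| = stress amplitude / strain amplitude (here in kPa)
    phase angle (deg) = 360 * frequency * time lag of strain behind stress
    (0 deg indicates purely elastic response, 90 deg purely viscous).
    """
    e_star = stress_amp_kpa / strain_amp
    delta_deg = 360.0 * freq_hz * time_lag_s
    return e_star, delta_deg
```

For example, a 200 kPa stress amplitude producing a 100 microstrain response at 10 Hz with a 5 ms lag gives |E*| = 2 GPa and a 18 degree phase angle, the kind of frequency-dependent pair the simulations are validated against.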
Abstract:
Hall thrusters have been under active development around the world since the 1960s. Thrusters using traditional propellants such as xenon have been flown on a variety of satellite orbit raising and maintenance missions with an excellent record. To expand the mission envelope, it is necessary to lower the specific impulse of the thrusters, but xenon and krypton are poor performers at specific impulses below 1,200 seconds. To enhance low specific impulse performance, this dissertation examines the development of a Hall-effect thruster which uses bismuth as a propellant. Bismuth, the heaviest non-radioactive element, holds many advantages over noble gas propellants from an energetics as well as a practical economic standpoint. Low ionization energy, a large electron-impact cross-section and high atomic mass make bismuth ideal for low-specific-impulse applications. The primary disadvantage lies in the high temperatures required to generate the bismuth vapors. Previous efforts carried out in the Soviet Union relied upon complete bismuth vaporization and gas-phase delivery to the anode. While this proved successful, the power required to vaporize the bismuth and maintain the gas phase throughout the mass flow system quickly removed many of the efficiency gains expected from using bismuth. To solve these problems, a unique method of delivering liquid bismuth to the anode has been developed. Bismuth is contained within a hollow anode reservoir that is capped by a porous metallic disc. By utilizing the inherent waste heat generated in a Hall thruster, liquid bismuth is evaporated and the vapors pass through the porous disc into the discharge chamber. Due to the high temperatures and material compatibility requirements, the anode was fabricated out of pure molybdenum. The porous vaporizer was not available commercially, so a method of creating a refractory porous plate with 40-50% open porosity was developed.
Molybdenum also does not respond well to most forms of welding, so a diffusion bonding process was developed to join the molybdenum porous disc to the molybdenum anode. Operation of the direct evaporation bismuth Hall thruster revealed interesting phenomena. When utilizing constant current mode on a discharge power supply, the discharge voltage settles to a stable operating point which is a function of discharge current, anode face area and average pore size on the vaporizer. Oscillations with a 40-second period were also observed. Preliminary performance data suggest that the direct evaporation bismuth Hall thruster performs similarly to xenon and krypton Hall thrusters. Plume interrogation with a Retarding Potential Analyzer confirmed that bismuth ions were being efficiently accelerated, while Faraday probe data gave a view of the ion density in the exhausted plume.
Abstract:
There is a need by engine manufacturers for computationally efficient and accurate predictive combustion modeling tools, for integration in engine simulation software, to assess combustion system hardware designs and support early development of engine calibrations. This thesis discusses the process for developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injected, spark-ignited engine with variable valve timing, lift and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was utilized to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge© was used to simulate and correlate the 3-D combustion system, port and piston geometry to the turbulent flow development within the cylinder, in order to properly predict the experimentally measured turbulent flow parameters through the intake, compression and expansion processes. The engine simulation software GT-Power© was then used to determine the 1-D flow characteristics of the engine hardware being tested and to correlate the regressed combustion modeling tool to experimental data to determine accuracy. The results of the combustion modeling tool show accurate trends, capturing the combustion sensitivities to turbulent flow, thermodynamic and internal residual effects with changes in intake and exhaust valve timing, lift and duration.
Abstract:
Transformer protection is one of the most challenging applications within the power system protective relay field. Transformers with a capacity rating exceeding 10 MVA are usually protected using differential current relays. Transformers are an aging and vulnerable bottleneck in the present power grid; therefore, quick fault detection and corresponding transformer de-energization is the key element in minimizing transformer damage. Present differential current relays are based on digital signal processing (DSP). They combine DSP phasor estimation and protective-logic-based decision making. The limitations of existing DSP-based differential current relays must be identified to determine the best protection options for sensitive and quick fault detection. The development, implementation, and evaluation of a DSP differential current relay is detailed. The overall goal is to make fault detection faster without compromising secure and safe transformer operation. A detailed background on the DSP differential current relay is provided. Then different DSP phasor estimation filters are implemented and evaluated based on their ability to extract desired frequency components from the measured current signal quickly and accurately. The main focus of the phasor estimation evaluation is to identify the differences between non-recursive and recursive filtering methods. Then the protective logic of the DSP differential current relay is implemented and the required settings are made in accordance with the transformer application. Finally, the DSP differential current relay is evaluated using available transformer models within the ATP simulation environment. Recursive filtering methods were found to have a significant advantage over non-recursive filtering methods, both when evaluated individually and when applied in the DSP differential relay.
Recursive filtering methods can be up to 50% faster than non-recursive methods, but can cause false trips due to overshoot if the only objective is speed. Relay sensitivity is, however, independent of the filtering method and depends on the settings of the relay's differential characteristics (pickup threshold and percent slope).
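The recursive-versus-non-recursive distinction can be sketched with the standard full-cycle DFT phasor estimator used in numerical relays; this is a generic textbook form under assumed sampling, not necessarily the exact filters implemented in the thesis:

```python
import cmath
import math

def dft_phasor(samples, k, n):
    """Non-recursive full-cycle DFT phasor of the fundamental, ending at
    sample index k (n = samples per cycle): recomputes the whole window,
    O(n) multiplications per new sample."""
    return (math.sqrt(2) / n) * sum(
        samples[m] * cmath.exp(-2j * math.pi * m / n)
        for m in range(k - n + 1, k + 1)
    )

def recursive_update(prev_phasor, new_sample, old_sample, k, n):
    """Recursive full-cycle DFT: slide the window by correcting the
    previous phasor with the newest and oldest samples only, O(1) per
    sample. This per-sample saving is the speed advantage; the running
    sum is also where overshoot-driven false trips can originate."""
    rot = cmath.exp(-2j * math.pi * k / n)
    return prev_phasor + (math.sqrt(2) / n) * (new_sample - old_sample) * rot
```

With the sqrt(2)/n scaling the phasor magnitude equals the RMS of the fundamental, and stepping the recursive form sample by sample reproduces the non-recursive result exactly on a steady sinusoid.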