22 results for Location-aware process modeling

in Digital Commons - Michigan Tech


Relevance: 100.00%

Abstract:

Dynamic spectrum access (DSA) aims at utilizing spectral opportunities in both the time and frequency domains at any given location, which arise due to variations in spectrum usage. Recently, cognitive radios (CRs) have been proposed as a means of implementing DSA. In this work we focus on resource management in overlaid cognitive radio networks (CRNs). We formulate resource allocation strategies for CRNs as mathematical optimization problems. Specifically, we focus on two key problems in resource management: sum rate maximization and maximization of the number of admitted users. Since both problems are NP-hard due to the presence of binary assignment variables, we propose novel graph-based algorithms to solve them optimally. Further, we analyze the impact of location awareness on the network performance of CRNs by considering three cases: fully location-aware, partially location-aware, and non-location-aware. Our results clearly show that location awareness has a significant impact on the performance of overlaid CRNs and leads to an increase in spectrum utilization efficiency.
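In its simplest one-channel-per-user form, the sum rate maximization problem with binary assignment variables is a bipartite matching between users and channels. As an illustration only (the abstract does not specify the algorithm used, and the rate matrix below is hypothetical), a minimal sketch of optimal channel assignment:

```python
from itertools import permutations

def max_sum_rate_assignment(rates):
    """Exhaustively search one-to-one channel assignments for the
    maximum sum rate.  A bipartite-matching (Hungarian) algorithm
    solves the same binary-assignment problem in polynomial time;
    brute force keeps the sketch dependency-free."""
    n = len(rates)
    best_total, best_perm = float("-inf"), None
    for perm in permutations(range(n)):  # perm[u] = channel given to user u
        total = sum(rates[u][perm[u]] for u in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

# Hypothetical achievable rates (bit/s/Hz): rows = users, cols = channels
rates = [[3.0, 1.0, 2.0],
         [1.0, 4.0, 1.5],
         [2.5, 2.0, 0.5]]
perm, total = max_sum_rate_assignment(rates)  # optimum here is 8.5
```

Note that the greedy per-user choice (user 0 takes channel 0) is suboptimal here, which is exactly why the binary assignment structure makes the problem hard at scale.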

Relevance: 100.00%

Abstract:

Over 2 million Anterior Cruciate Ligament (ACL) injuries occur annually worldwide, resulting in considerable economic and health burdens (e.g., suffering, surgery, loss of function, risk for re-injury, and osteoarthritis). Current screening methods are effective, but they generally rely on expensive and time-consuming biomechanical movement analysis and are thus impractical solutions. In this dissertation, I report on a series of studies that begins to investigate one potentially efficient alternative to biomechanical screening, namely skilled observational risk assessment (e.g., having experts estimate risk based on observations of athletes' movements). Specifically, in Study 1 I discovered that ACL injury risk can be accurately and reliably estimated with nearly instantaneous visual inspection when observed by skilled and knowledgeable professionals. Modern psychometric optimization techniques were then used to develop a robust and efficient 5-item test of ACL injury risk prediction skill, the ACL Injury-Risk-Estimation Quiz (ACL-IQ). Study 2 cross-validated the results of Study 1 in a larger representative sample of both skilled (Exercise Science/Sports Medicine) and unskilled (general population) groups. In accord with research on human expertise, quantitative structural and process modeling of risk estimation indicated that superior performance was largely mediated by specific strategies and skills (e.g., ignoring irrelevant information), independent of domain-general cognitive abilities (e.g., mental rotation, general decision skill). These cognitive models suggest that ACL-IQ is a trainable skill, providing a foundation for future research and applications in training, decision support, and ultimately clinical screening investigations. Overall, I present the first evidence that observational ACL injury risk prediction is possible, including a robust technology for fast, accurate, and reliable measurement: the ACL-IQ.
Discussion focuses on applications and outreach, including a web platform (www.ACL-IQ.org) developed to house the test, provide a repository for further data collection, and increase public and professional awareness. Future directions and general applications of the skilled movement analysis approach are also discussed.

Relevance: 40.00%

Abstract:

This work presents a 1-D process-scale model used to investigate the chemical dynamics and temporal variability of nitrogen oxides (NOx) and ozone (O3) within and above the snowpack at Summit, Greenland for March-May 2009, and estimates the surface exchange of NOx between the snowpack and surface layer in April-May 2009. The model assumes the surfaces of snowflakes have a liquid-like layer (LLL) where aqueous chemistry occurs and interacts with the interstitial air of the snowpack. Model parameters and initialization are physically and chemically representative of the snowpack at Summit, Greenland, and model results are compared to measurements of NOx and O3 collected by our group at Summit from 2008-2010. The model, paired with measurements, confirmed the main hypothesis in the literature that photolysis of nitrate on the surface of snowflakes is responsible for nitrogen dioxide (NO2) production in the top ~50 cm of the snowpack at solar noon for the March-May 2009 period. Nighttime peaks of NO2 in the snowpack for April and May were reproduced with aqueous formation of peroxynitric acid (HNO4) in the top ~50 cm of the snowpack, with subsequent mass transfer to the gas phase, decomposition to form NO2 at nighttime, and transport of the NO2 to depths of 2 meters. Modeled production of HNO4 was hindered in March 2009 by the low production of its precursor, the hydroperoxy radical, resulting in underestimation of nighttime NO2 in the snowpack for March 2009. The aqueous reaction of O3 with formic acid was the major sink of O3 in the snowpack for March-May 2009. Nitrogen monoxide (NO) production in the top ~50 cm of the snowpack is tied to the photolysis of NO2; the model underestimates NO in May 2009. Modeled surface exchange of NOx in April and May is on the order of 10^11 molecules m^-2 s^-1.
Removing the measured downward fluxes of NO and NO2 from the measured fluxes resulted in agreement between measured NOx fluxes and modeled surface exchange in April, and an order-of-magnitude deviation in May. Modeled transport of NOx above the snowpack in May shows an order-of-magnitude increase of NOx fluxes in the first 50 cm of the snowpack, attributed to daytime production of NO2 from the thermal decomposition and photolysis of peroxynitric acid, with minor contributions of NO from HONO photolysis in the early morning.
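The confinement of photolytic NO2 production to the top ~50 cm follows from the exponential attenuation of actinic flux in snow. A toy sketch with an assumed photolysis rate, e-folding depth, and unit nitrate concentration (all placeholder values, not the thesis parameterization):

```python
import math

def no2_production(depth_cm, j0=1e-7, e_fold_cm=10.0, nitrate=1.0):
    """Photolytic NO2 production rate at a given snow depth, assuming
    the actinic flux decays exponentially with an e-folding depth.
    j0, e_fold_cm, and nitrate are illustrative placeholders."""
    return j0 * nitrate * math.exp(-depth_cm / e_fold_cm)

surface = no2_production(0.0)
deep = no2_production(50.0)  # five e-folding depths: under 1% of surface
```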

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Determination of combustion metrics for a diesel engine has the potential of providing feedback for closed-loop combustion phasing control to meet current and upcoming emission and fuel consumption regulations. This thesis focused on the estimation of combustion metrics including start of combustion (SOC), crank angle location of 50% cumulative heat release (CA50), peak pressure crank angle location (PPCL), peak pressure amplitude (PPA), peak apparent heat release rate crank angle location (PACL), mean absolute pressure error (MAPE), and peak apparent heat release rate amplitude (PAA). In-cylinder pressure has been used in the laboratory as the primary mechanism for characterization of combustion rates, and more recently in-cylinder pressure has been used in series production vehicles for feedback control. However, the intrusive in-cylinder pressure measurement is expensive and requires a special mounting process and engine structure modification. As an alternative, this work investigated block-mounted accelerometers to estimate combustion metrics in a 9L I6 diesel engine. The transfer path between the accelerometer signal and the in-cylinder pressure signal therefore needs to be modeled. Given the transfer path, the in-cylinder pressure signal and the combustion metrics can be accurately estimated (recovered) from accelerometer signals. The method for determining the transfer path, and its applicability, is critical in utilizing accelerometers for feedback. The single-input single-output (SISO) frequency response function (FRF) is the most common transfer path model; however, it is shown here to have low robustness across varying engine operating conditions. This thesis examines mechanisms to improve the robustness of the FRF for combustion metrics estimation. First, an adaptation process based on the particle swarm optimization algorithm was developed and added to the single-input single-output model.
Second, a multiple-input single-output (MISO) FRF model coupled with principal component analysis and an offset compensation process was investigated and applied. Improvement of the FRF robustness was achieved with both approaches. Furthermore, a neural network was investigated as a nonlinear model of the transfer path between the accelerometer signal and the apparent heat release rate. The transfer path between acoustical emissions and the in-cylinder pressure signal was also investigated in this dissertation on a high pressure common rail (HPCR) 1.9L TDI diesel engine. Acoustical emissions are an important factor in the powertrain development process. In this part of the research, a transfer path was developed between the two and then used to predict the engine noise level with the measured in-cylinder pressure as the input. Three methods for transfer path modeling were applied; the method based on the cepstral smoothing technique led to the most accurate results, with averaged estimation errors of 2 dBA and a root mean square error of 1.5 dBA. Finally, a linear model for engine noise level estimation was proposed with the in-cylinder pressure signal and the engine speed as components.
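A SISO FRF transfer-path model of the kind discussed here is commonly estimated with the H1 estimator, H(f) = Sxy(f)/Sxx(f), averaged over windowed signal blocks. A minimal sketch; the pure-gain "transfer path" in the example is synthetic noise, not engine data:

```python
import numpy as np

def h1_frf(x, y, nblock=256):
    """H1 frequency-response-function estimate H(f) = Sxy(f) / Sxx(f),
    averaged over non-overlapping Hann-windowed blocks."""
    win = np.hanning(nblock)
    sxx = np.zeros(nblock // 2 + 1)
    sxy = np.zeros(nblock // 2 + 1, dtype=complex)
    for i in range(0, len(x) - nblock + 1, nblock):
        X = np.fft.rfft(win * x[i:i + nblock])
        Y = np.fft.rfft(win * y[i:i + nblock])
        sxx += (np.conj(X) * X).real   # input auto-spectrum
        sxy += np.conj(X) * Y          # input-output cross-spectrum
    return sxy / sxx

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)          # stand-in for the accelerometer signal
y = 0.5 * x                            # synthetic path: pure gain of 0.5
H = h1_frf(x, y)                       # |H| is 0.5 at every frequency bin
```

Block averaging suppresses uncorrelated output noise, which is why H1 is the usual choice when the contaminating noise sits on the measured output.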

Relevance: 30.00%

Abstract:

EPON 862 is an epoxy resin that is cured with the hardening agent DETDA to form a crosslinked epoxy polymer used as a component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass transition range, which cause physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand physical aging, efficient computational approaches to predict the effects of physical aging on thermo-mechanical properties are needed. In this study, molecular dynamics and molecular minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities using a united-atom force field. These simulations are subsequently used to predict the glass transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems for validation of the modeling techniques. The results indicate that the glass transition temperature and elastic properties increase with increasing crosslink density, and the thermal expansion coefficient decreases with crosslink density, both above and below the glass transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can be realistically achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful for establishing a molecular model that resembles the physically aged state for further use in predicting thermo-mechanical properties as a function of aging time.
An equation has been developed from these results that directly correlates aging time with the aged volume of the molecular model. This equation can be helpful for modelers who want to study the properties of epoxy resins at different levels of aging but have little information about the volume shrinkage that occurs during physical aging.
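Extracting a glass transition temperature from simulations like these is commonly done by fitting straight lines to the specific volume versus temperature below and above the transition and taking their intersection. A sketch on synthetic data; the slope change at 400 K is invented, not an EPON 862-DETDA result:

```python
import numpy as np

def glass_transition(T, v, split):
    """Estimate Tg as the intersection of straight-line fits to the
    specific volume below and above an assumed split temperature."""
    lo, hi = T < split, T >= split
    m1, b1 = np.polyfit(T[lo], v[lo], 1)   # glassy branch
    m2, b2 = np.polyfit(T[hi], v[hi], 1)   # rubbery branch
    return (b2 - b1) / (m1 - m2)

# Synthetic volume-temperature data with a slope change at 400 K
T = np.linspace(300.0, 500.0, 21)
v = np.where(T < 400.0, 1.0 + 2e-4 * (T - 300.0), 1.02 + 6e-4 * (T - 400.0))
tg = glass_transition(T, v, 400.0)
```

With real simulation output the split temperature is usually swept, or the fit windows kept well away from the transition, since scatter near Tg biases the intersection.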

Relevance: 30.00%

Abstract:

Intraneural ganglion cysts expand within a nerve, causing neurological deficits in afflicted patients. Modeling the propagation of these cysts, which originate in the articular branch and then expand radially outward, will help support the articular theory and ultimately allow for more purposeful treatment of this condition. In finite element analysis, traditional Lagrangian meshing methods fail to model the excessive deformation that occurs as these cysts propagate. This report explores manual adaptive remeshing as a method that permits the use of a Lagrangian mesh while circumventing the severe mesh distortions typical of large deformations. Manual adaptive remeshing is the process of remeshing a deformed part and then reapplying loads in order to achieve a larger deformation than a single mesh can sustain without excessive distortion. The methods of manual adaptive remeshing described in this Master's Report are sufficient for modeling large deformations.

Relevance: 30.00%

Abstract:

As environmental problems become more complex, policy and regulatory decisions become far more difficult to make. The use of science has become an important practice in the decision-making process of many federal agencies. Many different types of scientific information are used to make decisions within the EPA, with computer models becoming especially important. Environmental models are used throughout the EPA in a variety of contexts, and their predictive capacity has become highly valued in decision making. The main focus of this research is to examine the EPA's Council for Regulatory Environmental Modeling (CREM) as a case study in addressing science issues, particularly models, in government agencies. Specifically, the goal was to answer the following questions: What is the history of the CREM, and how can this information shed light on the process of science policy implementation? What were the goals of implementing the CREM? Were these goals reached, and how have they changed? What impediments has the CREM faced, and why did these impediments occur? The three main sources of information for this research were observations made during summer employment with the CREM, document review, and supplemental interviews with CREM participants and other members of the modeling community. Examining the history of modeling at the EPA, as well as the history of the CREM, provides insight into the many challenges that are faced when implementing science policy and science policy programs. After examining the many impediments that the CREM has faced in implementing modeling policies, it was clear that the impediments fall into two categories: classic and paradoxical. The classic impediments include the more standard impediments to science policy implementation that might be found in any regulatory environment, such as a lack of resources and changes in administration.
Paradoxical impediments are cyclical in nature, with no clear solution, such as balancing top-down versus bottom-up initiatives and coping with differing perceptions. These impediments, when not properly addressed, severely hinder the ability of organizations to successfully implement science policy.

Relevance: 30.00%

Abstract:

A mass-balance model for Lake Superior was applied to polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and mercury to determine the major routes of entry and the major mechanisms of loss from this ecosystem, as well as the time required for each contaminant class to approach steady state. A two-box model (water column, surface sediments) incorporating seasonally adjusted environmental parameters was used. Both numerical (forward Euler) and analytical solutions were employed and compared. For validation, the model was compared with current and historical concentrations and fluxes in the lake and sediments. Results for PCBs were similar to prior work showing that air-water exchange is the most rapid input and loss process. The model indicates that mercury behaves similarly to a moderately chlorinated PCB, with air-water exchange being a relatively rapid input and loss process. Modeled accumulation fluxes of PBDEs in sediments agreed with measured values reported in the literature. Wet deposition rates were about three times greater than dry particulate deposition rates for PBDEs. Gas deposition was an important process for tri- and tetra-BDEs (BDEs 28 and 47), but not for higher-brominated BDEs. Sediment burial was the dominant loss mechanism for most of the PBDE congeners, while volatilization was still significant for tri- and tetra-BDEs. Because volatilization is a relatively rapid loss process for both mercury and the most abundant PCBs (tri- through penta-), the model predicts that similar times (2-10 yr) are required for these compounds to approach steady state in the lake. The model predicts that if inputs of Hg(II) to the lake decrease in the future, then concentrations of mercury in the lake will decrease at a rate similar to the historical decline in PCB concentrations following the ban on production and most uses in the U.S.
In contrast, PBDEs are likely to respond more slowly if atmospheric concentrations are reduced in the future, because volatilization is a much slower loss process for PBDEs, leading to lower overall loss rates than for PCBs and mercury. Uncertainties in the chemical degradation rates and partitioning constants of PBDEs are the largest source of uncertainty in the modeled times to steady state for this class of chemicals. The modeled organic PBT loading rates are sensitive to uncertainties in scavenging efficiencies by rain and snow, dry deposition velocity, watershed runoff concentrations, and in air-water exchange, such as the effect of atmospheric stability.
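The two-box structure with a forward-Euler solution can be sketched generically as follows; the rate constants are arbitrary placeholders, not the seasonally adjusted Lake Superior parameters used in the model:

```python
def two_box_step(cw, cs, dt, k_in=1.0, k_vol=0.5, k_settle=0.2,
                 k_burial=0.1, k_resus=0.05):
    """One forward-Euler step of a two-box (water column / surface
    sediment) contaminant balance with constant external loading.
    All rate constants are illustrative first-order coefficients."""
    dcw = k_in - (k_vol + k_settle) * cw + k_resus * cs  # water column
    dcs = k_settle * cw - (k_burial + k_resus) * cs      # surface sediment
    return cw + dt * dcw, cs + dt * dcs

cw, cs = 0.0, 0.0
for _ in range(5000):                 # integrate to t = 50 with dt = 0.01
    cw, cs = two_box_step(cw, cs, 0.01)
```

For a linear system like this the forward-Euler fixed point coincides with the true steady state, which is what makes the comparison against the analytical solution in the abstract a clean consistency check.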

Relevance: 30.00%

Abstract:

The intraneural ganglion cyst is a 200-year-old mystery related to nerve injury that is yet to be solved. Current treatments for this problem are relatively simple procedures involving removal of cystic contents from the nerve. However, these treatments may result in neuropathic pain and recurrence of the cyst. The articular theory proposed by Spinner et al. (Spinner et al. 2003) considers the neurological deficit in the common peroneal nerve (CPN) branch of the sciatic nerve and affirms that, in addition to the above treatments, ligation of the articular branch results in reliable eradication of the deficit. Mechanical modeling of the affected nerve cross section will reinforce the articular theory (Spinner et al. 2003). As the cyst propagates, it compresses the neighboring fascicles, and the nerve cross section takes on the appearance of a signet ring. Hence, in order to mechanically model the affected nerve cross section, computational methods capable of modeling excessively large deformations are required. Traditional FEM produces distorted elements when modeling such deformations, resulting in inaccuracies and premature termination of the analysis. The methods described in this Master's Thesis are able to simulate such deformations. The results obtained from the model closely resemble the MRI image obtained at the same location and show the appearance of a signet ring. This Master's Thesis describes the neurological deficit in brief, followed by a detailed explanation of the advanced computational methods used to simulate this problem. Finally, qualitative results show the resemblance of the mechanical model to MRI images of the nerve cross section at the same location, validating the capability of these methods to study this neurological deficit.

Relevance: 30.00%

Abstract:

In recent times, the demand for electrical energy storage has grown rapidly for both static applications and portable electronics, driving substantial improvements in battery systems; Li-ion batteries have proven to have the highest energy storage density of all rechargeable batteries. However, major breakthroughs are required to meet the demand for higher energy density at lower cost and penetrate new markets. The graphite anode, with its limited capacity, has become a bottleneck in the development of next-generation batteries and can be replaced by higher-capacity materials such as silicon. In the present study we focus on the mechanical behavior of the Si thin-film anode under various operating conditions. A numerical model is developed to simulate the intercalation-induced stress and the failure mechanism of the complex anode structure. The effects of various physical phenomena, such as diffusion-induced stress, plasticity, and crack propagation, are investigated to predict better performance parameters for improved design.
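The intercalation part of such a model reduces, in its simplest one-dimensional form, to Fickian diffusion of lithium into the film with a flux boundary at the free surface; the concentration gradient this produces is what drives diffusion-induced stress. A dependency-free sketch whose grid, time step, and flux values are illustrative, not the thesis model:

```python
def diffusion_step(c, D, dx, dt, j_in):
    """One explicit finite-difference step of 1-D lithium diffusion
    into a thin film: constant influx j_in at the free surface
    (node 0), zero flux at the substrate (last node)."""
    r = D * dt / dx ** 2              # must be <= 0.5 for stability
    new = list(c)
    for i in range(1, len(c) - 1):
        new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
    new[0] = c[0] + r * (c[1] - c[0]) + j_in * dt / dx
    new[-1] = c[-1] + r * (c[-2] - c[-1])
    return new

c = [0.0] * 11                        # dimensionless concentration profile
for _ in range(100):                  # here r = 0.1, comfortably stable
    c = diffusion_step(c, 1e-2, 0.1, 0.1, 0.05)
```

The scheme conserves mass exactly: the total lithium in the film equals the surface influx integrated over time, which is a useful sanity check before coupling stress to the concentration field.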

Relevance: 30.00%

Abstract:

Particulate matter (PM) emissions standards set by the US Environmental Protection Agency (EPA) have become increasingly stringent over the years. The EPA regulation for PM in heavy-duty diesel engines was reduced to 0.01 g/bhp-hr for the year 2010. Heavy-duty diesel engines make use of an aftertreatment filtration device, the Diesel Particulate Filter (DPF). DPFs are highly efficient in filtering PM (known as soot) and are an integral part of the 2010 heavy-duty diesel aftertreatment system. PM accumulates in the DPF as the exhaust gas flows through it, and needs to be removed by oxidation periodically for the efficient functioning of the filter. This oxidation process is also known as regeneration. There are two types of regeneration processes: active regeneration (oxidation of PM by external means) and passive oxidation (oxidation of PM by internal means). Active regeneration typically occurs in high-temperature regions, about 500-600 °C, much higher than normal diesel exhaust temperatures. Thus, the exhaust temperature has to be raised with the help of external devices like a Diesel Oxidation Catalyst (DOC) or a fuel burner, and O2 oxidizes the PM, producing CO2 as the oxidation product. In passive oxidation, one route of regeneration is by NO2, which oxidizes the PM, producing NO and CO2 as oxidation products. The passive oxidation process occurs at lower temperatures (200-400 °C) than active regeneration. Generally, DPF substrate walls are washcoated with catalyst material, which is observed to increase the rate of PM oxidation. The goal of this research is to develop a simple mathematical model to simulate PM depletion during the active regeneration process in a DPF (catalyzed and non-catalyzed). A simple, zero-dimensional kinetic model was developed in MATLAB.
Experimental data required for calibration were obtained from active regeneration experiments performed on PM-loaded mini DPFs in an automated flow reactor. The DPFs were loaded with PM from the exhaust of a commercial heavy-duty diesel engine. The model was calibrated to the data obtained from the active regeneration experiments, and numerical gradient-based optimization techniques were used to estimate the kinetic parameters of the model.
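The core of a zero-dimensional regeneration model of this kind is first-order PM depletion with an Arrhenius rate constant. A sketch in which the pre-exponential factor and activation energy are placeholders standing in for the calibrated values, not the thesis results:

```python
import math

def regenerate(m0, temp_k, t_end, dt, pre_exp=1e7, e_act=150e3):
    """Zero-dimensional PM oxidation, dm/dt = -k(T) * m, integrated
    with forward Euler; the rate constant k follows an Arrhenius law.
    pre_exp (1/s) and e_act (J/mol) are hypothetical values that a
    real study would fit to reactor data."""
    gas_const = 8.314                  # J/(mol K)
    k = pre_exp * math.exp(-e_act / (gas_const * temp_k))
    m = m0
    for _ in range(int(round(t_end / dt))):
        m -= k * m * dt
    return m

m_550c = regenerate(1.0, 823.0, 600.0, 0.01)   # 10 min near 550 C
m_600c = regenerate(1.0, 873.0, 600.0, 0.01)   # hotter: more PM oxidized
```

Fitting pre_exp and e_act to measured mass-loss curves is exactly the kind of gradient-based parameter estimation the abstract describes.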

Relevance: 30.00%

Abstract:

To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as "biomass"). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization, and to tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass, based on a set of evaluation criteria such as accessibility to biomass, the railway/road transportation network, water bodies, and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.

Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors that have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level.

Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass, because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, and torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals such as Biomass and Bioenergy and Renewable Energy, and presentations at conferences such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
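The multi-criteria weighted-sum objective can be sketched as a simple scoring of candidate sites; the site names, normalized criterion values, and weights below are invented for illustration and are not GIS outputs from the study:

```python
def best_site(sites, weights):
    """Rank candidate facility locations by the weighted sum of
    normalized criteria (smaller is better) and return the best."""
    return min(sites, key=lambda s: sum(w * s[k] for k, w in weights.items()))

# Invented, already-normalized criterion values for three candidate sites
sites = [
    {"name": "A", "cost": 0.9, "energy": 0.4, "ghg": 0.5},
    {"name": "B", "cost": 0.6, "energy": 0.7, "ghg": 0.6},
    {"name": "C", "cost": 0.5, "energy": 0.9, "ghg": 0.8},
]
weights = {"cost": 0.5, "energy": 0.3, "ghg": 0.2}
choice = best_site(sites, weights)    # site B wins under these weights
```

Sweeping the weight vector is one way to run the kind of sensitivity analysis the proposal describes, since the optimal site can flip as the relative importance of cost, energy, and GHG changes.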

Relevance: 30.00%

Abstract:

Biofuels are alternative fuels that promise to reduce reliance on imported fossil fuels and decrease emissions of greenhouse gases from energy consumption. This thesis analyzes the environmental impacts, focusing on the greenhouse gas (GHG) emissions, associated with the production and delivery of biofuel using the new Integrated Hydropyrolysis and Hydroconversion (IH2) process. The IH2 process is an innovative process for the conversion of woody biomass into hydrocarbon liquid transportation fuels in the range of gasoline and diesel. A cradle-to-grave life cycle assessment (LCA) was used to calculate the GHG emissions associated with diverse feedstock production systems and delivery to the IH2 facility, plus producing and using these new renewable liquid fuels. The biomass feedstocks analyzed include algae (microalgae), bagasse from sugarcane-producing locations such as Brazil or the extreme southern US, corn stover from Midwest US locations, and forest feedstocks from a northern Wisconsin location. Life-cycle GHG emission savings of 58%-98% were calculated for IH2 gasoline and diesel production and combustion in vehicles, compared to fossil fuels. The range of savings is due to the different biomass feedstocks and transportation modes and distances. Different scenarios were run to understand the uncertainties in certain input data to the LCA model, particularly in the feedstock production, IH2 biofuel production, and transportation sections.
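The reported 58%-98% savings come from comparing life-cycle emissions of the biofuel pathway against a fossil baseline. The arithmetic, with illustrative g CO2e/MJ values rather than the thesis inventory numbers:

```python
def ghg_savings_pct(biofuel_gco2e_mj, fossil_gco2e_mj):
    """Percent life-cycle GHG savings of a biofuel pathway relative
    to a fossil-fuel baseline (both in g CO2e per MJ of fuel)."""
    return 100.0 * (fossil_gco2e_mj - biofuel_gco2e_mj) / fossil_gco2e_mj

# Made-up intensities: 39 g CO2e/MJ biofuel vs a 93 g CO2e/MJ baseline
savings = ghg_savings_pct(39.0, 93.0)   # roughly 58% for these values
```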

Relevance: 30.00%

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e., the dynamical coupling between a fluid and a solid, which is otherwise very complex, time-consuming, and expensive. A method that can accurately model these types of mechanical systems through numerical solution is therefore a great option, and its advantages are even more obvious when considering huge structures like bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation, called the KLE, to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ODE time-integration schemes and thus allows the various multiphysics problems to be tackled as separate modules. The current algorithm for the KLE employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed-time-step ODE solver when dealing with unsteady problems. This research analyzes the effects of the Courant-Friedrichs-Lewy (CFL) condition for the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step ∆t is constrained by the CFL-like condition ∆t ≤ const·h^α, where h denotes the spatial discretization parameter (mesh size).
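A convergence study behind a bound of the form ∆t ≤ const·h^α typically estimates the exponent from errors measured at two mesh resolutions. A generic sketch (the error values are fabricated to follow e = C·h², not KLE results):

```python
import math

def observed_order(h1, e1, h2, e2):
    """Observed order of accuracy from errors e1, e2 measured on two
    spatial resolutions h1, h2: alpha = log(e1/e2) / log(h1/h2)."""
    return math.log(e1 / e2) / math.log(h1 / h2)

def max_stable_dt(h, alpha, const=1.0):
    """Largest time step allowed by a CFL-like bound dt <= const * h**alpha."""
    return const * h ** alpha

# Errors fabricated to follow e = C * h^2 exactly
alpha = observed_order(0.1, 4e-4, 0.05, 1e-4)   # recovers alpha = 2
```

Halving h with alpha = 2 quarters the admissible time step, which is why the exponent matters so much for the cost of unsteady runs.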

Relevance: 30.00%

Abstract:

The South Florida Water Management District (SFWMD) manages and operates numerous water control structures that are subject to scour. In an effort to reduce scour downstream of these gated structures, laboratory experiments were performed to investigate the effect of active air injection downstream of the terminal structure of a gated spillway on the depth of the scour hole. A literature review of similar research revealed significant variables such as the ratio of headwater-to-tailwater depths, the diffuser angle, sediment uniformity, and the ratio of air-to-water volumetric discharge. The experimental design was based on the analysis of several of these non-dimensional parameters. Bed scouring at stilling basins downstream of gated spillways has been identified as posing a serious risk to a spillway's structural stability. Although this type of scour has been studied in the past, it continues to represent a real threat to water control structures and requires additional attention. A hydraulic scour channel comprising a head tank, flow-straightening section, gated spillway, stilling basin, scour section, sediment trap, and tail tank was used to further this analysis. Experiments were performed in a laboratory channel consisting of a 1:30 scale model of the SFWMD S65E spillway structure. To ascertain the feasibility of air injection for scour reduction, a proof-of-concept study was performed. Experiments were conducted without air entrainment and with high, medium, and low air entrainment rates for high and low headwater conditions. For the cases with no air entrainment, it was found that there was excessive scour downstream of the structure due to a downward roller formed upon exiting the downstream sill of the stilling basin.
When air was introduced vertically just downstream of, and at the same level as, the stilling basin sill, it was found that air entrainment does reduce scour depth, by up to 58% depending on the air flow rate, but shifts the deepest scour location to the sides of the channel bed instead of the center. Various hydraulic flow conditions were tested without air injection to verify which scenario caused more scour. That scenario (uncontrolled free flow, in which water does not contact the gate and the water elevation in the stilling basin is lower than the spillway crest) was used for the remainder of the experiments testing air injection. Various air flow rates, diffuser elevations, air hole diameters, air hole spacings, and diffuser angles and widths were tested in over 120 experiments. Optimal parameters include air injection at a rate that results in a water-to-air ratio of 0.28, air holes 1.016 mm in diameter across the entire width of the stilling basin, and a vertically oriented injection pattern. Detailed flow measurements were collected for one case with air injection and one without. An identical flow scenario was used for each experiment, namely a high flow rate, a high upstream headwater depth, and a low tailwater depth. Equilibrium bed scour and velocity measurements were taken with an Acoustic Doppler Velocimeter at nearly 3000 points. The velocity data were used to construct a vector plot in order to identify which flow components contribute to the scour hole. Additionally, turbulence parameters were calculated to help understand why air injection reduced bed scour. Turbulence intensities, normalized mean flow, normalized kinetic energy, and anisotropy-of-turbulence plots were constructed. A clear trend emerged showing that air injection reduces turbulence near the bed and therefore reduces scour potential.
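The turbulence intensities computed from the ADV records follow from a Reynolds decomposition of each velocity component into mean and fluctuating parts. A minimal sketch with made-up velocity samples:

```python
def turbulence_intensity(u):
    """Turbulence intensity of one velocity component: RMS of the
    fluctuation u' = u - mean(u), divided by the mean velocity."""
    n = len(u)
    mean = sum(u) / n
    var = sum((s - mean) ** 2 for s in u) / n
    return var ** 0.5 / mean

samples = [1.0, 1.2, 0.8, 1.0]   # hypothetical streamwise velocities, m/s
ti = turbulence_intensity(samples)
```

Applied point by point over a measurement grid, this is what produces the near-bed turbulence maps used to argue that air injection lowers scour potential.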