11 results for Using Lean tools

in Digital Commons - Michigan Tech


Relevance: 90.00%

Publisher:

Abstract:

This research concerns the production of recombinant Trichoderma reesei endoglucanase Cel7B using Kluyveromyces lactis, transformed with chromosomally integrated Cel7B cDNA, as the host cell (K. lactis Cel7B). Cel7B is one of the glycoside hydrolase family of proteins produced by T. reesei. Together with other endoglucanases, exoglucanases, and β-glucosidases, Cel7B hydrolyzes cellulose to glucose, which can then be fermented to biofuels or other value-added products. The objective of this MS project is to identify favorable fermentation conditions for recombinant Cel7B production and improved activity. Enzyme production on different types of media was examined, and enzyme activity was measured using several tools and procedures. The first condition tested was the concentration of galactose as the carbon and energy source; galactose also acts as a potent promoter of recombinant Cel7B expression in K. lactis Cel7B. The purpose of this test was to determine the relationship between enzyme production and sugar concentration. The second test varied the type of medium: a complex medium of yeast extract, peptone, and galactose (YPGal); a minimal medium of yeast nitrogen base (YNB) with galactose; and a supplemented minimal medium of yeast nitrogen base with casamino acids (YBC), a nitrogen source, with galactose. The third condition varied the type of reactor or fermenter: a small reactor (shake flask) and a larger automated bioreactor (BioFlo 3000 fermenter), to determine the quantity of protein produced in different production environments. Different tools were used to determine the presence and activity of the Cel7B enzyme: presence was assessed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), and activity was detected with the carboxymethyl cellulose-3,5-dinitrosalicylic acid (CMC-DNS) assay.

SDS-PAGE showed an enzyme band at 67 kDa, larger than native Cel7B (52 kDa), likely due to over-glycosylation during post-translational processing in K. lactis. Among the media tested, recombinant Cel7B was produced in YPGal and YBC, but no production or activity was detected in YNB; this indicates that Cel7B production requires amino acid resources in the fermentation medium. In experiments where recombinant Cel7B net activity was measured at a 1% initial galactose concentration in YPGal and YBC media, higher enzyme activity was detected in the complex medium, YPGal. For YBC medium, higher recombinant Cel7B activity was detected in flask culture at 2% galactose than at 1% galactose. Two bioreactor experiments were conducted under these culture conditions at 30 °C, pH 7.0, dissolved oxygen at 50% of saturation, and 250 rpm agitation (variable depending on DO control). K. lactis Cel7B growth curves were quite reproducible, with maximum optical density at 600 nm (OD600) between 7 and 8 (when factoring in a 10:1 dilution). Galactose was consumed rapidly during the first 15 hours of bioreactor culture; recombinant Cel7B began to appear in the culture at 10-15 hours and increased thereafter to a maximum of 0.9-1.6 mg/mL/hr in these experiments. These bioreactor enzyme activity results are much higher than those of comparable flask-scale experiments (0.5 mg/mL/hr). Based on this research, to achieve the highest recombinant Cel7B activity from batch culture of K. lactis Cel7B, it is best to use a complex medium, a 2% initial galactose concentration, and an automated bioreactor in which temperature, pH, and dissolved oxygen can be well controlled.
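For context on how activity figures like those above come out of the CMC-DNS assay, the sketch below converts a 540 nm absorbance reading into reducing sugar released per milliliter per hour via a glucose standard curve. This is a generic illustration of the assay arithmetic; the standard-curve slope, intercept, and reading are hypothetical, not values from this thesis.

```python
def dns_activity(abs540, slope, intercept, hours, dilution=1.0):
    """Reducing-sugar release rate from a CMC-DNS assay reading.
    abs540: absorbance at 540 nm; slope/intercept: glucose standard
    curve (absorbance vs. mg/mL glucose). Returns mg/mL/hr."""
    glucose_mg_per_ml = (abs540 - intercept) / slope * dilution
    return glucose_mg_per_ml / hours

# Hypothetical reading against a hypothetical standard curve:
print(dns_activity(abs540=0.45, slope=0.90, intercept=0.02, hours=0.5))
```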

Relevance: 80.00%

Publisher:

Abstract:

Several modern cooling applications require mini/micro-channel shear-driven flow condensers, and several design challenges must be overcome to meet those requirements. The difficulty in developing effective design tools for shear-driven flow condensers is exacerbated by the lack of a bridge between physics-based modeling of condensing flows and the current, popular approach based on semi-empirical heat transfer correlations. A primary contributor to this disconnect is that typical heat transfer correlations eliminate the dependence of the heat transfer coefficient on the method of cooling employed on the condenser surface, when that independence may very well not hold. This is in direct contrast to direct physics-based modeling approaches, in which the thermal boundary conditions have a direct and substantial impact on the heat transfer coefficient values. Typical heat transfer correlations instead introduce vapor quality as one of the variables on which the heat transfer coefficient depends. This study shows how, under certain conditions, a heat transfer correlation from direct physics-based modeling can be equivalent to typical engineering heat transfer correlations without making the same a priori assumptions. Another factor that raises doubts about the validity of heat transfer correlations is the opacity associated with the application of flow regime maps for internal condensing flows. It is well known that flow regimes strongly influence heat transfer rates. However, several heat transfer correlations ignore flow regimes entirely and present a single correlation for all regimes. This is believed to be inaccurate, since one would expect significant differences among the correlations for different flow regimes. Several other studies present a heat transfer correlation for a particular flow regime but ignore the method by which the extent of that regime is established. This thesis provides a definitive answer (in the context of stratified/annular flows) to: (i) whether a heat transfer correlation can always be independent of the thermal boundary condition and represented as a function of vapor quality, and (ii) whether a heat transfer correlation can be obtained independently for a flow regime without knowing the flow regime boundary (even if that boundary is represented through a separate and independent correlation). To obtain the results needed to answer these questions, this study uses two numerical simulation tools: the approximate but highly efficient Quasi-1D simulation tool and the exact but more expensive 2D Steady Simulation tool. Using these tools and approximate values of the flow regime transitions, a deeper understanding of the current state of knowledge of flow regime maps and heat transfer correlations in shear-driven internal condensing flows is obtained. The ideas presented here can be extended to other flow regimes of shear-driven flows, and analogous correlations can be obtained for internal condensers in gravity-driven and mixed-driven configurations.
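As a concrete instance of the vapor-quality-based engineering correlations the abstract questions, the sketch below implements the widely used Shah (1979) correlation for in-tube condensation: the two-phase heat transfer coefficient is built from a liquid-only Dittus-Boelter coefficient and depends on quality and reduced pressure, with no reference to the thermal boundary condition. This is our illustration, not one of the thesis's simulation tools, and the inputs are assumed.

```python
def shah_htc(x, re_lo, pr_l, k_l, d, p_red):
    """Shah (1979) in-tube condensation correlation.
    re_lo: Reynolds number with the entire flow taken as liquid (G*D/mu_l);
    x: vapor quality; p_red: reduced pressure P/P_crit; d: diameter (m)."""
    h_lo = 0.023 * re_lo**0.8 * pr_l**0.4 * k_l / d  # Dittus-Boelter, liquid-only
    return h_lo * ((1 - x)**0.8 +
                   3.8 * x**0.76 * (1 - x)**0.04 / p_red**0.38)

# Illustrative (assumed) inputs for a refrigerant-like flow in a 2 mm channel:
print(shah_htc(x=0.5, re_lo=8000.0, pr_l=4.0, k_l=0.08, d=0.002, p_red=0.1))
```

Note that the correlation's only flow variable is the quality x, which is exactly the thermal-boundary-condition independence the thesis examines.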

Relevance: 40.00%

Publisher:

Abstract:

Quantifying belowground dynamics is critical to our understanding of plant and ecosystem function and belowground carbon cycling, yet currently available tools for complex belowground image analyses are insufficient. We introduce novel techniques combining digital image processing tools and geographic information systems (GIS) analysis to permit semi-automated analysis of complex root and soil dynamics. We illustrate the methodologies with imagery from microcosms, minirhizotrons, and a rhizotron, in upland and peatland soils. We provide guidelines for correct image capture, a method that automatically stitches numerous minirhizotron images into one seamless image, and image analysis using image segmentation and classification in SPRING or change analysis in ArcMap. These methods facilitate studies of spatial and temporal root and soil interactions, providing a framework for a more comprehensive understanding of belowground dynamics.
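As a minimal stand-in for the segmentation and classification step (performed in SPRING in the paper), the sketch below thresholds a normalized grayscale minirhizotron frame into root and non-root pixels and reports root cover. The threshold and toy image are hypothetical; real imagery would need per-series tuning.

```python
import numpy as np

def segment_roots(gray, threshold=0.6):
    """Classify pixels brighter than `threshold` as root in a grayscale
    image normalized to [0, 1]; returns the mask and the root cover."""
    mask = gray >= threshold
    return mask, mask.mean()

# Toy frame: dim soil background with one bright diagonal 'root'.
img = np.random.default_rng(0).random((100, 100)) * 0.5
img[np.arange(100), np.arange(100)] = 0.9
mask, cover = segment_roots(img)
print(f"root cover: {cover:.2%}")
```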

Relevance: 30.00%

Publisher:

Abstract:

Metals price risk management is a key issue in metal markets because of uncertainty in commodity price fluctuations, exchange rates, and interest rates, and because of the large price risk borne by both metals producers and consumers. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors, and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options.

The first part of the project describes basic derivatives and risk management strategies, and discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives: the option pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare the theoretical option values with market-observed values. Predicting future trends in copper prices is essential to managing market price risk successfully, so the third part discusses econometric commodity models. Building on that literature review, the fourth part reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, it shows how LME copper prices can be explained by a simultaneous-equation structural model (estimated by two-stage least squares regression) connecting supply and demand variables. The simultaneous econometric model built for the copper industry is

\[
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathrm{GDP}_t^{1.7151}\, e^{0.0158\,\mathrm{IP}_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{\mathrm{OIL},t}^{-0.1559}\, \mathrm{USDI}_t^{1.2432}\, \mathrm{LIBOR}_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
\]

with the implied reduced-form price equation

\[
P_{t-1}^{\mathrm{CU}} = e^{-2.5165}\, \mathrm{GDP}_t^{2.1910}\, e^{0.0202\,\mathrm{IP}_t}\, T_t^{-0.1799}\, P_{\mathrm{OIL},t}^{0.1991}\, \mathrm{USDI}_t^{-1.5881}\, \mathrm{LIBOR}_{t-6}^{0.0717},
\]

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively; P_{t-1} is the lagged copper price, the focus of the analysis in this part; GDP_t is world gross domestic product, representing aggregate economic activity; IP_t is global industrial production growth; T_t is a time variable, a useful proxy for technological change; P_{OIL,t}, the price of oil at time t, is a proxy for the cost of energy in producing copper; USDI_t is the U.S. dollar index, an important variable for explaining copper supply and prices; and LIBOR_{t-6} is the 6-month-lagged 1-year London Interbank Offered Rate. Although the model can be applied to other base metals industries, omitted exogenous variables, such as the price of a substitute or a combined variable for substitute prices, were not considered in this study. Based on this econometric model, a Monte Carlo simulation analysis is used to estimate the probabilities that the monthly average copper prices in 2006 and 2007 will exceed a specific option strike price. The final part evaluates risk management strategies, including option strategies, metal swaps, and simple options, in relation to the simulation results. The basic option strategies, such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options for 2006 and 2007, are evaluated, and each risk management strategy for 2006 and 2007 is analyzed against the daily data and the price prediction model. Applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
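To make the Asian-option and Monte Carlo pieces concrete, the sketch below prices an arithmetic-average-price Asian call by Monte Carlo under geometric Brownian motion, the standard textbook approach (DerivaGem, used in the thesis, offers comparable valuations). The copper-like inputs are assumed for illustration, not taken from the thesis.

```python
import numpy as np

def asian_call_mc(s0, k, r, sigma, t, n_fix, n_paths, seed=0):
    """Monte Carlo value of an arithmetic-average-price Asian call
    under GBM with n_fix equally spaced averaging dates."""
    rng = np.random.default_rng(seed)
    dt = t / n_fix
    z = rng.standard_normal((n_paths, n_fix))
    log_steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(log_steps, axis=1))  # prices at fixings
    payoff = np.maximum(paths.mean(axis=1) - k, 0.0)   # average-price call
    return np.exp(-r * t) * payoff.mean()              # discounted mean

# Assumed copper-like inputs: spot 6000 $/t, strike 6200 $/t, 1 year,
# monthly fixings, 25% volatility, 5% rate.
print(asian_call_mc(s0=6000.0, k=6200.0, r=0.05, sigma=0.25,
                    t=1.0, n_fix=12, n_paths=200_000))
```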

Relevance: 30.00%

Publisher:

Abstract:

This dissertation presents an effective quasi one-dimensional (1-D) computational simulation tool and a full two-dimensional (2-D) computational simulation methodology for steady annular/stratified internal condensing flows of pure vapor. These simulation tools are used to investigate internal condensing flows in both gravity-driven and shear-driven environments. Through accurate numerical simulations of the full two-dimensional governing equations, results for laminar/laminar condensing flows inside mm-scale ducts are presented. The methodology has been developed on a MATLAB/COMSOL platform and is currently capable of simulating film-wise condensation for steady (and unsteady) flows. Moreover, a novel 1-D solution technique capable of simulating condensing flows inside rectangular and circular ducts with different thermal boundary conditions is also presented. The results obtained from the 2-D scientific tool and the 1-D engineering tool are validated against, and synthesized with, experimental results for gravity-dominated flows inside a vertical tube and an inclined channel, and for shear/pressure-driven flows inside horizontal channels. Furthermore, these simulation tools are employed to demonstrate key differences in physics between gravity-dominated and shear/pressure-driven flows. A transition map that distinguishes the shear-driven, gravity-driven, and "mixed"-driven flow zones within the non-dimensional parameter space governing these duct flows is presented, along with film thickness and heat transfer correlations valid in these zones. It is also shown that internal condensing flows in micrometer-scale ducts experience shear-driven flow even in different gravitational environments. The full 2-D steady computational tool has been employed to investigate the length of annularity. The result for a shear-driven flow in a horizontal channel shows that, in the absence of any noise or pressure fluctuation at the inlet, the onset of non-annularity is partly due to insufficient shear at the liquid-vapor interface. This result is being further corroborated and investigated by R. R. Naik with the help of the unsteady simulation tool. The condensing-flow results and flow-physics understanding developed through these simulation tools will be instrumental in the reliable design of modern micro-scale and space-based thermal systems.
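For a sense of the gravity-driven baseline against which such tools are compared, the sketch below implements the classical Nusselt analysis for laminar film condensation on a vertical wall: the condensate film thickens downstream, and the local heat transfer coefficient is k_l/δ(x). This is a textbook model, not the dissertation's Quasi-1D or 2-D tool, and the water-like property values are approximate.

```python
import numpy as np

# Classical Nusselt laminar film condensation on a vertical wall.
# Water-like properties near saturation; all values approximate.
K_L, MU_L = 0.68, 2.8e-4      # liquid conductivity W/(m K), viscosity Pa s
RHO_L, RHO_V = 958.0, 0.6     # liquid / vapor density, kg/m^3
H_FG, G = 2.257e6, 9.81       # latent heat J/kg, gravity m/s^2

def film_thickness(x, dT):
    """Film thickness delta(x) a distance x down the wall, for wall
    subcooling dT = T_sat - T_wall (Nusselt's quarter-power law)."""
    num = 4.0 * K_L * MU_L * dT * x
    den = G * RHO_L * (RHO_L - RHO_V) * H_FG
    return (num / den) ** 0.25

def local_htc(x, dT):
    """Local heat transfer coefficient h(x) = k_l / delta(x)."""
    return K_L / film_thickness(x, dT)

x = np.linspace(0.01, 0.5, 5)  # positions down the wall (m)
print(local_htc(x, dT=10.0))   # h drops as the film thickens downstream
```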

Relevance: 30.00%

Publisher:

Abstract:

Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, so care must be taken when establishing scan locations and resolution to capture data at a resolution adequate for defining the features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. These point clouds contain information that can provide quantitative surface condition data, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, including the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool that integrates prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful in assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly to ensure effective evaluation of bridge surface condition, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
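As a minimal NumPy stand-in for the kind of spall quantification the ArcPy algorithm performs, the sketch below fits a least-squares plane to deck returns, flags points below it, and estimates spall area and volume on an occupancy grid. The depth threshold and cell size are hypothetical, not the thesis's parameters.

```python
import numpy as np

def spall_metrics(points, depth_thresh=0.01, cell=0.05):
    """Flag deck point-cloud returns below a least-squares plane and
    estimate spall area/volume. points: (N, 3) x, y, z in meters;
    depth_thresh (m) and cell (m) are illustrative values."""
    x, y, z = points.T
    a = np.column_stack([x, y, np.ones_like(x)])  # plane z = c0*x + c1*y + c2
    coeffs, *_ = np.linalg.lstsq(a, z, rcond=None)
    depth = (a @ coeffs) - z                      # positive below the plane
    hit = depth > depth_thresh
    # Occupied-grid-cell approximation of spall area; mean depth gives volume.
    cells = np.unique((points[hit, :2] / cell).astype(int), axis=0)
    area = len(cells) * cell**2                                 # m^2
    volume = depth[hit].mean() * area if hit.any() else 0.0     # m^3
    return area, volume
```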

Relevance: 30.00%

Publisher:

Abstract:

This project addresses the potential impacts of changing climate on dry-season water storage and discharge from a small mountain catchment in Tanzania. Villagers and water managers around the catchment have experienced worsening water scarcity and attribute it to increasing population and demand, but very little has been done to understand the physical characteristics and hydrological behavior of the spring catchment. The physical nature of the aquifer was characterized, and water balance models were calibrated to discharge observations to explore relative changes in aquifer storage resulting from climate change. To characterize the shallow aquifer supplying water to the Jandu spring, water quality and geochemistry data were analyzed, discharge recession analysis was performed, and two water balance models were developed and tested. Jandu geochemistry suggests a shallow, meteorically recharged aquifer system with short circulation times. Baseflow recession analysis showed that the catchment behavior could be represented by a linear storage model with an average recession constant of 0.151/month over 2004-2010. Two modified Thornthwaite-Mather Water Balance (TMWB) models were calibrated using historic rainfall and discharge data and shown to reproduce dry-season flows with Nash-Sutcliffe efficiencies between 0.86 and 0.91. The modified TMWB models were then used to examine nineteen perturbed climate scenarios to test the potential impacts of regional climate change on catchment storage during the dry season. Forcing the models with realistic scenarios for average monthly temperature, annual precipitation, and seasonal rainfall distribution demonstrated that even small climate changes might adversely impact aquifer storage conditions at the onset of the dry season. The scale of the change depended on the direction (increasing vs. decreasing) and magnitude of the climate change (temperature and precipitation). This study demonstrates that small mountain aquifer characterization is possible using simple water quality parameters, that recession analysis can be integrated into modeling aquifer storage parameters, and that water balance models can accurately reproduce dry-season discharges and might be useful tools for assessing climate change impacts. However, uncertainty in current climate projections, and the lack of data for testing the predictive capabilities of the model beyond the present data set, make the forecasts of changes in discharge uncertain as well. The hydrologic tools used herein offer promise for future research in understanding small, shallow, mountainous aquifers and could potentially be developed and used by water resource professionals to assess climatic influences on local hydrologic systems.
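To show how the recession constant feeds a storage model, the sketch below steps a monthly linear-reservoir water balance, S_{t+1} = S_t + R_t - kS_t with discharge Q_t = kS_t, using the k = 0.151/month found in the recession analysis above. The recharge series and initial storage are illustrative, not calibrated values from the study.

```python
import numpy as np

K = 0.151                      # recession constant from the study (1/month)

def simulate_discharge(storage0, recharge):
    """Monthly linear-reservoir step: Q_t = K*S_t, S_{t+1} = S_t + R_t - Q_t."""
    s, q = storage0, []
    for r in recharge:
        q.append(K * s)
        s += r - K * s
    return np.array(q)

# Assumed wet season followed by a dry season (mm of recharge per month):
recharge = [30, 40, 25, 10, 0, 0, 0, 0, 0, 0]
print(simulate_discharge(storage0=100.0, recharge=recharge))
```

In the dry months (R_t = 0), discharge decays geometrically by the factor (1 - K) each month, the discrete analogue of the exponential recession fitted to the observations.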

Relevance: 30.00%

Publisher:

Abstract:

The optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems in which the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations, while full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized, and standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness, so the resulting sub-populations differ in size from their initial sizes. The process repeats, increasing the size of the sub-populations of more fit solutions. Both techniques are applied to several MGADSM problems. They can determine the number of swing-bys, the planets to swing by, launch and arrival dates, and the number of deep space maneuvers, as well as their locations, magnitudes, and directions, in an optimal sense. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and mission structure in the Global Trajectory Optimization Competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved; the J2 perturbation and zonal coverage are considered in designing repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced, whose parameters are optimized so that the shadow of a spacecraft on the Earth visits the same locations periodically every desired number of days.
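The HGGA idea described above (fixed-length chromosomes whose ineffective genes are simply ignored by the cost function) can be sketched in a few lines. The toy cost function, population sizes, and operators below are illustrative assumptions, not the thesis's MGADSM formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, POP, GENS, MUT = 8, 40, 200, 0.1      # toy sizes, not mission values

def fitness(genes, tags):
    """Toy cost: hidden genes (tag 0) are excluded from evaluation."""
    used = genes[tags.astype(bool)]
    return -np.sum((used - 0.5) ** 2) - 0.1 * used.size if used.size else -1e9

genes = rng.random((POP, N))             # full-length design variables
tags = rng.integers(0, 2, (POP, N))      # 1 = effective gene, 0 = hidden

for _ in range(GENS):
    fit = np.array([fitness(g, t) for g, t in zip(genes, tags)])
    a, b = rng.integers(0, POP, (2, POP))
    win = np.where(fit[a] >= fit[b], a, b)         # tournament selection
    genes, tags = genes[win].copy(), tags[win].copy()
    for k, c in enumerate(rng.integers(1, N, POP // 2)):
        p, q = 2 * k, 2 * k + 1                    # one-point crossover on
        genes[[p, q], c:] = genes[[q, p], c:]      # the full-length chromosome
        tags[[p, q], c:] = tags[[q, p], c:]
    flip = rng.random((POP, N)) < MUT              # mutation may also toggle
    tags[flip] ^= 1                                # genes hidden/effective
    genes = np.clip(genes + np.where(rng.random((POP, N)) < MUT,
                                     rng.normal(0, 0.1, (POP, N)), 0.0), 0, 1)

best = np.argmax([fitness(g, t) for g, t in zip(genes, tags)])
print(genes[best][tags[best].astype(bool)])        # effective genes only
```

Because the chromosome length never changes, standard crossover and mutation apply unmodified; only the tag vector decides how many design variables a candidate solution actually uses.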

Relevance: 30.00%

Publisher:

Abstract:

Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamical systems that govern many physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g., information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines that capture flow patterns and how to pick good viewpoints from which to observe flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using a dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates as the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we designed FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration, enabling observation and exploration of the relationships among field line clusters, spatiotemporal regions, and their interconnections in the transformed space. Most viewpoint selection methods consider only external viewpoints outside the flow field, which does not convey a clear observation when the flow field is cluttered near the boundary. We therefore propose a new way to explore flow fields by selecting several internal viewpoints around flow features inside the field and then generating a B-spline curve path traversing these viewpoints, providing users with close-up views of the flow field for detailed observation of hidden or occluded internal flow features [54]; this work has also been extended to handle unsteady flow fields. Besides flow field visualization, other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with real-time, high-resolution visual analytics of big image and text collections. Developing pedagogical visualization tools forms my other research focus: since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it, so we developed a set of visualization tools that give users an intuitive way to learn and understand these algorithms.
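Underlying all of these selection schemes is basic streamline tracing: numerically integrating seed points through the vector field. The sketch below uses classical fourth-order Runge-Kutta, a common choice for this purpose; the toy swirling field is an assumption for illustration.

```python
import numpy as np

def trace_streamline(velocity, seed, h=0.01, n_steps=500):
    """Integrate one streamline with classical RK4.
    velocity: callable mapping a 3-D point to a 3-D velocity vector."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        v = (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if np.linalg.norm(v) < 1e-9:     # stop near critical points
            break
        pts.append(p + h * v)
    return np.array(pts)

# Toy swirling field rising slowly in z:
swirl = lambda p: np.array([-p[1], p[0], 0.1])
line = trace_streamline(swirl, seed=[1.0, 0.0, 0.0])
print(line.shape)
```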

Relevance: 30.00%

Publisher:

Abstract:

The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have dramatically increased the efficiency and power density of gasoline engines in the last two decades. With the added power density, thermal management of the engine has become increasingly important, so it is critical to have accurate temperature and heat transfer models, as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further, and lean and dilute combustion regimes, along with waste heat recovery systems, are being explored as options for improving efficiency. To understand how these technologies will impact engine performance and each other, this research analyzed the engine both from a first-law energy balance perspective and through a second-law exergy analysis. The research also provided insight into the effects of various parameters on in-cylinder temperatures and heat transfer, and it provides data for validating other models. Engine load was found to be the dominant factor in the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system offered the best waste heat recovery potential because of its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both lowered combustion chamber and exhaust temperatures; however, in most cases the increased flow rates produced a net increase in exhaust energy. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and from EGR were compared using a dilution ratio, and the results showed that lean operation yielded a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
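The first-law/second-law contrast above can be made concrete with the standard flow-exergy relations: exhaust gas carries flow exergy (h - h0) - T0(s - s0), while heat rejected to coolant at temperature T carries at most the Carnot fraction (1 - T0/T). The property values and operating numbers below are rough assumptions for illustration, not the study's measurements.

```python
import numpy as np

# Ideal-gas flow exergy of exhaust vs. Carnot-factor exergy of coolant heat.
# Standard relations; property values are rough assumptions.
CP, R = 1100.0, 287.0        # J/(kg K), approximate for hot exhaust gas
T0, P0 = 298.0, 101325.0     # dead-state temperature (K) and pressure (Pa)

def exhaust_flow_exergy(mdot, t, p=101325.0):
    """Flow exergy rate: mdot * [(h - h0) - T0*(s - s0)], in W."""
    dh = CP * (t - T0)
    ds = CP * np.log(t / T0) - R * np.log(p / P0)
    return mdot * (dh - T0 * ds)

def coolant_heat_exergy(q_dot, t_coolant):
    """Exergy rate of heat rejected at coolant temperature (Carnot factor)."""
    return q_dot * (1.0 - T0 / t_coolant)

# Illustrative comparison: 700 K exhaust at 0.05 kg/s vs. 90 C coolant
# rejecting 20 kW -- the exhaust stream dominates, as the study found.
print(exhaust_flow_exergy(mdot=0.05, t=700.0))
print(coolant_heat_exergy(q_dot=20e3, t_coolant=363.0))
```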

Relevance: 30.00%

Publisher:

Abstract:

Increasing fuel prices, together with depletion of and instability in foreign oil imports, have driven the importance of alternative and renewable fuels. Alternative fuels such as ethanol, methanol, butyl alcohol, and natural gas are of interest for relieving some of the dependence on oil for transportation. Ethanol, a renewable fuel made from the sugars of corn, has been used widely in vehicle fuel in the United States because of its unique qualities. As with any renewable fuel, ethanol has many advantages but also disadvantages; cold startability of engines is one area of concern when using ethanol-blended fuel. This research focused on the cold startability of snowmobiles at ambient temperatures of 20 °F, 0 °F, and -20 °F. The tests were performed in a modified 48-foot refrigerated trailer retrofitted for cold-start testing. Pure gasoline (E0) was used as the baseline; a splash-blended ethanol and gasoline mixture (E15, 15% ethanol and 85% gasoline by volume) was then tested and compared to E0. Four types of snowmobiles were used for the testing: a Yamaha FX Nytro RTX four-stroke, a Ski-doo MX Z TNT 600 E-TEC direct-injected two-stroke, a Polaris 800 Rush semi-direct-injected two-stroke, and an Arctic Cat F570 carbureted two-stroke. All of the snowmobiles operate on open-loop systems, meaning there was no compensation for the change in fuel properties. Emissions were sampled using a Sensors Inc. Semtech DS five-gas emissions analyzer, and engine data were recorded using AIM Racing Data Power EVO3 Pro and EVO4 systems. The recorded raw exhaust emissions included carbon monoxide (CO), carbon dioxide (CO2), total hydrocarbons (THC), and oxygen (O2). To help explain the trends in the emissions data, engine parameters were also recorded: the EVO equipment was installed on each vehicle to log engine speed, exhaust gas temperature, head temperature, coolant temperature, and test cell air temperature. At least three consistent tests were taken at each fuel and temperature combination to ensure repeatability, for a total of 18 valid tests on each snowmobile. The snowmobiles were run to operating temperature to clear any excess fuel from the engine crankcase before each cold-start test.

The trends from switching from E0 to E15 differed for each snowmobile, as they all employ different engine technologies. The Yamaha (four-stroke EFI) produced higher levels of CO2 with lower CO and THC emissions on E15. Engine speeds were fairly consistent between fuels, but average engine speeds increased as temperatures decreased. The average exhaust gas temperature increased by 1.3-1.8% on E15 compared to E0 due to enleanment. For the Ski-doo (direct-injected two-stroke), only slight differences were noted when switching from E0 to E15, possibly because the engine operates lean of stoichiometric at idle. The CO2 emissions decreased slightly at 20 °F and 0 °F for E15, with a small difference at -20 °F, and almost no change in CO or THC emissions was noted at any temperature. The only significant difference in the engine data was the exhaust gas temperature, which decreased with E15. The Polaris (semi-direct-injected two-stroke) had similar raw exhaust emissions for the two fuels, probably because a resistor was changed when using E15, which switched the fuel map to an ethanol-mixture calibration (E10 vs. E0). This snowmobile operates at a rich condition, which caused the engine to emit higher values of CO than CO2 and to exceed the THC analyzer range at idle. The engine parameters and emissions did not change significantly with decreasing temperature, although the average idle engine speed did increase as the ambient temperature decreased. The Arctic Cat (carbureted two-stroke) was equipped with a choke lever to assist cold starts; the choke was operated in the same manner for both fuels. Lower CO emissions were observed with E15, yet the THC emissions exceeded the analyzer range. The engine ran at a slightly lower speed on E15.
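The enleanment noted above follows from blend stoichiometry: on an open-loop calibration that meters fuel for E0, the oxygen carried by ethanol leaves the mixture lean. A rough estimate using textbook stoichiometric air-fuel ratios (about 14.7 for gasoline, 9.0 for ethanol) and typical fuel densities is sketched below; the figures are generic, not measured values from this work.

```python
# Rough enleanment estimate for E15 on an engine calibrated open-loop for E0.
# Textbook stoichiometric AFRs and typical densities -- approximate values.
AFR_GAS, AFR_ETH = 14.7, 9.0           # kg air per kg fuel
RHO_GAS, RHO_ETH = 745.0, 789.0        # kg/m^3

v_eth = 0.15                           # ethanol volume fraction in E15
m_eth = v_eth * RHO_ETH
m_gas = (1 - v_eth) * RHO_GAS
w_eth = m_eth / (m_eth + m_gas)        # ethanol mass fraction (~0.157)

afr_blend = w_eth * AFR_ETH + (1 - w_eth) * AFR_GAS   # ~13.8 for E15
lam = AFR_GAS / afr_blend              # unchanged fueling -> relative air excess
print(f"E15 stoichiometric AFR ~ {afr_blend:.1f}; lambda ~ {lam:.3f} "
      f"({(lam - 1) * 100:.1f}% lean of stoichiometric)")
```

The roughly 6% lean shift this predicts is consistent with the higher exhaust gas temperatures observed on E15 for the open-loop four-stroke above.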