18 results for digital forensic tool testing
in Digital Commons - Michigan Tech
Abstract:
In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students get experience testing the user experience of their product prototypes using methods ranging from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when being introduced to testing concepts. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.
Abstract:
With the introduction of mid-level ethanol blend gasoline for commercial sale, the compatibility of different off-road engines with these fuels must be evaluated. This report details a test study of one mid-level ethanol fuel in a two-stroke handheld gasoline engine used to power line trimmers. The study, sponsored by E3, tested the effectiveness of an aftermarket spark plug from E3 Spark Plug when using a mid-level ethanol blend gasoline. A 15% ethanol-by-volume blend (E15) was the mid-level test fuel, and a 10% ethanol-by-volume blend (E10) was used as the baseline fuel. The testing comprised running the engine at different load points and throttle positions to evaluate the cylinder head temperature, exhaust temperature, and engine speed. Raw gas emissions were also measured to determine the impact of the performance spark plug. The low calorific value of the E15 fuel decreased the speed of the engine along with reducing the fuel consumption and exhaust gas temperature. The HC emissions for the E15 fuel and E3 spark plug increased compared to the baseline in most cases, and NO formation was dependent on the cylinder head temperature. The E3 spark plug had a tendency to increase the temperature of the cylinder head irrespective of fuel type while reducing engine speed.
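The engine-speed and fuel-consumption trends attributed to E15's lower calorific value can be illustrated with a back-of-the-envelope blend energy calculation. This is only a sketch using typical handbook values for heating value and density, not measurements from the study:

```python
# Typical handbook values (assumptions, not measurements from this study):
# lower heating values ~43.4 (gasoline) and ~26.8 (ethanol) MJ/kg;
# densities ~0.74 and ~0.79 kg/L.
def blend_lhv_mj_per_l(vol_frac_ethanol):
    """Volumetric energy content of an ethanol-gasoline blend (MJ/L)."""
    gasoline = (1.0 - vol_frac_ethanol) * 0.74 * 43.4
    ethanol = vol_frac_ethanol * 0.79 * 26.8
    return gasoline + ethanol

e10, e15 = blend_lhv_mj_per_l(0.10), blend_lhv_mj_per_l(0.15)
print(round(100.0 * (e10 - e15) / e10, 2))  # ~1.76 % less energy per liter
```

Under these assumed constants, E15 carries roughly 2% less energy per liter than E10, which is consistent with the small drops in engine speed and exhaust temperature reported above.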
Abstract:
We used differential GPS measurements from a 13 station GPS network spanning the Santa Ana Volcano and Coatepeque Caldera to characterize the inter-eruptive activity and tectonic movements near these two active and potentially hazardous features. Caldera-forming events occurred from 70-40 ka, and at the Santa Ana/Izalco volcanoes eruptive activity occurred as recently as 2005. Twelve differential stations were surveyed for 1 to 2 hours on a monthly basis from February through September 2009 and tied to a centrally located continuous GPS station, which serves as the reference site for this volcanic network. Repeatabilities of the averages from 20-minute sessions taken over 20 hours or longer range from 2-11 mm in the horizontal (north and east) components of the inter-station baselines, suggesting a lower detection limit for the horizontal components of any short-term tectonic or volcanic deformation. Repeatabilities of the vertical baseline component range from 12-34 mm. Analysis of the precipitable water vapor in the troposphere suggests that tropospheric decorrelation as a function of baseline length and variable site elevation is the most likely source of vertical error. Differential motions of the 12 sites relative to the continuous reference site reveal inflation from February through July at several sites surrounding the caldera, with vertical displacements that range from 61 mm to 139 mm, followed by a lower-magnitude deflation event on 1.8-7.4 km-long baselines. Uplift rates for the inflationary period reach 300 mm/yr with 1σ uncertainties of ±26-119 mm. Only one other station outside the caldera exhibits a similar deformation trend, suggesting a localized source. The results suggest that the use of differential GPS measurements from short duration occupations over short baselines can be a useful monitoring tool at sub-tropical volcanoes and calderas.
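The session repeatabilities quoted above are, at their core, the scatter of repeated short-session baseline estimates. A minimal sketch of that computation (sample standard deviation of per-session means; the numbers below are hypothetical, not from this network):

```python
import math

def repeatability(session_means):
    """Sample standard deviation of per-session baseline component
    estimates (e.g., 20-minute session means, in mm)."""
    n = len(session_means)
    mean = sum(session_means) / n
    var = sum((x - mean) ** 2 for x in session_means) / (n - 1)
    return math.sqrt(var)

# Hypothetical north-component session means (mm) for one baseline
north_mm = [12.1, 14.0, 10.8, 13.2, 12.5]
print(round(repeatability(north_mm), 2))  # → 1.2
```

A repeatability of a few millimeters, as here, is what sets the detection floor for short-term deformation on a given baseline component.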
Abstract:
Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to the transformer. Thus, it is essential to provide sufficient protection for transformers as well as the best possible selectivity and sensitivity of the protection. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow down relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology to perform simulations and laboratory tests for internal faults such as turn-to-turn and turn-to-ground faults for two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault.
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can be saturated by faults involving a high percentage of turns, where the CT burden will affect the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model. The SEL-487E relay or other microprocessor relays should then be tested again for performance. Also, application of a grounding bank to the delta-connected side of a transformer will increase the zone of protection, and relay performance can then be tested for internal ground faults on both sides of a transformer.
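The negative-sequence percentage differential element mentioned above rests on standard symmetrical-component theory. The sketch below is a simplified illustration of that idea, not the SEL-487E's actual operating logic; the slope and pickup values are assumed for illustration:

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator "a"

def negative_sequence(ia, ib, ic):
    """Negative-sequence phasor I2 = (Ia + a^2*Ib + a*Ic) / 3."""
    return (ia + A**2 * ib + A * ic) / 3

def neg_seq_diff_trips(i2_w1, i2_w2, slope=0.25, pickup=0.05):
    """Simplified percentage-restrained check on per-unit currents.
    Operate quantity: |I2_w1 + I2_w2|; restraint: larger |I2|.
    Slope and pickup are illustrative assumptions, not relay settings."""
    operate = abs(i2_w1 + i2_w2)
    return operate > pickup and operate > slope * max(abs(i2_w1), abs(i2_w2))

# A balanced positive-sequence set produces (numerically) zero I2,
# so the element stays quiet under balanced load and through-faults
print(abs(negative_sequence(1 + 0j, A**2, A)))  # ~0
```

A turn-to-turn fault unbalances the phase currents, producing a negative-sequence component on both windings that the percentage-restrained comparison can pick up even when the total differential current is small.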
Abstract:
Electrochemical capacitors (ECs), also known as supercapacitors or ultracapacitors, are energy storage devices with properties between those of batteries and conventional capacitors. ECs have evolved through several generations. The current trend is to combine a double-layer electrode with a battery-type electrode in an asymmetric capacitor configuration. The double-layer electrode is usually an activated carbon (AC), since it has high surface area, good conductivity, and relatively low cost. The battery-type electrode usually consists of PbO2 or Ni(OH)2. In this research, a graphitic carbon foam was impregnated with Co-substituted Ni(OH)2 using electrochemical deposition to serve as the positive electrode in the asymmetric capacitor. The purpose was to reduce the cost and weight of the ECs while maintaining or increasing capacitance and gravimetric energy storage density. The XRD results indicated that the nickel-carbon foam electrode was a typical α-Ni(OH)2. The specific capacitance of the nickel-carbon foam electrode was 2641 F/g at 5 mA/cm2, higher than the previously reported value of 2080 F/g for a 7.5% Al-substituted α-Ni(OH)2 electrode. Three different ACs (RP-20, YP-50F, and Ketjenblack EC-600JD) were evaluated through their morphology and electrochemical performance to determine their suitability for use in ECs. The study indicated that YP-50F demonstrated the best overall performance because of its combination of micropore and mesopore structures. Therefore, YP-50F was chosen to pair with the nickel-carbon foam electrode for further evaluation. Six cells with different ratios of negative to positive active mass were fabricated to study the electrochemical performance. Among the different mass ratios, the asymmetric capacitor with a mass ratio of 3.71 gave the highest specific energy and specific power, 24.5 Wh/kg and 498 W/kg, respectively.
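The gravimetric figures above follow from standard galvanostatic relations. A sketch of the two calculations involved (specific capacitance from a constant-current discharge, and a charge-balanced mass ratio); the carbon-electrode numbers and voltage windows below are illustrative assumptions, not values from the study:

```python
def specific_capacitance(current_a, discharge_s, delta_v, mass_g):
    """Gravimetric capacitance (F/g) from a constant-current discharge:
    C = I * t / (dV * m)."""
    return current_a * discharge_s / (delta_v * mass_g)

def mass_ratio(c_pos, dv_pos, c_neg, dv_neg):
    """Negative-to-positive active-mass ratio that balances stored charge
    q = C * dV * m between the two electrodes."""
    return (c_pos * dv_pos) / (c_neg * dv_neg)

# Illustrative only: the 2641 F/g positive electrode over an assumed
# 0.45 V window against an assumed ~250 F/g activated carbon over ~1.0 V
print(round(mass_ratio(2641.0, 0.45, 250.0, 1.0), 2))  # → 4.75
```

Because the battery-type electrode stores far more charge per gram, charge balance forces several grams of carbon per gram of active nickel hydroxide, which is why the optimal experimental ratio (3.71 here) dominates the cell mass budget.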
Abstract:
State standardized testing has always been a tool to measure a school's performance and to help evaluate school curriculum. However, with the school-of-choice legislation in 1992, the MEAP test became a measuring stick by which to grade schools and a major tool in attracting school-of-choice students. Now, declining enrollment and a state budget struggling to stay out of the red have made school-of-choice students more important than ever before. MEAP scores have become the deciding factor in some cases. For the past five years, the Hancock Middle School staff has been working hard to improve their students’ MEAP scores in accordance with President Bush's “No Child Left Behind” legislation. In 2005, the school was awarded a grant that enabled staff to work for two years on writing and working towards school goals that were based on the improvement of MEAP scores in writing and math. As part of this effort, the school purchased an internet-based program aimed at giving students practice on state content standards. This study examined the results of efforts by Hancock Middle School to help improve student scores in mathematics on the MEAP test through the use of an online program called “Study Island.” In the past, the program was used to remediate students and as a review, with an incentive at the end of the year for students completing a certain number of objectives. It had also been used as a review before upcoming MEAP testing in the fall. All of these methods may have helped a few students perform at an increased level on their standardized test, but the question remained of whether a sustained use of the program in a classroom setting would increase understanding of concepts and performance on the MEAP for most students. This study addressed this question.
Student MEAP scores and Study Island data from experimental and comparison groups of students were compared to understand how a sustained use of Study Island in the classroom would impact student test scores on the MEAP. In addition, these data were analyzed to determine whether Study Island results provide a good indicator of students’ MEAP performance. The results of the study suggest that there were limited benefits related to sustained use of Study Island and gave some indications about the effectiveness of the mathematics curriculum at Hancock Middle School. These results and implications for instruction are discussed.
Abstract:
With energy demands and costs growing every day, the need for improving energy efficiency in electrical devices has become very important. Research into various methods of improving efficiency for all electrical components will be a key to meeting future energy needs. This report documents the design, construction, and testing of a research-quality electric machine dynamometer and test bed. This test cell system can be used for research in several areas including electric drive systems, electric vehicle propulsion systems, power electronic converters, and load/source elements in an AC microgrid, among others. The test cell design criteria and decisions are discussed with reference to user functionality and flexibility. The individual power components are discussed in detail as they relate to the project, highlighting features used in operation of the test cell. A project timeline is presented, clearly stating the work done by the different individuals involved in the project. In addition, the system is parameterized and benchmark data are used to demonstrate the functional operation of the system.
Abstract:
There has been a continuous evolutionary process in asphalt pavement design. In the beginning, design was crude and based on past experience. Through research, empirical methods were developed based on materials' response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests that are used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project being conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of this project dealt with the development of a library of values for the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, a comparison of the current associated pavement design with that of the new AASHTO Design Guide was conducted. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. This report also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round robin and three Wisconsin mixtures, and the main results of the research were: The test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response for both dynamic modulus and flow number, but does increase the variability in the flow number test results.
The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not statistically appear to affect the intermediate and high temperature dynamic modulus and flow number test results. The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software. The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed. An increase in asphalt binder content of 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number. This result was based on the testing that was conducted and was contradictory to previous research and the hypothesis that was put forth for this thesis. This result should be used with caution and requires further review. Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity. Dynamic modulus and flow number were shown to increase with traffic level (requiring an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors. Accumulated micro-strain at flow number, as opposed to flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture. At the current time, the Design Guide and its associated software need further improvement prior to implementation by owner/agencies.
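The dynamic modulus test underlying these comparisons reduces to two quantities per loading frequency. A sketch of the defining relations, with assumed unit conventions (stress amplitude in kPa, strain amplitude in microstrain); the example amplitudes are hypothetical:

```python
def dynamic_modulus(stress_amp_kpa, strain_amp_microstrain):
    """|E*| in MPa from peak sinusoidal stress and strain amplitudes:
    |E*| = sigma_0 / epsilon_0, with the kPa/microstrain units folded in."""
    return stress_amp_kpa / strain_amp_microstrain * 1000.0

def phase_angle_deg(time_lag_s, frequency_hz):
    """Phase angle between stress and strain peaks; 0 deg is purely
    elastic response, 90 deg purely viscous."""
    return 360.0 * frequency_hz * time_lag_s

# Hypothetical amplitudes: 600 kPa peak stress over 120 microstrain
print(dynamic_modulus(600.0, 120.0))  # 5000.0 MPa
```

Flow number, by contrast, is simply the load cycle at which permanent (accumulated) strain begins to grow at an increasing rate, which is why the accumulated micro-strain at that cycle can carry more information than the cycle count alone.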
Abstract:
Infrared thermography is a well-recognized non-destructive testing technique for evaluating concrete bridge elements such as bridge decks and piers. However, overcoming some obstacles and limitations is necessary before this invaluable technique can be added to the bridge inspector's tool box. Infrared thermography is based on collecting radiant temperature and presenting the results as a thermal infrared image. The two methods of conducting an infrared thermography test are passive and active; the source of heat is the main difference between them. Solar energy and ambient temperature change are the main heat sources in a passive infrared thermography test, while active infrared thermography involves generating a temperature gradient using an external source of heat other than the sun. Passive infrared thermography testing was conducted on three concrete bridge decks in Michigan. Ground truth information was gathered by coring several locations on each bridge deck to validate the results obtained from the passive infrared thermography test. Challenges associated with data collection and processing using passive infrared thermography are discussed and provide additional evidence to confirm that passive infrared thermography is a promising remote sensing tool for bridge inspections. To improve the capabilities of the infrared thermography technique for evaluation of the underside of bridge decks and bridge girders, an active infrared thermography technique using the surface heating method was developed in the laboratory on five concrete slabs with simulated delaminations. Results from this study demonstrated that active infrared thermography not only eliminates some limitations associated with passive infrared thermography, but also provides information regarding the depth of the delaminations.
Active infrared thermography was conducted on a segment of an out-of-service prestressed box beam and cores were extracted from several locations on the beam to validate the results. This study confirms the feasibility of the application of active infrared thermography on concrete bridges and of estimating the size and depth of delaminations. From the results gathered in this dissertation, it was established that applying both passive and active thermography can provide transportation agencies with qualitative and quantitative measures for efficient maintenance and repair decision-making.
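At its core, locating delaminations in a thermal infrared image is a temperature-contrast segmentation: delaminated areas heat and cool faster than sound concrete, so they stand out from the background. A toy sketch of that idea (the contrast threshold and temperatures are assumed values, not ones calibrated in this work):

```python
def flag_delaminations(thermal_image, contrast_c=0.6):
    """Flag pixels whose radiant temperature exceeds the image mean by
    more than `contrast_c` degrees C. The threshold is an assumption
    for illustration; real surveys calibrate it against ground truth."""
    pixels = [t for row in thermal_image for t in row]
    mean_t = sum(pixels) / len(pixels)
    return [[t - mean_t > contrast_c for t in row] for row in thermal_image]

# Hypothetical 3x3 patch of deck surface temperatures (deg C)
deck = [
    [21.1, 21.2, 21.1],
    [21.2, 22.4, 21.3],  # warm spot: possible delamination
    [21.1, 21.2, 21.1],
]
flags = flag_delaminations(deck)
print(flags[1][1])  # True
```

Active thermography adds control over when and how strongly that contrast develops, and the time it takes the contrast to appear is what carries the depth information mentioned above.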
Abstract:
Strain rate significantly affects the strength of a material. The Split-Hopkinson Pressure Bar (SHPB) was initially used to study the effects of high strain rate (~10^3 1/s) testing of metals. Later modifications to the original technique allowed for the study of brittle materials such as ceramics, concrete, and rock. While material properties of wood for static and creep strain rates are readily available, data on the dynamic properties of wood are sparse. Previous work using the SHPB technique with wood has been limited in scope to variability of only a few conditions, and tests of the applicability of the SHPB theory to wood have not been performed. Tests were conducted using a large-diameter (3.0 inch (75 mm)) SHPB. The strain rate and total strain applied to a specimen are dependent on the striker bar length and velocity at impact. Pulse shapers are used to further modify the strain rate and change the shape of the strain pulse. A series of tests was used to determine test conditions necessary to produce a strain rate, total strain, and pulse shape appropriate for testing wood specimens. Hard maple, consisting of sugar maple (Acer saccharum) and black maple (Acer nigrum), and eastern white pine (Pinus strobus) specimens were used to represent a dense hardwood and a low-density softwood. Specimens were machined to diameters of 2.5 and 3.0 inches, and an assortment of lengths was tested to determine the appropriate specimen dimensions. Longitudinal specimens of 1.5 inch length and radial and tangential specimens of 0.5 inch length were found to be most applicable to SHPB testing. Stress/strain curves were generated from the SHPB data and validated with 6061-T6 aluminum and wood specimens. Stress was indirectly corroborated with gaged aluminum specimens. Specimen strain was assessed with strain gages, digital image analysis, and measurement of residual strain to confirm the strain calculated from SHPB data.
The SHPB was found to be a useful tool in accurately assessing the material properties of wood under high strain rates (70 to 340 1/s) and short load durations (70 to 150 μs to compressive failure).
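The stress/strain curves referenced above are commonly reduced from the bar strain gage signals with the standard one-wave SHPB equations. A sketch of those relations under the usual assumptions (elastic bars, stress equilibrium in the specimen); the bar wave speed and strain values below are illustrative, not data from this study:

```python
def shpb_stress(e_bar, a_bar, a_spec, eps_transmitted):
    """One-wave specimen stress: sigma = E_bar * (A_bar / A_spec) * eps_T."""
    return e_bar * (a_bar / a_spec) * eps_transmitted

def shpb_strain_rate(c0, spec_len, eps_reflected):
    """Strain rate from the reflected pulse: -2 * c0 / L_s * eps_R."""
    return -2.0 * c0 / spec_len * eps_reflected

def shpb_strain(strain_rates, dt):
    """Specimen strain history: running time-integral of the strain rate
    (rectangle rule over the sampled pulse)."""
    total, history = 0.0, []
    for rate in strain_rates:
        total += rate * dt
        history.append(total)
    return history

# An assumed steel-bar wave speed of ~200,000 in/s with the 1.5 inch
# longitudinal specimen length gives rates in the reported range:
rate = shpb_strain_rate(200000.0, 1.5, -0.001)  # ~267 1/s
```

The dependence on specimen length in the strain-rate equation is one reason the 1.5 inch longitudinal and 0.5 inch transverse lengths mattered: shorter specimens reach equilibrium faster but push the achievable strain rate up.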
Abstract:
With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare variant association studies became active research areas in statistical genetics. This dissertation contains three methodologies for association studies that explore different genetic data features, and demonstrates how to use those methods to test genetic association hypotheses. The methods can be categorized into three scenarios: 1) multi-marker testing for strong linkage disequilibrium regions, 2) multi-marker testing for family-based association studies, and 3) multi-marker testing for rare variant association studies. I also discuss the advantages of using these methods and demonstrate their power through simulation studies and applications to real genetic data.
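A burden (collapsing) test is one common form of rare-variant multi-marker testing; whether it matches the dissertation's specific methods is not claimed here. A minimal sketch of the collapsing step and a simple association measure:

```python
import math

def burden_scores(genotypes, weights=None):
    """Collapse per-individual minor-allele counts (0/1/2 per variant)
    into a single burden score, optionally weighting each variant."""
    n_variants = len(genotypes[0])
    w = weights or [1.0] * n_variants
    return [sum(g * wi for g, wi in zip(person, w)) for person in genotypes]

def correlation(x, y):
    """Pearson correlation between burden scores and a phenotype."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Four hypothetical individuals genotyped at three rare variants
scores = burden_scores([[0, 1, 0], [1, 1, 0], [0, 0, 0], [2, 1, 1]])
print(scores)  # [1.0, 2.0, 0.0, 4.0]
```

Collapsing trades per-variant resolution for power: because each rare variant is observed in only a handful of individuals, testing the aggregated score against the phenotype is far better powered than testing variants one at a time.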
Abstract:
This report shares my efforts in developing a solid unit of instruction that has a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most has been developed without the benefit of current research on how students learn and did not focus on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete work sheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I’ve presented so that we can move on to the next topic. I want to increase students’ understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the “Understanding by Design” process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets based on “big ideas” in science.
For my reformed curricula I incorporate lessons from several outstanding programs I’ve been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science Program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology of how I developed a new unit of instruction based on the Understanding by Design process. I present several lessons and learning plans I’ve developed for the unit that follow the 5E Learning Cycle as appendices at the end of this report. I also include the results of pilot testing of one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process I followed was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful to me because it led me to identify ways in which I could improve upon the lesson in the future.
Abstract:
The persuasive power of music is often relegated to the dimension of pathos: that which moves us emotionally. Yet the music commodity is now situated in and around the liminal spaces of digitality. To think about how music functions, how it argues across media, and how it moves us, we must examine its material and immaterial realities as they present themselves to us and as we so create them. This dissertation rethinks the relationship between rhetoric and music by examining the creation, performance, and distribution of music in its material and immaterial forms to demonstrate its persuasive power. While both Plato and Aristotle understood music as a means to move men toward virtue, Plato tells us in his Laws, through the Athenian Stranger, that the very best kinds of music can help guide us to truth. From this starting point, I assess the historical problem of understanding the rhetorical potential of music as merely that which directs or imitates the emotions: that which “soothes the savage breast,” as William Congreve writes. By furthering work by Vickers and Farnsworth, who suggest that the Baroque fascination with applying rhetorical figures to musical figures is an insufficient framework for assessing the rhetorical potential of music, I demonstrate the gravity of musical persuasion in its political weight, in its violence—the subjective violence of musical torture at Guantanamo and the objective, ideological violence of music—and in what Jacques Attali calls the prophetic nature of music. I argue that music has a significant rhetorical function and, as a non-discursive form of argumentation, works on us beyond affect.
Moreover, with the emergence of digital music distribution and domestic digital recording technologies, the digital music commodity in its material and immaterial forms allows for ruptures in the former methods of musical composition, production, and distribution and in the political potential of music which Jacques Attali describes as being able to foresee new political realities. I thus suggest a new theoretical framework for thinking about rhetoric and music by expanding on Lloyd Bitzer’s rhetorical situation, by offering the idea of “openings” to the existing exigence, audience, and constraints. The prophetic and rhetorical power of music in the aleatoric moment can help provide openings from which new exigencies can be conceived. We must, therefore, reconsider the role of rhetorical-musical composition for the citizen, not merely as a tool for entertainment or emotional persuasion, but as an arena for engaging with the political.
Abstract:
This dissertation serves as a call to geoscientists to share responsibility with K-12 educators for increasing Earth science literacy. When partnerships are created among K-12 educators and geoscientists, the synergy created can promote Earth science literacy in students, teachers, and the broader community. The research described here resulted in the development of tools that can support effective professional development for teachers. One tool is used during the planning stages to structure a professional development program, another set of tools supports measurement of the effectiveness of a development program, and the third tool supports sustainability of professional development programs. The Michigan Teacher Excellence Program (MiTEP), a Math/Science Partnership project funded by the National Science Foundation, served as the test bed for developing and testing these tools. The first tool, the planning tool, is the Earth Science Literacy Principles (ESLP). The ESLP served as a planning tool for the two-week summer field courses that are part of the MiTEP program. The ESLP, published in 2009, clearly describe what an Earth science literate person should know. The ESLP consist of nine big ideas and their supporting fundamental concepts. Using the ESLP to plan a professional development program helped both instructors and teacher-participants focus on important concepts throughout the professional development activity. The measurement tools were developed to measure changes in teachers’ Earth science content-area knowledge and perceptions related to teaching and learning that result from participating in a professional development program. The first measurement tool, the Earth System Concept Inventory (ESCI), directly measures content-area knowledge through a succession of multiple-choice questions that are aligned with the content of the professional development experience.
The second measurement tool, an exit survey, collects qualitative data from teachers regarding their impressions of the professional development. Both the ESCI and the exit survey were tested for validity and reliability. Lesson study is discussed here as a strategy for sustaining professional development in a school or a district after the end of a professional development activity. Lesson study, as described here, was offered as a formal course. Teachers engaged in lesson study worked collaboratively to design and test lessons that improve the teachers’ classroom practices. Data regarding the impact of the lesson study activity were acquired through surveys, written documents, and group interviews. The data are interpreted to indicate that the lesson study process improved teacher quality and classroom practices. In the case described here, the lesson study process was adopted by the teachers’ district and currently serves as part of the district’s work in Professional Learning Communities, resulting in ongoing professional development throughout the district.
Abstract:
Rising fuel prices, resource depletion, and instability in foreign oil imports have driven interest in alternative and renewable fuels. Alternative fuels such as ethanol, methanol, butyl alcohol, and natural gas are of interest for relieving some of the dependence on oil for transportation. Ethanol, a renewable fuel made from the sugars of corn, has been used widely in vehicle fuel in the United States because of its unique qualities. As with any renewable fuel, ethanol has many advantages but also disadvantages. Cold startability of engines is one area of concern when using ethanol-blended fuel. This research focused on the cold startability of snowmobiles at ambient temperatures of 20 °F, 0 °F, and -20 °F. The tests were performed in a modified 48-foot refrigerated trailer retrofitted for cold-start testing. Pure gasoline (E0) was used as the baseline. A splash-blended ethanol and gasoline mixture (E15, 15% ethanol and 85% gasoline by volume) was then tested and compared to the E0 fuel. Four snowmobiles were used for the testing: a Yamaha FX Nytro RTX four-stroke, a Ski-doo MX Z TNT 600 E-TEC direct-injected two-stroke, a Polaris 800 Rush semi-direct-injected two-stroke, and an Arctic Cat F570 carbureted two-stroke. All of the snowmobiles operate on open-loop systems, which means there was no compensation for the change in fuel properties. Emissions were sampled using a Sensors Inc. Semtech DS five-gas emissions analyzer, and engine data were recorded using AIM Racing Data Power EVO3 Pro and EVO4 systems. The recorded raw exhaust emissions included carbon monoxide (CO), carbon dioxide (CO2), total hydrocarbons (THC), and oxygen (O2). To help explain the trends in the emissions data, engine parameters were also recorded.
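The splash-blend proportions quoted above (E15: 15% ethanol and 85% gasoline by volume) are simple volume arithmetic. The sketch below is illustrative only and not taken from the dissertation; the function name and the 10-gallon example volume are hypothetical:

```python
def splash_blend_volumes(total_volume: float, ethanol_fraction: float):
    """Return (ethanol, gasoline) volumes for a splash blend by volume.

    ethanol_fraction is the target ethanol volume fraction, e.g. 0.15 for E15.
    """
    ethanol = total_volume * ethanol_fraction
    gasoline = total_volume - ethanol
    return ethanol, gasoline

# E15 from 10 gallons of finished blend: 1.5 gal ethanol, 8.5 gal gasoline
ethanol_gal, gasoline_gal = splash_blend_volumes(10.0, 0.15)
print(ethanol_gal, gasoline_gal)
```

Note that a splash blend is mixed by volume in the tank rather than being match-blended to a target octane or vapor pressure, which is why the arithmetic is this simple.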
The EVO equipment was installed on each vehicle to record the following parameters: engine speed, exhaust gas temperature, head temperature, coolant temperature, and test cell air temperature. At least three consistent tests were taken at each fuel and temperature combination to ensure repeatability, so a total of 18 valid tests were taken on each snowmobile. Before each cold-start test, the snowmobiles were run to operating temperature to clear any excess fuel from the engine crankcase. The trends in switching from E0 to E15 were different for each snowmobile, as they all employ different engine technologies. The Yamaha snowmobile (four-stroke EFI) produced higher levels of CO2 with lower CO and THC emissions on E15. Engine speeds were fairly consistent between fuels, but the average engine speeds increased as the temperatures decreased. The average exhaust gas temperature increased by 1.3-1.8% for E15 compared to E0 due to enleanment. For the Ski-doo snowmobile (direct-injected two-stroke), only slight differences were noted when switching from E0 to E15, possibly due to the engine’s lean-of-stoichiometric operation at idle. The CO2 emissions decreased slightly at 20 °F and 0 °F for E15 fuel, with a small difference at -20 °F. Almost no change in CO or THC emissions was noted at any temperature. The only significant difference observed in the engine data was the exhaust gas temperature, which decreased with E15. The Polaris snowmobile (semi-direct-injected two-stroke) had similar raw exhaust emissions for the two fuels. This was probably because a resistor was changed when using E15, which switched the fuel map to an ethanol calibration (E10 vs. E0). This snowmobile operates at a rich condition, which caused the engine to emit higher values of CO than CO2 and to exceed the THC analyzer range at idle. The engine parameters and emissions did not increase or decrease significantly with decreasing temperature.
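The test matrix described above (two fuels × three ambient temperatures × at least three consistent runs) accounts for the 18 valid tests per snowmobile. A minimal enumeration, assuming the repeat count is exactly three:

```python
from itertools import product

fuels = ["E0", "E15"]
temps_f = [20, 0, -20]   # ambient test temperatures, deg F
repeats = range(1, 4)    # at least three consistent runs per combination

# Every (fuel, temperature, run) combination for one snowmobile
test_matrix = list(product(fuels, temps_f, repeats))
print(len(test_matrix))  # 18
```

Enumerating the full factorial up front makes it easy to confirm that every fuel-temperature cell has its required repeats before any run is counted as valid.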
The average idle engine speed did increase as the ambient temperature decreased. The Arctic Cat snowmobile (carbureted two-stroke) was equipped with a choke lever to assist cold starts. The choke was operated in the same manner for both fuels. Lower CO emissions were observed with E15 fuel, yet the THC emissions exceeded the analyzer range. The engine ran at a slightly lower speed with E15.