Abstract:
This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities stochastically, drawing them by a Monte Carlo method from a lognormal distribution whose parameters were predetermined from engine tests and depend on spark timing, engine speed, and load. Previous studies have shown the lognormal distribution to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of high and low knock intensity levels, characterizing knock and the reference level respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock.
The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
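The stochastic plant model described above can be sketched in a few lines; the lognormal parameters and the quantile-based knock factor below are illustrative assumptions, not the calibrated values from the engine tests:

```python
import random

def simulate_knock_intensities(mu, sigma, n_cycles, seed=0):
    """Draw cycle-to-cycle knock intensities from a lognormal
    distribution, as the KSS plant model does. The (mu, sigma)
    values passed in below are illustrative, not engine-test data."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n_cycles)]

def knock_factor(intensities, low_q=0.5, high_q=0.95):
    """One plausible knock factor: ratio of a high intensity level
    (upper quantile, characterizing knock) to a low reference level.
    The actual SKD estimator in the report is a distribution
    estimation algorithm, not a simple sorted-quantile lookup."""
    s = sorted(intensities)
    low = s[int(low_q * (len(s) - 1))]
    high = s[int(high_q * (len(s) - 1))]
    return high / low

cycles = simulate_knock_intensities(mu=0.0, sigma=0.5, n_cycles=1000)
kf = knock_factor(cycles)
```

A controller would compare `kf` against a target level and retard or advance spark timing accordingly.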
Abstract:
Protecting the modern interconnected power system from blackouts is an important and difficult challenge. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as the Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on modeling power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling program in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. The Transient Analysis of Control Systems (TACS) feature is used to model the excitation control system, the power system stabilizer, and the turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled, and inter-area and intra-area oscillations are observed. The two-area system is reduced to a two-machine system using dynamic equivalencing. The original and reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models. The advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP. The benchmarked models correctly simulate the power system dynamics.
The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing its dynamic behavior. Other aspects, such as relaying, can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
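As a toy illustration of the time-domain rotor dynamics that phasor-domain tools average away, the classical single-machine-infinite-bus swing equation can be integrated directly. All parameters below are hypothetical, and this model is far simpler than the detailed ATP machine models used in the work:

```python
import math

# Classical swing equation, d2(delta)/dt2 = (pi*f0/H)*(Pm - Pe),
# integrated with forward Euler. A brief drop in transfer capability
# stands in for a fault; every number here is illustrative.
H = 3.5        # inertia constant, s
f0 = 60.0      # nominal frequency, Hz
Pm = 0.9       # mechanical power, pu
Pmax = 1.8     # pre-fault maximum electrical power, pu
D = 0.0        # damping neglected for simplicity

delta = math.asin(Pm / Pmax)   # start at the equilibrium angle, rad
omega = 0.0                    # rotor speed deviation, rad/s
dt = 0.001                     # time step, s
trace = []
for step in range(5000):       # 5 s of simulated time
    # simulated fault: transfer capability halved from t=1.0 s to 1.1 s
    pe_max = Pmax / 2 if 1000 <= step < 1100 else Pmax
    pe = pe_max * math.sin(delta)
    domega = (math.pi * f0 / H) * (Pm - pe - D * omega)
    omega += domega * dt
    delta += omega * dt
    trace.append(delta)
```

The rotor angle holds at its equilibrium until the disturbance, then swings and oscillates, the kind of electromechanical transient that requires time-domain simulation to observe.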
Abstract:
This study describes the development of a proposed Simple Performance Test (SPT) specification intended to advance asphalt materials technology in the state of Michigan. The properties and characteristics of materials, performance testing of specimens, and field analyses are used in developing draft SPT specifications. These advanced and more effective specifications should significantly improve the quality of designed and constructed hot mix asphalt (HMA), leading to improved pavement life in Michigan. The objectives of this study include the following: 1) using the SPT, conduct a laboratory study to measure parameters including the dynamic modulus terms (E*/sinϕ and E*) and the flow number (Fn) for typical Michigan HMA mixtures; 2) correlate the results of the laboratory study to field performance as they relate to flexible pavement performance (rutting, fatigue, and low-temperature cracking); and 3) make recommendations for SPT criteria at specific traffic levels (e.g., E3, E10, E30), including recommendations for a draft test specification for use in Michigan. The specification criteria for dynamic modulus were developed based upon field rutting performance and contractor warranty criteria.
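For reference, the rutting parameter E*/sinϕ is computed from the measured stress and strain amplitudes and their time lag in the dynamic modulus test; the sketch below uses illustrative numbers, not Michigan mixture data:

```python
import math

def dynamic_modulus(stress_amp_kPa, strain_amp, time_lag_s, freq_Hz):
    """Compute |E*|, the phase angle phi, and the rutting parameter
    |E*|/sin(phi) from sinusoidal-loading test quantities.
    |E*| = stress amplitude / strain amplitude; phi = 2*pi*f*dt."""
    e_star = stress_amp_kPa / strain_amp          # |E*| in kPa
    phi = 2 * math.pi * freq_Hz * time_lag_s      # phase angle, rad
    return e_star, phi, e_star / math.sin(phi)

# hypothetical test point: 600 kPa stress amplitude,
# 100 microstrain response, 2 ms lag at 25 Hz loading
E, phi, rut = dynamic_modulus(600.0, 100e-6, 0.002, 25.0)
```

A larger phase angle (more viscous behavior) lowers sinϕ and therefore raises the E*/sinϕ requirement for a rut-resistant mixture.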
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM degrades its performance dramatically; conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. Effective memory resource management therefore calls for a dynamic memory balancer which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand, and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing, and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results.
When there is a sufficient amount of physical memory on the host, the system balances its memory resource locally among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
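The LRU-based MRC construction that the tracking scheme optimizes can be sketched as follows. This naive list-based version costs O(N·M) over a trace of N accesses to M pages, which is precisely the overhead that the AVL-based organization, dynamic hot set sizing, and intermittent tracking reduce:

```python
def miss_ratio_curve(trace):
    """Build an LRU miss ratio curve from a page-reference trace
    using stack distances: an access with stack distance d would hit
    in any LRU-managed memory of more than d pages."""
    stack = []   # most recently used page at index 0
    hist = {}    # stack distance -> access count
    for page in trace:
        if page in stack:
            d = stack.index(page)          # 0-based stack distance
            hist[d] = hist.get(d, 0) + 1
            stack.pop(d)
        # cold misses (page not in stack) have infinite distance
        stack.insert(0, page)
    n = len(trace)
    mrc = {}
    for size in range(1, len(stack) + 1):
        hits = sum(c for d, c in hist.items() if d < size)
        mrc[size] = (n - hits) / n         # miss ratio at `size` pages
    return mrc

# cyclic trace: everything misses until all 3 pages fit
mrc = miss_ratio_curve([1, 2, 3, 1, 2, 3, 1, 2, 3])
```

The curve's knee reveals the working set size, and the full curve gives the performance-versus-allocation correlation the balancer needs.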
Abstract:
Studies suggest that hurricane hazard patterns (e.g., intensity and frequency) may change as a consequence of the changing global climate. As hurricane patterns change, it can be expected that hurricane damage risks and costs may change as a result. This indicates the need to develop hurricane risk assessment models that are capable of accounting for changing hurricane hazard patterns, and to develop hurricane mitigation and climatic adaptation strategies. This thesis proposes a comprehensive hurricane risk assessment and mitigation framework that accounts for a changing global climate and that can be adapted to various types of infrastructure, including residential buildings and power distribution poles. The framework includes hurricane wind field models, hurricane surge height models, and hurricane vulnerability models to estimate damage risks due to hurricane wind speed, hurricane frequency, and hurricane-induced storm surge, and it accounts for the time-dependent properties of these parameters as a result of climate change. The research then implements median insured house values, discount rates, housing inventory, and related data to estimate hurricane damage costs to residential construction. The framework was also adapted to timber distribution poles to assess the impacts climate change may have on timber distribution pole failure. This research finds that climate change may have a significant impact on the hurricane damage risks and damage costs of residential construction and timber distribution poles. In an effort to reduce damage costs, this research develops mitigation/adaptation strategies for residential construction and timber distribution poles. The cost-effectiveness of these adaptation/mitigation strategies is evaluated through the use of a Life-Cycle Cost (LCC) analysis. In addition, a scenario-based analysis of mitigation strategies for timber distribution poles is included.
For both residential construction and timber distribution poles, adaptation/mitigation measures were found to reduce damage costs. Finally, the research develops the Coastal Community Social Vulnerability Index (CCSVI) to include the social vulnerability of a region to hurricane hazards within this hurricane risk assessment. This index quantifies the social vulnerability of a region by combining various social characteristics of the region with time-dependent parameters of hurricanes (i.e., hurricane wind and hurricane-induced storm surge). Climate change was found to have an impact on the CCSVI (i.e., climate change may affect the social vulnerability of hurricane-prone regions).
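The LCC comparison at the core of the cost-effectiveness evaluation reduces to discounting expected annual damage to present value; the sketch below uses hypothetical costs, damage growth, and discount rates, not the thesis inputs:

```python
def life_cycle_cost(initial_cost, annual_damage, discount_rate, years):
    """Present-value life-cycle cost: up-front mitigation cost plus
    discounted expected annual hurricane damage. All inputs below
    are illustrative placeholders."""
    pv_damage = sum(annual_damage[t] / (1 + discount_rate) ** (t + 1)
                    for t in range(years))
    return initial_cost + pv_damage

# expected annual damage grows 2%/yr as a stand-in for climate-driven
# hazard intensification over a 50-year horizon
damage = [2000.0 * (1.02 ** t) for t in range(50)]
no_action = life_cycle_cost(0.0, damage, 0.03, 50)
# hypothetical retrofit: $15,000 up front, halves expected damage
retrofit = life_cycle_cost(15000.0, [d / 2 for d in damage], 0.03, 50)
```

A mitigation measure is cost-effective when its total LCC falls below the do-nothing LCC, as it does for these illustrative numbers.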
Abstract:
Mount Etna, Italy, is one of the most active volcanoes in the world and is also regarded as one of the strongest volcanic sources of sulfur dioxide (SO2) emissions to the atmosphere. Since October 2004, an automated ultraviolet (UV) spectrometer network (FLAME) has provided ground-based SO2 measurements with high temporal resolution, providing an opportunity to validate satellite SO2 measurements at Etna. The Ozone Monitoring Instrument (OMI) on the NASA Aura satellite, which makes global daily measurements of trace gases in the atmosphere, was used to compare the SO2 amounts released by the volcano during paroxysmal lava-fountaining events from 2004 to the present. We present the first comparison between SO2 emission rates and SO2 burdens obtained by the OMI transect technique, the OMI Normalized Cloud-Mass (NCM) technique, and the ground-based FLAME Mini-DOAS measurements. In spite of a good data set from the FLAME network, finding coincident OMI and FLAME measurements proved challenging, and only one paroxysmal event provided a good validation for OMI. Another goal of this work was to assess the efficacy of the FLAME network in capturing paroxysmal SO2 emissions from Etna, given that the FLAME network is only operational during daylight hours and some paroxysms occur at night. OMI measurements are advantageous since SO2 emissions from nighttime paroxysms can often be quantified on the following day, providing improved constraints on Etna's SO2 budget.
Abstract:
Experimental warming provides a method to determine how an ecosystem will respond to increased temperatures. Northern peatland ecosystems, sensitive to changing climates, provide an excellent setting for experimental warming. Storing great quantities of carbon, northern peatlands play a critical role in regulating global temperatures. Two of the most common methods of experimental warming are open top chambers (OTCs) and infrared (IR) lamps. These warming systems have been used in many ecosystems throughout the world, yet their efficacy in creating a warmer environment is variable and has not been widely studied. To date, there has not been a direct, experimentally controlled comparison of OTCs and IR lamps. As a result, a factorial study was implemented to compare the warming efficacy of OTCs and IR lamps and to examine the resulting carbon dioxide (CO2) and methane (CH4) flux rates in a Lake Superior peatland. IR lamps warmed the ecosystem by 1–2 °C on average, with the majority of warming occurring during nighttime hours. OTCs did not provide any long-term warming above control plots, which is contrary to similar OTC studies at high latitudes. By investigating diurnal heating patterns and micrometeorological variables, we were able to conclude that OTCs were not achieving strong daytime heating peaks and were often cooler than control plots during nighttime hours. Temperate day length, cloudy and humid conditions, and latent heat loss were factors that inhibited OTC warming. There were no changes in CO2 flux between warming treatments in lawn plots. Gross ecosystem production was significantly greater in IR lamp-hummock plots, while ecosystem respiration was not affected. CH4 flux was not significantly affected by warming treatment. Minimal daytime heating differences, high ambient temperatures, decay-resistant substrate, and other factors suppressed significant gas flux responses to the warming treatments.
Abstract:
Experimental studies on epoxies report that the microstructure consists of highly crosslinked localized regions connected by a dispersed phase of low crosslink density. The various thermo-mechanical properties of epoxies may be affected by this crosslink distribution. Because experiments cannot report the exact number of crosslinked covalent bonds present in the structure, molecular dynamics is used in this work to determine the influence of crosslink distribution on thermo-mechanical properties. Molecular dynamics and molecular mechanics simulations are used to establish well-equilibrated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities and various crosslink distributions. Crosslink distributions are varied by forming differently crosslinked localized clusters and then by forming different numbers of crosslinks interconnecting the clusters. Simulations are subsequently used to predict the volume shrinkage, thermal expansion coefficients, and elastic properties of each of the crosslinked systems. The results indicate that elastic properties increase with increasing levels of overall crosslink density and that the thermal expansion coefficient decreases with overall crosslink density, both above and below the glass transition temperature. Elastic moduli and coefficients of linear thermal expansion were found to differ between systems with the same overall crosslink density but different crosslink distributions, indicating an effect of the epoxy nanostructure on physical properties. The values of the thermo-mechanical properties for all the crosslinked systems are within the range of values reported in the literature.
Abstract:
Hooked reinforcing bars (rebar) are frequently used to carry the tension forces developed in beams and transferred to columns. Little research has been performed on epoxy-coated hooked bars, and none has incorporated the coating process found in ASTM A934. This research program compares hooked rebar that are uncoated, coated per ASTM A775, and coated per ASTM A934. In total, forty-two full-size beam-column specimens were created, instrumented, and tested to failure. The program was carried out in three phases. The first phase was used to refine the test setup and procedures. Phase two explored the spacing of column ties within the joint region. Phase three explored the three coating conditions noted above. Each specimen included two hooked rebar, which were loaded and measured independently for relative rebar slip. The load and displacement of the hooked rebar were analyzed, focusing on behavior at the levels of 30 ksi, 42 ksi, and 60 ksi of rebar stress. Statistical and general comparisons were made using the coating types, tie spacing, and rebar stress level. Many of the parameters composing the rebar and concrete were also tested to characterize the components and specimens. All rebar tested met ASTM standards for tensile strength, but the newer ASTM A934 method appeared to produce slightly lower yield strengths. The A934 method also produced coating thicknesses that were very inconsistent and exceeded ASTM maximum limits in many locations. Continuity of coating surfaces was found to be less than 100% for both A775 and A934 rebar, but for different reasons. The many comparisons made did not always produce clear conclusions. The data suggest that the ACI Code (318-05) factor of 1.2 for epoxy-coated hooked rebar may need to be raised, possibly to 2.5, but more testing needs to be performed before such a large change is set forth.
This is particularly important because variables were identified that may have a larger influence on rebar capacity than the development length, which the current 1.2 factor modifies. Many suggestions for future work are included throughout the thesis to help guide other researchers in carrying out successful and productive programs that will further the highly understudied topic of hooked rebar.
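For context, the basic hooked-bar development length in ACI 318-05 Section 12.5 scales linearly with the epoxy coating factor, so raising the factor from 1.2 to 2.5 roughly doubles the required embedment. The sketch below omits the code's modification factors and minimum-length provisions, and the example bar is hypothetical:

```python
import math

def hooked_bar_development_length(db_in, fy_psi, fc_psi,
                                  psi_e=1.2, lam=1.0):
    """Basic hooked-bar development length in the ACI 318-05 Sec. 12.5
    form l_dh = (0.02 * psi_e * fy / (lambda * sqrt(f'c))) * db,
    before modification factors and minimums. psi_e is the epoxy
    coating factor the abstract suggests may need to rise to ~2.5."""
    return 0.02 * psi_e * fy_psi / (lam * math.sqrt(fc_psi)) * db_in

# hypothetical No. 8 bar (db = 1.0 in), Grade 60, 4000 psi concrete
current = hooked_bar_development_length(1.0, 60000, 4000, psi_e=1.2)
proposed = hooked_bar_development_length(1.0, 60000, 4000, psi_e=2.5)
```

Because the relation is linear in psi_e, the proposed factor would increase required hook development lengths by the ratio 2.5/1.2 ≈ 2.08.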
Abstract:
The emissions, filtration, and oxidation characteristics of a diesel oxidation catalyst (DOC) and a catalyzed particulate filter (CPF) in a Johnson Matthey catalyzed continuously regenerating trap (CCRT®) were studied using computational models. Experimental data needed to calibrate the models were obtained from characterization experiments with raw exhaust sampling from a Cummins ISM 2002 engine with variable geometry turbocharging (VGT) and programmed exhaust gas recirculation (EGR). The experiments were performed at 20, 40, 60, and 75% of full load (1120 Nm) at rated speed (2100 rpm), with and without the DOC upstream of the CPF, in order to study the effect of temperature and CPF-inlet NO2 concentrations on particulate matter oxidation in the CCRT®. A previously developed computational model was used to determine the kinetic parameters describing the oxidation characteristics of HCs, CO, and NO in the DOC and the pressure drop across it. The model was calibrated at five temperatures in the range of 280–465 °C and exhaust volumetric flow rates of 0.447–0.843 act-m3/sec. The downstream HC, CO, and NO concentrations were predicted by the DOC model to within ±3 ppm. The HC and CO oxidation kinetics in the temperature range of 280–465 °C and exhaust volumetric flow rate range of 0.447–0.843 act-m3/sec can be represented by one 'apparent' activation energy and pre-exponential factor. The NO oxidation kinetics in the same temperature and exhaust flow rate range can be represented by 'apparent' activation energies and pre-exponential factors in two regimes. The DOC pressure drop was always predicted by the model to within 0.5 kPa. The MTU 1-D 2-layer CPF model was enhanced in several ways to better model the performance of the CCRT®. A model to simulate the oxidation of particulate inside the filter wall was developed.
A particulate cake layer filtration model, which describes particle filtration in terms of more fundamental parameters, was developed and coupled to the wall oxidation model. To better model the particulate oxidation kinetics, a model was developed to account for the NO2 produced in the washcoat of the CPF. The overall 1-D 2-layer model can be used to predict the pressure drop of the exhaust gas across the filter, the evolution of particulate mass inside the filter, the particulate mass oxidized, the filtration efficiency, and the particle number distribution downstream of the CPF. The model was used to better understand the internal performance of the CCRT® by determining the components of the total pressure drop across the filter, by classifying the total particulate matter into layer I, layer II, and the filter wall, and by identifying the means of oxidation, i.e., by O2, by NO2 entering the filter, and by NO2 produced in the filter. The CPF model was calibrated at four temperatures in the range of 280–465 °C and exhaust volumetric flow rates of 0.447–0.843 act-m3/sec, in CPF-only and CCRT® (DOC+CPF) configurations. The clean filter wall permeability was determined to be 2.00E-13 m2, which is in agreement with values in the literature for cordierite filters. The particulate packing density in the filter wall had values between 2.92 kg/m3 and 3.95 kg/m3 for all the loads. The mean pore size of the catalyst-loaded filter wall was found to be 11.0 µm. The particulate cake packing densities and permeabilities ranged from 131 kg/m3 to 134 kg/m3 and from 0.42E-14 m2 to 2.00E-14 m2, respectively, and are in agreement with the Peclet number correlations in the literature. Particulate cake layer porosities determined from the particulate cake layer filtration model ranged between 0.841 and 0.814 and decreased with load, about 0.1 lower than experimental values and more complex discrete particle simulations in the literature. The thickness of layer I was kept constant at 20 µm.
The model kinetics in the CPF-only and CCRT® configurations showed that no 'catalyst effect' with O2 was present. The kinetic parameters for the NO2-assisted oxidation of particulate in the CPF were determined from the simulation of transient temperature-programmed oxidation data in the literature. It was determined that the thermal and NO2 kinetic parameters do not change with temperature, exhaust flow rate, or NO2 concentration; however, different kinetic parameters are used for particulate oxidation in the wall and on the wall. Model results showed that oxidation of particulate in the pores of the filter wall can cause disproportionate decreases in the filter pressure drop with respect to particulate mass. The wall oxidation model, along with the particulate cake filtration model, was developed to model the sudden and rapid decreases in pressure drop across the CPF. The particulate cake and wall filtration models result in higher particulate filtration efficiencies than the wall filtration model alone, with overall filtration efficiencies of 98–99% being predicted by the model. The pre-exponential factors for oxidation by NO2 did not change with temperature or NO2 concentration because of the NO2 wall production model. In both CPF-only and CCRT® configurations, the model showed NO2 and layer I to be the dominant means and the dominant physical location of particulate oxidation, respectively. However, at temperatures of 280 °C, NO2 is not a significant oxidizer of particulate matter, which is in agreement with studies in the literature. The model showed that 8.6 and 81.6% of the CPF-inlet particulate matter was oxidized after 5 hours at 20 and 75% load, respectively, in the CCRT® configuration. In the CPF-only configuration at the same loads, the model showed that after 5 hours, 4.4 and 64.8% of the inlet particulate matter was oxidized.
The increase in NO2 concentration across the DOC contributes significantly to the oxidation of particulate in the CPF and is supplemented by the oxidation of NO to NO2 by the catalyst in the CPF, which increases the particulate oxidation rates. From the model, it was determined that the catalyst in the CPF modestly increases the particulate oxidation rates, by 4.5–8.3% in the CCRT® configuration. Hence, the catalyst loading in the CPF of the CCRT® could possibly be reduced without significantly decreasing particulate oxidation rates, leading to catalyst cost savings and better engine performance due to lower exhaust backpressure.
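The 'apparent' activation energies and pre-exponential factors referred to above follow the standard Arrhenius form. A minimal sketch, with wholly illustrative parameter values (the calibrated values appear in the thesis body, not here):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(A, Ea_J_per_mol, T_K):
    """Rate constant k = A * exp(-Ea / (R*T)) -- the 'apparent'
    activation energy / pre-exponential form used to calibrate the
    DOC and CPF oxidation kinetics. A and Ea below are illustrative
    placeholders, not the calibrated CCRT parameters."""
    return A * math.exp(-Ea_J_per_mol / (R * T_K))

# hypothetical parameters for NO2-assisted particulate oxidation,
# evaluated at the two ends of the calibration temperature range
A, Ea = 1.0e7, 80e3
k_280 = arrhenius_rate(A, Ea, 280 + 273.15)
k_465 = arrhenius_rate(A, Ea, 465 + 273.15)
```

The exponential temperature dependence is why oxidation at 75% load (high exhaust temperature) so greatly outpaces oxidation at 20% load, and why NO2 contributes little at 280 °C.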
Abstract:
Water-saturated debris flows are among the most destructive mass movements. Their complex nature presents a challenge for quantitative description and modeling. In order to improve understanding of the dynamics of these flows, it is important to seek a simplified dynamic system underlying their behavior. Models currently in use to describe the motion of debris flows employ depth-averaged equations of motion, typically assuming negligible effects from vertical acceleration. However, in many cases debris flows experience significant vertical acceleration as they move across irregular surfaces, and it has been proposed that friction associated with vertical forces and liquefaction merit inclusion in any comprehensive mechanical model. The intent of this work is to determine the effect of vertical acceleration through a series of laboratory experiments designed to simulate debris flows, testing a recent model for debris flows experimentally. In the experiments, a mass of water-saturated sediment is released suddenly from a holding container, and parameters including rate of collapse, pore-fluid pressure, and bed load are monitored. Experiments are simplified to axial geometry so that variables act solely in the vertical dimension. Steady-state equations to infer motion of the moving sediment mass are not sufficient to accurately model the independent solid and fluid constituents in these experiments. The model developed in this work more accurately predicts the bed-normal stress of a saturated sediment mass in motion and illustrates the importance of acceleration and deceleration.
Abstract:
In an increasingly interconnected world characterized by the accelerating interplay of cultural, linguistic, and national difference, the ability to negotiate that difference in an equitable and ethical manner is a crucial skill for both individuals and larger social groups. This dissertation, Writing Center Handbooks and Travel Guidebooks: Redesigning Instructional Texts for Multicultural, Multilingual, and Multinational Contexts, considers how instructional texts that ostensibly support the negotiation of difference (i.e., accepting and learning from difference) actually promote the management of difference (i.e., rejecting, assimilating, and erasing difference). As a corrective to this focus on managing difference, chapter two constructs a theoretical framework that facilitates the redesign of handbooks, guidebooks, and similar instructional texts. This framework centers on reflexive design practices and is informed by literacy theory (Gee; New London Group; Street), social learning theory (Wenger), globalization theory (Nederveen Pieterse), and composition theory (Canagarajah; Horner and Trimbur; Lu; Matsuda; Pratt). By implementing reflexive design practices in the redesign of instructional texts, this dissertation argues that instructional texts can promote the negotiation of difference and a multicultural/multilingual sensibility that accounts for twenty-first century linguistic and cultural realities. Informed by the theoretical framework of chapter two, chapters three and four conduct a rhetorical analysis of two forms of instructional text that are representative of the larger genre: writing center coach handbooks and travel guidebooks to Hong Kong. This rhetorical analysis reveals how both forms of text employ rhetorical strategies that uphold dominant monolingual and monocultural assumptions. 
Alternative rhetorical strategies are then proposed that can be used to redesign these two forms of instructional texts in a manner that aligns with multicultural and multilingual assumptions. These chapters draw on the work of scholars in Writing Center Studies (Boquet and Lerner; Carino; DiPardo; Grimm; North; Severino) and Technical Communication (Barton and Barton; Dilger; Johnson; Kimball; Slack), respectively. Chapter five explores how the redesign of coach handbooks and travel guidebooks proposed in this dissertation can be conceptualized as a political act. Ultimately, this dissertation argues that instructional texts are powerful heuristic tools that can enact social change if they are redesigned to foster the negotiation of difference and to promote multicultural/multilingual world views.
Abstract:
The exotic emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), was first discovered in North America in southeastern Michigan, USA, and Windsor, Ontario, Canada, in 2002. This insect has caused significant ash (Fraxinus spp.) mortality in areas where it has become well established, and new infestations continue to be discovered in several states in the United States and in Canada. The beetle is difficult to detect when it invades new areas or occurs at low density. Girdled trap tree and ground surveys have been important tools for detecting emerald ash borer populations, and more recently, baited purple prism traps have been used in detection efforts. Girdled trap trees were found to be more effective than purple prism traps at detecting emerald ash borer, as they acted as sinks for larvae in an area of known low-density emerald ash borer infestation. The canopy condition of the trap trees was not predictive of whether they were infested, indicating that ground surveys may not be effective for detection in an area with a low-density emerald ash borer population. When landing rates of low-density emerald ash borer populations were monitored on non-girdled ash trees, landing rates were higher on larger, open-grown trees with canopies containing a few dead branches. As a result of these studies, we suggest that the detection threshold for emerald ash borer using baited purple prism traps hung at the canopy base of trees is higher than that for girdled trap trees. In addition, detection of developing populations of EAB may be possible by selectively placing sticky trapping surfaces on non-girdled trap trees that are the larger and more open-grown trees at a site.
Abstract:
High flexural strength and stiffness can be achieved by forming a thin panel into a wave shape perpendicular to the bending direction. The use of corrugated shapes to gain flexural strength and stiffness is common in metal and reinforced plastic products. However, there is no commercial production of corrugated wood composite panels. This research focuses on the application of corrugated shapes to wood strand composite panels. Beam theory, classical plate theory, and finite element models were used to analyze the bending behavior of corrugated panels. The most promising shallow corrugated panel configuration was identified based on structural performance and compatibility with construction practices. The corrugation profile selected has a wavelength of 8”, a channel depth of ¾”, a sidewall angle of 45 degrees, and a panel thickness of 3/8”. 16”x16” panels were produced using random mats and 3-layer aligned mats with surface flakes parallel to the channels. Strong-axis and weak-axis bending tests were conducted. The test results indicate that flake orientation has little effect on the strong-axis bending stiffness. The 3/8”-thick random mat corrugated panels exhibit bending stiffness (400,000 lbs-in2/ft) and bending strength (3,000 in-lbs/ft) higher than 23/32”- or 3/4”-thick APA Rated Sturd-I-Floor with a 24” o.c. span rating. Shear and bearing test results show that the corrugated panel can withstand more than 50 psf of uniform load at 48” joist spacing. Molding trials on 16”x16” panels provided data for full-size panel production. Full-size 4’x8’ shallow corrugated panels were produced with only minor changes to the current oriented strandboard manufacturing process. Panel testing was done to simulate floor loading during construction, without a top underlayment layer, and during occupancy, with an underlayment over the panel to form a composite deck.
Flexural tests were performed in single-span and two-span bending with line loads applied at mid-span. The average strong-axis bending stiffness and bending strength of the full-size corrugated panels (without the underlayment) were over 400,000 lbs-in2/ft and 3,000 in-lbs/ft, respectively. The composite deck system, which consisted of OSB sheathing (15/32” thick) nailed-glued (using 3d ringshank nails and AFG-01 subfloor adhesive) to the corrugated subfloor, achieved about 60% of the full composite stiffness, resulting in about 3 times the bending stiffness of the corrugated subfloor (1,250,000 lbs-in2/ft). Based on the LRFD design criteria, the corrugated composite floor system can carry 40 psf of unfactored uniform load, limited by the L/480 deflection limit state, at 48” joist spacing. Four 10-ft-long composite T-beam specimens were built and tested for composite action and load sharing between a 24”-wide corrugated deck system and the supporting I-joist. The average bending stiffness of the composite T-beam was 1.6 times higher than the bending stiffness of the I-joist. An 8-ft x 12-ft mock-up floor was built to evaluate construction procedures. The assembly of the composite floor system is relatively simple. The corrugated composite floor system might offset the lower labor costs of single-layer Sturd-I-Floor through material savings; however, no conclusive result can be drawn in terms of construction costs without an in-depth cost analysis of the two systems. The shallow corrugated composite floor system might be a potential alternative to the Sturd-I-Floor in the near future because of the excellent flexural stiffness it provides.
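The L/480 serviceability check behind the 40 psf rating can be sketched with a standard beam-table formula; treating the deck as a two-span continuous strip loaded per foot of width is an assumption made here for illustration, consistent with the reported two-span tests:

```python
def max_deflection_two_span(w_lb_per_in, L_in, EI_lb_in2):
    """Maximum deflection of a two-span continuous beam under uniform
    load on both spans: delta = w*L^4 / (185*EI), a standard
    beam-table coefficient."""
    return w_lb_per_in * L_in ** 4 / (185.0 * EI_lb_in2)

# 1-ft-wide strip of the composite deck: 40 psf -> 40/12 lb per inch
# of span, 48" joist spacing, EI = 1,250,000 lb-in^2 per ft of width
delta = max_deflection_two_span(40.0 / 12.0, 48.0, 1.25e6)
limit = 48.0 / 480.0           # L/480 serviceability limit, inches
ok = delta <= limit
```

With these inputs the computed deflection stays under the 0.1 in. limit, consistent with the reported 40 psf capacity at 48” joist spacing; a single simple span (delta = 5wL^4/384EI) would not satisfy L/480 at the same load.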