89 results for Other Civil and Environmental Engineering
Abstract:
High flexural strength and stiffness can be achieved by forming a thin panel into a wave shape perpendicular to the bending direction. The use of corrugated shapes to gain flexural strength and stiffness is common in metal and reinforced plastic products; however, there is no commercial production of corrugated wood composite panels. This research focuses on the application of corrugated shapes to wood strand composite panels. Beam theory, classical plate theory, and finite element models were used to analyze the bending behavior of corrugated panels. The most promising shallow corrugated panel configuration was identified based on structural performance and compatibility with construction practices. The corrugation profile selected has a wavelength of 8", a channel depth of 3/4", a sidewall angle of 45 degrees, and a panel thickness of 3/8". 16"x16" panels were produced using random mats and 3-layer aligned mats with surface flakes parallel to the channels. Strong-axis and weak-axis bending tests were conducted. The test results indicate that flake orientation has little effect on the strong-axis bending stiffness. The 3/8" thick random-mat corrugated panels exhibit bending stiffness (400,000 lbs-in2/ft) and bending strength (3,000 in-lbs/ft) higher than 23/32" or 3/4" thick APA Rated Sturd-I-Floor with a 24" o.c. span rating. Shear and bearing test results show that the corrugated panel can withstand more than 50 psf of uniform load at 48" joist spacings. Molding trials on 16"x16" panels provided data for full-size panel production. Full-size 4'x8' shallow corrugated panels were produced with only minor changes to the current oriented strandboard manufacturing process. Panel testing simulated floor loading during construction, without a top underlayment layer, and during occupancy, with an underlayment over the panel to form a composite deck. Flexural tests were performed in single-span and two-span bending with line loads applied at mid-span. The average strong-axis bending stiffness and bending strength of the full-size corrugated panels (without the underlayment) were over 400,000 lbs-in2/ft and 3,000 in-lbs/ft, respectively. The composite deck system, which consisted of OSB sheathing (15/32" thick) nail-glued (using 3d ringshank nails and AFG-01 subfloor adhesive) to the corrugated subfloor, achieved about 60% of the full composite stiffness, resulting in about 3 times the bending stiffness of the corrugated subfloor (1,250,000 lbs-in2/ft). Based on the LRFD design criteria, the corrugated composite floor system can carry 40 psf of unfactored uniform load, limited by the L/480 deflection limit state, at 48" joist spacings. Four 10-ft-long composite T-beam specimens were built and tested to evaluate the composite action and load sharing between a 24"-wide corrugated deck system and the supporting I-joist. The average bending stiffness of the composite T-beam was 1.6 times higher than the bending stiffness of the I-joist alone. An 8-ft x 12-ft mock-up floor was built to evaluate construction procedures. The assembly of the composite floor system is relatively simple. The corrugated composite floor system may be able to offset the labor-cost advantage of the single-layer Sturd-I-Floor through material savings; however, no conclusive result can be drawn regarding construction costs without an in-depth cost analysis of the two systems.
Because of its excellent flexural stiffness, the shallow corrugated composite floor system could become a viable alternative to the Sturd-I-Floor in the near future.
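As a rough illustration of the governing limit state, the short Python sketch below checks the L/480 deflection limit for a 1-ft-wide strip of the composite deck at a 48" joist spacing, using the 1,250,000 lbs-in2/ft stiffness and the 40 psf unfactored load reported above. The two-span continuous condition and the 5/384 and 1/185 deflection coefficients are standard beam-theory assumptions, not values stated in the study.

# Hedged sketch: serviceability check of the corrugated composite deck.
# Stiffness, load, and span are from the abstract; the two-span assumption is ours.
EI = 1_250_000.0      # lbs-in^2 per ft of panel width (composite deck)
span = 48.0           # in., joist spacing
w_psf = 40.0          # psf, unfactored uniform load
w = w_psf / 12.0      # lb/in per 1-ft-wide strip

delta_single = 5 * w * span**4 / (384 * EI)   # simple span, uniform load
delta_two = w * span**4 / (185 * EI)          # two-span continuous, both spans loaded
limit = span / 480.0

print(f"single span: {delta_single:.3f} in, two-span: {delta_two:.3f} in, "
      f"L/480 limit: {limit:.3f} in")
# The two-span deflection (~0.08 in) sits just under the 0.10 in limit,
# consistent with the deflection-governed 40 psf capacity reported above.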
Abstract:
Carboxylate-based deicing and anti-icing chemicals became widely used in the mid-1990s, replacing more environmentally burdensome chemicals. Within a few years of their adoption, distress of portland cement concrete runways was reported by a few airports using the new chemicals. The distress exhibited characteristics identical to those of alkali-silica reactivity (ASR), but onset occurred early in the pavement's operating life and in pavements thought to contain innocuous aggregate. The carboxylate-based deicing chemicals were therefore suspected of exacerbating ASR-like expansion. Innocuous, moderately reactive, and highly reactive aggregates were tested using modified ASTM C1260 and ASTM C1567 procedures with soak solutions containing deicer solutions and either sodium hydroxide or potassium hydroxide. ASR-like expansion is exacerbated in the presence of potassium acetate. The expansion rate produced by a given aggregate is also a function of the alkali hydroxide used. Petrographic analyses were performed on thin sections prepared from mortar bars used in the experiments. Expansion occurred via two mechanisms: rupture of aggregate grains and expansion of the paste.
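For context, mortar-bar expansion in the ASTM C1260/C1567 family of procedures is computed from length-comparator readings relative to an effective gauge length, as in the Python sketch below. The readings are fabricated for illustration, and the 0.10%/0.20% screening bands are the commonly cited limits for the unmodified test, which may not transfer directly to the modified deicer soak solutions used here.

# Hedged sketch: ASTM C1260/C1567-style expansion from comparator readings.
# Readings are made up for illustration; gauge length per the standard is 250 mm (10 in.).
GAUGE = 250.0  # mm, effective gauge length

def expansion_pct(initial_mm, reading_mm):
    """Length change expressed as a percentage of the effective gauge length."""
    return (reading_mm - initial_mm) / GAUGE * 100.0

initial = 0.000
readings_14d = [0.12, 0.31, 0.68]   # mm at 14 days in solution, hypothetical bars
for r in readings_14d:
    e = expansion_pct(initial, r)
    # Common screening bands: <0.10% innocuous, 0.10-0.20% potentially reactive, >0.20% reactive
    band = "innocuous" if e < 0.10 else ("potentially reactive" if e <= 0.20 else "reactive")
    print(f"expansion = {e:.3f}%  ->  {band}")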
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.), and natural hazards cause substantial losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainty on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causes large economic losses, and threatens life safety, yet limited work has examined the snow hazard in combination with the seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For homeowners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of a building subjected to mainshock-aftershock sequences, where aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework is shown to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
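As a hedged illustration of the snow-load modeling idea (not the study's calibrated model), the Python sketch below simulates ground snow with a filtered Poisson process: storms arrive as Poisson events, deposit random amounts, and the accumulated depth decays between events. The arrival rate, deposit distribution, and decay rate are placeholder values.

# Hedged sketch of a filtered Poisson process (FPP) for snow accumulation.
# Parameters are placeholders, not the calibrated values from the study.
import numpy as np

rng = np.random.default_rng(0)
rate = 0.15          # storms per day (Poisson arrival rate)
mean_deposit = 40.0  # mm snow-water equivalent per storm (exponential mean)
decay = 0.03         # per-day exponential depletion between storms
days = 180

snow = np.zeros(days)
depth = 0.0
for t in range(days):
    depth *= np.exp(-decay)                      # melt/settlement filter
    n_storms = rng.poisson(rate)                 # storms arriving on day t
    depth += rng.exponential(mean_deposit, n_storms).sum()
    snow[t] = depth

print(f"peak accumulation: {snow.max():.1f} mm SWE on day {snow.argmax()}")
# A Bernoulli model would instead flag each day as snow/no-snow, losing the
# carry-over of accumulated depth that drives the combined snow+seismic load.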
Abstract:
Accurate seasonal to interannual streamflow forecasts based on climate information are critical for optimal management and operation of water resources systems. Because most water supply systems are multipurpose, operating them to meet increasing demand under the growing stresses of climate variability and climate change, population and economic growth, and environmental concerns can be very challenging. This study investigated improvements in water resources systems management through the use of seasonal climate forecasts. Hydrological persistence (streamflow and precipitation) and large-scale recurrent oceanic-atmospheric patterns such as the El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO), the Atlantic Multidecadal Oscillation (AMO), the Pacific-North American pattern (PNA), and customized sea surface temperature (SST) indices were investigated for their potential to improve streamflow forecast accuracy and increase forecast lead time in a river basin in central Texas. First, an ordinal polytomous logistic regression approach is proposed as a means of incorporating multiple predictor variables into a probabilistic forecast model. Forecast performance is assessed through a cross-validation procedure using distributions-oriented metrics, and implications for decision making are discussed. Results indicate that, of the predictors evaluated, only hydrologic persistence and Pacific Ocean sea surface temperature patterns associated with ENSO and PDO provide forecasts that are statistically better than climatology. Second, a class of data mining techniques known as tree-structured models is investigated to address the nonlinear dynamics of climate teleconnections and to screen promising probabilistic streamflow forecast models for river-reservoir systems. Results show that tree-structured models can effectively capture the nonlinear features hidden in the data. Skill scores of probabilistic forecasts generated by both classification trees and logistic regression trees indicate that seasonal inflows throughout the system can be predicted with sufficient accuracy to improve water management, especially in the winter and spring seasons in central Texas. Lastly, a simplified two-stage stochastic economic-optimization model was proposed to investigate improvements in water use efficiency and the potential value of using seasonal forecasts, under the assumption of optimal decision making under uncertainty. Model results demonstrate that incorporating the probabilistic inflow forecasts into the optimization model can provide a significant improvement in seasonal water contract benefits over climatology, with lower average deficits (increased reliability) for a given average contract amount, or improved mean contract benefits for a given level of reliability, compared to climatology. The results also illustrate the trade-off between the expected contract amount and reliability, i.e., larger contracts can be signed at greater risk.
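To make the forecast-verification step concrete, the Python sketch below computes a ranked probability skill score for tercile (below/near/above-normal) inflow forecasts against a climatology reference. The forecast probabilities and observed categories are fabricated placeholders, and RPSS is only one example of the distributions-oriented metrics the study could have applied.

# Hedged sketch: ranked probability skill score (RPSS) for tercile inflow forecasts.
# Forecast probabilities and observed categories below are illustrative only.
import numpy as np

def rps(prob, obs_cat):
    """Ranked probability score for one forecast: cumulative squared error."""
    cum_f = np.cumsum(prob)
    cum_o = np.cumsum(np.eye(len(prob))[obs_cat])
    return np.sum((cum_f - cum_o) ** 2)

forecasts = np.array([[0.6, 0.3, 0.1],   # P(below, near, above normal)
                      [0.2, 0.5, 0.3],
                      [0.1, 0.3, 0.6]])
observed = [0, 1, 2]                      # observed tercile for each season
climatology = np.array([1/3, 1/3, 1/3])

rps_fcst = np.mean([rps(f, o) for f, o in zip(forecasts, observed)])
rps_clim = np.mean([rps(climatology, o) for o in observed])
rpss = 1.0 - rps_fcst / rps_clim
print(f"RPSS = {rpss:.2f}  (>0 means skill over climatology)")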
Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, and as such, care must be taken when establishing scan locations and resolution to allow the capture of data at a resolution adequate for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. The LiDAR point clouds contain information that can provide quantitative surface condition data, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each of which displayed a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, reporting the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool that integrates prior bridge condition metrics for comparison. LiDAR data processing procedures, along with strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
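The sketch below illustrates, in generic Python/NumPy rather than the study's ArcPy implementation, the kind of spall quantification that can be run on a deck point cloud: fit a reference plane to the deck, flag points that fall below it by more than a threshold, and estimate spall area and volume on a grid. The cell size, depth threshold, and synthetic point cloud are placeholder assumptions.

# Hedged sketch of spall detection from a deck point cloud (not the study's ArcPy code).
import numpy as np

def detect_spalls(xyz, cell=0.05, depth_thresh=0.01):
    """xyz: (N,3) deck points in metres. Returns estimated spall area (m^2) and volume (m^3)."""
    # Fit a least-squares reference plane z = a*x + b*y + c to the deck surface.
    A = np.c_[xyz[:, 0], xyz[:, 1], np.ones(len(xyz))]
    coeff, *_ = np.linalg.lstsq(A, xyz[:, 2], rcond=None)
    residual = xyz[:, 2] - A @ coeff          # negative = below the fitted plane

    # Grid the points and keep cells whose mean depression exceeds the threshold.
    ix = np.floor(xyz[:, 0] / cell).astype(int)
    iy = np.floor(xyz[:, 1] / cell).astype(int)
    area = volume = 0.0
    for key in set(zip(ix, iy)):
        d = residual[(ix == key[0]) & (iy == key[1])]
        depth = -d.mean()
        if depth > depth_thresh:              # cell is spalled
            area += cell * cell
            volume += cell * cell * depth
    return area, volume

# Usage with synthetic points (flat deck with one shallow depression):
rng = np.random.default_rng(1)
pts = rng.uniform(0, 2, size=(20000, 2))
z = np.where((pts[:, 0] - 1) ** 2 + (pts[:, 1] - 1) ** 2 < 0.04, -0.03, 0.0)
area, vol = detect_spalls(np.c_[pts, z])
print(f"spall area ~ {area:.2f} m^2, volume ~ {vol:.4f} m^3")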
Abstract:
With the increasing importance of conserving natural resources and moving toward sustainable practices, the aging transportation infrastructure can benefit from improving its existing recycling practices. When an asphalt pavement needs to be replaced, the existing pavement is removed and ground up. This ground material, known as reclaimed asphalt pavement (RAP), is then added into new asphalt roads. However, because RAP has been exposed to years of ultraviolet degradation and environmental weathering, the material has aged and cannot be used as a direct substitute for aggregate and binder in new asphalt pavements. One material that holds potential for restoring the aged asphalt binder to a usable state is waste engine oil. This research studies the feasibility of using waste engine oil as a recycling agent to improve the recyclability of pavements containing RAP. Testing was conducted in three phases: asphalt binder testing, advanced asphalt binder testing, and laboratory mixture testing. Asphalt binder testing consisted of dynamic shear rheometer and rotational viscometer testing on both unaged and aged binders containing waste engine oil and reclaimed asphalt binder (RAB). Fourier Transform Infrared Spectroscopy (FTIR) testing was carried out on the asphalt binders blended with RAB and waste engine oil to compare structural indices indicative of aging. Lastly, asphalt mixture samples containing waste engine oil and RAP were subjected to rutting testing and tensile strength ratio testing. These tests lend evidence to support the claim that waste engine oil can be used as a rejuvenating agent to chemically restore asphalt pavements containing RAP. Waste engine oil can reduce the stiffness and improve the low-temperature properties of asphalt binders blended with RAB. Waste engine oil can also soften asphalt pavements without having a detrimental effect on moisture susceptibility.
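As a hedged illustration of how FTIR structural indices are typically derived (the study's exact band limits are not given in the abstract), the Python sketch below integrates the carbonyl region of an absorbance spectrum and normalizes it by an aliphatic reference region. The band limits and the synthetic spectrum are placeholders.

# Hedged sketch: carbonyl aging index from an FTIR absorbance spectrum.
# Band limits and the synthetic spectrum are illustrative, not the study's values.
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    m = (wavenumber >= lo) & (wavenumber <= hi)
    x, y = wavenumber[m], absorbance[m]
    return np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1]))   # trapezoidal rule

def carbonyl_index(wavenumber, absorbance):
    carbonyl = band_area(wavenumber, absorbance, 1670, 1750)    # C=O stretch region
    reference = band_area(wavenumber, absorbance, 1350, 1525)   # CH2/CH3 reference region
    return carbonyl / reference

# Synthetic spectrum with a modest carbonyl peak, purely for demonstration.
wn = np.linspace(600, 4000, 3400)
ab = 0.02 + 0.30 * np.exp(-((wn - 1460) / 25) ** 2) + 0.08 * np.exp(-((wn - 1700) / 20) ** 2)
print(f"carbonyl index = {carbonyl_index(wn, ab):.3f}")
# A higher index after aging (or after blending with RAB) indicates more oxidation;
# a drop after adding waste engine oil would support the rejuvenation claim.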
Abstract:
This report details the outcomes of a study designed to investigate the piezoelectric properties of Portland cement paste for its possible applications in structural health monitoring. Specifically, this study provides insights into the effects on piezoelectric properties of hardened cement paste from the application of an electric field during the curing process. As part of the reporting of this study, the state of the art in structural health monitoring is reviewed. In this study it is demonstrated that application of an electric field using a spatially-coarse array of electrodes to cure cement paste was not effective in increasing the magnitude of the piezoelectric coupling, but did increase repeatability of the piezoelectric response of the hardened material.
Abstract:
The bridge inspection industry has yet to utilize a rapidly growing technology that shows promise for improving the inspection process. This thesis investigates the capabilities that 3D photogrammetry can provide to the bridge inspector for a number of deterioration mechanisms. The technology can provide information about the surface condition of some bridge components, with the focus here on the surface defects of a concrete bridge, which include cracking, spalling, and scaling. Testing was completed using a Canon EOS 7D camera; the photos were then processed in AgiSoft PhotoScan to align the images and develop models. Further processing of the models was done using ArcMap in the ArcGIS 10 program to view digital elevation models of the concrete surface. Several experiments were completed to determine the ability of the technique to detect the different defects. The smallest crack that could be resolved in this study was a 1/8 inch crack photographed from a distance of two feet above the surface. 3D photogrammetry was able to detect a depression 1 inch wide and 3/16 inch deep, which would be sufficient to measure any scaling or spalling that would be required by the inspector. The percentage of area scaled or spalled could also be calculated from the digital elevation models in ArcMap. Different camera factors, including distance from the defects, number of photos, and angle, were also investigated to see how each factor affected these capabilities. 3D photogrammetry showed great promise for detecting scaling or spalling of a concrete bridge surface.
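The percentage-of-area calculation described above can be reproduced directly from a digital elevation model; the short sketch below (generic Python/NumPy rather than ArcMap) thresholds a DEM against the sound-surface elevation. The 1/8 inch depth threshold, cell size, and synthetic DEM are assumptions for illustration.

# Hedged sketch: percent of deck area scaled/spalled from a DEM raster.
# The depth threshold, cell size, and synthetic DEM are illustrative assumptions.
import numpy as np

def percent_defective(dem, cell_size_in, depth_thresh_in=0.125):
    """dem: 2D array of surface elevations (inches). Returns (% of area, defect area in in^2)."""
    reference = np.median(dem)                    # sound-surface elevation
    defective = dem < (reference - depth_thresh_in)
    return 100.0 * defective.sum() / dem.size, defective.sum() * cell_size_in ** 2

# Synthetic 1 ft x 1 ft patch at 1/8 in. resolution with a 3/16 in. deep depression.
dem = np.zeros((96, 96))
dem[30:50, 40:70] -= 0.1875
pct, area_sq_in = percent_defective(dem, cell_size_in=0.125)
print(f"{pct:.1f}% of the patch flagged, ~{area_sq_in:.0f} in^2 of scaling/spalling")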
Abstract:
Worldwide, rural populations are far less likely to have access to clean drinking water than are urban ones. In many developing countries, the current approach to rural water supply uses a model of demand-driven, community-managed water systems. In Suriname, South America, rural populations have limited access to improved water supplies; community-managed water supply systems have been installed in several rural communities by nongovernmental organizations as part of the solution. To date, there has been no review of the performance of these water supply systems. This report presents the results of an investigation of three rural water supply systems constructed in Saramaka villages in the interior of Suriname. The investigation used a combination of qualitative and quantitative methods, coupled with ethnographic information, to construct a comprehensive overview of these water systems. This overview includes the water use of the communities, the current status of the water supply systems, the histories and sustainability of the water supply projects, technical reviews, and community perceptions. From this overview, factors important to the sustainability of these water systems were identified. Community water supply systems are engineered solutions that operate through social cooperation. The results from this investigation show that technical adequacy is the first and most critical factor for long-term sustainability of a water system. They also show that technical adequacy depends on the appropriateness of the engineering design for the social, cultural, and natural setting in which it is implemented. The complex relationships between technical adequacy, community support, and the involvement of women play important roles in the success of water supply projects. Addressing these factors during the project process and taking advantage of alternative water resources may increase the supply of improved drinking water to rural communities.
Abstract:
There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on material response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of this project was the development of a library of values for the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, a comparison of the current pavement design procedure with the new AASHTO Design Guide was conducted. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. This report also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round-robin mixtures and three Wisconsin mixtures, and the main results of the research were as follows. The test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response for either dynamic modulus or flow number, but it does increase the variability in the flow number test results. The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not statistically appear to affect the intermediate- and high-temperature dynamic modulus or flow number test results. The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software. The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed. An increase in asphalt binder content of 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number; this result was based on the testing that was conducted, was contradictory to previous research and the hypothesis put forth for this thesis, and should be used with caution and reviewed further. Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity. Dynamic modulus and flow number were shown to increase with traffic level (requiring an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors. Accumulated micro-strain at flow number, as opposed to flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture. At the current time, the Design Guide and its associated software need to be further improved prior to implementation by owners/agencies.
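To make the temperature-viscosity relationship concrete, the Python sketch below evaluates the A-VTS form used by Witczak (log-log viscosity in centipoise versus log Rankine temperature) and re-fits the coefficients with a reduced temperature set to show, in a simplified way, the sensitivity to the test temperatures chosen. The A and VTS coefficients and the added noise are placeholders, not values from the Wisconsin mixtures.

# Hedged sketch: Witczak A-VTS temperature-viscosity relationship.
# log10(log10(eta_cP)) = A + VTS * log10(T_Rankine); A and VTS below are placeholders.
import numpy as np

def viscosity_cP(temp_F, A, VTS):
    T_R = temp_F + 459.67                       # Fahrenheit -> Rankine
    return 10 ** (10 ** (A + VTS * np.log10(T_R)))

def fit_A_VTS(temps_F, eta_cP):
    """Linear regression of log10(log10(eta)) on log10(T_R)."""
    x = np.log10(np.asarray(temps_F) + 459.67)
    y = np.log10(np.log10(eta_cP))
    VTS, A = np.polyfit(x, y, 1)
    return A, VTS

A, VTS = 10.5, -3.5                             # placeholder binder coefficients
temps = [140, 230, 275, 329]                    # deg F, typical test temperatures
etas = viscosity_cP(np.array(temps), A, VTS)

# Re-fit using only the two highest temperatures, with small "measurement" noise,
# to show how the fitted coefficients shift with the temperature set employed.
A2, VTS2 = fit_A_VTS(temps[2:], etas[2:] * np.array([1.05, 0.95]))
print(f"full fit:    A={A:.2f}, VTS={VTS:.2f}")
print(f"reduced fit: A={A2:.2f}, VTS={VTS2:.2f}")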
Abstract:
This research was conducted in August 2011 in the villages of Kigisu and Rubona in rural Uganda while the author was serving as a community health volunteer with the U.S. Peace Corps. The study used the contingent valuation method (CVM) to estimate the population's willingness to pay (WTP) for the operation and maintenance of an improved water source. The survey was administered to 122 of the 400 households in the community; it gathered demographic information and data on health and water behaviors, and used an iterative bidding process to estimate WTP. Households indicated a mean WTP of 286 Ugandan Shillings (UGX) per 20 liters from a public tap and 202 UGX per 20 liters from a private tap. The data were also analyzed using an ordered probit model. It was determined that the number of children in the home and the distance from the existing source were the primary variables influencing households' WTP.
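As a hedged illustration of the ordered probit step (not the study's dataset or code), the Python sketch below fits such a model to synthetic bid-category data with statsmodels. The column names, bid categories, and data-generating rule are assumptions for illustration only.

# Hedged sketch: ordered probit on WTP bid categories (synthetic data, not the study's).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 122
df = pd.DataFrame({
    "children":   rng.integers(0, 7, n),          # children in the household (hypothetical column)
    "distance_m": rng.uniform(50, 1500, n),       # walk to existing source, metres (hypothetical)
})
# Latent WTP rises with children and distance; cut into ordered bid categories.
latent = 0.3 * df["children"] + 0.002 * df["distance_m"] + rng.normal(0, 1, n)
df["wtp_cat"] = pd.cut(latent, bins=[-np.inf, 1.0, 2.0, 3.0, np.inf],
                       labels=["<100", "100-200", "200-300", ">300 UGX"], ordered=True)

model = OrderedModel(df["wtp_cat"], df[["children", "distance_m"]], distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
# Positive, significant coefficients on children and distance_m would mirror the
# finding that household size and distance to the existing source drive WTP.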
Abstract:
Information on phosphorus bioavailability can provide water quality managers with the support required to target the point source and watershed loads contributing most significantly to water quality conditions. This study presents results from a limited sampling program focusing on the five largest sources of total phosphorus to the U.S. waters of the Great Lakes. The work validates the utility of a bioavailability-based approach, confirming that the method is robust and repeatable. Chemical surrogates for bioavailability were shown to hold promise; however, further research is needed to address site-to-site and seasonal variability before a universal relationship can be accepted. Recent changes in the relative contribution of P constituents to the total phosphorus analyte, and differences in their bioavailability, suggest that loading estimates of bioavailable P will need to address all three components: soluble reactive phosphorus (SRP), dissolved organic phosphorus (DOP), and particulate phosphorus (PP). A bioavailability approach that takes advantage of chemical surrogate methodologies is recommended as a means of guiding P management in the Great Lakes.
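To illustrate the three-component accounting described above, the short sketch below combines SRP, DOP, and PP loads with assumed bioavailability fractions. The loads and fractions are placeholders, not measured values from the sampling program.

# Hedged sketch: bioavailable phosphorus load from its three constituents.
# Loads and bioavailability fractions below are placeholders, not study data.
loads_mta = {"SRP": 120.0, "DOP": 60.0, "PP": 420.0}        # metric tons P per year
bioavailable_fraction = {"SRP": 1.00, "DOP": 0.40, "PP": 0.25}

bap = sum(loads_mta[c] * bioavailable_fraction[c] for c in loads_mta)
total = sum(loads_mta.values())
print(f"total P load: {total:.0f} MT/yr, bioavailable P: {bap:.0f} MT/yr "
      f"({100 * bap / total:.0f}% of total)")
# Ranking sources by bioavailable P rather than total P is what lets managers
# target the loads that actually drive water quality response.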
Abstract:
During my Peace Corps service as a community health liaison in rural Uganda, I noticed that many improved water wells in our area had been abandoned. The communities described the water in these wells as being reddish in color, having a foul taste and odor, discoloring clothes and food, and failing to produce lather for washing. Personal investigations and an initial literature search suggested that the primary contaminant was iron. The water in these wells had a low pH and a rusty metallic smell. The water produced early in the morning appeared very red, but the water became more transparent as pumping continued. The iron components of many of these wells experienced accelerated corrosion, resulting in frequent pump failure. This rapid corrosion, coupled with the timing of the onset of iron contamination (months to years after these wells were completed), suggests that the most likely cause of the poor-quality water was iron-related bacteria (IRB) and/or sulphate-reducing bacteria. This report describes a remedy for iron contamination employed at 5 wells. The remedy involved disinfecting the wells with chlorine and replacing iron pump components with plastic and stainless steel. Iron concentrations in the wells were less than 1 mg/L when the wells were drilled but ranged from 2.5 to 40 mg/L prior to the remedy. After the remedy was applied, the total iron concentrations returned to levels below 1 mg/L. The presence of IRB was measured in all of these wells using Biological Activity Reaction Tests. Although IRB are still present in all the wells, the dissolved iron concentrations remain less than 1 mg/L. This remedy is practical for rural areas because the work can be performed with only hand tools and costs less than US $850. Because the source of iron contamination is removed in this approach, substantial follow-up maintenance is not necessary.
Abstract:
The seasonal appearance of a deep chlorophyll maximum (DCM) in Lake Superior is a striking and widely observed phenomenon; however, its mechanisms of formation and maintenance are not well understood. Because this phenomenon may be the reflection of an ecological driver, or a driver itself, a lack of understanding of its driving forces limits the ability to accurately predict and manage changes in this ecosystem. Key mechanisms generally associated with DCM dynamics (i.e., ecological, physiological, and physical phenomena) are examined individually and in concert to establish their roles. First, the prevailing paradigm, that "the DCM is a great place to live," is analyzed through an integration of the results of laboratory experiments and field measurements. The analysis indicates that growth at this depth is severely restricted and thus not able to explain the full magnitude of the phenomenon. Additional contributing mechanisms, such as photoadaptation, settling, and grazing, are examined with a one-dimensional mathematical model of chlorophyll and particulate organic carbon. Settling has the strongest impact on the formation and maintenance of the DCM, transporting biomass to the metalimnion and resulting in the accumulation of algae, i.e., a peak in the particulate organic carbon profile. Subsequently, shade adaptation becomes manifest as a chlorophyll maximum deeper in the water column, where light conditions particularly favor the process. Shade adaptation mediates the magnitude, shape, and vertical position of the chlorophyll peak. Growth at the DCM depth makes only a marginal contribution, while grazing has an adverse effect on the extent of the DCM. The observed separation of the carbon biomass and chlorophyll maxima should caution scientists against equating the DCM with a large nutrient pool available to higher trophic levels. The ecological significance of the DCM should not be separated from the underlying carbon dynamics. When evaluated in its entirety, the DCM becomes the projected image of a structure that remains elusive to measure but represents the foundation of all higher trophic levels. These results also offer guidance in examining ecosystem perturbations such as climate change. For example, warming would be expected to prolong the period of thermal stratification, extending the late-summer period of suboptimal (phosphorus-limited) growth and the attendant transport of phytoplankton to the metalimnion. This reduction in epilimnetic algal production would decrease the supply of algae to the metalimnion, possibly reducing the supply of prey to the grazer community. This work demonstrates the value of modeling to challenge and advance our understanding of ecosystem dynamics, steps vital to reliable testing of management alternatives.
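As a minimal illustration of the one-dimensional model structure described above (not the study's calibrated model), the Python sketch below integrates a settling-diffusion-growth equation for particulate organic carbon and derives chlorophyll through a depth-dependent Chl:C ratio standing in for shade adaptation. All rates, profiles, and the Chl:C rule are placeholder values.

# Hedged sketch: 1-D settling-diffusion-growth model for particulate organic carbon (POC)
# with a depth-dependent Chl:C ratio mimicking shade adaptation. All values are placeholders.
import numpy as np

nz, dz, dt, days = 60, 1.0, 0.008, 90         # 60 m column, 1 m cells, time step in days
z = (np.arange(nz) + 0.5) * dz
D = np.where(z < 10.0, 50.0, 0.5)             # m^2/d: mixed epilimnion over a quiet metalimnion
ws = 0.3                                      # m/d settling velocity
kd = 0.15                                     # 1/m light extinction coefficient
growth = 0.3 * np.exp(-kd * z)                # 1/d light-limited growth
loss = 0.12                                   # 1/d respiration + grazing

C = np.full(nz, 20.0)                         # mg C/m^3 initial POC
for _ in range(int(days / dt)):
    settle_in = ws * np.concatenate(([0.0], C[:-1]))       # upwind settling flux entering each cell
    settle_out = ws * C                                    # settling flux leaving each cell
    Cp = np.concatenate(([C[0]], C, [C[-1]]))              # ghost cells -> no-flux boundaries
    diffusion = D * (Cp[2:] - 2.0 * Cp[1:-1] + Cp[:-2]) / dz**2
    C = C + dt * ((growth - loss) * C + (settle_in - settle_out) / dz + diffusion)
    C = np.clip(C, 0.0, None)

chl_to_c = 0.01 + 0.03 * (1.0 - np.exp(-kd * z))           # shade-adapted cells carry more Chl per C
chl = C * chl_to_c
print(f"POC peak at {z[np.argmax(C)]:.0f} m, chlorophyll peak at {z[np.argmax(chl)]:.0f} m")
# The printout compares the depths of the chlorophyll and carbon peaks; with a
# depth-dependent Chl:C ratio the two need not coincide, which is the separation
# between the DCM and the biomass maximum emphasized in the study.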
Abstract:
Universities in the United States are applying more sustainable approaches to their dining service operations. "The increase in social consciousness and environmental stewardship on college campuses has spurred an array of new and innovative sustainability programs" (ARAMARK Higher Education 2008). University residence dining is typically cafeteria style, with students using trays to carry food. Studies report that food served without trays substantially reduces food waste and the water and electrical consumption associated with washing trays; commonly, however, these reported results are estimates and not measurements taken under actual operating conditions. This study uses measurements recorded under actual dining service conditions in student residence halls at Michigan Technological University to develop: 1) operational-specific data on the issues and potential savings associated with a conversion to trayless dining and 2) life cycle assessment (LCA) cost and environmental impact analyses comparing dining with and without trays. For the LCA, the entire life cycle of the system is considered, from the manufacturing to the usage and disposal phases. The study shows that trayless dining reduces food waste because diners carry less food. The total savings for the dinner shifts without trays over the standard academic year (205 days), with an average of 700 diners, is 7,032 pounds of food waste from the pre-rinse area (a 33% reduction) and 3,157 pounds of food waste from the pan washing area (a 39% reduction). In addition, for each day of the study, the diners consumed more food during the trayless portion of the experiment. One possible explanation for the increased food consumption during this short-duration study is that the diners found it more convenient to eat the extra food on their plates rather than carrying it back for disposal. The trayless dining experiment showed a reduction in dishwasher water, steam, and electrical consumption for each day of the study; the average reductions over the duration of the study were 10.7%, 9.5%, and 6.4%, respectively. Trayless dining implementation would result in a decrease of 4,305 gallons of water consumption and wastewater discharge, 2.87 MMBtu (million Btu) of steam consumption, and 158 kWh of electrical consumption for the dinner shift over the academic year. Results of the LCA indicate a total savings of $190.40 when trays are not used during the dinner shift. Trayless dining incurs zero CO2 eq emissions and cumulative energy demand in the manufacturing stage, and yields reductions of 1005 kg CO2 eq and 861 MJ eq in the usage phase and 6458 kg CO2 eq and 1821 MJ eq at the end of the life cycle.
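The reported academic-year figures can be broken back down to per-day savings and the usage and end-of-life greenhouse-gas reductions summed, as in the short arithmetic sketch below; it uses only the values quoted above and the 205-day academic year.

# Arithmetic check on the reported dinner-shift savings over a 205-day academic year.
days = 205
water_gal, steam_mmbtu, electric_kwh = 4305, 2.87, 158       # annual reductions
food_prerinse_lb, food_panwash_lb = 7032, 3157
co2_usage_kg, co2_eol_kg = 1005, 6458                        # LCA reductions (kg CO2 eq)

print(f"per day: {water_gal/days:.0f} gal water, {steam_mmbtu*1e6/days/1000:.0f} kBtu steam, "
      f"{electric_kwh/days:.2f} kWh, {(food_prerinse_lb + food_panwash_lb)/days:.1f} lb food waste")
print(f"usage + end-of-life GHG reduction: {co2_usage_kg + co2_eol_kg} kg CO2 eq per academic year")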