895 results for Civil and Environmental Engineering
Abstract:
There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on material response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of the project was the development of a library of values for the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, the current associated pavement design was compared with that of the new AASHTO Design Guide. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. It also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round-robin mixtures and three Wisconsin mixtures, and the main results of the research were:
- The test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response for either dynamic modulus or flow number, but does increase the variability in the flow number test results.
- The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not appear to have a statistically significant effect on the intermediate- and high-temperature dynamic modulus and flow number test results.
- The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses: the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software.
- The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed (see the sketch following this abstract).
- An increase in asphalt binder content of 0.3% was found to increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number. This result was based on the testing that was conducted and contradicts both previous research and the hypothesis put forth for this thesis; it should be used with caution and requires further review.
- Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity.
- Dynamic modulus and flow number were shown to increase with traffic level (which requires an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors.
- Accumulated micro-strain at flow number, as opposed to flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture.
- At the current time the Design Guide and its associated software need further improvement prior to implementation by owner/agencies.
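The temperature-viscosity sensitivity noted above can be illustrated with the A-VTS relationship commonly used in the Witczak framework, log10(log10(eta)) = A + VTS * log10(T_R), with eta in centipoise and T_R in degrees Rankine. The sketch below, with hypothetical test data, shows how A and VTS shift depending on which pair of test temperatures anchors the fit:

    import numpy as np

    def fit_a_vts(temps_f, viscosities_cp):
        """Least-squares fit of the A-VTS parameters from paired test data."""
        t_rankine = np.asarray(temps_f) + 459.67
        x = np.log10(t_rankine)
        y = np.log10(np.log10(viscosities_cp))
        vts, a = np.polyfit(x, y, 1)            # slope = VTS, intercept = A
        return a, vts

    # Hypothetical viscosity test results at three temperatures (deg F, cP).
    temps = np.array([140.0, 275.0, 347.0])
    visc = np.array([2.0e5, 4.0e2, 8.0e1])
    print(fit_a_vts(temps[:2], visc[:2]))   # fit using the two lower temperatures
    print(fit_a_vts(temps[1:], visc[1:]))   # fit using the two higher temperatures

Because the fit is a straight line in double-log space, small changes in which temperatures anchor it propagate strongly when the line is extrapolated, consistent with the sensitivity reported in the thesis.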
Abstract:
This research was conducted in August 2011 in the villages of Kigisu and Rubona in rural Uganda while the author was serving as a community health volunteer with the U.S. Peace Corps. The study used the contingent valuation method (CVM) to estimate the population's willingness to pay (WTP) for the operation and maintenance of an improved water source. The survey was administered to 122 of the 400 households in the community, gathering demographic information and data on health and water behaviors, and using an iterative bidding process to estimate WTP. Households indicated a mean WTP of 286 Ugandan Shillings (UGX) per 20 liters for a public tap and 202 UGX per 20 liters for a private tap. The data were also analyzed using an ordered probit model, which determined that the number of children in the home and the distance from the existing source were the primary variables influencing households' WTP.
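For readers unfamiliar with the technique, an ordered probit of the kind described above can be set up in a few lines with statsmodels. This is a minimal sketch with synthetic data; the explanatory variables (children in the home, distance to the existing source) follow the abstract, but the numbers are placeholders:

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 122  # households surveyed
    df = pd.DataFrame({
        "children": rng.integers(0, 8, size=n),
        "distance_km": rng.uniform(0.1, 3.0, size=n),
    })
    # Ordered WTP category reached in the iterative bidding game,
    # generated here only so the example runs end to end.
    latent = 0.3 * df["children"] + 0.8 * df["distance_km"] + rng.normal(size=n)
    df["wtp_class"] = pd.cut(latent, bins=[-np.inf, 1.0, 2.5, np.inf],
                             labels=["low", "med", "high"], ordered=True)

    model = OrderedModel(df["wtp_class"], df[["children", "distance_km"]],
                         distr="probit")
    print(model.fit(method="bfgs", disp=False).summary())

The fitted coefficients indicate the direction and strength of each variable's influence on the latent willingness to pay, mirroring the finding that household size and distance dominated.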
Abstract:
Information on phosphorus bioavailability can provide water quality managers with the support required to target the point-source and watershed loads contributing most significantly to water quality conditions. This study presents results from a limited sampling program focusing on the five largest sources of total phosphorus to the U.S. waters of the Great Lakes. The work validates the utility of a bioavailability-based approach, confirming that the method is robust and repeatable. Chemical surrogates for bioavailability were shown to hold promise; however, further research is needed to address site-to-site and seasonal variability before a universal relationship can be accepted. Recent changes in the relative contribution of P constituents to the total phosphorus analyte, and differences in their bioavailability, suggest that loading estimates of bioavailable P will need to address all three components: soluble reactive phosphorus (SRP), dissolved organic phosphorus (DOP), and particulate phosphorus (PP). A bioavailability approach taking advantage of chemical surrogate methodologies is recommended as a means of guiding P management in the Great Lakes.
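The three-component accounting recommended above can be expressed as a simple weighted sum; a minimal sketch, where the bioavailable fractions are placeholders that would in practice come from algal assays or chemical surrogates:

    def bioavailable_p(srp, dop, pp, f_srp=1.0, f_dop=0.4, f_pp=0.25):
        """Bioavailable P load as the fraction-weighted sum of SRP, DOP, and PP.

        Loads in metric tons P per year; the fractions are dimensionless and
        hypothetical here (SRP is typically treated as fully available).
        """
        return f_srp * srp + f_dop * dop + f_pp * pp

    # Hypothetical source: 12 t/yr SRP, 5 t/yr DOP, 30 t/yr PP.
    print(bioavailable_p(12.0, 5.0, 30.0), "t/yr bioavailable P")

Under these placeholder fractions, less than half of the 47 t/yr total load is bioavailable, which is why ranking sources by total phosphorus alone can misdirect management effort.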
Abstract:
During my Peace Corps service as a community health liaison in rural Uganda, I noticed that many improved water wells in our area had been abandoned. The communities described the water in these wells as reddish in color, having a foul taste and odor, discoloring clothes and food, and unable to produce lather for washing. Personal investigations and an initial literature search suggested that the primary contaminant was iron. The water in these wells had a low pH and a rusty metallic smell. The water produced early in the morning appeared very red, but it became more transparent as pumping continued. The iron components of many of these wells experienced accelerated corrosion, resulting in frequent pump failure. This rapid corrosion, coupled with the timing of the onset of iron contamination (months to years after these wells were completed), suggests that the most likely cause of the poor-quality water was iron-related bacteria (IRB) and/or sulfate-reducing bacteria. This report describes a remedy for iron contamination employed at five wells. The remedy involved disinfecting the wells with chlorine and replacing iron pump components with plastic and stainless steel. Iron concentrations in the wells were less than 1 mg/L when the wells were drilled but ranged from 2.5 to 40 mg/L prior to the remedy. After the remedy was applied, total iron concentrations returned to levels below 1 mg/L. The presence of IRB was measured in all of these wells using Biological Activity Reaction Tests. Although IRB are still present in all the wells, the dissolved iron concentrations remain less than 1 mg/L. This remedy is practical for rural areas because the work can be performed with only hand tools and costs less than US $850. Because this approach removes the source of iron contamination, substantial follow-up maintenance is not necessary.
Abstract:
The seasonal appearance of a deep chlorophyll maximum (DCM) in Lake Superior is a striking and widely observed phenomenon; however, its mechanisms of formation and maintenance are not well understood. As this phenomenon may be the reflection of an ecological driver, or a driver itself, a lack of understanding of its driving forces limits the ability to accurately predict and manage changes in this ecosystem. Key mechanisms generally associated with DCM dynamics (i.e. ecological, physiological, and physical phenomena) are examined individually and in concert to establish their roles. First, the prevailing paradigm, "the DCM is a great place to live", is analyzed through an integration of the results of laboratory experiments and field measurements. The analysis indicates that growth at this depth is severely restricted and thus unable to explain the full magnitude of the phenomenon. Additional contributing mechanisms, such as photoadaptation, settling, and grazing, are examined with a one-dimensional mathematical model of chlorophyll and particulate organic carbon. Settling has the strongest impact on the formation and maintenance of the DCM, transporting biomass to the metalimnion and resulting in the accumulation of algae, i.e. a peak in the particulate organic carbon profile. Subsequently, shade adaptation becomes manifest as a chlorophyll maximum deeper in the water column, where light conditions particularly favor the process. Shade adaptation mediates the magnitude, shape, and vertical position of the chlorophyll peak. Growth at the DCM depth makes only a marginal contribution, while grazing has an adverse effect on the extent of the DCM. The observed separation of the carbon biomass and chlorophyll maxima should caution scientists against equating the DCM with a large nutrient pool available to higher trophic levels. The ecological significance of the DCM should not be separated from the underlying carbon dynamics. When evaluated in its entirety, the DCM becomes the projected image of a structure that remains elusive to measure but represents the foundation of all higher trophic levels. These results also offer guidance in examining ecosystem perturbations such as climate change. For example, warming would be expected to prolong the period of thermal stratification, extending the late-summer period of suboptimal (phosphorus-limited) growth and the attendant transport of phytoplankton to the metalimnion. This reduction in epilimnetic algal production would decrease the supply of algae to the metalimnion, possibly reducing the supply of prey to the grazer community. This work demonstrates the value of modeling to challenge and advance our understanding of ecosystem dynamics, steps vital to the reliable testing of management alternatives.
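A minimal sketch of the kind of one-dimensional model described above may help make the settling argument concrete: particulate organic carbon subject to depth-dependent growth, a grazing loss, and settling, discretized with upwind finite differences. All parameter values are illustrative, not the calibrated Lake Superior values:

    import numpy as np

    nz, dz, dt = 100, 1.0, 0.01            # 100 m column, 1 m cells, time in days
    w = 0.5                                 # settling velocity, m/day
    g = 0.05                                # grazing loss rate, 1/day
    z = np.arange(nz) * dz
    mu = 0.6 * np.exp(-0.1 * z)             # light-limited growth rate, 1/day

    c = np.full(nz, 10.0)                   # POC, mg C/m^3
    for _ in range(int(200 / dt)):          # simulate 200 days
        settle = -w * np.diff(c, prepend=0.0) / dz   # upwind settling transport
        c += dt * (settle + (mu - g) * c)
    print(f"simulated POC maximum at {z[np.argmax(c)]:.0f} m depth")

Even this toy model reproduces the qualitative behavior described in the abstract: biomass produced near the surface accumulates at depth where the settling flux converges, independent of any in situ growth advantage at the DCM.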
Abstract:
Universities in the United States are applying more sustainable approaches to their dining service operations. "The increase in social consciousness and environmental stewardship on college campuses has spurred an array of new and innovative sustainability programs" (ARAMARK Higher Education 2008). University residence dining is typically cafeteria style, with students using trays to carry food. Studies report that food served without trays substantially reduces food waste as well as the water and electrical consumption associated with washing trays. Commonly, these reported results are estimates rather than measurements taken under actual operating conditions. This study uses measurements recorded under actual dining service conditions in student residence halls at Michigan Technological University to develop: 1) operational-specific data on the issues and potential savings associated with a conversion to trayless dining, and 2) life cycle assessment (LCA) cost and environmental impact analyses comparing dining with and without trays. For the LCA, the entire life cycle of the system is considered, from the manufacturing phase through the usage and disposal phases. The study shows that trayless dining reduces food waste because diners carry less food. The total savings for the dinner shifts over the standard academic year (205 days), with an average of 700 diners, is 7,032 pounds of food waste from the pre-rinse area (a 33% reduction) and 3,157 pounds of food waste from the pan-washing area (a 39% reduction). In addition, for each day of the study, the diners consumed more food during the trayless portion of the experiment. One possible explanation for the increased food consumption during this short-duration study is that diners found it more convenient to eat the extra food on their plates rather than carrying it back for disposal. The trayless dining experiment shows a reduction in dishwasher water, steam, and electrical consumption for each day of the study. The average reductions in dishwasher water, steam, and electrical consumption over the duration of the study were 10.7%, 9.5%, and 6.4%, respectively. Trayless dining implementation would result in decreases of 4,305 gallons of water consumption and wastewater discharge, 2.87 million BTU of steam consumption, and 158 kWh of electrical consumption for the dinner shift over the academic year. Results of the LCA indicate a total savings of $190.40 when trays are not used during the dinner shift. Trayless dining eliminates the manufacturing-stage CO2 eq emissions and cumulative energy demand entirely, and yields reductions of 1,005 kg CO2 eq and 861 MJ eq in the usage phase and 6,458 kg CO2 eq and 1,821 MJ eq at the end of the life cycle.
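A quick arithmetic check of the per-diner implication of the figures quoted above (205 dinner shifts, an average of 700 diners per shift):

    pre_rinse_lb = 7032.0                   # food waste avoided, pre-rinse area
    pan_wash_lb = 3157.0                    # food waste avoided, pan washing
    diner_meals = 205 * 700                 # dinner shifts x average diners
    per_meal_oz = (pre_rinse_lb + pan_wash_lb) / diner_meals * 16.0
    print(f"{per_meal_oz:.2f} oz of food waste avoided per diner-meal")

The result, roughly 1.1 oz per diner per dinner, shows that the large annual totals arise from a modest per-meal behavioral change multiplied across the academic year.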
Abstract:
Advances in information technology and global data availability have opened the door for assessments of sustainable development at a truly macro scale. It is now fairly easy to conduct a study of sustainability using the entire planet as the unit of analysis; this is precisely what this work set out to accomplish. The study began by examining some of the best-known composite indicator frameworks developed to measure sustainability at the country level today. Most of these were found to value human development factors and a clean local environment, but to gravely overlook consumption of (remote) resources in relation to nature's capacity to renew them, a basic requirement for a sustainable state. Thus, a new measuring standard is proposed, based on the Global Sustainability Quadrant approach. In a two-dimensional plot of nations' Human Development Index (HDI) vs. their Ecological Footprint (EF) per capita, the Sustainability Quadrant is defined by the area where both dimensions satisfy the minimum conditions of sustainable development: an HDI score above 0.8 (considered 'high' human development) and an EF below the fair Earth-share of 2.063 global hectares per person. After developing methods to identify those countries that are closest to the Quadrant in the present day and, most importantly, those that are moving towards it over time, the study tackled the question: what indicators of performance set these countries apart? To answer this, an analysis of raw data covering a wide array of environmental, social, economic, and governance performance metrics was undertaken. The analysis used country rank lists for each individual metric and compared them, using the Pearson product-moment correlation function, to the rank lists generated by the proximity and movement measures relative to the Quadrant. The analysis yielded a list of metrics which are, with a high degree of statistical significance, associated with proximity to, and movement towards, the Quadrant; most notably:
- Favorable for sustainable development: use of contraception, high life expectancy, high literacy rate, and urbanization.
- Unfavorable for sustainable development: high GDP per capita, high language diversity, high energy consumption, and high meat consumption.
- A momentary gain, but a burden in the long run: high carbon footprint and debt.
These results could serve as a solid stepping stone for the development of more reliable composite index frameworks for assessing countries' sustainability.
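A minimal sketch of the two measuring steps described above, assuming the HDI and EF thresholds quoted in the abstract; the country values and the sample metric are made up, and the distance measure here is a simple Euclidean one on the raw axes (the study's exact normalization may differ):

    import numpy as np
    from scipy.stats import pearsonr

    HDI_MIN, EF_MAX = 0.8, 2.063           # Quadrant thresholds from the study

    def distance_to_quadrant(hdi, ef):
        """Distance to the Sustainability Quadrant (zero if inside it)."""
        d_hdi = max(0.0, HDI_MIN - hdi)    # shortfall in human development
        d_ef = max(0.0, ef - EF_MAX)       # ecological footprint overshoot
        return float(np.hypot(d_hdi, d_ef))

    hdi = np.array([0.95, 0.78, 0.60, 0.85, 0.71])
    ef = np.array([7.5, 1.9, 1.1, 2.0, 3.4])          # gha per capita
    meat = np.array([95.0, 25.0, 10.0, 40.0, 55.0])   # candidate metric

    dist = np.array([distance_to_quadrant(h, e) for h, e in zip(hdi, ef)])
    ranks = lambda x: np.argsort(np.argsort(x))       # rank lists, as in the study
    r, p = pearsonr(ranks(meat), ranks(dist))
    print(f"rank correlation r = {r:.2f} (p = {p:.2f})")

Applying the Pearson product-moment function to rank lists, as the study does, is equivalent to a Spearman rank correlation; a significantly positive r here would flag the metric as unfavorable, since larger values sit farther from the Quadrant.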
Abstract:
Ultra-high performance fiber-reinforced concrete (UHPFRC) has arisen from the implementation of a variety of concrete engineering and materials science concepts developed over the last century. This material offers superior strength, serviceability, and durability over its conventional counterparts. One of the most important advantages of UHPFRC over other concrete materials is its ability to resist fracture through the use of randomly dispersed, discontinuous fibers and improvements to the fiber-matrix bond. Of particular interest are the material's ability to carry higher loads after first crack and its high fracture toughness. In this research, a study of the fracture behavior of UHPFRC with steel fibers was conducted to examine the effect of several parameters on the fracture behavior and to develop a fracture model based on a non-linear curve fit of the data. A series of three-point bending tests was performed on various single-edge notched prisms (SENPs), with compression tests performed for quality assurance. Testing was conducted on specimens of different cross-sections, span/depth (S/D) ratios, curing regimes, ages, and fiber contents. By comparing the results from prisms of different sizes, this study examines the weakening mechanism due to the size effect. Furthermore, employing the concept of fracture energy made it possible to compare fracture toughness and ductility. The model was determined from a fit to the P-w fracture curves and cross-referenced against the test results for comparability. Once obtained, the model was compared to the models proposed by the AFGC in 2003 and to the ACI 544 model for conventional fiber-reinforced concretes.
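The fracture-energy comparison mentioned above rests on the work-of-fracture idea: integrate the load against the crack opening and divide by the ligament area. A minimal sketch with hypothetical specimen dimensions and a toy P-w record (the self-weight correction is omitted):

    import numpy as np

    def fracture_energy(p_n, w_m, width_b, depth_d, notch_a0):
        """G_F in N/m (J/m^2) from a complete P-w softening record."""
        work = np.sum(0.5 * (p_n[1:] + p_n[:-1]) * np.diff(w_m))  # trapezoid rule
        ligament = width_b * (depth_d - notch_a0)  # uncracked cross-section, m^2
        return work / ligament

    w = np.linspace(0.0, 4e-3, 200)                  # crack opening, m
    p = 12e3 * np.exp(-w / 8e-4) * (1.0 + 50.0 * w)  # toy softening curve, N
    print(fracture_energy(p, w, 0.10, 0.10, 0.03), "N/m")

Because the denominator is the ligament area rather than the full cross-section, specimens of different sizes and notch depths can be compared on a common basis, which is what makes the size-effect comparison in the study possible.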
Abstract:
With proper application of Best Management Practices (BMPs), the impact of sediment on water bodies can be minimized. However, finding the optimal allocation of BMPs can be difficult, since there are numerous possible options. Economics also plays an important role in BMP affordability and, therefore, in the number of BMPs that can be placed in a given budget year. In this study, two methodologies for determining the optimal cost-effective BMP allocation are presented, coupling a watershed-level model, the Soil and Water Assessment Tool (SWAT), with two different methods: targeting and a multi-objective genetic algorithm (Non-dominated Sorting Genetic Algorithm II, NSGA-II). For demonstration, the two methodologies were applied to an agriculture-dominated watershed located in Lower Michigan to find the optimal allocation of filter strips and grassed waterways. For targeting, three different criteria for sediment yield minimization were investigated; in the process it was found that grassed waterways near the watershed outlet reduced the watershed outlet sediment yield the most under the conditions of this study, and cost minimization was included as a second objective during the cost-effective BMP allocation selection. NSGA-II was used to find the optimal BMP allocation for both sediment yield reduction and cost minimization (a sketch of this setup follows the abstract). Comparing the results and computational time of both methodologies, targeting was determined to be the better method for finding optimal cost-effective BMP allocations under the conditions of this study, since it provided more than 13 times as many solutions with better fitness for the objective functions while using less than one-eighth of the SWAT computational time required by NSGA-II with 150 generations.
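As referenced above, a sketch of how the NSGA-II search can be posed, using the pymoo library; the SWAT evaluation is represented by a placeholder function (in the study each candidate allocation triggered a SWAT run), and the site count and costs are hypothetical:

    import numpy as np
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize
    from pymoo.operators.sampling.rnd import BinaryRandomSampling
    from pymoo.operators.crossover.pntx import TwoPointCrossover
    from pymoo.operators.mutation.bitflip import BitflipMutation

    N_SITES = 30   # candidate filter-strip / grassed-waterway locations
    COST = np.random.default_rng(1).uniform(500.0, 5000.0, N_SITES)  # $/site

    def swat_sediment_yield(placement):
        """Placeholder for a SWAT run returning outlet sediment yield (t/yr)."""
        n = placement.sum()
        return 1000.0 - 25.0 * n + 0.5 * n ** 2    # toy diminishing returns

    class BMPAllocation(ElementwiseProblem):
        def __init__(self):
            super().__init__(n_var=N_SITES, n_obj=2, xl=0, xu=1)

        def _evaluate(self, x, out, *args, **kwargs):
            x = x.astype(bool)                     # 1 = BMP placed at the site
            out["F"] = [swat_sediment_yield(x), COST[x].sum()]

    algorithm = NSGA2(pop_size=100, sampling=BinaryRandomSampling(),
                      crossover=TwoPointCrossover(), mutation=BitflipMutation())
    res = minimize(BMPAllocation(), algorithm, ("n_gen", 150), seed=1)
    print(res.F)    # Pareto front of (sediment yield, cost) trade-offs

The computational burden the abstract mentions becomes obvious here: 100 individuals over 150 generations implies on the order of 15,000 SWAT evaluations, which is why the cheaper targeting approach compared favorably.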
Abstract:
Diagenesis of particulate organic matter in lake sediments consumes and produces chemical species that have significant effects on water quality, e.g. oxygen and nitrate depletion and the attendant mediation of nutrient and metal recycling. A mechanistic, mass balance model (SED2K) is applied here to quantify the time course and magnitude of the sediment response to reductions in depositional fluxes of organic matter. In applying the model, direct, site-specific measurements of sedimentation and POM deposition rates in Onondaga Lake are used, leaving only the diagenesis (solubilization) coefficient to be estimated by fitting downcore POM profiles. Model calibration is constrained by the dual requirement that both the POM profiles and the time series of efflux of the products of diagenesis must be matched. Simulations point to the existence of POM preservation processes at depth, a phenomenon that may enhance the timing and magnitude of lake recovery.
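For orientation, the downcore profile that a first-order diagenesis formulation produces at steady state has a simple closed form: with burial velocity w and solubilization coefficient k, G(z) = G(0) * exp(-k * z / w). A minimal sketch with illustrative values (not the Onondaga Lake calibration):

    import numpy as np

    w = 0.8        # burial (sedimentation) velocity, cm/yr
    k = 0.05       # diagenesis (solubilization) coefficient, 1/yr
    g0 = 12.0      # POM at the sediment-water interface, % dry weight
    z = np.linspace(0.0, 30.0, 7)                # depth below interface, cm
    print(np.round(g0 * np.exp(-k * z / w), 2))  # predicted downcore POM

Deviations of measured profiles from this exponential decline, such as more POM at depth than the fitted k predicts, are the signature of the preservation processes the simulations point to.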
Abstract:
A significant cost for foundations is the design and installation of piles when they are required due to poor ground conditions. Not only is it important that piles be designed properly, but the installation equipment and total cost must also be evaluated. A number of methods have been developed to assist in the evaluation of piles. In this research, three of these methods were investigated: those developed by the Federal Highway Administration (FHWA), the US Army Corps of Engineers, and the American Petroleum Institute (API). The results from these methods were entered into the program GRLWEAP to assess pile drivability and to provide a standard basis for comparing the three methods. An additional element of this research was the development of Excel spreadsheets to implement the three methods. Currently, the Army Corps and API methods have no publicly available software and must be performed manually, which requires reading data off figures and tables and can introduce error into the prediction of pile capacities. Following their development, the Excel spreadsheets were validated against both manual calculations and existing data sets to ensure that the output is correct. To evaluate the three pile capacity methods, data were utilized from four project sites in North America. The data included site geotechnical data along with field-determined pile capacities. To achieve a standard comparison of the data, the pile capacities and geotechnical data from the three methods were entered into GRLWEAP. The sites consisted of both cohesive and cohesionless soils: one site was primarily cohesive, one was primarily cohesionless, and the other two consisted of inter-bedded cohesive and cohesionless soils. Based on this limited data set, the results indicated that the US Army Corps of Engineers method compared most closely with the field test data, followed to a lesser degree by the API method. The FHWA's DRIVEN program compared favorably in cohesive soils but over-predicted in cohesionless material.
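As an illustration of the kind of static-capacity calculation the spreadsheets implement, a minimal sketch of the API RP 2A alpha-method for shaft friction in clay; the layer data are hypothetical:

    def api_alpha(su, sigma_v_eff):
        """API RP 2A adhesion factor alpha, capped at 1.0.

        su: undrained shear strength (kPa); sigma_v_eff: effective
        overburden stress (kPa) at the point in question.
        """
        psi = su / sigma_v_eff
        alpha = 0.5 * psi ** -0.5 if psi <= 1.0 else 0.5 * psi ** -0.25
        return min(alpha, 1.0)

    def shaft_capacity_kn(layers, perimeter_m):
        """Sum alpha * su * As over layers of (thickness_m, su_kPa, sv_kPa)."""
        return sum(api_alpha(su, sv) * su * perimeter_m * t
                   for t, su, sv in layers)

    layers = [(3.0, 25.0, 30.0), (5.0, 60.0, 75.0), (4.0, 100.0, 130.0)]
    print(shaft_capacity_kn(layers, perimeter_m=1.2), "kN shaft resistance")

Implementing the method in code, rather than reading alpha off the published figures, removes exactly the manual transcription error the spreadsheets were built to avoid.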
Abstract:
High concentrations of fluoride naturally occurring in the groundwater of the Arusha region of Tanzania cause dental, skeletal, and non-skeletal fluorosis in up to 90% of the region's population [1]. Symptoms of this incurable but completely preventable disease include brittle, discolored teeth, malformed bones, and stiff, swollen joints. The consumption of high-fluoride water has also been shown to cause headaches and insomnia [2] and to adversely affect the development of children's intelligence [3, 4]. Despite the fact that this array of symptoms may significantly impact a society's development and its citizens' ability to perform work and enjoy a reasonable quality of life, little is offered in the Arusha region in the form of solutions for the poor, those hardest hit by the problem. Multiple defluoridation technologies do exist, yet none are successfully reaching the Tanzanian public. This report takes a closer look at the efforts of one local organization, the Defluoridation Technology Project (DTP), to address the region's fluorosis problem through the production and dissemination of bone char defluoridation filters, an appropriate-technology solution that is proven to work. The goal of this research is to improve the sustainability of DTP's operations and help the organization reach a wider range of clients so that it may reduce the occurrence of fluorosis more effectively. This was pursued first through laboratory testing of current products. Results of this testing show a wide range in uptake capacity across batches of bone char, emphasizing the need to modify the kiln design in order to produce a more consistent, higher-quality product. The issue of filter dissemination was addressed through the development of a multi-level, customer-funded business model promoting the availability of filters to Tanzanians of all socioeconomic levels. Central to this model is the recommendation to focus on community-managed, institutional-sized filters in order to make fluoride-free water available to lower-income clients and to increase Tanzanian involvement at the management level.
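The uptake-capacity testing mentioned above reduces to a simple mass balance on a batch test; a minimal sketch with hypothetical numbers:

    def uptake_capacity(c0_mg_l, ce_mg_l, volume_l, media_g):
        """Fluoride uptake q_e in mg F per g of bone char."""
        return (c0_mg_l - ce_mg_l) * volume_l / media_g

    # Hypothetical batch test: 10 mg/L feed water, 2.4 mg/L at
    # equilibrium, 1 L of water contacted with 5 g of char.
    print(uptake_capacity(10.0, 2.4, 1.0, 5.0), "mg F/g")   # -> 1.52

Batch-to-batch scatter in q_e across kiln runs is what the laboratory results flagged, and it translates directly into uncertainty in how long a filter of a given char mass will keep treated water below the target fluoride level.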
Abstract:
Strain rate significantly affects the strength of a material. The Split-Hopkinson Pressure Bar (SHPB) was initially used to study the effects of high strain rates (~10^3 1/s) in the testing of metals. Later modifications to the original technique allowed for the study of brittle materials such as ceramics, concrete, and rock. While material properties of wood at static and creep strain rates are readily available, data on the dynamic properties of wood are sparse. Previous work using the SHPB technique with wood has been limited in scope to the variation of only a few conditions, and tests of the applicability of SHPB theory to wood have not been performed. Tests were conducted using a large-diameter (3.0 inch (75 mm)) SHPB. The strain rate and total strain applied to a specimen depend on the striker bar length and its velocity at impact; pulse shapers are used to further modify the strain rate and change the shape of the strain pulse. A series of tests was used to determine the test conditions necessary to produce a strain rate, total strain, and pulse shape appropriate for testing wood specimens. Hard maple, consisting of sugar maple (Acer saccharum) and black maple (Acer nigrum), and eastern white pine (Pinus strobus) specimens were used to represent a dense hardwood and a low-density softwood. Specimens were machined to diameters of 2.5 and 3.0 inches, and an assortment of lengths was tested to determine the appropriate specimen dimensions. Longitudinal specimens of 1.5 inch length and radial and tangential specimens of 0.5 inch length were found to be most applicable to SHPB testing. Stress/strain curves were generated from the SHPB data and validated with 6061-T6 aluminum and wood specimens. Stress was indirectly corroborated with gaged aluminum specimens. Specimen strain was assessed with strain gages, digital image analysis, and measurement of residual strain to confirm the strain calculated from the SHPB data. The SHPB was found to be a useful tool for accurately assessing the material properties of wood under high strain rates (70 to 340 1/s) and short load durations (70 to 150 μs to compressive failure).
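For reference, the standard one-wave SHPB reduction used to generate those stress/strain curves takes specimen stress from the transmitted pulse and strain rate from the reflected pulse. A minimal sketch with synthetic pulses standing in for the gaged bar records (bar properties are typical values, not those of the MTU apparatus):

    import numpy as np

    E_BAR = 200e9                       # bar elastic modulus, Pa (steel)
    C0 = 5000.0                         # bar wave speed, m/s
    A_BAR, A_SPEC = 4.56e-3, 3.17e-3    # bar and specimen areas, m^2
    L_SPEC = 0.038                      # specimen length, m (1.5 in)

    t = np.linspace(0.0, 150e-6, 600)               # 150 us record
    eps_r = -0.001 * np.sin(np.pi * t / 150e-6)     # reflected pulse (synthetic)
    eps_t = 0.0003 * np.sin(np.pi * t / 150e-6)     # transmitted pulse (synthetic)

    strain_rate = -2.0 * C0 * eps_r / L_SPEC        # specimen strain rate, 1/s
    strain = np.cumsum(strain_rate) * (t[1] - t[0]) # integrate the rate
    stress = E_BAR * (A_BAR / A_SPEC) * eps_t       # specimen stress, Pa

    print(f"peak strain rate {strain_rate.max():.0f} 1/s, "
          f"peak stress {stress.max() / 1e6:.1f} MPa")

The peak strain rate here lands near 260 1/s, inside the 70 to 340 1/s range reported for the wood tests; checking that the computed stress agrees with gaged specimens is the validation step the abstract describes.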
Abstract:
The South Florida Water Management District (SFWMD) manages and operates numerous water control structures that are subject to scour. In an effort to reduce scour downstream of these gated structures, laboratory experiments were performed to investigate the effect of active air injection downstream of the terminal structure of a gated spillway on the depth of the scour hole. A literature review of similar research revealed significant variables such as the ratio of headwater-to-tailwater depths, the diffuser angle, sediment uniformity, and the ratio of air-to-water volumetric discharge; the experimental design was based on the analysis of several of these non-dimensional parameters. Bed scouring at stilling basins downstream of gated spillways has been identified as posing a serious risk to a spillway's structural stability. Although this type of scour has been studied in the past, it continues to represent a real threat to water control structures and requires additional attention. A hydraulic scour channel comprising a head tank, flow-straightening section, gated spillway, stilling basin, scour section, sediment trap, and tail tank was used for this analysis. Experiments were performed in a laboratory channel consisting of a 1:30 scale model of the SFWMD S65E spillway structure. To ascertain the feasibility of air injection for scour reduction, a proof-of-concept study was performed. Experiments were conducted without air entrainment and with high, medium, and low air entrainment rates for high and low headwater conditions. For the cases with no air entrainment, there was excessive scour downstream of the structure due to a downward roller formed upon exiting the downstream sill of the stilling basin. When air was introduced vertically just downstream of, and at the same level as, the stilling basin sill, it was found that air entrainment does reduce scour depth, by up to 58% depending on the air flow rate, but shifts the deepest scour location from the center to the sides of the channel bed. Various hydraulic flow conditions were tested without air injection to determine which scenario caused the most scour. That scenario, uncontrolled free flow, in which water does not contact the gate and the water elevation in the stilling basin is lower than the spillway crest, was used for the remainder of the air injection experiments. Various air flow rates, diffuser elevations, air hole diameters, air hole spacings, diffuser angles, and widths were tested in over 120 experiments. Optimal parameters include air injection at a rate that results in a water-to-air ratio of 0.28, air holes 1.016 mm in diameter across the entire width of the stilling basin, and a vertically oriented injection pattern. Detailed flow measurements were collected for one case with air injection and one without, under an identical flow scenario: a high flow rate and upstream headwater depth with a low tailwater depth. Equilibrium bed scour and velocity measurements were taken with an Acoustic Doppler Velocimeter at nearly 3,000 points. The velocity data were used to construct a vector plot to identify which flow components contribute to the scour hole. Additionally, turbulence parameters (turbulence intensities, normalized mean flow, normalized kinetic energy, and the anisotropy of turbulence) were calculated to help explain why air injection reduces bed scour. A clear trend emerged showing that air injection reduces turbulence near the bed and therefore reduces scour potential.
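The turbulence parameters listed above come from a Reynolds decomposition of the ADV velocity records; a minimal sketch for one measurement point, using synthetic time series in place of the real records:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000                                       # samples at one ADV point
    u = 0.60 + 0.08 * rng.standard_normal(n)       # streamwise velocity, m/s
    v = 0.02 + 0.05 * rng.standard_normal(n)       # transverse velocity, m/s
    w = -0.01 + 0.04 * rng.standard_normal(n)      # vertical velocity, m/s

    u_f, v_f, w_f = u - u.mean(), v - v.mean(), w - w.mean()   # fluctuations
    tke = 0.5 * (np.mean(u_f**2) + np.mean(v_f**2) + np.mean(w_f**2))
    ti = np.sqrt(np.mean(u_f**2)) / u.mean()       # streamwise intensity
    print(f"TKE = {tke:.4f} m^2/s^2, turbulence intensity = {ti:.1%}")

Repeating this at each of the ~3,000 measurement points, with and without air injection, yields the near-bed turbulence maps behind the trend reported above.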
Abstract:
During a project, managers encounter numerous contingencies and face the challenging task of making decisions that will effectively keep the project on track. This task is very challenging because construction projects are non-prototypical and the processes are irreversible. Therefore, it is critical to apply a methodological approach during the planning phase to develop a few alternative management decision strategies, which can be deployed to manage alternative scenarios resulting from expected and unexpected disruptions in the as-planned schedule. Such a methodology should have the following features, which are missing in the existing research: (1) looking at the effects of local decisions on global project outcomes; (2) studying how a schedule responds to decisions and disruptive events, because the risk in a schedule is a function of the decisions made; (3) establishing a method to assess and improve management decision strategies; and (4) developing project-specific decision strategies, because each construction project is unique and the lessons from a particular project cannot easily be applied to projects with different contexts. The objective of this dissertation is to develop a schedule-based simulation framework to design, assess, and improve sequences of decisions for the execution stage. The contribution of this research is the introduction of decision strategies to manage a project and the establishment of an iterative methodology to continuously assess and improve decision strategies and schedules. Project managers or schedulers can implement the methodology at the planning stage to develop and identify schedules accompanied by suitable decision strategies for managing a project. The developed methodology also lays the foundation for an algorithm that continuously and automatically generates satisfactory schedules and strategies throughout the construction life of a project. Departing from the study of isolated daily decisions, the proposed framework introduces the notion of decision strategies to manage the construction process. A decision strategy is a sequence of interdependent decisions determined by resource allocation policies such as labor, material, equipment, and space policies. The schedule-based simulation framework consists of two parts: experiment design and result assessment. The core of the experiment design is the establishment of an iterative method to test and improve decision strategies and schedules, based on the introduction of decision strategies and the development of a schedule-based simulation testbed. The simulation testbed used is the Interactive Construction Decision Making Aid (ICDMA), developed previously. ICDMA has an emulator that duplicates the construction process and a random event generator that allows the decision-maker to respond to disruptions in the emulation. It is used to study how the schedule responds to these disruptions and the corresponding decisions made over the duration of the project, while accounting for cascading impacts and dependencies between activities. The dissertation is organized into two parts: the first presents the existing research, identifies the departure points of this work, and develops the schedule-based simulation framework to design, assess, and improve decision strategies; the second applies the framework to investigate specific research problems.
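A compact sketch of the assess-and-improve loop the framework formalizes: a decision strategy is represented as a sequence of resource-allocation policy choices, scored by repeated simulation of the schedule under random disruptions, then improved iteratively. The simulator below is a toy stand-in for ICDMA, and the policy names are hypothetical:

    import random

    POLICIES = ["add_crew", "resequence", "expedite_material", "do_nothing"]

    def simulate_project(strategy, seed):
        """Toy stand-in for an ICDMA run: returns project duration (days)."""
        rng = random.Random(seed)
        delay = sum(rng.uniform(0.0, 3.0) for p in strategy if p == "do_nothing")
        return 120.0 + delay - 2.0 * strategy.count("add_crew")

    def assess(strategy, n_runs=50):
        """Mean duration over many random disruption scenarios."""
        return sum(simulate_project(strategy, s) for s in range(n_runs)) / n_runs

    strategy = ["do_nothing"] * 10          # initial, purely reactive strategy
    for _ in range(200):                    # iterative improvement step
        candidate = list(strategy)
        candidate[random.randrange(len(candidate))] = random.choice(POLICIES)
        if assess(candidate) < assess(strategy):
            strategy = candidate
    print(strategy, round(assess(strategy), 1))

The real framework evaluates strategies against a full schedule with cascading activity dependencies rather than a scalar duration, but the loop structure (propose, simulate under disruptions, assess, retain improvements) is the same.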