6 results for flood frequency evaluation
in Digital Commons - Michigan Tech
Abstract:
United States federal agencies assess flood risk using Bulletin 17B procedures, which assume annual maximum flood series are stationary. This is a significant limitation of current flood frequency models, as the flood distribution is thereby assumed to be unaffected by trends or periodicity in atmospheric/climatic variables and/or anthropogenic activities. The validity of this assumption is at the core of this thesis, which aims to improve understanding of the forms and potential causes of non-stationarity in flood series for moderately impaired watersheds in the Upper Midwest and Northeastern US. Prior studies investigated non-stationarity in flood series for unimpaired watersheds; however, because the majority of streams lie in areas of increasing human activity, the relative and coupled impacts of natural and anthropogenic factors must be considered so that non-stationary flood frequency models can be developed for flood risk forecasting over planning horizons relevant to large-scale water resources planning and management.
Abstract:
Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data are unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for the delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using the alternative physically based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition.
In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development, provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
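The classification step this abstract describes, assigning an ungauged site to a delineated region using physical similarity measures such as basin slope, elevation, and soil drainage, can be illustrated with a minimal nearest-centroid sketch. The region names, characteristic values, and scaling factors below are entirely hypothetical; the dissertation itself uses multivariate statistical techniques, not this toy classifier.

```python
import math

# Hypothetical basin characteristics for gauged sites, grouped into two
# delineated regions (values are illustrative, not from the study).
# Each tuple: (basin slope, elevation in m, soil drainage class)
regions = {
    "steep_upland": [(0.25, 900.0, 2.0), (0.30, 1100.0, 1.5)],
    "flat_lowland": [(0.02, 150.0, 5.0), (0.04, 200.0, 4.5)],
}

def centroid(points):
    """Mean of each characteristic across a region's gauged sites."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def classify(site, regions, scales):
    """Assign an ungauged site to the region with the nearest centroid,
    using scaled Euclidean distance so unlike units are comparable."""
    best, best_d = None, float("inf")
    for name, pts in regions.items():
        c = centroid(pts)
        d = math.sqrt(sum(((site[i] - c[i]) / scales[i]) ** 2 for i in range(3)))
        if d < best_d:
            best, best_d = name, d
    return best

scales = (0.1, 500.0, 2.0)  # rough characteristic scales for standardization
print(classify((0.28, 1000.0, 1.8), regions, scales))
```

Once a site is classified, the flood quantiles developed for that region can be transferred to it, which is the mechanism the abstract extends beyond the original model-development area.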
Abstract:
Global climate change is predicted to affect the frequency and severity of flood events. In this study, output from General Circulation Models (GCMs) for a range of possible future climate scenarios was used to force hydrologic models for four case study watersheds built using the Soil and Water Assessment Tool (SWAT). GCM output was applied with either the "delta change" method or a bias correction. Potential changes in flood risk are assessed based on the modeling results and possible relationships to watershed characteristics. Differences in model outputs produced by the two methods of adjusting GCM output are also compared. Preliminary results indicate that watersheds exhibiting higher proportions of runoff in streamflow are more vulnerable to changes in flood risk. The delta change method appears to be more useful when simulating extreme events, as it better preserves daily climate variability than bias-corrected GCM output does.
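The two GCM post-processing approaches contrasted above can be sketched minimally. The function names and the simple mean-ratio forms below are illustrative assumptions, not the study's exact implementation; in practice both adjustments are usually applied per month or per quantile.

```python
def delta_change(observed, gcm_baseline_mean, gcm_future_mean):
    """Scale each observed value by the GCM-projected change ratio.
    The observed record keeps its own daily variability; only the
    mean shift projected by the GCM is imposed."""
    ratio = gcm_future_mean / gcm_baseline_mean
    return [p * ratio for p in observed]

def bias_correct(gcm_future, gcm_baseline_mean, observed_mean):
    """Simple mean-ratio bias correction: rescale the raw GCM future
    series so its baseline bias against observations is removed.
    Daily variability now comes from the GCM, not the gauge record."""
    ratio = observed_mean / gcm_baseline_mean
    return [p * ratio for p in gcm_future]

obs = [2.0, 0.0, 10.0, 4.0]  # observed daily precipitation (mm), hypothetical
print(delta_change(obs, gcm_baseline_mean=3.5, gcm_future_mean=4.2))
```

The contrast makes the abstract's conclusion concrete: delta change preserves the observed day-to-day sequence (important for extremes), while bias correction inherits the GCM's own, often smoother, daily variability.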
Abstract:
Standard procedures for forecasting flood risk (Bulletin 17B) assume annual maximum flood (AMF) series are stationary, meaning the distribution of flood flows is not significantly affected by climatic trends/cycles, or anthropogenic activities within the watershed. Historical flood events are therefore considered representative of future flood occurrences, and the risk associated with a given flood magnitude is modeled as constant over time. However, in light of increasing evidence to the contrary, this assumption should be reconsidered, especially as the existence of nonstationarity in AMF series can have significant impacts on planning and management of water resources and relevant infrastructure. Research presented in this thesis quantifies the degree of nonstationarity evident in AMF series for unimpaired watersheds throughout the contiguous U.S., identifies meteorological, climatic, and anthropogenic causes of this nonstationarity, and proposes an extension of the Bulletin 17B methodology which yields forecasts of flood risk that reflect climatic influences on flood magnitude. To appropriately forecast flood risk, it is necessary to consider the driving causes of nonstationarity in AMF series. Herein, large-scale climate patterns—including El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO)—are identified as influencing factors on flood magnitude at numerous stations across the U.S. Strong relationships between flood magnitude and associated precipitation series were also observed for the majority of sites analyzed in the Upper Midwest and Northeastern regions of the U.S. Although relationships between flood magnitude and associated temperature series are not apparent, results do indicate that temperature is highly correlated with the timing of flood peaks. 
Although the watersheds considered are classified as unimpaired, the analyses suggest that identified change-points in AMF series are due to dam construction and other types of regulation and diversion. Although not explored herein, trends in AMF series are also likely to be partially explained by changes in land use and land cover over time. Results obtained herein suggest that improved forecasts of flood risk may be obtained using a simple modification of the Bulletin 17B framework, wherein the mean and standard deviation of the log-transformed flows are modeled as functions of climate indices associated with oceanic-atmospheric patterns (e.g., AMO, ENSO, NAO, and PDO) with lead times between 3 and 9 months. Herein, one-year-ahead forecasts of the mean and standard deviation, and subsequently of flood risk, are obtained by applying site-specific multivariate regression models, which reflect the phase and intensity of a given climate pattern as well as possible impacts of coupling between the climate cycles. These forecasts of flood risk are compared with forecasts derived using the existing Bulletin 17B model; large differences in the one-year-ahead forecasts are observed in some locations. The increased knowledge of the inherent structure of AMF series and the improved understanding of the physical and/or climatic causes of nonstationarity gained from this research should inform the formulation of a physical-causal statistical model, incorporating both climatic variations and human impacts, for flood risk over the longer planning horizons (e.g., 10-, 50-, and 100-year) necessary for water resources design, planning, and management.
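The modified framework this abstract describes can be sketched minimally. The regression coefficients below are entirely hypothetical, and a log-normal quantile is used as a zero-skew simplification of the log-Pearson Type III distribution that Bulletin 17B actually prescribes.

```python
# Hypothetical regression coefficients relating the mean and standard
# deviation of log10 annual peak flows to lagged climate indices
# (AMO, ENSO, NAO, PDO at 3-9 month lead). Illustrative values only.
beta_mean = {"intercept": 3.8, "AMO": 0.10, "ENSO": -0.05, "NAO": 0.02, "PDO": 0.04}
beta_std  = {"intercept": 0.25, "AMO": 0.02, "ENSO": 0.01, "NAO": 0.00, "PDO": -0.01}

def forecast_moments(indices, bm, bs):
    """One-year-ahead forecast of the mean and standard deviation of the
    log10 flows as linear functions of the lagged climate indices."""
    mu = bm["intercept"] + sum(bm[k] * v for k, v in indices.items())
    sd = bs["intercept"] + sum(bs[k] * v for k, v in indices.items())
    return mu, sd

def flood_quantile(mu, sd, z):
    """Flood magnitude for standard normal deviate z, assuming a
    log-normal distribution (the zero-skew case of log-Pearson III)."""
    return 10 ** (mu + z * sd)

# Hypothetical index values for the coming year
indices = {"AMO": 0.3, "ENSO": -1.0, "NAO": 0.5, "PDO": 0.8}
mu, sd = forecast_moments(indices, beta_mean, beta_std)
q100 = flood_quantile(mu, sd, z=2.326)  # roughly the 1% annual exceedance flow
print(round(mu, 3), round(sd, 3), round(q100))
```

Because mu and sd move with the forecast index values, the implied flood risk varies from year to year, which is exactly where this model departs from the constant-risk assumption of the unmodified Bulletin 17B procedure.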
Abstract:
The complexity and challenge presented by asphalt material motivate researchers and engineers to investigate its behavior in order to develop a better understanding and improve the performance of asphalt pavement. Over the decades, a wide range of modifications at the macro, meso, micro, and nano scales has been explored to improve the performance of asphalt pavement. This study was initiated to utilize newly developed asphalt modifier pellets. These pellets consisted of different combinations of calcium carbonate (CaCO3), linear low-density polyethylene (LLDPE), and titanate coupling agent (CA) intended to improve asphalt binder and pavement performance across a wide range of temperatures and loading rates. These materials were selected for their unique characteristics and promising findings from various industries, especially as modifiers in pavement materials. The challenge is ensuring that the CaCO3 disperses well in the mixture. The rheological properties of the neat asphalt binder PG58-28 and the modified asphalt binders (PG58-28/LLDPE, PG58-28/CaCO3, PG58-28/CaCO3/LLDPE, and PG58-28/CaCO3/LLDPE/CA) were determined using the rotational viscometer (RV) test, the dynamic shear rheometer (DSR) test, and the bending beam rheometer (BBR) test. In the DSR test, the specimens were evaluated using frequency sweep and multiple stress creep recovery (MSCR). The asphalt mixtures (aggregate/PG58-28, aggregate/PG58-28/LLDPE, aggregate/PG58-28/CaCO3, aggregate/PG58-28/LLDPE/CaCO3, and aggregate/PG58-28/LLDPE/CaCO3/CA) were evaluated using the four-point beam fatigue test, the dynamic modulus (E*) test, and the tensile strength test (to determine the tensile strength ratio, TSR). The RV test results show that all modified asphalt binders have a higher viscosity than the neat asphalt binder (PG58-28). Based on the Jnr results (from the MSCR test), all the modified asphalt binders have better resistance to rutting than the neat asphalt binder.
Higher modifier contents resulted in a better recovery percentage of the asphalt binder (higher resistance to rutting), except for specimens prepared using the PECC modified asphalt binder (PG58-28/CaCO3/LLDPE). The BBR test results show that all the modified asphalt binders performed comparably in terms of resistance to low-temperature cracking, except the specimen prepared using the LLDPE modifier. Overall, the 5 wt% LLDPE modified asphalt binder was found to be the best asphalt binder in terms of resistance to rutting, while the 3 wt% PECC-1CA modified asphalt binder can be considered the best in terms of resistance to thermal cracking, with the lowest mean critical cracking temperature. CaCO3 was found to be useful only in improving the resistance of the asphalt mixture to fatigue cracking, whereas the application of LLDPE reduced the fatigue life of the asphalt mixtures. Adding LLDPE and the coupling agent throughout this study did not sufficiently improve the elastic behavior that is essential to resistance to fatigue cracking. In contrast, the application of LLDPE increased the indirect tensile strength values and TSR of the asphalt mixtures, indicating better resistance to moisture damage. The coupling agent did not change the behavior of the asphalt mixture, which could be due to imbalanced effects resulting from the combination of LLDPE and CaCO3 in the asphalt binder; further investigations without incorporating CaCO3 should be conducted. To assess the feasibility of using LLDPE and the coupling agent as modifiers in asphalt pavements, more research should be conducted on different percentages of LLDPE (less than 3 wt%) and on a higher and wider range of coupling agent content, from 3 wt% to 7 wt% based on the polymer mass.
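For context on the Jnr and percent-recovery results reported above, the per-cycle MSCR arithmetic can be sketched as follows. The strain values are hypothetical, and the actual test (AASHTO T 350) averages these quantities over multiple creep-recovery cycles at two stress levels.

```python
def mscr_cycle_metrics(strain_initial, strain_peak, strain_end, stress_kpa):
    """Per-cycle MSCR quantities: percent recovery and non-recoverable
    creep compliance Jnr (in 1/kPa)."""
    creep = strain_peak - strain_initial         # strain gained during the load step
    unrecovered = strain_end - strain_initial    # strain remaining after recovery
    recovery_pct = 100.0 * (creep - unrecovered) / creep
    jnr = unrecovered / stress_kpa
    return recovery_pct, jnr

# Illustrative cycle at the 3.2 kPa stress level (strain values hypothetical)
rec, jnr = mscr_cycle_metrics(0.0, 0.080, 0.048, 3.2)
print(rec, jnr)  # lower Jnr => better rutting resistance, per the abstract
```

This is why the abstract can read "better Jnr" as "better rutting resistance": a binder that leaves less unrecovered strain per unit stress deforms less permanently under repeated loading.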
Abstract:
Bluetooth wireless technology is a robust short-range communications system designed for low power (10 meter range) and low cost. It operates in the 2.4 GHz Industrial, Scientific and Medical (ISM) band and employs two techniques for minimizing interference: a frequency hopping scheme, which nominally splits the 2.400-2.485 GHz band into 79 frequency channels, and a time division duplex (TDD) scheme, which is used to switch to a new frequency channel on 625 μs boundaries. During normal operation a Bluetooth device is active on a different frequency channel every 625 μs, thus minimizing the chances of continuous interference impacting the performance of the system. The smallest unit of a Bluetooth network is called a piconet, and it can have a maximum of eight nodes. Bluetooth devices must assume one of two roles within a piconet, master or slave: the master governs quality of service and the frequency hopping schedule within the piconet, and the slave follows the master's schedule. A piconet has a single master and up to seven active slaves. By allowing devices to hold roles in multiple piconets through time multiplexing, i.e., slave/slave or master/slave, Bluetooth technology allows multiple piconets to be interconnected into larger networks called scatternets. The Bluetooth technology is explored here in the context of enabling ad-hoc networks. The Bluetooth specification provides flexibility in the scatternet formation protocol, outlining only the mechanisms necessary for future protocol implementations. A new protocol for scatternet formation and maintenance, mscat, is presented, and its performance is evaluated using a Bluetooth simulator. The free variables manipulated in this study include device activity and the probabilities of devices performing discovery procedures. The relationship between the role a device has in the scatternet and its probability of performing discovery was examined and related to the scatternet topology formed.
The results show that mscat creates dense network topologies for networks of 30, 50, and 70 nodes. The mscat protocol yields approximately a 33% increase in slaves per piconet and a reduction of approximately 12.5% in average roles per node. For the 50-node scenarios, the parameter set producing the best outcome is an unconnected node inquiry probability (UP) of 10%, a master node inquiry probability (MP) of 80%, and a slave inquiry probability (SP) of 40%. The mscat protocol extends the Bluetooth specification for the formation and maintenance of scatternets in an ad-hoc network.
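The hopping behavior described in this abstract (79 channels, a new channel every 625 μs slot, schedule set by the master) can be illustrated with a toy model. The hash below is an assumption made for illustration only; it is not the actual hop selection kernel, which the Bluetooth specification derives from the master's clock and device address.

```python
BASE_FREQ_MHZ = 2402   # channel 0 of the 2.4 GHz ISM band
NUM_CHANNELS = 79      # nominal channel count in the 2.400-2.485 GHz band
SLOT_US = 625          # TDD slot length; a new channel each slot

def channel_at(slot, master_seed):
    """Greatly simplified stand-in for the hop selection kernel: derive a
    pseudo-random channel from the slot number and a master-specific seed,
    so all piconet members that share the seed hop in lockstep."""
    return (slot * 17 + master_seed) % NUM_CHANNELS

def frequency_mhz(slot, master_seed):
    """Carrier frequency used during the given 625 us slot."""
    return BASE_FREQ_MHZ + channel_at(slot, master_seed)

# Two consecutive slots land on different channels, which is what limits
# continuous interference on any single frequency.
f0 = frequency_mhz(0, master_seed=11)
f1 = frequency_mhz(1, master_seed=11)
print(f0, f1)
```

A slave/slave or master/slave bridge node in a scatternet must track one such sequence per piconet it participates in, time-multiplexing between them, which is part of what a formation protocol like mscat has to manage.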