Abstract:
This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters affecting structures or foundations, but seismic microzonation has come to be recognized as an important component of urban planning and disaster management. It should therefore evaluate all possible earthquake-induced hazards and represent them through their spatial distribution. This paper presents a new methodology for seismic microzonation, formulated from the location of the study area and the possible associated hazards. The method consists of seven interlinked steps, each with a defined output; addressing a single step and its result, as is widely practiced, does not by itself constitute seismic microzonation. The paper also discusses the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated from the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated through a site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at through a multi-criteria evaluation technique (AHP), in which each theme and its features are assigned weights and then ranked according to a consensus opinion about their relative significance to the seismic hazard.
The hazard values are integrated through spatial union to obtain the deterministic and probabilistic microzonation maps for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but the results show that the hazard index values depend on site-specific geotechnical parameters.
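The AHP step described above can be sketched numerically: pairwise-comparison judgments are reduced to weights via the principal eigenvector, and a hazard index is a weighted sum of normalized theme ranks. The comparison matrix, theme names, and rank values below are illustrative inventions, not values from the study.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise-comparison matrix for three hazard
# themes (e.g. peak ground acceleration, site amplification, liquefaction).
A = np.array([
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Hazard index for one grid cell: weighted sum of normalized theme ranks.
ranks = np.array([0.8, 0.6, 0.3])   # hypothetical normalized ranks in [0, 1]
hazard_index = float(w @ ranks)
print(w, hazard_index)
```

In a GIS each theme would be a raster layer; the same weighted sum is then evaluated cell by cell to produce the hazard-index map.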
Abstract:
Climate change impacts on a groundwater-dependent small urban town have been investigated in a semiarid hard-rock aquifer in southern India. A distributed groundwater model was used to simulate groundwater levels in the study region for projected future rainfall (2012-32) obtained from general circulation models (GCMs) to estimate the impacts of climate change and management practices on the groundwater system. Management practices were based on human-induced changes to the urban infrastructure, such as reduced recharge from the lakes, reduced recharge from the water and wastewater utility due to an operational and functioning underground drainage system, and additional water extracted by the water utility for domestic purposes. An assessment of impacts on groundwater levels was carried out by calibrating a groundwater model with comprehensive data gathered during 2008-11 and then simulating future groundwater level changes using rainfall from six GCMs [Institute of Numerical Mathematics Coupled Model, version 3.0 (INM-CM3.0); L'Institut Pierre-Simon Laplace Coupled Model, version 4 (IPSL-CM4); Model for Interdisciplinary Research on Climate, version 3.2 (MIROC3.2); ECHAM and the global Hamburg Ocean Primitive Equation (ECHO-G); Hadley Centre Coupled Model, version 3 (HadCM3); and Hadley Centre Global Environment Model, version 1 (HadGEM1)] that were found to correlate well with the historical rainfall in the study area. The model results for the present condition indicate that the annual average discharge (sum of pumping and natural groundwater outflow) was marginally to moderately higher than the recharge at various locations, and that the recharge is further aided by recharge from the lakes. Model simulations showed that groundwater levels were vulnerable to the GCM rainfall and to a scenario of moderate reduction in recharge from the lakes.
Hence, it is important to sustain the induced recharge from lakes by ensuring that sufficient runoff water flows to these lakes.
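The sensitivity to lake recharge can be illustrated with a minimal annual water balance for a setting like the one described: recharge components (rainfall, lakes, utility leakage) against discharge (pumping plus natural outflow), converted to a mean water-level change through a specific yield. All volumes and the specific yield below are hypothetical, chosen only to show the bookkeeping, not calibrated values from the study.

```python
# All recharge/discharge volumes in Mm^3/yr; values are invented.
S_Y = 0.03            # specific yield (dimensionless), hypothetical
AREA = 50.0e6         # aquifer area, m^2, hypothetical

rain_recharge = 6.0
lake_recharge = 2.5   # induced recharge from lakes
leak_recharge = 1.5   # water/wastewater utility leakage
pumping = 8.0
natural_outflow = 3.0

def level_change(lake_factor=1.0):
    """Mean water-level change (m/yr); lake_factor scales lake recharge
    to mimic a scenario of reduced runoff reaching the lakes."""
    net = (rain_recharge + lake_factor * lake_recharge + leak_recharge
           - pumping - natural_outflow)           # net volume, Mm^3/yr
    return net * 1e6 / (S_Y * AREA)               # volume -> head change

print(level_change(1.0), level_change(0.5))
```

With these numbers the balance is already negative, and halving lake recharge steepens the decline, which is the qualitative point of the abstract's final sentence.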
Abstract:
An integrated model is developed, based on seasonal inputs of reservoir inflow and rainfall in the irrigated area, to determine the optimal reservoir release policies and irrigation allocations to multiple crops. The model is conceptually made up of two modules. Module 1 is an intraseasonal allocation model to maximize the sum of relative yields of all crops, for a given state of the system, using linear programming (LP). The module takes into account reservoir storage continuity, soil moisture balance, and crop root growth with time. Module 2 is a seasonal allocation model to derive the steady-state reservoir operating policy using stochastic dynamic programming (SDP). Reservoir storage, seasonal inflow, and seasonal rainfall are the state variables in the SDP. The objective in the SDP is to maximize the expected sum of relative yields of all crops in a year. The results of module 1 and the transition probabilities of seasonal inflow and rainfall form the input for module 2. The use of seasonal inputs coupled with the LP-SDP solution strategy in the present formulation facilitates relaxing the limitations of an earlier study, while effecting additional improvements. The model is applied to an existing reservoir in Karnataka State, India.
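The SDP step (module 2) can be sketched as a value iteration over discrete storage and inflow-class states, with a benefit table standing in for the module-1 relative yields. The state discretization, transition probabilities, benefit values, and discount factor below are all toy assumptions for illustration, not the paper's formulation.

```python
import numpy as np

P = np.array([[0.7, 0.3],        # P[i, j] = Pr(next inflow class j | class i)
              [0.4, 0.6]])       # hypothetical seasonal transition matrix
inflow_vol = [1, 2]              # volume added per inflow class (storage units)

def benefit(release):
    """Relative yield per release class (stand-in for module-1 output)."""
    return [0.0, 0.6, 1.0][release]

V = np.zeros((3, 2))             # value function over (storage, inflow class)
for _ in range(200):             # value iteration toward steady state
    V_new = np.zeros_like(V)
    for s in range(3):
        for q in range(2):
            water = min(s + inflow_vol[q], 2)    # spill above capacity 2
            best = 0.0
            for r in range(water + 1):           # feasible release classes
                s_next = water - r
                future = P[q, 0] * V[s_next, 0] + P[q, 1] * V[s_next, 1]
                best = max(best, benefit(r) + 0.95 * future)
            V_new[s, q] = best
    V = V_new
print(V)
```

The argmax over releases at each state (not retained above) is the steady-state operating policy the SDP produces.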
Abstract:
Recession flows in a basin are controlled by the temporal evolution of its active drainage network (ADN). The geomorphological recession flow model (GRFM) assumes that both the rate of flow generation per unit ADN length (q) and the speed at which ADN heads move downstream (c) remain constant during a recession event. It thereby connects the power-law exponent α of the -dQ/dt versus Q curve (Q being discharge at the outlet at time t) with the structure of the drainage network, a fixed entity. In this study, we first reformulate the GRFM for Horton-Strahler networks and show that the geomorphic exponent (α_g) is equal to D/(D-1), where D is the fractal dimension of the drainage network. We then propose a more general recession flow model by expressing both q and c as functions of Horton-Strahler stream order. We show that it is possible to have α = α_g for a recession event even when q and c do not remain constant. The modified GRFM suggests that α is controlled by the spatial distribution of subsurface storage within the basin. By analyzing streamflow data from 39 U.S. Geological Survey basins, we show that α has a power-law relationship with the recession-curve peak, which indicates that the spatial distribution of subsurface storage varies across recession events. Key Points: The GRFM is reformulated for Horton-Strahler networks. The GRFM is modified by allowing its parameters to vary along streams. Subsurface storage distribution controls recession flow characteristics.
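The exponent α is conventionally estimated by a log-log fit of -dQ/dt against Q. A minimal sketch, using a synthetic recession limb generated from the closed-form solution of dQ/dt = -kQ^α and a hypothetical fractal dimension D:

```python
import numpy as np

D = 1.8                          # hypothetical fractal dimension
alpha_g = D / (D - 1.0)          # geomorphic exponent, = 2.25 here

# Synthetic recession: exact solution of dQ/dt = -k Q^alpha, Q(0) = Q0.
alpha_true, k, Q0 = 2.25, 0.05, 10.0
t = np.arange(0.0, 10.0, 0.1)
Q = (Q0 ** (1 - alpha_true) + (alpha_true - 1) * k * t) ** (1 / (1 - alpha_true))

# Estimate alpha as the slope of log(-dQ/dt) vs log(Q).
dQdt = np.gradient(Q, t)
alpha_fit, log_k = np.polyfit(np.log(Q[1:-1]), np.log(-dQdt[1:-1]), 1)
print(alpha_g, alpha_fit)
```

On real streamflow data the derivative is noisy, so binning or event-wise fitting is usually applied before the regression; the synthetic case simply shows that the slope recovers α.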
Abstract:
Estimation of design quantiles of hydrometeorological variables at critical locations in river basins is necessary for hydrological applications. To arrive at reliable estimates for locations (sites) where no or limited records are available, various regional frequency analysis (RFA) procedures have been developed over the past five decades. The most widely used procedure is based on the index-flood approach and L-moments. It assumes that values of the scale and shape parameters of the frequency distribution are identical across all the sites in a homogeneous region. In a real-world scenario, this assumption may not be valid even if a region is statistically homogeneous. To address this issue, a novel mathematical approach is proposed. It involves (i) identification of an appropriate frequency distribution to fit the random variable being analyzed for the homogeneous region, (ii) use of a proposed transformation mechanism to map observations of the variable from the original space to a dimensionless space where the form of the distribution does not change and variation in the values of its parameters is minimal across sites, (iii) construction of a growth curve in the dimensionless space, and (iv) mapping the curve to the original space for the target site by applying the inverse transformation to arrive at the required quantile(s) for the site. Effectiveness of the proposed approach (PA) in predicting quantiles for ungauged sites is demonstrated through Monte Carlo simulation experiments considering five frequency distributions that are widely used in RFA, and by a case study on watersheds in the conterminous United States. Results indicate that the PA outperforms methods based on the index-flood approach.
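For contrast, the classical index-flood baseline the proposed approach is compared against can be sketched in a few lines: rescale each site's annual maxima by the site mean (the "index flood"), pool the rescaled data, fit a regional growth curve, and map a quantile back to a target site. The synthetic Gumbel data and the method-of-moments fit below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three synthetic sites whose records differ only by a scale factor.
sites = [rng.gumbel(loc=100.0 * s, scale=20.0 * s, size=40) for s in (1, 2, 3)]

pooled = np.concatenate([x / x.mean() for x in sites])   # growth data
mu, sigma = pooled.mean(), pooled.std(ddof=1)
beta = sigma * np.sqrt(6.0) / np.pi      # Gumbel scale (method of moments)
xi = mu - 0.5772 * beta                  # Gumbel location

def growth_quantile(T):
    """Regional growth-curve quantile for return period T (years)."""
    return xi - beta * np.log(-np.log(1.0 - 1.0 / T))

target_index = sites[1].mean()           # index flood at the "target" site
q100 = target_index * growth_quantile(100.0)
print(q100)
```

The proposed approach replaces the simple division by the site mean with a transformation chosen so that the distribution's parameters become nearly site-invariant in the dimensionless space.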
Abstract:
Recent flood frequency analysis (FFA) studies have focused on developing methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven and flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
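The "boundary leakage" problem can be demonstrated with the conventional Gaussian kernel the D-kernel is compared against: a fixed-bandwidth Gaussian KDE fitted to strictly positive peak flows places probability mass at physically impossible negative flows. The lognormal "peak flows" below are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
flows = rng.lognormal(mean=5.0, sigma=0.6, size=300)   # synthetic, all > 0

kde = gaussian_kde(flows)        # normal-reference (Scott) bandwidth
x = np.linspace(-200.0, 0.0, 401)
dx = x[1] - x[0]
leaked_mass = kde(x).sum() * dx  # probability mass assigned below zero flow
print(leaked_mass)
```

A bona fide estimator for a positive variable should put zero mass below the boundary; the diffusion-based construction achieves this by respecting the support, which the fixed Gaussian kernel cannot.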
Abstract:
The amount of water stored in and moving through the surface water bodies of large river basins (rivers, floodplains, wetlands) plays a major role in the global water and biochemical cycles and is a critical parameter for water resources management. However, the spatiotemporal variations of these freshwater reservoirs are still widely unknown at the global scale. Here, we propose a hypsographic curve approach to estimate surface freshwater storage variations over the Amazon basin, combining surface water extent from a multisatellite technique with topographic data from the Global Digital Elevation Model (GDEM) of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Monthly surface water storage variations for 1993-2007 are presented, showing a strong seasonal and interannual variability, and are evaluated against in situ river discharge and precipitation. The basin-scale mean annual amplitude of ~1200 km³ is in the range of previous estimates and contributes about half of the Gravity Recovery And Climate Experiment (GRACE) total water storage variations. For the first time, we map the surface water volume anomaly during the extreme droughts of 1997 (October-November) and 2005 (September-October) and find that during these dry events the water stored in the rivers and floodplains of the Amazon basin was, respectively, ~230 km³ (~40%) and ~210 km³ (~50%) below the 1993-2007 average. This new 15 year data set of surface water volume represents an unprecedented source of information for future hydrological or climate modeling of the Amazon. It is also a first step toward the development of such a database at the global scale.
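The hypsographic-curve idea reduces to integrating flooded area over elevation: given area as a function of water level (from a DEM), the volume stored below a level is the running integral, and storage anomalies follow from a water-extent time series. The curve below is a hypothetical stand-in for a DEM-derived hypsographic curve.

```python
import numpy as np

levels = np.linspace(0.0, 10.0, 101)      # water level above datum, m
area_of_level = 500.0 * levels ** 1.5     # flooded area at each level, km^2
# (hypothetical hypsographic curve: flooded area grows with level)
dz = levels[1] - levels[0]

def storage(level):
    """Volume stored below `level` (km^3): accumulate flooded area over
    elevation increments (km^2 x m, divided by 1000 to get km^3)."""
    return area_of_level[levels <= level].sum() * dz / 1000.0

# Seasonal anomaly: high-water minus low-water storage (levels invented).
anomaly = storage(8.0) - storage(4.0)
print(anomaly)
```

In the study the level at each month comes from matching the observed water extent to the curve, so the anomaly time series is driven entirely by the satellite extents and the DEM.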
Abstract:
Evaporative fraction (EF) is a measure of the amount of available energy at the earth surface that is partitioned into latent heat flux. Currently operational thermal sensors like the Moderate Resolution Imaging Spectroradiometer (MODIS) on satellite platforms provide data only at 1000 m, which constrains the spatial resolution of EF estimates. A simple model (disaggregation of evaporative fraction (DEFrac)), based on the observed relationship between EF and the normalized difference vegetation index, is proposed to spatially disaggregate EF. The DEFrac model was tested with EF estimated from the triangle method using 113 clear-sky data sets from the MODIS sensors aboard the Terra and Aqua satellites. Validation was done using the Bowen ratio energy balance method with data from four micrometeorological tower sites across varied agro-climatic zones with different land cover conditions in India. The root-mean-square error (RMSE) of EF estimated at 1000 m resolution using the triangle method was 0.09 for all four sites put together. The RMSE of DEFrac-disaggregated EF was 0.09 at 250 m resolution. Two models of input disaggregation were also tried, with thermal data sharpened using the two thermal sharpening models DisTrad and TsHARP. The RMSE of disaggregated EF was 0.14 for both input disaggregation models at 250 m resolution. Moreover, spatial analysis of disaggregation was performed using Landsat-7 Enhanced Thematic Mapper Plus (ETM+) data over four grids in India for contrasting seasons. It was observed that the DEFrac model performed better than the input disaggregation models under cropped conditions, while they were marginally similar under non-cropped conditions.
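The disaggregation idea can be sketched as: fit the EF-NDVI relationship at the coarse (1000 m) scale, apply it to fine-scale (250 m) NDVI, and keep the result consistent with the coarse pixel. The linear form, all values, and the mean-preservation constraint below are illustrative assumptions, not the DEFrac formulation itself.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic coarse-scale NDVI and EF with an assumed linear relationship.
ndvi_1km = rng.uniform(0.2, 0.8, size=100)
ef_1km = 0.2 + 0.9 * ndvi_1km + rng.normal(0.0, 0.02, size=100)

slope, intercept = np.polyfit(ndvi_1km, ef_1km, 1)   # fitted EF-NDVI relation

# Fine-scale NDVI within one coarse pixel (4 x 4 cells at 250 m).
ndvi_250m = rng.uniform(0.2, 0.8, size=16)
ef_250m = intercept + slope * ndvi_250m

# Hypothetical constraint: preserve the coarse-pixel EF (triangle method).
ef_coarse = 0.6
ef_250m = ef_250m + (ef_coarse - ef_250m.mean())
print(slope, intercept)
```

The contrast with "input disaggregation" is that here the coarse EF field is disaggregated directly, rather than sharpening the thermal input and re-running the EF retrieval at fine resolution.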
Abstract:
Regionalization approaches are widely used in water resources engineering to identify hydrologically homogeneous groups of watersheds that are referred to as regions. Pooled information from sites (depicting watersheds) in a region forms the basis to estimate quantiles associated with hydrological extreme events at ungauged/sparsely gauged sites in the region. Conventional regionalization approaches can be effective when watersheds (data points) corresponding to different regions can be separated using straight lines or linear planes in the space of watershed-related attributes. In this paper, a kernel-based Fuzzy c-means (KFCM) clustering approach is presented for use in situations where such linear separation of regions cannot be accomplished. The approach uses kernel functions to map the data points from the attribute space to a higher-dimensional space where they can be separated into regions by linear planes. A procedure to determine the optimal number of regions with the KFCM approach is suggested. Further, formulations to estimate flood quantiles at ungauged sites with the approach are developed. Effectiveness of the approach is demonstrated through Monte Carlo simulation experiments and a case study on watersheds in the United States. Comparison of results with those based on conventional Fuzzy c-means clustering, the Region-of-influence approach, and a prior study indicates that the KFCM approach outperforms the other approaches in forming regions that are closer to being statistically homogeneous and in estimating flood quantiles at ungauged sites.
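A compact sketch of kernel Fuzzy c-means with an RBF kernel, the kind of clustering described above, applied to synthetic two-dimensional "watershed attributes". This follows the common KFCM variant that keeps prototypes in the input space; the kernel width, fuzzifier, and deterministic initialization are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two synthetic attribute clusters standing in for watershed groups.
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(2.0, 0.3, (30, 2))])

def kfcm(X, c=2, m=2.0, sigma=1.0, iters=50):
    n = len(X)
    idx = np.linspace(0, n - 1, c).astype(int)   # deterministic init
    V = X[idx].astype(float).copy()              # prototypes in input space
    for _ in range(iters):
        # RBF kernel between points and prototypes; kernel-space distance
        # for an RBF kernel is d^2 = 2 * (1 - K).
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) / sigma**2)
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # fuzzy memberships (n, c)
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]    # prototype update
    return U, V

U, V = kfcm(X)
labels = U.argmax(axis=1)
print(V)
```

In the RFA setting the membership matrix U, rather than the hard labels, is what feeds the pooled quantile estimation at an ungauged site.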
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and from simulations of the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3). Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling models to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performances of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) than the spatial interpolation technique used in earlier studies.
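The MLR variant of the downscaling step reduces to an ordinary least-squares fit of station rainfall on the large-scale predictors, with the fitted coefficients then applied to GCM-simulated predictors. The six synthetic predictors and coefficients below are stand-ins for the NCEP-NCAR and CGCM3 fields, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_months, n_pred = 240, 6               # e.g. 20 years of monthly data
X_ncep = rng.normal(size=(n_months, n_pred))    # synthetic predictors
true_beta = np.array([3.0, -1.0, 0.5, 0.0, 2.0, 1.0])
rain = 50.0 + X_ncep @ true_beta + rng.normal(0.0, 1.0, n_months)

# Calibrate MLR on the "reanalysis" period.
A = np.column_stack([np.ones(n_months), X_ncep])
coef, *_ = np.linalg.lstsq(A, rain, rcond=None)

# Apply the fitted relation to "GCM" predictors for a projection period.
X_gcm = rng.normal(size=(120, n_pred))
rain_proj = np.column_stack([np.ones(120), X_gcm]) @ coef
print(coef[:3])
```

In practice the predictors would first be standardized and reduced via PCA, and a separate relation (as the abstract notes, also MLR here) is needed to bridge the GCM and reanalysis grids.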
Abstract:
Nanoparticle deposition behavior observed at the Darcy scale represents an average of the processes occurring at the pore scale. Hence, the effect of various pore-scale parameters on nanoparticle deposition can be understood by studying nanoparticle transport at pore scale and upscaling the results to the Darcy scale. In this work, correlation equations for the deposition rate coefficients of nanoparticles in a cylindrical pore are developed as a function of nine pore-scale parameters: the pore radius, nanoparticle radius, mean flow velocity, solution ionic strength, viscosity, temperature, solution dielectric constant, and nanoparticle and collector surface potentials. Based on dominant processes, the pore space is divided into three different regions, namely, bulk, diffusion, and potential regions. Advection-diffusion equations for nanoparticle transport are prescribed for the bulk and diffusion regions, while the interaction between the diffusion and potential regions is included as a boundary condition. This interaction is modeled as a first-order reversible kinetic adsorption. The expressions for the mass transfer rate coefficients between the diffusion and the potential regions are derived in terms of the interaction energy profile. Among other effects, we account for nanoparticle-collector interaction forces on nanoparticle deposition. The resulting equations are solved numerically for a range of values of pore-scale parameters. The nanoparticle concentration profile obtained for the cylindrical pore is averaged over a moving averaging volume within the pore in order to get the 1-D concentration field. The latter is fitted to the 1-D advection-dispersion equation with an equilibrium or kinetic adsorption model to determine the values of the average deposition rate coefficients. In this study, pore-scale simulations are performed for three values of Peclet number, Pe = 0.05, 5, and 50. 
We find that under unfavorable conditions, nanoparticle deposition at the pore scale is best described by an equilibrium model at low Peclet numbers (Pe = 0.05) and by a kinetic model at high Peclet numbers (Pe = 50). At an intermediate Pe (e.g., near Pe = 5), both equilibrium and kinetic models fit the 1-D concentration field. Correlation equations for the pore-averaged nanoparticle deposition rate coefficients under unfavorable conditions are derived by performing a multiple-linear regression analysis between the estimated deposition rate coefficients for a single pore and the various pore-scale parameters. The correlation equations, which follow a power-law relation in the nine pore-scale parameters, are found to be consistent with column-scale and pore-scale experimental results, and qualitatively agree with colloid filtration theory. These equations can be incorporated into pore network models to study the effect of pore-scale parameters on nanoparticle deposition at larger length scales such as the Darcy scale.
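The correlation-equation step amounts to a multiple linear regression in log space: a power law k = a * prod(p_i ** b_i) becomes log k = log a + sum(b_i * log p_i). The three synthetic "pore-scale parameters" and exponents below stand in for the nine-parameter fit in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
# Synthetic pore-scale parameters (e.g. pore radius, velocity, ionic
# strength), all positive; values and exponents are invented.
params = rng.uniform(0.5, 2.0, size=(n, 3))
b_true = np.array([-1.2, 0.4, 0.8])
k_dep = 0.01 * np.prod(params ** b_true, axis=1) * rng.lognormal(0.0, 0.05, n)

# Power-law fit via linear regression on the logarithms.
A = np.column_stack([np.ones(n), np.log(params)])
coef, *_ = np.linalg.lstsq(A, np.log(k_dep), rcond=None)
log_a, b_fit = coef[0], coef[1:]
print(np.exp(log_a), b_fit)
```

The fitted exponents b_i then play the role of the correlation-equation exponents, and the same log-space form makes the equations easy to embed in a pore-network model.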