982 results for Engineering, Civil|Meteorology
Abstract:
Hysteresis models that eliminate the artificial pumping errors associated with the Kool-Parker (KP) soil moisture hysteresis model, such as the Parker-Lenhard (PL) method, can be computationally demanding in unsaturated transport models since they need to retain the wetting-drying history of the system. The pumping errors in these models need to be eliminated for correct simulation of cyclical systems (e.g. transport above a tidally forced watertable, infiltration and redistribution under periodic irrigation) if the soils exhibit significant hysteresis. A modification is made here to the PL method that allows it to be more readily applied to numerical models by eliminating the need to store a large number of soil moisture reversal points. The modified-PL method largely eliminates any artificial pumping error and so essentially retains the accuracy of the original PL approach. The modified-PL method is implemented in HYDRUS-1D (version 2.0), which is then used to simulate cyclic capillary fringe dynamics to show the influence of removing artificial pumping errors and to demonstrate the ease of implementation. Artificial pumping errors are shown to be significant for the soils and system characteristics used here in numerical experiments of transport above a fluctuating watertable. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
An Australian natural zeolite was collected, characterised and employed for basic dye adsorption in aqueous solution. The natural zeolite is mainly composed of clinoptilolite, quartz and mordenite and has a cation-exchange capacity of 120 meq/100 g. The natural zeolite presents a higher adsorption capacity for methylene blue than for rhodamine B, with maximal adsorption capacities at 50 °C of 2.8 × 10^-5 and 7.9 × 10^-5 mol/g for rhodamine B and methylene blue, respectively. Kinetic studies indicated that the adsorption followed pseudo-second-order kinetics and could be described as a two-stage diffusion process. The adsorption isotherm could be fitted by the Langmuir and Freundlich models. Thermodynamic calculations showed that the adsorption is an endothermic process, with ΔH° of 2.0 and 8.7 kJ/mol for rhodamine B and methylene blue, respectively. It was also found that zeolites regenerated by high-temperature calcination and by Fenton oxidation showed similar adsorption capacities to each other, but lower than that of the fresh sample; only 60% of the capacity could be recovered by the two regeneration techniques. (c) 2006 Elsevier B.V. All rights reserved.
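As a sketch of the isotherm fitting described above, the snippet below fits the Langmuir model q_e = q_max·K_L·C_e/(1 + K_L·C_e) to synthetic equilibrium data generated near the reported methylene-blue capacity. The concentrations, the K_L value, and the use of SciPy's `curve_fit` are illustrative assumptions, not the paper's data or method.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data (Ce in mol/L, qe in mol/g); values are
# synthetic and noise-free, chosen near the reported capacity of 7.9e-5 mol/g
Ce = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4, 5e-4])
qe = langmuir(Ce, 7.9e-5, 4.0e4)

# Nonlinear least-squares fit recovers qmax and KL from the data
params, _ = curve_fit(langmuir, Ce, qe, p0=[1e-4, 1e4])
qmax_fit, KL_fit = params
```

The same pattern applies to the Freundlich model (`qe = KF * Ce**(1/n)`) by swapping the model function.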
Abstract:
Fly ash was modified by hydrothermal treatment in NaOH solutions under various conditions for zeolite synthesis. The XRD patterns of the products are presented; the profiles revealed a number of new reflections, suggesting that a phase transformation had occurred and that the treated samples differ markedly from the raw ash. Both heat treatment and chemical treatment increased the surface area and pore volume. It was found that zeolite P forms under conditions of higher NaOH concentration and temperature. The treated fly ash was tested for adsorption of heavy metal ions and dyes in aqueous solution. Fly ash and its modified forms could effectively adsorb heavy metals and methylene blue, but not rhodamine B. Modifying fly ash with NaOH solution significantly enhanced the adsorption capacity, depending on the treatment temperature, time and base concentration. The adsorption capacity for methylene blue increased with the pH of the dye solution, and the sorption capacity of FA-NaOH could reach 5 × 10^-5 mol/g. The adsorption isotherm could be described by the Langmuir and Freundlich isotherm equations. Removal of copper and nickel ions could also be achieved on the treated fly ash, with removal efficiencies from 30% to 90% depending on the initial concentrations. Increasing the adsorption temperature enhanced the adsorption efficiency for both heavy metals. Pseudo-second-order kinetics gave the better fit to the dynamic adsorption of Cu and Ni ions. (c) 2005 Elsevier B.V. All rights reserved.
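The pseudo-second-order kinetics mentioned in this and the previous abstract follow dq/dt = k2·(qe − q)², which integrates to the linear form t/q_t = 1/(k2·qe²) + t/qe. A minimal sketch of the standard linearised fit, using hypothetical kinetic data (not the paper's measurements):

```python
import numpy as np

# Hypothetical uptake data for a pseudo-second-order sorption process
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])   # time, minutes
qe_true, k2_true = 5e-5, 2.0e3                              # mol/g, g/(mol*min)
# Integrated model: q_t = k2*qe^2*t / (1 + k2*qe*t)
qt = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)

# Linearised fit of t/q_t vs t: slope = 1/qe, intercept = 1/(k2*qe^2)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
```

A straight line in the t/q_t-versus-t plot is the usual diagnostic that the pseudo-second-order model applies.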
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. This paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. Using the first of these methods, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed, either by numerical instability incurred through problem ill-posedness or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space.
This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
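The second enhancement, restarting Levenberg-Marquardt runs from points maximally removed from earlier runs, can be sketched as below. The two-parameter residual function is a toy stand-in for a watershed model (it has four equally good optima at (±1, ±1)), and the candidate-sampling distance rule is a simplified assumption, not the paper's exact trajectory-distance measure.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p):
    # Toy residual vector with four roots at (+/-1, +/-1), standing in
    # for a model-to-observation misfit with multiple optima
    x, y = p
    return np.array([x**2 - 1.0, y**2 - 1.0])

rng = np.random.default_rng(0)
visited, best = [], None
for run in range(6):
    if visited:
        # choose the candidate start maximally removed from previous end points
        cands = rng.uniform(-3.0, 3.0, size=(64, 2))
        dist = [min(np.linalg.norm(c - v) for v in visited) for c in cands]
        x0 = cands[int(np.argmax(dist))]
    else:
        x0 = rng.uniform(-3.0, 3.0, size=2)
    sol = least_squares(residuals, x0, method='lm')   # one gradient-based run
    visited.append(sol.x)
    if best is None or sol.cost < best.cost:
        best = sol

optima = {tuple(np.round(v, 2)) for v in visited}     # distinct optima located
```

Collecting `optima` across runs mirrors the paper's point that the restart strategy doubles as a way to map non-global optima in parameter space.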
Abstract:
The integrated chemical-biological degradation combining advanced oxidation by UV/H2O2 followed by aerobic biodegradation was used to degrade C.I. Reactive Azo Red 195A, commonly used in the textile industry in Australia. An experimental design based on the response surface method was applied to evaluate the interactive effects of the influencing factors (UV irradiation time, initial hydrogen peroxide dosage and recirculation ratio of the system) on decolourisation efficiency and to optimise the operating conditions of the treatment process. The effects were determined by measurement of the dye concentration and the soluble chemical oxygen demand (S-COD). The results showed that dye and S-COD removal were affected by all factors, both individually and interactively. Maximal colour degradation performance was predicted, and experimentally validated, with no recirculation, 30 min UV irradiation and 500 mg H2O2/L. The model predictions for colour removal, based on a three-factor/five-level Box-Wilson central composite design and response surface method analysis, were found to be very close to additional experimental results obtained under near-optimal conditions. This demonstrates the benefits of this approach in achieving good predictions while minimising the number of experiments required. (c) 2006 Elsevier B.V. All rights reserved.
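A three-factor Box-Wilson central composite design of the kind described (factorial corners, axial points, centre point) and the quadratic response-surface fit can be sketched as follows. The response coefficients `beta_true` are invented for illustration and are not the study's data.

```python
import numpy as np
from itertools import product

# Central composite design in coded units for k = 3 factors:
# 8 factorial corners, 6 axial points at +/-alpha, 1 centre point
alpha = 2.0 ** 0.75    # rotatable alpha = 2**(k/4) for k = 3
corners = np.array(list(product([-1.0, 1.0], repeat=3)))
axial = np.array([[s * alpha if j == i else 0.0 for j in range(3)]
                  for i in range(3) for s in (-1.0, 1.0)])
centre = np.zeros((1, 3))
X = np.vstack([corners, axial, centre])   # 15 design points

def model_matrix(X):
    # Full quadratic model: 1, x_i, x_i*x_j, x_i^2  (10 terms for k = 3)
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    return np.column_stack(cols)

# Hypothetical decolourisation response surface (illustrative coefficients)
beta_true = np.array([90.0, 2.0, -3.0, 1.0, 0.5, 0.0, 0.0, -4.0, -2.0, -1.0])
y = model_matrix(X) @ beta_true
beta_fit, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
```

With 15 runs supporting 10 model terms, the design estimates all quadratic coefficients, which is why the CCD minimises the number of experiments needed.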
Abstract:
Groundwater-dependent riparian biota are known to be sensitive to changes in soil and groundwater salinity in estuarine systems. The groundwater flow and salinity behaviour in a phreatic aquifer adjoining a partially penetrating tidal estuary is investigated through two-dimensional numerical experiments for a lateral cross-section, which explore the influence of factors such as aquifer and soil materials, tidal amplitudes, and regional groundwater hydraulic gradients. The density contrast between estuarine water and the fresh groundwater drives saltwater penetration of the aquifer even in the case of a marked groundwater hydraulic gradient towards the estuary. We show that tidal fluctuations in estuaries can significantly affect the groundwater salinity distribution in adjacent density-stratified phreatic aquifers. This has consequences for the expected distribution of salinity-sensitive biota in the hyporheic zone as well as vegetation and fauna dependent on water in the riparian soil and aquifer. The shape of the dense saltwater wedge propagating into the adjacent groundwater system is also modified by the estuarine tidal signal, although this effect appears to have only minor influence on the maximum distance penetrated into the aquifer (i.e., the location of the 'toe' of the wedge). Tide-induced changes to riparian groundwater salinity are advection-driven, as evidenced by the modified time-averaged groundwater flow dynamics. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
An extended refraction-diffraction equation [Massel, S.R., 1993. Extended refraction-diffraction equation for surface waves. Coastal Eng. 19, 97-126] has been applied to predict wave transformation and breaking as well as wave-induced set-up on two-dimensional reef profiles of various shapes. A free empirical coefficient alpha in the formula for the average rate of energy dissipation in the modified periodic bore model, <epsilon_b> = (alpha*rho*g*omega/(8*pi)) * (sqrt(g*h)/C) * (H^3/h), was found to be a function of the dimensionless parameter F_c0 = g^1.25 * H_0^0.5 * T^2.5 / h_r^1.75 proposed by Gourlay [Gourlay, M.R., 1994. Wave transformation on a coral reef. Coastal Eng. 23, 17-42]. The applicability of the developed model has been demonstrated for reefs of various shapes subjected to various incident wave conditions. With the proposed relationship between the coefficient alpha and F_c0, the model provides results on wave height attenuation and set-up elevation which compare well with experimental data. (C) 2000 Elsevier Science B.V. All rights reserved.
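The two relationships quoted in the abstract can be transcribed directly as below; the input values used in the assertions are arbitrary illustrations (a 10 s wave over a reef), not data from the study.

```python
import math

def bore_dissipation(alpha, rho, g, omega, h, C, H):
    # Average energy dissipation rate of the modified periodic bore model:
    # <eps_b> = (alpha*rho*g*omega/(8*pi)) * (sqrt(g*h)/C) * (H**3/h)
    # alpha: free empirical coefficient, rho: water density, omega: angular
    # frequency, h: local depth, C: phase speed, H: wave height
    return (alpha * rho * g * omega / (8.0 * math.pi)) \
        * (math.sqrt(g * h) / C) * (H ** 3 / h)

def gourlay_F(H0, T, hr, g=9.81):
    # Gourlay's dimensionless reef parameter:
    # F_c0 = g**1.25 * H0**0.5 * T**2.5 / hr**1.75
    # H0: deep-water wave height, T: wave period, hr: depth over the reef top
    return g ** 1.25 * math.sqrt(H0) * T ** 2.5 / hr ** 1.75
```

Note that F_c0 grows as the reef-top depth h_r decreases, which is consistent with stronger breaking (and larger set-up) over shallower reef flats.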
Abstract:
The major barrier to practical optimization of pavement preservation programming has always been that, for formulations where the identity of individual projects is preserved, the solution space grows exponentially with the problem size, to the point where it becomes unmanageable by traditional analytical optimization techniques within reasonable time limits. This has been attributed to the problem of combinatorial explosion, that is, the exponential growth of the number of combinations. The relatively large number of constraints often present in real-life pavement preservation programming problems, and the trade-off considerations required between preventive maintenance, rehabilitation and reconstruction, present yet another factor contributing to the solution complexity. In this research study, a new integrated multi-year optimization procedure was developed to solve network-level pavement preservation programming problems through cost-effectiveness based evolutionary programming analysis, using the Shuffled Complex Evolution (SCE) algorithm. A case study problem was analyzed to illustrate the robustness and consistency of the SCE technique in solving network-level pavement preservation problems. The output from this program is a list of maintenance and rehabilitation (M&R) treatment strategies for each identified segment of the network in each programming year, and the impact on the overall performance of the network, in terms of the performance levels of the recommended optimal M&R strategy. The results show that the SCE is very efficient and consistent in the simultaneous consideration of the trade-offs between various pavement preservation strategies, while preserving the identity of the individual network segments. The flexibility of the technique is also demonstrated, in the sense that, by suitably coding the problem parameters, it can be used to solve several forms of pavement management programming problems.
It is recommended that, for large networks, a decomposition technique be applied to aggregate sections that exhibit similar performance characteristics into links, such that whatever M&R alternative is recommended for a link can be applied to all the sections connected to it. In this way the problem size, and hence the solution time, can be greatly reduced to a more manageable solution space. The study concludes that the robust search characteristics of SCE are well suited to solving the combinatorial problems in long-term network-level pavement M&R programming, and that this provides a rich area for future research.
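The kind of identity-preserving, budget-constrained treatment selection described above can be illustrated with a generic evolutionary search. This is a hedged toy sketch, not the SCE algorithm itself: the segment count, treatment costs, effectiveness gains and budget are all invented, and the selection/mutation scheme is deliberately minimal.

```python
import numpy as np

rng = np.random.default_rng(3)
n_seg, n_treat = 20, 3                    # 0 = do nothing, 1 = maintain, 2 = rehab
cost = np.array([0.0, 1.0, 4.0])          # illustrative per-segment treatment costs
gain = np.array([0.0, 2.0, 9.0])          # illustrative effectiveness gains
budget = 30.0

def fitness(plan):
    # cost-effectiveness objective with an infeasibility penalty
    return gain[plan].sum() if cost[plan].sum() <= budget else -1.0

pop = rng.integers(0, n_treat, size=(40, n_seg))   # one treatment per segment
best_f, best_plan = -1.0, pop[0].copy()
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    i = int(np.argmax(f))
    if f[i] > best_f:                              # track best feasible plan
        best_f, best_plan = f[i], pop[i].copy()
    parents = pop[np.argsort(f)[-20:]]             # keep the fitter half
    kids = parents[rng.integers(0, 20, size=40)].copy()
    mut = rng.random(kids.shape) < 0.05            # point mutation
    kids[mut] = rng.integers(0, n_treat, size=int(mut.sum()))
    pop = kids
```

Because each gene is a (segment, treatment) pair, the identity of every network segment survives the optimization, which is the property the abstract emphasises.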
Abstract:
The predictions contained within this dissertation suggest further rapid growth of the cruise industry and the requirement for additional cruise ship berthing worldwide. The factors leading to the tremendous growth in the cruise marketplace are identified and individually addressed. Unfortunately, planning factors associated with the design and construction of cruise ship seaports are not readily available and methods to manage this growth have not been addressed. This dissertation provides accurate and consolidated planning factors essential for comprehensive consideration of cruise ship requirements and design of growing cruise ship ports. The consolidation of these factors results in faster and better informed choices for the port owner/operator with regard to port expansion. Furthermore, this dissertation proposes development of new systems to better manage increasing passenger and ship traffic. If implemented, this will result in optimized port systems providing a greater level of service to passengers and port authorities while simultaneously minimizing environmental and economic impact.
Abstract:
Pavement performance is one of the most important components of the pavement management system. Prediction of the future performance of a pavement section is important in programming maintenance and rehabilitation needs. Models for predicting pavement performance have been developed on the basis of traffic and age. The purpose of this research is to extend the use of a relatively new approach to performance prediction in pavement performance modeling: adaptive logic networks (ALN). Adaptive logic networks have recently emerged as an effective alternative to artificial neural networks for machine learning tasks. The ALN predictive methodology is applicable to a wide variety of contexts, including prediction of roughness-based indices, composite rating indices and/or individual pavement distresses. The ALN program requires key information about a pavement section, including the current distress indexes, pavement age, climate region, traffic and other variables, to predict yearly performance values into the future. This research investigates the effect of different learning rates of the ALN in pavement performance modeling. The approach can be used at both the network and project levels for predicting the long-term performance of a road network. Results indicate that the ALN approach is well suited to pavement performance prediction modeling and shows a significant improvement over the results obtained from other artificial intelligence approaches.
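An ALN represents a function with minimum/maximum nodes over linear pieces. As a very rough, hedged illustration of that representation only (not of ALN training, and with invented coefficients rather than anything learned from pavement data), a concave condition-versus-age curve can be written as the minimum of a few linear pieces:

```python
import numpy as np

# Three linear pieces [intercept, slope]; taking their pointwise minimum
# yields a concave piecewise-linear "condition index vs age" surface,
# the kind of surface a single min-node ALN can represent.
pieces = np.array([[100.0, -1.0],
                   [110.0, -3.0],
                   [130.0, -6.0]])

def aln_eval(age):
    # evaluate min over linear pieces at one or more ages
    age = np.atleast_1d(np.asarray(age, dtype=float))
    vals = pieces[:, 0][:, None] + pieces[:, 1][:, None] * age[None, :]
    return vals.min(axis=0)

condition = aln_eval([0.0, 5.0, 10.0, 20.0])
```

Deterioration accelerates with age here because successively steeper pieces become active, which is the qualitative behaviour performance models aim to capture.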
Abstract:
Highways are generally designed to serve a mixed traffic flow that consists of passenger cars, trucks, buses, recreational vehicles, etc. The fact that the impacts of these different vehicle types are not uniform creates problems in highway operations and safety. A common approach to reducing the impacts of truck traffic on freeways has been to restrict trucks to certain lane(s) to minimize the interaction between trucks and other vehicles and to compensate for their differences in operational characteristics. The performance of different truck lane restriction alternatives differs under different traffic and geometric conditions. Thus, a good estimate of the operational performance of different truck lane restriction alternatives under prevailing conditions is needed to help make informed decisions on truck lane restriction alternatives. This study develops operational performance models that can be applied to help identify the most operationally efficient truck lane restriction alternative on a freeway under prevailing conditions. The operational performance measures examined in this study include average speed, throughput, speed difference, and lane changes. Prevailing conditions include number of lanes, interchange density, free-flow speeds, volumes, truck percentages, and ramp volumes. Recognizing the difficulty of collecting sufficient data for an empirical modeling procedure that involves a high number of variables, the simulation approach was used to estimate the performance values for various truck lane restriction alternatives under various scenarios. Both the CORSIM and VISSIM simulation models were examined for their ability to model truck lane restrictions. Due to a major problem found in the CORSIM model for truck lane modeling, the VISSIM model was adopted as the simulator for this study.
The VISSIM model was calibrated mainly to replicate the capacity given in the 2000 Highway Capacity Manual (HCM) for various free-flow speeds under the ideal basic freeway section conditions. Non-linear regression models for average speed, throughput, average number of lane changes, and speed difference between the lane groups were developed. Based on the performance models developed, a simple decision procedure was recommended to select the desired truck lane restriction alternative for prevailing conditions.
Abstract:
Land use and transportation interaction has been a research topic for several decades. There have been efforts to identify the impacts of transportation on land use from several different perspectives. One focus has been the role of transportation improvements in encouraging new land developments or the relocation of activities due to improved accessibility. The impacts studied have included property values and increased development. Another focus has been on the changes in travel behavior due to better mobility and accessibility. Most studies to date have been conducted at the metropolitan level, and are thus unable to account for interactions spatially and temporally at smaller geographic scales. In this study, a framework for studying the temporal interactions between transportation and land use was proposed and applied to three selected corridor areas in Miami-Dade County, Florida. The framework consists of two parts: one is the development of temporal data, and the other is the application of time series analysis to these data to identify their dynamic interactions. Temporal GIS databases were constructed and used to compile building permit data and transportation improvement projects. Two types of time series analysis approaches were utilized: univariate models and multivariate models. Time series analysis describes the dynamic consequences of a series by developing models and forecasting the future of the system on the basis of historical trends. Model estimation results from the selected corridors were then compared. It was found that the time series models predicted residential development better than commercial development. It was also found that results from the three study corridors varied in terms of the magnitude of impacts, length of lags, significance of the variables, and model structure. The long-run effect, or cumulative impact, of transportation improvements on land development was also measured with time series techniques.
The study offered evidence that congestion negatively impacted development and that transportation investments encouraged land development.
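A minimal sketch of the univariate modelling step: fitting an AR(1) model to a monthly building-permit series by least squares and producing a one-step-ahead forecast. The series is synthetic and the coefficients are illustrative assumptions, not the study's Miami-Dade data.

```python
import numpy as np

# Simulate a hypothetical monthly building-permit series from an AR(1)
# process: permits_t = 10 + 0.8 * permits_{t-1} + noise
rng = np.random.default_rng(2)
n = 120
permits = np.empty(n)
permits[0] = 50.0                      # start at the long-run mean 10/(1-0.8)
for t in range(1, n):
    permits[t] = 10.0 + 0.8 * permits[t - 1] + rng.normal(0.0, 2.0)

# Least-squares estimate of the AR(1) intercept and coefficient
y, x = permits[1:], permits[:-1]
A = np.column_stack([np.ones(n - 1), x])
(c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
forecast = c + phi * permits[-1]       # one-step-ahead forecast
```

The multivariate analogue (e.g. permits regressed on lagged transportation-investment series) extends the design matrix `A` with lags of the other series, which is how dynamic interactions and lag lengths are identified.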
Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of the independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation. In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods considered the influence of correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space. In other words, the model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations further away. The study area was Broward County, Florida, which lies on the Atlantic coast between Palm Beach and Miami-Dade counties. In this study, a total of 67 variables were considered as potential AADT predictors, and six variables (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models. To investigate the predictive powers of the various AADT predictors over space, statistics including the local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in the relationships among parameters were investigated, measured, and mapped to assess the usefulness of the GWR methods.
The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than the ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and predictors, which cannot be modeled in ordinary linear regression.
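The core GWR computation, a weighted least-squares fit at each location with a distance-decay kernel, can be sketched as follows. The Gaussian kernel, bandwidth, single `lanes` predictor and synthetic spatially-drifting coefficient are all illustrative assumptions, not the Broward County models.

```python
import numpy as np

def gwr_fit_point(X, y, coords, target, bandwidth):
    # Local weighted least squares at one location: observations near the
    # target get high weight via a Gaussian spatial kernel
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xb = np.column_stack([np.ones(len(X)), X])   # intercept + predictors
    A = Xb.T * w                                 # weight each observation
    return np.linalg.solve(A @ Xb, A @ y)        # local [intercept, slopes]

# Hypothetical data: an AADT-like response whose 'lanes' coefficient
# drifts from west (x = 0) to east (x = 10) across the study area
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(200, 2))
lanes = rng.uniform(2.0, 6.0, size=200)
local_slope = 1000.0 + 100.0 * coords[:, 0]      # spatially varying relationship
aadt = 5000.0 + local_slope * lanes

beta_west = gwr_fit_point(lanes[:, None], aadt, coords, np.array([1.0, 5.0]), 2.0)
beta_east = gwr_fit_point(lanes[:, None], aadt, coords, np.array([9.0, 5.0]), 2.0)
```

Evaluating `gwr_fit_point` on a grid of targets and mapping the local slopes is exactly the kind of local-parameter surface the study examined; an ordinary (unweighted) regression would return a single compromise slope everywhere.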
Abstract:
Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners who are contemplating the upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities, from suburban neighborhoods to the Central Business District (CBD), and utilize similar right-of-way (ROW) categories. The research is aimed at developing a method to assist transportation planners and decision makers in determining the more feasible of the two systems, LRT or BRT. Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT, but has significantly lower operating and maintenance (O&M) costs than BRT. This dissertation examines the factors impacting capacity and costs, and develops cost models that provide capacity-based cost estimates for the LRT and BRT systems. Various ROW categories and alignment configurations of the systems are also considered in the developed cost models. Kikuchi's fleet size model (1985) and the cost allocation method are used to develop the cost models that estimate capacity and costs. Comparisons between LRT and BRT are complicated by the many possible transportation planning and operation scenarios. Therefore, a user-friendly computer interface integrated with the established capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was developed using the Microsoft Visual Basic language to facilitate the process and guide users through the comparison operations. The cost models and the LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headways, vehicle sizes, and traffic signal timing at intersections. Planners can make the necessary changes and adjustments depending on their operating practices.