872 results for Hydrologic Modeling Catchment and Runoff Computations
Abstract:
Some of the most valued natural and cultural landscapes on Earth lie in river basins that are poorly gauged and have incomplete historical climate and runoff records. The Mara River Basin of East Africa is such a basin. It hosts the internationally renowned Mara-Serengeti landscape as well as a rich mixture of indigenous cultures. The Mara River is the sole source of surface water to the landscape during the dry season and periods of drought. During recent years, the flow of the Mara River has become increasingly erratic, especially in the upper reaches, and resource managers are hampered by a lack of understanding of the relative influence of different sources of flow alteration. Uncertainties about the impacts of future climate change compound the challenges. We applied the Soil and Water Assessment Tool (SWAT) to investigate the response of the headwater hydrology of the Mara River to scenarios of continued land use change and projected climate change. Under the data-scarce conditions of the basin, model performance was improved by using satellite-based rainfall estimates, which may also improve the usefulness of runoff models in other parts of East Africa. The results of the analysis indicate that any further conversion of forests to agriculture and grassland in the basin headwaters is likely to reduce dry season flows and increase peak flows, leading to greater water scarcity at critical times of the year and exacerbating erosion on hillslopes. Most climate change projections for the region call for modest and seasonally variable increases in precipitation (5-10%) accompanied by increases in temperature (2.5-3.5 °C). Simulated runoff responses to climate change scenarios were non-linear and suggest the basin is highly vulnerable under the low (-3%) and high (+25%) extremes of projected precipitation change, but under the median projection (+7%) there is little impact on annual water yields or mean discharge. Modest increases in precipitation are partitioned largely to increased evapotranspiration. Overall, the model results support the existing efforts of Mara water resource managers to protect headwater forests and indicate that additional emphasis should be placed on improving land management practices that enhance infiltration and aquifer recharge as part of a wider program of climate change adaptation.
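As a rough illustration of the scenario mechanics described above, the sketch below applies the quoted precipitation deltas (-3%, +7%, +25%) and temperature increases (+2.5 to +3.5 °C) to a daily forcing series before a rainfall-runoff model such as SWAT would be re-run. The series and function names are hypothetical placeholders, not data or code from the study.

    # Illustrative sketch: imposing the low/median/high climate deltas from the
    # abstract on a synthetic daily forcing series. All inputs are placeholders.
    import numpy as np

    def perturb_forcing(precip_mm, temp_c, dp_frac, dt_c):
        """Scale daily precipitation by (1 + dp_frac) and shift temperature by dt_c."""
        return precip_mm * (1.0 + dp_frac), temp_c + dt_c

    precip = np.random.gamma(shape=0.5, scale=8.0, size=365)    # synthetic daily rain (mm)
    temp = 20.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, 365))  # synthetic daily temp (C)

    scenarios = {"low": (-0.03, 2.5), "median": (0.07, 3.0), "high": (0.25, 3.5)}
    for name, (dp, dt) in scenarios.items():
        p, t = perturb_forcing(precip, temp, dp, dt)
        print(f"{name}: annual precip {p.sum():.0f} mm, mean temp {t.mean():.1f} C")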
Abstract:
This paper is a continuation of the paper titled "Concurrent multi-scale modeling of civil infrastructure for analyses on structural deterioration—Part I: Modeling methodology and strategy", with emphasis on model updating and verification for the developed concurrent multi-scale model. The sensitivity-based parameter updating method was applied, and important issues such as the selection of reference data and model parameters, and the model updating procedure for the multi-scale model, were investigated based on a sensitivity analysis of the selected model parameters. Experimental modal data, as well as static responses in terms of component nominal stresses and hot-spot stresses at the locations of concern, were used for dynamic response- and static response-oriented model updating, respectively. The updated multi-scale model was further verified so that it can act as the baseline model, i.e., the finite-element model closest to the real state of the structure that is available for subsequent numerical simulation. A comparison of dynamic and static responses between results calculated with the final model and measured data indicated that the updating and verification methods applied in this paper are reliable and accurate for multi-scale models of frame-like structures. General procedures for multi-scale model updating and verification were finally proposed for nonlinear physics-based modeling of large civil infrastructure, and they were applied to the model verification of a long-span bridge as a practical engineering application of the proposed procedures.
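Sensitivity-based parameter updating of the kind the abstract refers to is commonly posed as an iterative least-squares problem, each step solving dθ = (SᵀS)⁻¹Sᵀ(r_measured - r_model) for the sensitivity matrix S. A minimal sketch of one such step under that standard formulation follows; the matrices are synthetic placeholders, not values from the bridge model.

    # One sensitivity-based updating step in the standard least-squares form,
    # where S[i, j] is the sensitivity of response i to parameter j.
    import numpy as np

    def update_step(S, r_measured, r_model):
        """One Gauss-Newton update of the model parameters."""
        residual = r_measured - r_model
        d_theta, *_ = np.linalg.lstsq(S, residual, rcond=None)
        return d_theta

    S = np.array([[0.8, 0.1], [0.2, 0.9], [0.5, 0.4]])  # 3 responses, 2 parameters
    r_meas = np.array([1.05, 2.10, 1.52])               # e.g. modal frequencies / stresses
    r_mod = np.array([1.00, 2.00, 1.50])
    print("parameter update:", update_step(S, r_meas, r_mod))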
Abstract:
This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored, and the effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic, multi-agent robot soccer domain (RoboCup).
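A minimal sketch of the coarse-grained grid modeling step described above: observed positions of a moving element are accumulated into cell counts, and the normalized counts serve as a simple occupancy prediction for behavior selection. The grid size and observations are illustrative, not the paper's implementation.

    # Coarse-grained grid model: bin observed (x, y) positions of a moving
    # element into cells; normalized counts give an occupancy prediction.
    import numpy as np

    GRID = (6, 4)                       # coarse field grid (cells in x, y)
    counts = np.zeros(GRID)

    def observe(x, y, field_w=6.0, field_h=4.0):
        """Record one observation of the moving element on the grid."""
        i = min(int(x / field_w * GRID[0]), GRID[0] - 1)
        j = min(int(y / field_h * GRID[1]), GRID[1] - 1)
        counts[i, j] += 1

    for x, y in [(1.2, 0.5), (1.3, 0.6), (4.8, 3.1), (1.1, 0.4)]:
        observe(x, y)

    occupancy = counts / counts.sum()   # predicted probability per cell
    print("most likely cell:", np.unravel_index(occupancy.argmax(), GRID))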
Abstract:
Engineering asset management (EAM) is a broad discipline, and EAM functions and processes are characterized by their distributed nature. However, engineering asset maintenance nowadays mostly relies on self-maintained experiential rule bases and periodic maintenance, lacking a collaborative engineering approach. This research proposes a collaborative environment integrated by a service center with domain expertise in areas such as diagnosis, prognosis, and asset operations. The collaborative maintenance chain combines asset operation sites, the service center (i.e., the maintenance operation coordinator), the system provider, first-tier collaborators, and maintenance part suppliers. Meanwhile, to automate communication and negotiation among organizations, the multiagent system (MAS) technique is applied to enhance the overall service level. During the MAS design process, this research combines the Prometheus MAS modeling approach with Petri-net modeling methodology and the Unified Modeling Language to visualize and rationalize the design process of the MAS. The major contributions of this research are a Petri-net enabled Prometheus MAS modeling methodology and a collaborative agent-based maintenance chain framework for integrated EAM.
Abstract:
Osmotic treatments are often applied prior to the convective drying of foods to improve sensory appeal. During such treatment, a multicomponent mass flow takes place, composed mainly of water and the osmotic agent. In this work, a heat and mass transfer model for the osmo-convective drying of yacon was developed and solved by the Finite Element Method using COMSOL Multiphysics®, considering a 2-D axisymmetric geometry and moisture-dependent thermophysical properties. Yacon slices were osmotically dehydrated for 2 hours in a sucralose solution and then dried in a tray dryer for 3 hours. The model was validated against experimental data of temperature, moisture content and sucralose uptake (R² > 0.90).
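The full model is a 2-D axisymmetric finite element solution, but the underlying mass-transfer step is Fickian moisture diffusion; the heavily reduced 1-D finite-difference sketch below illustrates that step only. The diffusivity, geometry and moisture values are hypothetical, not the yacon parameters.

    # Greatly simplified 1-D stand-in for the 2-D axisymmetric FEM model:
    # explicit finite differences for Fick's second law, dM/dt = D d2M/dx2,
    # with a fixed surface moisture (drying air) boundary condition.
    import numpy as np

    D = 1e-9                      # effective moisture diffusivity (m^2/s), assumed constant
    L = 0.005                     # slab half-thickness (m), hypothetical
    n = 51
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / D          # stable explicit time step (Fourier number 0.4)

    M = np.full(n, 4.0)           # initial moisture (kg water / kg dry solid), hypothetical
    M_surface = 0.2               # equilibrium moisture at the drying surface

    for _ in range(int(3 * 3600 / dt)):      # 3 h of drying, as in the experiment
        M[-1] = M_surface                    # surface boundary condition
        M[1:-1] += D * dt / dx**2 * (M[2:] - 2 * M[1:-1] + M[:-2])
        M[0] = M[1]                          # symmetry (zero-flux) at the center

    print(f"mean moisture after 3 h: {M.mean():.2f}")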
Abstract:
In this paper, a novel data-driven approach to the monitoring of systems operating under variable conditions is described. The method is based on characterizing the degradation process via a set of operation-specific hidden Markov models (HMMs), whose hidden states represent the unobservable degradation states of the monitored system while its observable symbols represent the sensor readings. Using the HMM framework, modeling, identification and monitoring methods are detailed that allow one to identify an HMM of degradation for each operation from mixed-operation data and to perform operation-specific monitoring of the system. Using a large data set provided by a major manufacturer, the new methods are applied to a semiconductor manufacturing process running multiple operations in a production environment.
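A self-contained sketch of the monitoring step under this framework: given an operation-specific HMM (transition matrix A, emission matrix B, initial distribution pi, all hypothetical here), the standard forward algorithm turns a window of discretized sensor symbols into a filtered distribution over hidden degradation states.

    # Forward filtering with a hypothetical 3-state / 4-symbol HMM: the
    # filtered distribution over degradation states is updated per symbol.
    import numpy as np

    A = np.array([[0.95, 0.05, 0.00],    # degradation states: good -> worn -> failed
                  [0.00, 0.90, 0.10],
                  [0.00, 0.00, 1.00]])
    B = np.array([[0.7, 0.2, 0.1, 0.0],  # P(sensor symbol | state)
                  [0.1, 0.4, 0.4, 0.1],
                  [0.0, 0.1, 0.3, 0.6]])
    pi = np.array([1.0, 0.0, 0.0])

    def forward_filter(obs):
        """Return P(state | observations) after the last symbol in obs."""
        alpha = pi * B[:, obs[0]]
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
            alpha /= alpha.sum()
        return alpha

    print("state beliefs:", forward_filter([0, 1, 2, 2, 3]))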
Abstract:
Business Process Management describes a holistic management approach for the systematic design, modeling, execution, validation, monitoring and improvement of organizational business processes. Traditionally, most attention within this community has been given to control-flow aspects, i.e., the ordering and sequencing of business activities, often in isolation from the context in which these activities occur. In this paper, we propose an approach that allows executable process models to be integrated with Geographic Information Systems. This approach enables process models to take geospatial and other geographic aspects into account in an explicit manner, both during the modeling phase and the execution phase. We contribute a structured modeling methodology, based on the well-known Business Process Model and Notation standard, which is formalized by means of a mapping to executable Colored Petri nets. We illustrate the feasibility of our approach by means of a sustainability-focused case example of a process with important ecological concerns.
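As a toy illustration of the kind of mapping the paper formalizes, the sketch below treats a process activity as a Colored Petri net transition whose tokens carry case data (here a geospatial attribute), with a guard on the token color deciding whether the transition may fire. All names and the guard predicate are illustrative, not the paper's formal mapping.

    # Toy Colored Petri net transition with a geospatial guard: tokens are
    # dicts carrying case data; a transition fires only if its guard holds.
    places = {"received": [{"case": 1, "lat": -27.5, "lon": 153.0}], "inspected": []}

    def fire(src, dst, guard):
        """Move the first enabling token from src to dst if the guard holds."""
        for token in places[src]:
            if guard(token):
                places[src].remove(token)
                places[dst].append(token)
                return True
        return False            # transition not enabled

    in_region = lambda t: -28.0 < t["lat"] < -27.0   # geospatial guard, illustrative
    fire("received", "inspected", in_region)
    print(places)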
Abstract:
Organizational and technological systems analysis and design practices such as process modeling have received much attention in recent years. However, while knowledge about related artifacts such as models, tools, or grammars has substantially matured, little is known about the actual tasks and interaction activities that are conducted as part of analysis and design acts. In particular, the key role of the facilitator has not been researched extensively to date. In this paper, we propose a new conceptual framework that can be used to examine facilitation behaviors in process modeling projects. The framework distinguishes four behavioral styles in facilitation (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that a facilitator can adopt. To distinguish between the four styles, we provide a set of ten behavioral anchors that underpin facilitation behaviors. We also report on a preliminary empirical exploration of our framework through interviews with experienced analysts in six modeling cases. Our research provides a conceptual foundation for an emerging theory describing and explaining the different behaviors associated with process modeling facilitation, offers first preliminary empirical results about facilitation in modeling projects, and establishes a fertile basis for examining facilitation in other conceptual modeling activities.
Abstract:
Forty-four study sites were established in remnant woodland in the Burdekin River catchment in tropical north-east Queensland, Australia, to assess recent (decadal) vegetation change. The aim of this study was to evaluate further whether wide-scale vegetation 'thickening' (the proliferation of woody plants in formerly more open woodlands) had occurred during the last century, coinciding with significant changes in land management. Soil samples from several depth intervals were size-separated into different soil organic carbon (SOC) fractions, which differed from one another in chemical composition and turnover times. Tropical (C4) grasses dominate in the Burdekin catchment, and thus δ13C analyses of SOC fractions with different turnover times can be used to assess whether the relative proportion of trees (C3) and grasses (C4) had changed over time. However, a method was required to permit standardized assessment of the δ13C data for the individual sites within the 13 Mha catchment, which varied in soil and vegetation characteristics. Thus, an index was developed, using data from three detailed study sites and the global literature, to standardize individual isotopic data from different soil depths and SOC fractions so that they reflect only the changed proportion of trees (C3) to grasses (C4) over decadal timescales. When applied to the 44 individual sites distributed throughout the Burdekin catchment, 64% of the sites were shown to have experienced decadal vegetation thickening, while 29% had remained stable and the remaining 7% had thinned. The development of this index thus enabled regional-scale assessment and comparison of decadal vegetation patterns without having to rely on prior knowledge of vegetation changes or aerial photography.
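The calculation underlying such an index is a standard two-end-member isotope mixing model: given δ13C signatures for C3 trees and C4 grasses, the C3 fraction of a SOC sample follows directly from its δ13C. A sketch with typical literature end-member values (the abstract does not state the values calibrated in the study):

    # Two-end-member mixing: the C3 (tree) fraction of a SOC sample from its
    # d13C. End members are typical literature figures, not the study's values.
    D13C_C3 = -27.0   # per mil, typical C3 (tree) end member
    D13C_C4 = -13.0   # per mil, typical C4 (tropical grass) end member

    def c3_fraction(d13c_sample):
        """Fraction of SOC derived from C3 (woody) vegetation."""
        f = (d13c_sample - D13C_C4) / (D13C_C3 - D13C_C4)
        return min(max(f, 0.0), 1.0)    # clamp to the physical range

    # Comparing a fast-turnover SOC fraction with a slow one suggests thickening
    # when the recent (fast) fraction is more C3-dominated than the older one.
    print(c3_fraction(-20.0) - c3_fraction(-17.0))  # positive -> thickening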
Abstract:
In Australia, communities are concerned about atrazine being detected in drinking water supplies. It is important to understand the mechanisms by which atrazine is transported from paddocks to waterways if we are to reduce the movement of agricultural chemicals from the site of application. Two paddocks cropped with grain sorghum on a Black Vertosol were monitored for atrazine, potassium chloride (KCl) extractable atrazine, desethylatrazine (DEA), and desisopropylatrazine (DIA) at 4 soil depths (0-0.05, 0.05-0.10, 0.10-0.20, and 0.20-0.30 m) and in runoff water and runoff sediment. Atrazine + DEA + DIA (total atrazine) had a half-life in soil of 16-20 days, a more rapid dissipation than in many earlier reports. Atrazine extracted in dilute potassium chloride, considered available for weed control, was initially 34% of the total and had a half-life of 15-20 days until day 30, after which it dissipated rapidly with a half-life of 6 days. We conclude that, in this region, atrazine may not pose a risk for groundwater contamination, as only 0.5% of applied atrazine moved deeper than 0.20 m into the soil, where it dissipated rapidly. In runoff (including suspended sediment), atrazine concentrations were greatest (85 μg/L) during the first runoff event, 57 days after application, and declined with time. After 160 days, the total atrazine lost in runoff was 0.4% of the initial application. The total atrazine concentration in runoff was strongly related to the total concentration in soil, as expected. Even after 98% of the KCl-extractable atrazine had dissipated (and no longer provided weed control), runoff concentrations still exceeded the human health guideline value of 40 μg/L. For total atrazine in soil (0-0.05 m), the coefficient of soil sorption (Kd) ranged from 1.9 to 28.4 mL/g and the soil organic carbon sorption coefficient (KOC) from 100 to 2184 mL/g, increasing with time of contact with the soil and with the rapid dissipation of the more soluble, available phase. Partition coefficients in runoff for total atrazine were initially 3, increasing to 32 and 51 with time; values for DEA were half these. To minimise atrazine losses, cultural practices should be adopted that maximise rain infiltration (thereby minimising runoff) and minimise concentrations in the soil surface.
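The quoted half-lives imply first-order dissipation, C(t) = C0·exp(-kt) with k = ln 2 / t½; the sketch below reproduces that arithmetic together with the Kd/KOC relationship. The initial concentration is a placeholder; the half-life and sorption figures are those quoted above.

    # First-order dissipation implied by the quoted half-lives.
    import math

    def concentration(c0, t_half_days, t_days):
        """C(t) = C0 * exp(-k t), with k = ln(2) / t_half."""
        k = math.log(2) / t_half_days
        return c0 * math.exp(-k * t_days)

    c0 = 100.0                                   # arbitrary initial units
    print(concentration(c0, 18, 57))             # remaining fraction at first runoff event
    # Kd relates sorbed to dissolved phases, Kd = C_soil / C_water, and
    # KOC = Kd / f_oc: e.g. Kd = 1.9 mL/g with 1.9% organic carbon gives
    print(1.9 / 0.019)                           # KOC = 100 mL/g, the low end quoted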
Abstract:
Surface losses of nitrogen from horticulture farms in coastal Queensland, Australia, may have the potential to eutrophy sensitive coastal marine habitats nearby. A case study of the potential extent of such losses was conducted in a coastal macadamia plantation. Nitrogen losses were quantified in 5 consecutive runoff events during the 13-month study. Irrigation did not contribute to surface flows. Runoff was generated by storms with combined intensities and durations of 20-40 mm/h for >9 min. These intensities and durations were within expected short-term (1 year) and long-term (up to 20 years) rainfall frequencies in the study area. Surface flow volumes were 5.3 ± 1.1% of the episodic rainfall generated by such storms; the largest part of each rainfall event was therefore attributed to infiltration and drainage in this farm soil (Kandosol). The estimated annual loss of total nitrogen in runoff was 0.26 kg N/ha.year, representing a minimal loading of nitrogen in surface runoff compared with other studies. The weighted average concentrations of total sediment nitrogen (TSN) and total dissolved nitrogen (TDN) generated in the farm runoff were 2.81 ± 0.77% N and 1.11 ± 0.27 mg N/L, respectively. These concentrations were considerably greater than ambient levels in an adjoining catchment waterway, where concentrations of TSN and TDN were 0.11 ± 0.02% N and 0.50 ± 0.09 mg N/L, respectively. The steep concentration gradient of TSN and TDN between the farm runoff and the waterway demonstrated the occurrence of nutrient loading from the farming landscapes to the waterway. The TDN levels in the stream exceeded the currently specified threshold of 0.2-0.3 mg N/L for eutrophication of such a waterway. Therefore, while the estimated annual loading of N from runoff losses was comparatively low, it was evident that the stream catchment and its associated agricultural land uses were already characterised by significant nitrogen loadings that pose eutrophication risks. The reported levels of nitrogen and the proximity of such waterways (8 km) to the coastline may also have implications for the nearshore (oligotrophic) marine environment during periods of turbulent flow.
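The arithmetic behind load figures such as 0.26 kg N/ha.year is a flow-weighted mean concentration and an area-normalized load summed over runoff events; a sketch with hypothetical event data (not the study's measurements):

    # Flow-weighted mean concentration and area-normalized annual load summed
    # over runoff events. Event volumes/concentrations and area are hypothetical.
    events = [  # (runoff volume, m^3; total dissolved N, mg/L)
        (120.0, 1.4), (80.0, 1.0), (200.0, 1.1), (60.0, 0.9), (150.0, 1.2),
    ]
    area_ha = 10.0   # hypothetical monitored area

    total_volume = sum(v for v, _ in events)
    weighted_conc = sum(v * c for v, c in events) / total_volume   # mg/L
    load_kg = sum(v * c for v, c in events) * 1000 / 1e6           # m^3 * mg/L -> kg
    print(f"flow-weighted TDN: {weighted_conc:.2f} mg/L")
    print(f"annual load: {load_kg / area_ha:.2f} kg N/ha.year")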
Abstract:
Targets for improvements in the quality of water entering the Great Barrier Reef (GBR) have been set through the Reef Water Quality Protection Plan (Reef Plan). To measure and report on progress towards these targets, a program has been established that combines monitoring and modelling at paddock through to catchment and reef scales: the Paddock to Reef Integrated Monitoring, Modelling and Reporting Program (Paddock to Reef Program). This program aims to provide evidence of the links between land management activities, water quality and reef health. Five lines of evidence are used: the effectiveness of management practices in improving water quality; the prevalence of management practice adoption and change in catchment indicators; long-term monitoring of catchment water quality; paddock and catchment modelling to provide a relative assessment of progress towards meeting the targets; and, finally, marine monitoring of GBR water quality and reef ecosystem health. This paper outlines the first four lines of evidence.
Abstract:
The questions one should answer about engineering computations - deterministic, probabilistic/randomized, or heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain the outputs. The absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations involving the real quantities in nature/natural processes are exact, the computations we perform on a digital computer, or in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error - as a matter of hypothesis, not of assumption - is not less than 0.005 per cent. By error we here mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency or near-inconsistency in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost - the amount of computation and of storage - through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
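A small sketch of one notion the talk builds on: a relative error bound on an input (using the 0.005 per cent instrument floor mentioned above) propagated through a simple computation via interval evaluation, showing how the output bound widens. The computation f(x) = x³ is purely illustrative.

    # Propagating the 0.005% minimum instrument error (a relative bound)
    # through f(x) = x**3 with a simple interval evaluation (monotone case).
    REL_BOUND = 5e-5          # 0.005 per cent, expressed as a relative bound

    x = 2.0                   # measured value; true value lies in x*(1 +/- REL_BOUND)
    lo, hi = x * (1 - REL_BOUND), x * (1 + REL_BOUND)

    f_lo, f_hi = lo**3, hi**3
    mid = (f_lo + f_hi) / 2
    rel_out = (f_hi - f_lo) / (2 * abs(mid))
    print(f"output relative error bound: {rel_out:.2e}")   # ~3x the input bound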
Abstract:
Finite element modeling can be a useful tool for predicting the behavior of composite materials and for arriving at filler contents that maximize mechanical performance. In the present study, to corroborate finite element analysis results, quantitative information on the effect of reinforcing polypropylene (PP) with various proportions of nanoclay (in the range of 3-9% by weight) was obtained through experiments; in particular, attention was paid to the Young's modulus, tensile strength and failure strain. Micromechanical finite element analysis combined with Monte Carlo simulation was carried out to establish the validity of the modeling procedure and the accuracy of its predictions by comparison against experimentally determined stiffness moduli of the nanocomposites. In the same context, predictions of Young's modulus yielded by theoretical micromechanics-based models are compared with the experimental results. Macromechanical modeling was performed to capture the non-linear stress-strain behavior, including failure, observed in the experiments, as this is deemed a more viable tool for analyzing products made of nanocomposites, including dynamic applications.
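One commonly used micromechanics model of the kind the abstract compares against experiments is the Halpin-Tsai equation; the abstract does not name the specific models or property values used, so those below are hypothetical.

    # Halpin-Tsai estimate of composite Young's modulus. Property values and
    # the shape factor are hypothetical, not the study's inputs.
    def halpin_tsai(E_m, E_f, phi, zeta):
        """E_m, E_f: matrix/filler moduli; phi: filler volume fraction;
        zeta: shape factor (roughly 2x aspect ratio for platelets)."""
        eta = (E_f / E_m - 1) / (E_f / E_m + zeta)
        return E_m * (1 + zeta * eta * phi) / (1 - eta * phi)

    E_pp = 1.5e9        # PP matrix modulus (Pa), illustrative
    E_clay = 170e9      # nanoclay platelet modulus (Pa), illustrative
    for phi in (0.01, 0.02, 0.03):   # ~3-9 wt% clay spans roughly this volume range
        print(f"phi={phi:.2f}: E_c = {halpin_tsai(E_pp, E_clay, phi, 100):.2e} Pa")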