940 results for Hydrologic Modeling Processes and River Flows
Abstract:
Canada releases over 150 billion litres of untreated and undertreated wastewater into the water environment every year. To clean up urban wastewater, the new federal Wastewater Systems Effluent Regulations (WSER), which establish national baseline effluent quality standards achievable through secondary wastewater treatment, were enacted on July 18, 2012. With respect to wastewater from combined sewer overflows (CSOs), the Regulations require municipalities to report the annual quantity and frequency of effluent discharges. The City of Toronto currently has about 300 CSO locations within an area of approximately 16,550 hectares. The total sewer length of the CSO area is about 3,450 km and the number of sewer manholes is about 51,100. System-wide monitoring of all CSO locations has never been undertaken because of its cost and impracticality. Instead, the City has relied on estimation methods and modelling approaches, allowing funds that would otherwise be used for monitoring to be applied to reducing the impacts of the CSOs. To fulfill the WSER requirements, the City is now undertaking a study based on GIS-based hydrologic and hydraulic modelling. Results show the usefulness of this approach for 1) determining the flows contributing to the combined sewer system in the local and trunk sewers under dry weather flow, wet weather flow, and snowmelt conditions; 2) assessing the hydraulic grade line and surface water depth in all local and trunk sewers under heavy rain events; 3) analysing local and trunk sewer capacities for future growth; and 4) reporting the annual quantity and frequency of CSOs as required by the new Regulations. This modelling approach has also allowed funds to be applied toward reducing and ultimately eliminating the adverse impacts of CSOs rather than expending resources on unnecessary and costly monitoring.
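For reporting purposes, the modelled overflow hydrographs still have to be reduced to the two numbers the WSER asks for: annual quantity and frequency. A minimal post-processing sketch follows; the function name, the 5-minute time step and the detection threshold are illustrative assumptions, not part of the City's actual tooling.

    import numpy as np

    def annual_cso_stats(flow_m3s, dt_s=300.0, threshold_m3s=0.001):
        """Annual overflow volume (m^3) and event count for one outfall,
        from a modelled overflow series (numpy array, sampled every dt_s s)."""
        active = flow_m3s > threshold_m3s            # time steps with overflow
        volume = float(flow_m3s[active].sum() * dt_s)
        # Each inactive-to-active transition starts a discrete overflow event.
        n_events = int((np.diff(active.astype(int)) == 1).sum()) + int(active[0])
        return volume, n_events

Run per outfall and per year, this yields exactly the quantity/frequency pair the Regulations require.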
Abstract:
In fluvial systems, the relationship between a dominant variable (e.g. the flood pulse) and its dependent ones (e.g. riparian vegetation) is called connectivity. This paper analyzes the connectivity elements and processes controlling riparian vegetation for a reach of the upper Paraná River (Brazil) and estimates future changes in the channel-vegetation relationship as a consequence of the operation of a large dam. The studied reach is situated 30 km downstream from the Porto Primavera Dam (construction finished in 1999). Through aerial photography (1:25,000, 1996), RGB-CBERS satellite imagery and a previous field botany survey it was possible to elaborate a map with the five major morpho-vegetation units: 1) tree-dominated natural levee, 2) shrubby upper floodplain, 3) shrub-herbaceous mid floodplain, 4) grass-herbaceous lower floodplain and 5) shrub-herbaceous flood runoff channel. Using a detailed topographic survey and statistical tools, each morpho-vegetation unit was analyzed according to its connectivity parameters (frequency, recurrence, permanence, seasonality, potamophase, limnophase and FCQ index) in the pre- and post-dam-closure periods of the historical series. The data showed that most of the morpho-vegetation units were predicted to undergo changes in connectivity parameter values after dam closure, and that the new regime could affect, with different intensities, the river ecology and particularly the riparian vegetation. The methods used in this study can be useful for dam impact studies in other South American tropical rivers.
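Two of the listed connectivity parameters translate directly into simple statistics of a stage record. The sketch below is a hedged illustration of that idea, computing potamophase (fraction of time a unit is inundated), limnophase and the count of discrete flood pulses from a daily water-level series and a unit's surveyed elevation; the paper's full parameter set (recurrence, permanence, FCQ index) is richer than this.

    import numpy as np

    def connectivity_params(stage_m, unit_elev_m):
        """Potamophase, limnophase and flood-pulse count for one
        morpho-vegetation unit, from a daily stage series (numpy array, m)."""
        wet = stage_m >= unit_elev_m       # days the unit is connected
        potamophase = float(wet.mean())    # fraction of days inundated
        limnophase = 1.0 - potamophase     # fraction of days disconnected
        pulses = int((np.diff(wet.astype(int)) == 1).sum()) + int(wet[0])
        return potamophase, limnophase, pulses

Comparing these statistics between the pre- and post-dam portions of the historical series reproduces the kind of before/after contrast the study reports.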
Abstract:
Design parameters, process flows, electro-thermal-fluidic simulations and experimental characterizations of Micro-Electro-Mechanical Systems (MEMS) suited for gas-chromatographic (GC) applications are presented and thoroughly described in this thesis. Its topic belongs to the research activities in which the Institute for Microelectronics and Microsystems (IMM)-Bologna has been involved for several years, namely the development of micro-systems for chemical analysis, based on silicon micro-machining techniques and able to analyse complex gaseous mixtures, especially in the field of environmental monitoring. In this regard, attention has been focused on the development of micro-fabricated devices to be employed in a portable mini-GC system for the analysis of aromatic Volatile Organic Compounds (VOCs) such as benzene, toluene, ethyl-benzene and xylene (BTEX), i.e. chemical compounds which can significantly affect the environment and human health because of their demonstrated carcinogenicity (benzene) or toxicity (toluene, xylene) even at parts-per-billion (ppb) concentrations. The most significant results achieved through the laboratory functional characterization of the mini-GC system are reported, together with in-field analysis results obtained at a station of the Bologna air monitoring network and compared with those provided by a commercial GC system. The development of more advanced prototypes of micro-fabricated devices specifically suited for FAST-GC is also presented (silicon capillary columns, Ultra-Low-Power (ULP) Metal OXide (MOX) sensor, Thermal Conductivity Detector (TCD)), together with the technological processes for their fabrication. The experimentally demonstrated very high sensitivity of ULP-MOX sensors to VOCs, coupled with their extremely low power consumption, makes the developed ULP-MOX sensor the best-performing metal oxide sensor reported in the literature to date, while preliminary tests showed that the developed silicon capillary columns achieve performance comparable to that of the best fused-silica capillary columns. Finally, the development and validation of a coupled electro-thermal Finite Element Model suited for both steady-state and transient analysis of the micro-devices is described; the model was subsequently extended with a fluidic part to investigate device behaviour in the presence of a gas flowing at given volumetric flow rates.
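To make the electro-thermal part of such a model concrete, here is a minimal, purely illustrative 1D transient sketch: heat conduction along a silicon micro-bridge with a Joule-heating source term, solved by explicit finite differences. Geometry, material constants and the heating power density are assumptions for illustration, not values from the thesis.

    import numpy as np

    L, n = 1e-3, 101                      # 1 mm silicon bridge, 101 nodes
    dx = L / (n - 1)
    k, rho, cp = 150.0, 2330.0, 700.0     # W/(m K), kg/m^3, J/(kg K): bulk Si
    alpha = k / (rho * cp)                # thermal diffusivity
    dt = 0.4 * dx**2 / alpha              # stable explicit time step
    q = np.zeros(n)
    q[n//2 - 5:n//2 + 5] = 1e12           # Joule heating (W/m^3) in the heater
    T = np.full(n, 300.0)                 # ends held at ambient temperature
    for _ in range(2000):                 # march the transient forward
        T[1:-1] += dt * (alpha * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
                         + q[1:-1] / (rho * cp))
    print(f"peak temperature: {T.max():.0f} K")

A FEM tool does the same energy balance on an unstructured 3D mesh, with temperature-dependent properties and the electrical problem solved alongside.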
Abstract:
Oxidation processes can be used to treat industrial wastewater containing non-biodegradable organic compounds. However, the presence of dissolved salts may inhibit or retard the treatment process. In this study, wastewater desalination by electrodialysis (ED) associated with an advanced oxidation process (photo-Fenton) was applied to an aqueous NaCl solution containing phenol. The influence of process variables on the demineralization factor was investigated for ED at pilot scale, and a correlation was obtained between the phenol, salt and water fluxes and the driving force. The oxidation process was investigated in a laboratory batch reactor, and a model based on artificial neural networks was developed by fitting the experimental data, describing the reaction rate as a function of the input variables. With the experimental parameters of both processes, a dynamic model was developed for ED and a continuous model, using a plug-flow reactor approach, for the oxidation process. Finally, the hybrid model simulation could validate different scenarios of the integrated system and can be used for process optimization.
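The continuous oxidation model reduces to integrating the concentration along the reactor's residence time, with the fitted neural network supplying the rate. The sketch below shows that plug-flow integration with a hypothetical first-order rate standing in for the ANN; the function names and numbers are illustrative, not the paper's model.

    # Plug-flow model of the oxidation stage: phenol concentration is marched
    # along residence time; rate() is a stand-in for the fitted ANN rate model.
    def rate(c_phenol):
        return 2.0e-3 * c_phenol            # illustrative first-order surrogate

    def pfr_outlet(c_in, tau_s, n_steps=1000):
        c, dt = c_in, tau_s / n_steps
        for _ in range(n_steps):            # forward-Euler march along reactor
            c -= rate(c) * dt
        return c

    print(pfr_outlet(c_in=1.0e-3, tau_s=600.0))   # outlet after 10 min residence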
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods, and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the model's closure hypothesis is demonstrated by comparison with direct numerical simulation results for the three-stream mixing problem.
Abstract:
MOTIVATION: Understanding gene regulation in biological processes and modeling the robustness of the underlying regulatory networks is an important problem currently being addressed by computational systems biologists. Lately, there has been renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of the stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence to non-correspondence with biological observations. RESULTS: In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. AVAILABILITY: Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
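The distinction between the two noise models is easy to state in code. In the toy sketch below, SIN perturbs node states directly with probability p, while SIF perturbs the output of each update function; the three-gene network and the uniform flip probability are illustrative only (in the actual SIF model the fault probability is tied to the structure of each Boolean function).

    import random

    rules = {                              # toy Boolean GRN update functions
        "A": lambda s: s["C"],
        "B": lambda s: s["A"] and not s["C"],
        "C": lambda s: s["A"] or s["B"],
    }

    def step(state, p, model):
        new = {}
        for gene, f in rules.items():
            v = f(state)
            if model == "SIF" and random.random() < p:
                v = not v                  # fault in the function's output
            new[gene] = v
        if model == "SIN":
            for gene in new:
                if random.random() < p:
                    new[gene] = not new[gene]   # flip the node itself
        return new

    state = {"A": True, "B": False, "C": False}
    for _ in range(10):
        state = step(state, p=0.01, model="SIF")
    print(state)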
Abstract:
Particle fluxes (including major components and grain size) and oceanographic parameters (near-bottom water temperature, current speed and suspended sediment concentration) were measured along the Cap de Creus submarine canyon in the Gulf of Lions (GoL; NW Mediterranean Sea) during two consecutive winter-spring periods (2009-2010 and 2010-2011). Comparison of these data with measurements of meteorological and hydrological parameters (wind speed, turbulent heat flux, river discharge) has shown the important role of atmospheric forcing in transporting particulate matter through the submarine canyon and towards the deep sea. Indeed, atmospheric forcing during the 2009-2010 and 2010-2011 winter months differed in both intensity and persistence, leading to distinct oceanographic responses. Persistent dry northern winds caused strong heat losses (14.2 × 10³ W m⁻²) in winter 2009-2010 that triggered pronounced sea-surface cooling compared to winter 2010-2011 (1.6 × 10³ W m⁻² lower). As a consequence, a large volume of dense shelf water formed in winter 2009-2010, which cascaded at high speed (up to ∼1 m s⁻¹) down Cap de Creus Canyon, as measured by a current meter at the head of the canyon. The lower heat losses recorded in winter 2010-2011, together with increased river discharge, resulted in lower-density waters over the shelf, thus preventing the formation and downslope transport of dense shelf water. High total mass fluxes (up to 84.9 g m⁻² d⁻¹) recorded in winter-spring 2009-2010 indicate that dense shelf water cascading resuspended and transported sediments at least down to the middle canyon. Sediment fluxes were lower (28.9 g m⁻² d⁻¹) under the quieter conditions of winter 2010-2011. The dominance of the lithogenic fraction in mass fluxes during the two winter-spring periods points to a resuspension origin for most of the particles transported down-canyon. The variability in organic matter and opal contents relates to seasonally controlled inputs associated with the plankton spring bloom during March and April of both years.
Abstract:
This work describes techniques for modeling, optimizing and simulating calibration processes of robots using off-line programming. The identification of the geometric parameters of the nominal kinematic model is optimized using numerical optimization of the mathematical model. The actual robot and the measurement system are simulated by introducing random errors representing their physical behavior and statistical repeatability. Evaluation of the corrected nominal kinematic model gives a clear picture of the influence of the distinct variables involved in the process, supports suitable planning, and indicates a considerable accuracy improvement when the optimized model is compared to the non-optimized one.
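The simulate-then-identify loop described here can be condensed into a few lines. The sketch below is a hedged illustration rather than the authors' procedure: it calibrates the link lengths of a planar two-joint arm by generating noisy end-effector positions from the "actual" arm and then correcting the nominal lengths by nonlinear least squares.

    import numpy as np
    from scipy.optimize import least_squares

    true_l = np.array([0.50, 0.35])       # "actual" link lengths (m)
    nominal_l = np.array([0.52, 0.34])    # nominal model to be calibrated
    rng = np.random.default_rng(0)
    q = rng.uniform(-np.pi, np.pi, size=(30, 2))   # measurement poses

    def fk(l, q):                         # forward kinematics, planar 2R arm
        x = l[0]*np.cos(q[:, 0]) + l[1]*np.cos(q[:, 0] + q[:, 1])
        y = l[0]*np.sin(q[:, 0]) + l[1]*np.sin(q[:, 0] + q[:, 1])
        return np.column_stack([x, y])

    # Simulated measurement system: true kinematics plus random error (0.1 mm).
    meas = fk(true_l, q) + rng.normal(0.0, 1e-4, size=(30, 2))

    res = least_squares(lambda l: (fk(l, q) - meas).ravel(), nominal_l)
    print("identified link lengths:", res.x)       # converges near true_l

Repeating the fit over many noise draws, as the paper does, exposes how measurement repeatability propagates into the identified parameters.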
Abstract:
With advances in information technology, economic and financial time series data are increasingly available. However, when standard time series techniques are applied, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of temporal instability in the coefficients of empirical macroeconomic models? Using structural factor analysis and VARMA modeling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor and VARMA models. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyzes the transmission of monetary policy in Canada using a factor-augmented vector autoregression (FAVAR) model. Previous VAR-based studies have found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy, using a large set of economic and financial indicators within a structural factor model.
We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bonds and causes a recession. These shocks have a significant effect on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors; moreover, it provides an interpretation of the factors without restricting their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot both follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using US and Canadian data from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA part helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results for the effects and transmission of monetary policy in the United States. Unlike the FAVAR model used in that study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters for the dynamic factor process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using a structural FAVARMA model. Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the external finance premium in the United States generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has a significant effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected rise in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market.
Finally, given the structural shock identification procedure, we obtain economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is probably very small, and we provide the first such empirical evidence for empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data that include the recent financial crisis; the procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model, where we find that only 5 dynamic factors govern the temporal instability of almost 700 coefficients.
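The core FAVAR mechanics, extracting a few factors from a large panel and fitting a low-order VAR to them, fit in a few lines. The sketch below is a minimal illustration under simplifying assumptions (synthetic data, principal-component factors, a VAR(1) fitted by least squares); real applications append the policy instrument, select lags and impose identification restrictions.

    import numpy as np

    rng = np.random.default_rng(0)
    T, N, k = 200, 120, 3                      # sample size, panel width, factors
    X = rng.standard_normal((T, N))            # stand-in for a large data panel
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each series

    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    F = X @ Vt[:k].T                           # first k principal-component factors

    Y, Z = F[1:], F[:-1]                       # VAR(1): F_t = A F_{t-1} + e_t
    A = np.linalg.lstsq(Z, Y, rcond=None)[0].T
    print("VAR(1) coefficient matrix A:\n", A)

The thesis's FAVARMA point is precisely that this last step is too restrictive: factors built as linear combinations of the panel generally follow a VARMA, not a finite-order VAR.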
Abstract:
Energy production from biomass and the conservation of ecologically valuable grassland habitats are two important issues in agriculture today. The combination of a bioenergy production that minimises environmental impacts and competition with food production for land, with the conservation of semi-natural grasslands through new utilization alternatives for their biomass, led to the development of the IFBB process (integrated generation of solid fuel and biogas from biomass). Its basic principle is the separation of biomass into a liquid fraction (press fluid, PF) for the production of electric and thermal energy after anaerobic digestion to biogas, and a solid fraction (press cake, PC) for the production of thermal energy through combustion. This study was undertaken to explore the mass and energy flows as well as quality aspects of the energy carriers within the IFBB process, and to determine their dependency on biomass-related and technical parameters. Two experiments were conducted in which biomass from semi-natural grassland was conserved as silage and subjected to hydrothermal conditioning and a subsequent mechanical dehydration with a screw press. Methane yield of the PF and of the untreated silage was determined in anaerobic digestion experiments in batch fermenters at 37°C, with fermentation times of 13-15 and 27-35 days for the PF and the silage, respectively. Concentrations of dry matter (DM), ash, crude protein (CP), crude fibre (CF), ether extract (EE), neutral detergent fibre (NDF), acid detergent fibre (ADF), acid detergent lignin (ADL) and elements (K, Mg, Ca, Cl, N, S, P, C, H, N) were determined in the untreated biomass and the PC. Higher heating value (HHV) and ash softening temperature (AST) were calculated from the elemental concentrations. The chemical composition of the PF and the mass flows of all plant compounds into the PF were calculated. In the first experiment, biomass from five different semi-natural grassland swards (Arrhenatherion I and II, Caricion fuscae, Filipendulion ulmariae, Polygono-Trisetion) was harvested at a single late sampling date (19 July or 31 August) and ensiled. Each silage was subjected to three different temperature treatments (5°C, 60°C, 80°C) during hydrothermal conditioning. Based on observed methane yields and HHV as energy output parameters, as well as literature-based and observed energy input parameters, energy and greenhouse gas (GHG) balances were calculated for IFBB and two reference conversion processes: whole-crop digestion of untreated silage (WCD) and combustion of hay (CH). In the second experiment, biomass from one single semi-natural grassland sward (Arrhenatherion) was harvested at eight consecutive dates (27/04, 02/05, 09/05, 16/05, 24/05, 31/05, 11/06, 21/06) and ensiled. Each silage was subjected to six different treatments (no hydrothermal conditioning, and hydrothermal conditioning at 10°C, 30°C, 50°C, 70°C, 90°C). The energy balance was calculated for IFBB and WCD. Multiple regression models were developed to predict mass flows, concentrations of elements in the PC, concentrations of organic compounds in the PF and the energy conversion efficiency of the IFBB process from the temperature of hydrothermal conditioning as well as the NDF and DM concentrations of the silage. Results showed a relative reduction of 20-90% in ash and in all elements detrimental to combustion in the PC compared to the untreated biomass. The reduction was highest for K and Cl and lowest for N. HHV of the PC and of the untreated biomass were in a comparable range (17.8-19.5 MJ kg⁻¹ DM), but the AST of the PC was higher (1156-1254°C).
Methane yields of the PF ranged from 332 to 458 LN kg⁻¹ VS; they were higher than those of WCD when the biomass was harvested late (end of May and later) and in a comparable range when the biomass was harvested early. Regarding energy and GHG balances, IFBB, with a net energy yield of 11.9-14.1 MWh ha⁻¹, a conversion efficiency of 0.43-0.51 and a GHG mitigation of 3.6-4.4 t CO₂eq ha⁻¹, performed better than WCD but worse than CH. WCD produces thermal and electric energy at low efficiency; CH produces only thermal energy, from a low-quality solid fuel, at high efficiency; IFBB produces thermal and electric energy, from a high-quality solid fuel, at medium efficiency. The regression models predicted the target parameters with high accuracy (R²=0.70-0.99). Increasing the temperature of hydrothermal conditioning increased mass flows, decreased element concentrations in the PC and had a mixed effect on energy conversion efficiency. Increasing the NDF concentration of the silage had a mixed effect on mass flows, decreased element concentrations in the PC and increased energy conversion efficiency. Increasing the DM concentration of the silage decreased mass flows, increased element concentrations in the PC and increased energy conversion efficiency. Based on the models, an optimised IFBB process would be obtained with a medium temperature of hydrothermal conditioning (50°C), high NDF concentrations in the silage and medium DM concentrations of the silage.
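The multiple-regression step maps three predictors (conditioning temperature, silage NDF, silage DM) to a process outcome such as energy conversion efficiency. The sketch below shows that kind of least-squares fit on synthetic placeholder data, since the study's measurements are not reproduced here; all coefficients are invented for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    temp = rng.uniform(10, 90, 40)         # conditioning temperature (°C)
    ndf = rng.uniform(450, 650, 40)        # silage NDF (g kg⁻¹ DM)
    dm = rng.uniform(250, 450, 40)         # silage DM (g kg⁻¹)
    # Placeholder response: conversion efficiency plus measurement noise.
    eff = 0.30 + 1e-3*temp + 2e-4*ndf + 1e-4*dm + rng.normal(0, 0.01, 40)

    X = np.column_stack([np.ones_like(temp), temp, ndf, dm])
    beta, *_ = np.linalg.lstsq(X, eff, rcond=None)
    pred = X @ beta
    r2 = 1 - ((eff - pred)**2).sum() / ((eff - eff.mean())**2).sum()
    print("coefficients:", beta.round(5), "R^2:", round(r2, 3))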
Abstract:
Interest in the impacts of climate change is ever increasing. This is particularly true of the water sector, where understanding potential changes in the occurrence of both floods and droughts is important for strategic planning. Natural climate variability has been shown to have a significant impact on UK climate, and accounting for it in future climate change projections is essential to fully anticipate potential future impacts. In this paper a new resampling methodology is developed which includes the variability of both baseline and future precipitation. The resampling methodology is applied to 13 CMIP3 climate models for the 2080s, resulting in an ensemble of monthly precipitation change factors. The change factors are applied to the Eden catchment in eastern Scotland, and the sensitivity of future river flows to the changes in precipitation is analysed. Climate variability is shown to influence the magnitude and direction of change of both precipitation and, in turn, river flow, effects that are not apparent without the resampling methodology. The transformation of precipitation changes into river flow changes displays a degree of non-linearity due to the catchment's role in buffering the response. The resampling methodology developed in this paper provides a new technique for creating climate change scenarios which incorporate the important issue of climate variability.
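A hedged sketch of the resampling idea follows: instead of one change factor per model and month, sub-periods are drawn repeatedly from both the baseline and the future run, yielding a distribution of monthly change factors that carries climate variability through to the flow modelling. The series lengths, the 30-year window and the synthetic data are illustrative assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    base = rng.gamma(2.0, 30.0, size=(140, 12))          # 140 yr x 12 monthly totals
    fut = base * 1.1 + rng.normal(0.0, 5.0, base.shape)  # stand-in future run

    def change_factors(base, fut, window=30, n=1000):
        cf = np.empty((n, 12))
        for i in range(n):                 # resample sub-periods independently
            b0 = rng.integers(0, base.shape[0] - window)
            f0 = rng.integers(0, fut.shape[0] - window)
            cf[i] = fut[f0:f0+window].mean(axis=0) / base[b0:b0+window].mean(axis=0)
        return cf                          # ensemble of monthly change factors

    cf = change_factors(base, fut)
    print("mean change factor:", cf.mean(axis=0).round(2))
    print("spread from variability:", cf.std(axis=0).round(2))

Each sampled change-factor set can then be applied to observed precipitation and run through the hydrological model, giving a flow response range rather than a single trajectory.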
Abstract:
This paper compares the effects of two indicative climate mitigation policies on river flows in six UK catchments with two scenarios representing unmitigated emissions. It considers the consequences for the effects of mitigation policy of uncertainty in both the pattern of catchment climate change, as represented by different climate models, and hydrological model parameterisation. Mitigation policy has little effect on estimated flow magnitudes in 2030. By 2050, a mitigation policy which achieves a 2°C temperature rise target reduces impacts on low flows by 20-25% compared to a business-as-usual emissions scenario which increases temperatures by 4°C by the end of the 21st century, but this is small compared to the range in impacts across different climate model scenarios. However, the analysis also demonstrates that an early peak in emissions would reduce impacts by 40-60% by 2080 (compared with the 4°C pathway), easing the adaptation challenge over the long term, and can delay by several decades the impacts that would otherwise be experienced from around 2050 in the absence of policy. The estimated proportion of impacts avoided varies between climate model patterns and, to a lesser extent, hydrological model parameterisations, due to variations in the projected shape of the relationship between climate forcing and hydrological response.