145 results for water balance modelling
Abstract:
A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have best overall capability to model net all-wave radiation and least capability to model latent heat flux.
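Model skill in a comparison of this kind is typically summarized with statistics such as the root-mean-square error quoted above; the sketch below uses hypothetical flux values, not the Vancouver observations or the project's evaluation code, to show the calculation for the three evaluated fluxes.

```python
import numpy as np

def rmse(modelled, observed):
    """Root-mean-square error between modelled and observed flux series (W m-2)."""
    modelled = np.asarray(modelled, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((modelled - observed) ** 2)))

# Hypothetical hourly fluxes (W m-2): net all-wave radiation Q*, sensible H, latent LE
obs = {"Qstar": np.array([420.0, 380.0, 150.0]),
       "H":     np.array([180.0, 160.0,  60.0]),
       "LE":    np.array([ 40.0,  35.0,  20.0])}
mod = {"Qstar": np.array([400.0, 390.0, 140.0]),
       "H":     np.array([200.0, 150.0,  80.0]),
       "LE":    np.array([ 70.0,  60.0,  30.0])}

for flux in ("Qstar", "H", "LE"):
    print(flux, "RMSE =", round(rmse(mod[flux], obs[flux]), 1), "W m-2")
```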
Abstract:
This paper seeks to illustrate the point that physical inconsistencies between thermodynamics and dynamics usually introduce nonconservative production/destruction terms in the local total energy balance equation in numerical ocean general circulation models (OGCMs). Such terms potentially give rise to undesirable forces and/or diabatic terms in the momentum and thermodynamic equations, respectively, which could explain some of the observed errors in simulated ocean currents and water masses. In this paper, a theoretical framework is developed to provide a practical method to determine such nonconservative terms, which is illustrated in the context of a relatively simple form of the hydrostatic Boussinesq primitive equations used in early versions of OGCMs, for which at least four main potential sources of energy nonconservation are identified; they arise from: (1) the “hanging” kinetic energy dissipation term; (2) assuming potential or conservative temperature to be a conservative quantity; (3) the interaction of the Boussinesq approximation with the parameterizations of turbulent mixing of temperature and salinity; (4) some adiabatic compressibility effects due to the Boussinesq approximation. In practice, OGCMs also possess spurious numerical energy sources and sinks, but they are not explicitly addressed here. Apart from (1), the identified nonconservative energy sources/sinks are not sign definite, allowing for possible widespread cancellation when integrated globally. Locally, however, these terms may be of the same order of magnitude as actual energy conversion terms thought to occur in the oceans. Although the actual impact of these nonconservative energy terms on the overall accuracy and physical realism of the oceans is difficult to ascertain, an important issue is whether they could impact on transient simulations, and on the transition toward different circulation regimes associated with a significant reorganization of the different energy reservoirs. Some possible solutions for improvement are examined. It is thus found that the term (2) can be substantially reduced by at least one order of magnitude by using conservative temperature instead of potential temperature. Using the anelastic approximation, however, which was initially thought to be a possible way to greatly improve the accuracy of the energy budget, would only marginally reduce the term (4) with no impact on the terms (1), (2) and (3).
Abstract:
Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
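The Cole–Cole relaxation model mentioned above has a standard closed form; the sketch below shows how it yields a frequency-dependent complex permittivity and a power absorption coefficient. The parameter values are illustrative for a water-like medium, not fitted to the phantom used in the study.

```python
import numpy as np

def cole_cole_permittivity(freq_hz, eps_s, eps_inf, tau_s, alpha):
    """Complex relative permittivity from the Cole-Cole relaxation model."""
    omega = 2.0 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * omega * tau_s) ** (1.0 - alpha))

def absorption_coefficient(freq_hz, eps_complex):
    """Power absorption coefficient (1/m) from the complex permittivity."""
    c = 2.998e8                                 # speed of light, m/s
    kappa = np.abs(np.sqrt(eps_complex).imag)   # extinction coefficient
    return 2.0 * (2.0 * np.pi * freq_hz / c) * kappa

# Illustrative (not fitted) parameters for a water-like medium; alpha = 0 reduces
# the Cole-Cole expression to a single Debye relaxation.
freqs = np.linspace(0.1e12, 3.0e12, 5)          # 0.1-3 THz
eps = cole_cole_permittivity(freqs, eps_s=78.4, eps_inf=4.9, tau_s=8.3e-12, alpha=0.0)
print(absorption_coefficient(freqs, eps))
```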
Abstract:
A novel technique for the noninvasive continuous measurement of leaf water content is presented. The technique is based on transmission measurements of terahertz radiation with a null-balance quasi-optical transmissometer operating at 94 GHz. A model for the propagation of terahertz radiation through leaves is presented. This, in conjunction with leaf thickness information determined separately, may be used to quantitatively relate transmittance measurements to leaf water content. Measurements using a dispersive Fourier transform spectrometer in the range of 100 GHz-500 GHz using Phormium tenax and Fatsia japonica leaves are also reported.
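As a rough illustration of how transmittance and leaf thickness can be combined to estimate water content, the sketch below inverts a simple Beer–Lambert-type attenuation law. The absorption coefficient is an assumed placeholder, and reflection and scattering losses are neglected; the propagation model described in the paper is more complete than this single-term law.

```python
import math

def water_thickness_from_transmittance(transmittance, alpha_water_per_mm):
    """Equivalent liquid-water path (mm) from a Beer-Lambert-type attenuation law,
    assuming absorption by water dominates and neglecting reflection/scattering."""
    return -math.log(transmittance) / alpha_water_per_mm

# Hypothetical numbers: transmittance of 0.05 through a 0.5 mm thick leaf, with an
# assumed water absorption coefficient of ~8 per mm near 94 GHz.
t_water = water_thickness_from_transmittance(0.05, alpha_water_per_mm=8.0)
leaf_thickness_mm = 0.5
print("equivalent water thickness: %.3f mm" % t_water)
print("volumetric water fraction:  %.2f" % (t_water / leaf_thickness_mm))
```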
Abstract:
Public water supplies in England and Wales are provided by around 25 private-sector companies, regulated by an economic regulator (Ofwat) and an environmental regulator (Environment Agency). As part of the regulatory process, companies are required periodically to review their investment needs to maintain safe and secure supplies, and this involves an assessment of the future balance between water supply and demand. The water industry and regulators have developed an agreed set of procedures for this assessment. Climate change has been incorporated into these procedures since the late 1990s, although it has been treated increasingly seriously over time, and consideration of climate change has been an effective legal requirement since the 2003 Water Act. In the most recent assessment in 2009, companies were required explicitly to plan for a defined amount of climate change, taking into account climate change uncertainty. A “medium” climate change scenario was defined, together with “wet” and “dry” extremes, based on scenarios developed from a number of climate models. The water industry and its regulators are now gearing up to exploit the new UKCP09 probabilistic climate change projections – but these pose significant practical and conceptual challenges. This paper outlines how the procedures for incorporating climate change information into water resources planning have evolved, and explores the issues currently facing the industry in adapting to climate change.
Abstract:
This study uses an analytical model, based on the cooling-to-space approximation, and a fixed dynamical heating model to investigate the structure of the stratospheric cooling that occurs in response to a uniform increase in stratospheric water vapour (SWV). At all latitudes, the largest cooling occurs in the lower stratosphere and decreases in magnitude with height. The cooling is strongly enhanced in the Extratropics compared to the Tropics. This is markedly different to the case of an increase in CO2, which causes maximum cooling near the stratopause and a small warming in the tropical lower stratosphere. The qualitative differences in the structure of the cooling can be explained by the smaller opacity of water vapour bands in the stratosphere compared to CO2. The small opacity means that the magnitude of the initial heating rate perturbation only decreases by a factor of four between the upper and lower stratosphere for a SWV perturbation. Therefore, to balance the heating rate perturbation, the largest temperature change is required in the lower stratosphere. Increasing the background concentration of SWV causes the water vapour bands to become more opaque. For a SWV perturbation applied to a background SWV concentration ≥30 ppmv, the heating rate perturbation and temperature change structurally resemble those from an increase in CO2. In the Extratropics, the lower height of the tropopause is found to cause the enhancement in the cooling at those latitudes. By controlling the depth of atmosphere which adjusts to the SWV perturbation, the tropopause height affects the net exchange of radiation between the layers in the stratosphere as they cool. The latitudinal gradient in upwelling infrared radiation at the tropopause and variations in the background temperature are found to have only a minor effect on the structure of the stratospheric temperature response to a change in SWV.
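For reference, the cooling-to-space approximation and the fixed dynamical heating argument can be stated schematically as follows; the symbols are chosen here for illustration rather than taken from the paper.

```latex
% Cooling-to-space approximation: heating-rate contribution at height z from
% emission escaping directly to space through flux transmittance T_nu(z, infinity)
Q_{\mathrm{cts}}(z) \;\approx\; -\frac{\pi}{\rho\, c_p}\int_{\nu}
   B_\nu\!\big(T(z)\big)\,\frac{\partial \mathcal{T}_\nu(z,\infty)}{\partial z}\,\mathrm{d}\nu

% Fixed dynamical heating: the temperature change required to balance an initial
% heating-rate perturbation \delta Q scales with the local radiative damping
\Delta T(z) \;\approx\; -\,\frac{\delta Q(z)}{\left.\partial Q/\partial T\right|_{z}}
```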
Abstract:
We review the procedures and challenges that must be considered when using geoid data derived from the Gravity and steady-state Ocean Circulation Explorer (GOCE) mission in order to constrain the circulation and water mass representation in an ocean general circulation model. It covers the combination of the geoid information with time-mean sea level information derived from satellite altimeter data, to construct a mean dynamic topography (MDT), and considers how this complements the time-varying sea level anomaly, also available from the satellite altimeter. We particularly consider the compatibility of these different fields in their spatial scale content, their temporal representation, and in their error covariances. These considerations are very important when the resulting data are to be used to estimate ocean circulation and its corresponding errors. We describe the further steps needed for assimilating the resulting dynamic topography information into an ocean circulation model using three different operational forecasting and data assimilation systems. We look at methods used for assimilating altimeter anomaly data in the absence of a suitable geoid, and then discuss different approaches which have been tried for assimilating the additional geoid information. We review the problems that have been encountered and the lessons learned in order to help future users. Finally we present some results from the use of GRACE geoid information in the operational oceanography community and discuss the future potential gains that may be obtained from a new GOCE geoid.
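The central quantity being constructed, the mean dynamic topography, and the surface geostrophic circulation it constrains can be summarized schematically as below; this is the standard relationship, not the specific processing chain of the three assimilation systems described.

```latex
% Mean dynamic topography as the difference between the altimetric mean sea
% surface (MSS) and the geoid height N, and the surface geostrophic velocities
% it constrains (g: gravity, f: Coriolis parameter)
\eta_{\mathrm{MDT}} = \mathrm{MSS} - N, \qquad
u_s = -\frac{g}{f}\,\frac{\partial \eta_{\mathrm{MDT}}}{\partial y}, \qquad
v_s = \frac{g}{f}\,\frac{\partial \eta_{\mathrm{MDT}}}{\partial x}
```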
Abstract:
Coupled photosynthesis–stomatal conductance (A–gs) models are commonly used in ecosystem models to represent the exchange rate of CO2 and H2O between vegetation and the atmosphere. The ways these models account for water stress differ greatly among modelling schemes. This study provides insight into the impact of contrasting model configurations of water stress on the simulated leaf-level values of net photosynthesis (A), stomatal conductance (gs), the functional relationship among them and their ratio, the intrinsic water use efficiency (A/gs), as soil dries. A simple, yet versatile, normalized soil moisture dependent function was used to account for the effects of water stress on gs, on mesophyll conductance (gm) and on the biochemical capacity. Model output was compared to leaf-level values obtained from the literature. The sensitivity analyses emphasized the necessity to combine both stomatal and non-stomatal limitations of A in coupled A–gs models to accurately capture the observed functional relationships A vs. gs and A/gs vs. gs in response to drought. Accounting for water stress in coupled A–gs models by imposing either stomatal or biochemical limitations of A, as commonly practiced in most ecosystem models, failed to reproduce the observed functional relationship between key leaf gas exchange attributes. A quantitative limitation analysis revealed that the general pattern of C3 photosynthetic response to water stress may be well represented in coupled A–gs models by imposing the highest limitation strength to gm, then to gs and finally to the biochemical capacity.
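A normalized soil-moisture stress function of the kind described can be sketched as below. The thresholds and exponents, which set the relative limitation strengths on gm, gs and the biochemical capacity, are hypothetical values chosen only to reflect the ordering reported in the study (strongest limitation on gm, weakest on the biochemical capacity).

```python
import numpy as np

def beta(theta, theta_wilt, theta_crit):
    """Normalized soil moisture availability, clipped to [0, 1]."""
    return np.clip((theta - theta_wilt) / (theta_crit - theta_wilt), 0.0, 1.0)

def apply_water_stress(gs_unstressed, gm_unstressed, vcmax_unstressed, theta,
                       theta_wilt=0.10, theta_crit=0.30,
                       q_gm=2.0, q_gs=1.0, q_vcmax=0.5):
    """Scale stomatal conductance, mesophyll conductance and biochemical capacity
    by the stress factor raised to different exponents (larger exponent = stronger
    limitation for a given beta < 1). Exponent values here are hypothetical."""
    b = beta(theta, theta_wilt, theta_crit)
    return (gs_unstressed * b**q_gs,
            gm_unstressed * b**q_gm,
            vcmax_unstressed * b**q_vcmax)

# Example: moderately dry soil (theta = 0.18 m3 m-3)
print(apply_water_stress(gs_unstressed=0.2, gm_unstressed=0.3,
                         vcmax_unstressed=60.0, theta=0.18))
```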
Abstract:
A partial differential equation model is developed to understand the effect that nutrient and acidosis have on the distribution of proliferating and quiescent cells and dead cell material (necrotic and apoptotic) within a multicellular tumour spheroid. The rates of cell quiescence and necrosis depend upon the local nutrient and acid concentrations, and quiescent cells are assumed to consume less nutrient and produce less acid than proliferating cells. Analysis of the differences in nutrient consumption and acid production by quiescent and proliferating cells shows that low nutrient levels do not necessarily lead to increased acid concentration via anaerobic metabolism. Rather, it is the balance between proliferating and quiescent cells within the tumour which is important; decreased nutrient levels lead to more quiescent cells, which produce less acid than proliferating cells. We examine this effect via a sensitivity analysis which also includes a quantification of the effect that nutrient and acid concentrations have on the rates of cell quiescence and necrosis.
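A schematic statement of the kind of nutrient and acid balances described, with quiescent cells consuming and producing less than proliferating cells, is given below; the symbols are chosen here for illustration and are not the paper's exact equations.

```latex
% Schematic nutrient (c) and acid (h) balances inside the spheroid, with
% proliferating (p) and quiescent (q) cell densities consuming nutrient and
% producing acid at different rates
\frac{\partial c}{\partial t} = D_c \nabla^2 c - \kappa_p\,p\,c - \kappa_q\,q\,c,
  \qquad \kappa_q < \kappa_p
\frac{\partial h}{\partial t} = D_h \nabla^2 h + \sigma_p\,p + \sigma_q\,q,
  \qquad \sigma_q < \sigma_p
```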
Abstract:
This paper critically explores the politics that mediate the use of environmental science assessments as the basis of resource management policy. Drawing on recent literature in the political ecology tradition that has emphasised the politicised nature of the production and use of scientific knowledge in environmental management, the paper analyses a hydrological assessment in a small river basin in Chile, undertaken in response to concerns over the possible overexploitation of groundwater resources. The case study illustrates the limitations of an approach based predominantly on hydrogeological modelling to ascertain the effects of increased groundwater abstraction. In particular, it identifies the subjective ways in which the assessment was interpreted and used by the state water resources agency to underpin water allocation decisions in accordance with its own interests, and the role that a desocialised assessment played in reproducing unequal patterns of resource use and configuring uneven waterscapes. Nevertheless, as Chile’s ‘neoliberal’ political-economic framework privileges the role of science and technocracy, producing other forms of environmental knowledge to complement environmental science is likely to be contentious. In conclusion, the paper considers the potential of mobilising the concept of the hydrosocial cycle to further critically engage with environmental science.
Abstract:
High rates of nutrient loading from agricultural and urban development have resulted in surface water eutrophication and groundwater contamination in regions of Ontario. In Lake Simcoe (Ontario, Canada), anthropogenic nutrient inputs have contributed to increased algal growth, low hypolimnetic oxygen concentrations, and impaired fish reproduction. An ambitious programme has been initiated to reduce phosphorus loads to the lake, aiming to achieve at least a 40% reduction in phosphorus loads by 2045. Achievement of this target necessitates effective remediation strategies, which will rely upon an improved understanding of controls on nutrient export from tributaries of Lake Simcoe as well as improved understanding of the importance of phosphorus cycling within the lake. In this paper, we describe a new model structure for the integrated dynamic and process-based model INCA-P, which allows fully-distributed applications, suited to branched river networks. We demonstrate application of this model to the Black River, a tributary of Lake Simcoe, and use INCA-P to simulate the fluxes of P entering the lake system, apportion phosphorus among different sources in the catchment, and explore future scenarios of land-use change and nutrient management to identify high priority sites for implementation of watershed best management practices.
Abstract:
The observed decline in summer sea ice extent since the 1970s is predicted to continue until the Arctic Ocean is seasonally ice free during the 21st Century. This will lead to a much perturbed Arctic climate with large changes in ocean surface energy flux. Svalbard, located on the present day sea ice edge, contains many low lying ice caps and glaciers and is expected to experience rapid warming over the 21st Century. The total sea level rise if all the land ice on Svalbard were to melt completely is 0.02 m. The purpose of this study is to quantify the impact of climate change on Svalbard’s surface mass balance (SMB) and to determine, in particular, what proportion of the projected changes in precipitation and SMB are a result of changes to the Arctic sea ice cover. To investigate this a regional climate model was forced with monthly mean climatologies of sea surface temperature (SST) and sea ice concentration for the periods 1961–1990 and 2061–2090 under two emission scenarios. In a novel forcing experiment, 20th Century SSTs and 21st Century sea ice were used to force one simulation to investigate the role of sea ice forcing. This experiment results in a 3.5 m water equivalent increase in Svalbard’s SMB compared to the present day. This is because over 50 % of the projected increase in winter precipitation over Svalbard under the A1B emissions scenario is due to an increase in lower atmosphere moisture content associated with evaporation from the ice free ocean. These results indicate that increases in precipitation due to sea ice decline may act to moderate mass loss from Svalbard’s glaciers due to future Arctic warming.
Abstract:
This study focuses on the mechanisms underlying water and heat transfer in upper soil layers, and their effects on soil physical prognostic variables and the individual components of the energy balance. The skill of the JULES (Joint UK Land Environment Simulator) land surface model (LSM) to simulate key soil variables, such as soil moisture content and surface temperature, and fluxes such as evaporation, is investigated. The Richards equation for soil water transfer, as used in most LSMs, was updated by incorporating isothermal and thermal water vapour transfer. The model was tested for three sites representative of semi-arid and temperate arid climates: the Jornada site (New Mexico, USA), Griffith site (Australia) and Audubon site (Arizona, USA). Water vapour flux was found to contribute significantly to the water and heat transfer in the upper soil layers. This was mainly due to isothermal vapour diffusion; thermal vapour flux also played a role at the Jornada site just after rainfall events. Inclusion of water vapour flux had an effect on the diurnal evolution of evaporation, soil moisture content and surface temperature. The incorporation of additional processes, such as water vapour flux among others, into LSMs may improve the coupling between the upper soil layers and the atmosphere, which in turn could increase the reliability of weather and climate predictions.
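The extension described, adding isothermal and thermal vapour diffusion to the soil water transfer equation, can be written schematically in the Philip–de Vries form below; this is a generic statement of the coupling, not the JULES source code, and the symbols are chosen here for illustration.

```latex
% One-dimensional Richards equation extended with isothermal (D_{theta v}) and
% thermal (D_{T v}) water vapour diffusion terms; K is hydraulic conductivity,
% psi matric potential, theta volumetric water content, T soil temperature,
% S a sink term (e.g. root extraction); z is positive upward.
\frac{\partial \theta}{\partial t} =
  \frac{\partial}{\partial z}\!\left[
    K(\psi)\left(\frac{\partial \psi}{\partial z} + 1\right)
    + D_{\theta v}\,\frac{\partial \theta}{\partial z}
    + D_{T v}\,\frac{\partial T}{\partial z}
  \right] - S
```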
Abstract:
The Arctic is a region particularly susceptible to rapid climate change. General circulation models (GCMs) suggest a polar amplification of any global warming signal by a factor of about 1.5 due, in part, to sea ice feedbacks. The dramatic recent decline in multi-year sea ice cover lies outside the standard deviation of the CMIP3 ensemble GCM predictions. Sea ice acts as a barrier between cold air and warmer oceans during winter, as well as inhibiting evaporation from the ocean surface water during the summer. An ice free Arctic would likely have an altered hydrological cycle with more evaporation from the ocean surface leading to changes in precipitation distribution and amount. Using the U.K. Met Office Regional Climate Model (RCM), HadRM3, the atmospheric effects of the observed and projected reduction in Arctic sea ice are investigated. The RCM is driven by the atmospheric GCM HadAM3. Both models are forced with sea surface temperature and sea ice for the period 2061-2090 from the CMIP3 HadGEM1 experiments. Here we use an RCM at 50 km resolution over the Arctic and 25 km over Svalbard, which captures well the present-day pattern of precipitation and provides a detailed picture of the projected changes in the behaviour of the ocean-atmosphere moisture fluxes and how they affect precipitation. These experiments show that the projected 21st Century sea ice decline alone causes large impacts to the surface mass balance (SMB) on Svalbard. However, Greenland’s SMB is not significantly affected by sea ice decline alone, but responds with a strongly negative shift in SMB when changes to SST are incorporated into the experiments. This is the first study to characterise the impact of future sea ice changes on Arctic terrestrial cryosphere mass balance.
Abstract:
A recent paper published in this journal considers the numerical integration of the shallow-water equations using the leapfrog time-stepping scheme [Sun Wen-Yih, Sun Oliver MT. A modified leapfrog scheme for shallow water equations. Comput Fluids 2011;52:69–72]. The authors of that paper propose using the time-averaged height in the numerical calculation of the pressure-gradient force, instead of the instantaneous height at the middle time step. The authors show that this modification doubles the maximum Courant number (and hence the maximum time step) at which the integrations are stable, doubling the computational efficiency. Unfortunately, the pressure-averaging technique proposed by the authors is not original. It was devised and published by Shuman [5] and has been widely used in the atmosphere and ocean modelling community for over 40 years.
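For concreteness, the pressure-averaging technique can be illustrated for the linearized one-dimensional shallow-water equations as below; this is a generic sketch of the idea and is not taken from either of the cited papers.

```python
import numpy as np

def ddx(f, dx):
    """Centred difference on a periodic 1-D grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def leapfrog_step(u_old, u_now, h_old, h_now, dt, dx, g=9.81, H=10.0,
                  pressure_averaging=True):
    """One leapfrog step of the linearized 1-D shallow-water equations,
    optionally with Shuman-type pressure averaging in the pressure-gradient
    force. A schematic sketch of the technique, not the published code."""
    # Continuity first, so the height at the new time level is available
    h_new = h_old - 2.0 * dt * H * ddx(u_now, dx)

    if pressure_averaging:
        # Time-averaged height (h^{n+1} + 2 h^n + h^{n-1}) / 4 in the
        # pressure-gradient force, instead of the instantaneous height h^n
        h_pgf = 0.25 * (h_new + 2.0 * h_now + h_old)
    else:
        h_pgf = h_now

    u_new = u_old - 2.0 * dt * g * ddx(h_pgf, dx)
    return u_new, h_new
```

A full integration would also need a starting step (e.g. a single forward step) to provide the two initial time levels, and typically a weak Robert–Asselin filter to damp the leapfrog computational mode.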