960 results for Data Flows
Abstract:
Master's dissertation, Ecohydrology, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
Doctoral thesis, Geophysical and Geoinformation Sciences (Oceanography), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Water use invariably results in major impacts on river flows. Environmental Flows (EF) are defined as the quantity and quality of water needed to preserve the structure and function of the river and riparian-zone ecosystem, together with sufficient water to enable the survival and reproduction of aquatic organisms in different hydraulic habitats. This paper describes the criteria and methods used to determine EF and experiences with their application in Slovenia. The diversity of running waters in Slovenia demands special treatment and determination of EF for each individual section of the river system. Using hydrological, morphological and ecological criteria, two different approaches are used for the determination of EF in Slovenia: a rapid assessment method and a detailed assessment method. For both methods, data are then analyzed by an expert panel in order to determine an EF. Since 1994, more than 180 study sites have been examined for research and application of EF in Slovenia. Determination of EF for existing users has prioritized their water requirements so that they can remain economically viable. Where new schemes are proposed, there has been much greater scope to prioritize ecosystem requirements. EF determination is receiving growing attention and will continue to increase in importance, driven by research that aids our understanding of flow-biota relationships and by recent environmental policy and legislation at both the national and European level.
Abstract:
In recent decades, all over the world, competition in the electric power sector has deeply changed the way this sector’s agents play their roles. In most countries, deregulation of the electric power sector was conducted in stages, beginning with clients at higher voltage levels and with larger electricity consumption, and later extended to all consumers. The sector liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses the safety limits, then cheap but distant generation might have to be replaced by more expensive generation located closer to the load to reduce the exceeded power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power will be lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to a generation site. Locational marginal pricing (LMP), resulting from bidding competition, represents electrical and economical values at nodes or in areas that may provide economical indicator signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test our methodology, we used an LMP database from the California Independent System Operator for 2009 to identify economical zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California’s high-voltage wholesale power grid.) To group the buses into typical classes that represent a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
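The abstract above names k-means as one of the clustering algorithms applied to bus-level LMPs. The following is a minimal sketch of that idea, not the authors' code: the synthetic price matrix stands in for the CAISO 2009 LMP database, and the cluster count and price levels are hypothetical.

```python
# Sketch: group network buses into price zones by k-means on their LMP profiles.
# Synthetic data stand in for the CAISO 2009 LMP database used in the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_buses, n_hours = 200, 24
base = rng.choice([30.0, 45.0, 60.0], size=n_buses)                 # hypothetical zonal price levels ($/MWh)
lmp = base[:, None] + rng.normal(0.0, 2.0, size=(n_buses, n_hours))  # one hourly LMP profile per bus

X = StandardScaler().fit_transform(lmp)                              # scale each hour across buses
zones = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Buses sharing a label approximate an "economical zone" with similar LMPs.
print(np.bincount(zones))
```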
Abstract:
Street-level mean flow and turbulence govern the dispersion of gases away from their sources in urban areas. A suitable reference measurement in the driving flow above the urban canopy is needed to both understand and model complex street-level flow for pollutant dispersion or emergency response purposes. In vegetation canopies, a reference at mean canopy height is often used, but it is unclear whether this is suitable for urban canopies. This paper presents an evaluation of the quality of reference measurements at both roof-top (height = H) and at height z = 9H = 190 m, and of their ability to explain mean and turbulent variations of street-level flow. Fast-response wind data were measured at street canyon and reference sites during the six-week-long DAPPLE project field campaign in spring 2004 in central London, UK, and an averaging time of 10 min was used to distinguish recirculation-type mean flow patterns from turbulence. Flow distortion at each reference site was assessed by considering turbulence intensity and streamline deflection. Then each reference was used as the dependent variable in the model of Dobre et al. (2005), which decomposes street-level flow into channelling and recirculating components. The high reference explained more of the variability of the mean flow. Coupling of turbulent kinetic energy between street-level flow and the high reference was also stronger than with the roof-top reference. This coupling was weaker when overnight flow was stratified and turbulence was suppressed at the high reference site. However, such events were rare (<1% of data) over the six-week period. The potential usefulness of a centralised, high reference site in London was thus demonstrated, with application to emergency response and air quality modelling.
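Two diagnostics mentioned above, turbulence intensity at a reference site and the decomposition of the reference wind into channelling and recirculating directions in the spirit of Dobre et al. (2005), can be sketched as follows. The function names, synthetic samples and example angles are hypothetical illustrations, not the DAPPLE analysis code.

```python
# Hedged sketch of two diagnostics from the abstract; all numbers are illustrative.
import numpy as np

def turbulence_intensity(u, v, w):
    """Standard deviation of wind speed divided by its mean."""
    speed = np.sqrt(u**2 + v**2 + w**2)
    return speed.std() / speed.mean()

def canyon_components(ref_speed, ref_dir_deg, canyon_axis_deg):
    """Project a reference wind onto along-canyon (channelling) and
    cross-canyon (recirculation-driving) directions."""
    theta = np.deg2rad(ref_dir_deg - canyon_axis_deg)
    return ref_speed * np.cos(theta), ref_speed * np.sin(theta)

rng = np.random.default_rng(0)
u, v, w = (rng.normal(m, s, 6000) for m, s in [(3.0, 0.6), (0.5, 0.5), (0.0, 0.3)])
print(turbulence_intensity(u, v, w))
print(canyon_components(ref_speed=4.0, ref_dir_deg=250.0, canyon_axis_deg=70.0))
```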
Abstract:
We report on the results of a laboratory investigation using a rotating two-layer annulus experiment, which exhibits both large-scale vortical modes and short-scale divergent modes. A sophisticated visualization method allows us to observe the flow at very high spatial and temporal resolution. The balanced long-wavelength modes appear only when the Froude number is supercritical (i.e. $F > F_\mathrm{critical} \equiv \pi^2/2$), and are therefore consistent with generation by a baroclinic instability. The unbalanced short-wavelength modes appear locally in every single baroclinically unstable flow, providing perhaps the first direct experimental evidence that all evolving vortical flows will tend to emit freely propagating inertia–gravity waves. The short-wavelength modes also appear in certain baroclinically stable flows. We infer the generation mechanisms of the short-scale waves, both for the baroclinically unstable case in which they co-exist with a large-scale wave, and for the baroclinically stable case in which they exist alone. The two possible mechanisms considered are spontaneous adjustment of the large-scale flow, and Kelvin–Helmholtz shear instability. Short modes in the baroclinically stable regime are generated only when the Richardson number is subcritical (i.e. $Ri < Ri_\mathrm{critical} \equiv 1$), and are therefore consistent with generation by a Kelvin–Helmholtz instability. We calculate five indicators of short-wave generation in the baroclinically unstable regime, using data from a quasi-geostrophic numerical model of the annulus. There is excellent agreement between the spatial locations of short-wave emission observed in the laboratory, and regions in which the model Lighthill/Ford inertia–gravity wave source term is large. We infer that the short waves in the baroclinically unstable fluid are freely propagating inertia–gravity waves generated by spontaneous adjustment of the large-scale flow.
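The two thresholds quoted in the abstract can be restated numerically. The sketch below simply encodes those criteria; the example Froude and Richardson numbers are illustrative values, not measurements from the experiment.

```python
# Restatement of the two instability thresholds quoted in the abstract.
import numpy as np

F_critical = np.pi**2 / 2   # baroclinic-instability threshold for the two-layer annulus
Ri_critical = 1.0           # Kelvin-Helmholtz threshold as quoted in the abstract

def regime(F, Ri):
    baroclinically_unstable = F > F_critical   # large-scale balanced vortical modes expected
    kelvin_helmholtz = Ri < Ri_critical        # short-scale unbalanced modes expected
    return baroclinically_unstable, kelvin_helmholtz

print(regime(F=6.0, Ri=0.5))   # illustrative values only
```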
Abstract:
Two-dimensional flood inundation modelling is a widely used tool to aid flood risk management. In urban areas, where asset value and population density are greatest, the model spatial resolution required to represent flows through a typical street network (i.e. < 10 m) often results in impractical computational cost at the whole-city scale. Explicit diffusive storage cell models become very inefficient at such high resolutions, relative to shallow water models, because the stable time step in such schemes scales quadratically with resolution. This paper presents the calibration and evaluation of a recently developed new formulation of the LISFLOOD-FP model, where stability is controlled by the Courant–Friedrichs–Lewy condition for the shallow water equations, such that the stable time step instead scales linearly with resolution. The case study used is based on observations during the summer 2007 floods in Tewkesbury, UK. Aerial photography is available for model evaluation on three separate days from the 24th to the 31st of July. The model covered a 3.6 km by 2 km domain and was calibrated using gauge data from high flows during the previous month. The new formulation was benchmarked against the original version of the model at 20 m and 40 m resolutions, demonstrating equally accurate performance given the available validation data, but with 67 times faster computation. The July event was then simulated at the 2 m resolution of the available airborne LiDAR DEM. This resulted in a significantly more accurate simulation of the drying dynamics compared to that simulated by the coarse resolution models, although estimates of peak inundation depth were similar.
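The abstract's point about time-step scaling can be illustrated with a CFL-limited step of the kind used by inertial shallow-water solvers such as the LISFLOOD-FP formulation described (dt proportional to grid spacing divided by the gravity wave speed). The coefficient and water depth below are illustrative, not calibrated values from the study.

```python
# Sketch of a CFL-limited adaptive time step that scales linearly with grid spacing.
import numpy as np

def cfl_time_step(dx, h_max, g=9.81, alpha=0.7):
    """Stable time step dt = alpha * dx / sqrt(g * h_max); alpha < 1 is a safety factor."""
    return alpha * dx / np.sqrt(g * h_max)

for dx in (40.0, 20.0, 2.0):                  # resolutions discussed in the abstract
    print(dx, cfl_time_step(dx, h_max=2.0))   # dt shrinks linearly with dx, not quadratically
```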
Abstract:
An extensive statistical ‘downscaling’ study is done to relate large-scale climate information from a general circulation model (GCM) to local-scale river flows in SW France for 51 gauging stations ranging from nival (snow-dominated) to pluvial (rainfall-dominated) river systems. This study helps to select the appropriate statistical method at a given spatial and temporal scale to downscale hydrology for future climate change impact assessment of hydrological resources. The four proposed statistical downscaling models use large-scale predictors (derived from climate model outputs or reanalysis data) that characterize precipitation and evaporation processes in the hydrological cycle to estimate summary flow statistics. The four statistical models used are generalized linear (GLM) and additive (GAM) models, aggregated boosted trees (ABT) and multi-layer perceptron neural networks (ANN). These four models were each applied at two different spatial scales, namely at that of a single flow-gauging station (local downscaling) and that of a group of flow-gauging stations having the same hydrological behaviour (regional downscaling). For each statistical model and each spatial resolution, three temporal resolutions were considered, namely the daily mean flows, the summary statistics of fortnightly flows and a daily ‘integrated approach’. The results show that flow sensitivity to atmospheric factors is significantly different between nival and pluvial hydrological systems, which are mainly influenced, respectively, by shortwave solar radiation and atmospheric temperature. The non-linear models (i.e. GAM, ABT and ANN) performed better than the linear GLM when simulating fortnightly flow percentiles. The aggregated boosted trees method showed higher and less variable R2 values when downscaling the hydrological variability in both nival and pluvial regimes. Based on the CNRM-CM3 GCM and the A2 and A1B scenarios, future relative changes of fortnightly median flows were projected using the regional downscaling approach. The results suggest an overall decrease of flow in both pluvial and nival regimes, especially in spring, summer and autumn, regardless of the scenario considered. The discussion considers the performance of each statistical method for downscaling flow at different spatial and temporal scales, as well as the relationship between atmospheric processes and flow variability.
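The local-downscaling idea, predicting a fortnightly flow statistic from large-scale atmospheric predictors and comparing a linear model with a non-linear one, can be sketched as below. The predictors, target and model settings are synthetic and hypothetical; LinearRegression and MLPRegressor stand in for the GLM and ANN of the study, and the other models (GAM, ABT) are not reproduced here.

```python
# Hedged sketch: predict a flow statistic from large-scale predictors with a
# linear model versus a small neural network. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression      # stand-in for the GLM
from sklearn.neural_network import MLPRegressor        # stand-in for the ANN
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))        # e.g. temperature, precipitation, shortwave radiation (synthetic)
y = 2.0 * X[:, 1] + np.tanh(X[:, 0]) + 0.1 * rng.normal(size=400)   # synthetic fortnightly flow statistic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("neural net", MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, round(r2_score(y_te, model.predict(X_te)), 3))
```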
Abstract:
Two-dimensional flood inundation modelling is a widely used tool to aid flood risk management. In urban areas, the model spatial resolution required to represent flows through a typical street network often results in an impractical computational cost at the city scale. This paper presents the calibration and evaluation of a recently developed formulation of the LISFLOOD-FP model, which is more computationally efficient at these resolutions. Aerial photography was available for model evaluation on three days between the 24th and the 31st of July. The new formulation was benchmarked against the original version of the model at 20 and 40 m resolutions, demonstrating equally accurate simulation given the evaluation data, but with 67 times faster computation. The July event was then simulated at the 2 m resolution of the available airborne LiDAR DEM. This resulted in more accurate simulation of the floodplain drying dynamics compared with the coarse resolution models, although maximum inundation levels were simulated equally well at all resolutions tested.
Abstract:
The rapid expansion of the TMT sector in the late 1990s and the more recent growing regulatory and corporate focus on business continuity and security have raised the profile of data centres. Data centres offer a unique blend of occupational, physical and technological characteristics compared to conventional real estate assets. Limited trading and heterogeneity of data centres also cause higher levels of appraisal uncertainty. In practice, the application of conventional discounted cash flow approaches requires information about a wide range of inputs that is difficult to derive from limited market signals or estimate analytically. This paper proposes an approach that uses pricing signals from similar traded cash flows. Based upon ‘the law of one price’, the method draws upon the premise that two identical future cash flows must have the same value now. Given the difficulties of estimating exit values, an alternative is that the expected cash flows of a data centre are analysed over the life cycle of the building, with corporate bond yields used to provide a proxy for the appropriate discount rates for lease income. Since liabilities are quite diverse, a number of proxies are suggested as discount and capitalisation rates, including index-linked, fixed interest and zero-coupon bonds. Although there are rarely assets that have identical cash flows and some approximation is necessary, the level of appraiser subjectivity is dramatically reduced.
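The pricing logic described above, value expected lease cash flows by discounting them at yields observed on traded bonds with comparable cash flow profiles, reduces to a simple present-value calculation. The lease income and yield below are entirely hypothetical figures used only to show the arithmetic.

```python
# Toy illustration of the "law of one price" appraisal idea: discount lease cash
# flows at a traded-bond yield proxy. All figures are hypothetical.
def present_value(cash_flows, yields):
    """cash_flows[t] received at end of year t+1, discounted at the matching bond yield."""
    return sum(cf / (1.0 + y) ** (t + 1)
               for t, (cf, y) in enumerate(zip(cash_flows, yields)))

lease_income = [1.2e6] * 10    # 10 years of fixed lease income (hypothetical)
bond_yields = [0.045] * 10     # fixed-interest corporate bond yield as the discount proxy
print(round(present_value(lease_income, bond_yields)))
```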
Abstract:
This paper analyses the appraisal of a specialized form of real estate - data centres - that has a unique blend of locational, physical and technological characteristics that differentiate it from conventional real estate assets. Market immaturity, limited trading and a lack of pricing signals increase levels of appraisal uncertainty and disagreement relative to conventional real estate assets. Given the problems of applying standard discounted cash flow, an approach to appraisal is proposed that uses pricing signals from traded cash flows that are similar to the cash flows generated by data centres. Based upon ‘the law of one price’, it is assumed that two assets that are expected to generate identical cash flows in the future must have the same value now. It is suggested that the expected cash flows of assets should be analysed over the life cycle of the building. Corporate bond yields are used to provide a proxy for the appropriate discount rates for lease income. Since liabilities are quite diverse, a number of proxies are suggested as discount and capitalisation rates, including index-linked, fixed interest and zero-coupon bonds.
Abstract:
Nitrogen flows from European watersheds to coastal marine waters. Executive summary.
Nature of the problem:
• Most regional watersheds in Europe constitute managed human territories importing large amounts of new reactive nitrogen.
• As a consequence, groundwater, surface freshwater and coastal seawater are undergoing severe nitrogen contamination and/or eutrophication problems.
Approaches:
• A comprehensive evaluation of net anthropogenic inputs of reactive nitrogen (NANI) through atmospheric deposition, crop N fixation, fertiliser use and import of food and feed has been carried out for all European watersheds. A database on N, P and Si fluxes delivered at the basin outlets has been assembled.
• A number of modelling approaches based on either statistical regression analysis or mechanistic description of the processes involved in nitrogen transfer and transformations have been developed for relating N inputs to watersheds to outputs into coastal marine ecosystems.
Key findings/state of knowledge:
• Throughout Europe, NANI represents 3700 kgN/km2/yr (range 0–8400 depending on the watershed), i.e. five times the background rate of natural N2 fixation.
• A mean of approximately 78% of NANI does not reach the basin outlet, but instead is stored (in soils, sediments or groundwater) or eliminated to the atmosphere as reactive N forms or as N2.
• N delivery to the European marine coastal zone totals 810 kgN/km2/yr (range 200–4000 depending on the watershed), about four times the natural background. In areas of limited availability of silica, these inputs cause harmful algal blooms.
Major uncertainties/challenges:
• The exact dimension of anthropogenic N inputs to watersheds is still imperfectly known and requires pursuing monitoring programmes and data integration at the international level.
• The exact nature of ‘retention’ processes, which potentially represent a major management lever for reducing N contamination of water resources, is still poorly understood.
• Coastal marine eutrophication depends to a large degree on local morphological and hydrographic conditions as well as on estuarine processes, which are also imperfectly known.
Recommendations:
• Better control and management of the nitrogen cascade at the watershed scale is required to reduce N contamination of ground- and surface water, as well as coastal eutrophication.
• In spite of the potential of these management measures, there is no choice at the European scale but to reduce the primary inputs of reactive nitrogen to watersheds, through changes in agriculture, human diet and other N flows related to human activity.
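The NANI definition and the retention figure quoted in the summary above lend themselves to a small worked budget. Only the four-term NANI definition, the 78% retention and the mean totals come from the abstract; the split of NANI across the four terms below is purely illustrative.

```python
# Worked budget using the terms defined above; the individual input values are
# illustrative, only the NANI definition and 78% retention come from the abstract.
deposition, fixation, fertiliser, net_food_feed = 700, 600, 1800, 600   # kgN/km2/yr, hypothetical split
nani = deposition + fixation + fertiliser + net_food_feed               # net anthropogenic N input
retention = 0.78                                                         # fraction stored or lost to the atmosphere
delivered_to_coast = nani * (1 - retention)
print(nani, round(delivered_to_coast))   # close to the reported European means of ~3700 and ~810 kgN/km2/yr
```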
Abstract:
A method is suggested for the calculation of the friction velocity for stable turbulent boundary-layer flow over hills. The method is tested using a continuous upstream mean velocity profile compatible with the propagation of gravity waves, and is incorporated into the linear model of Hunt, Leibovich and Richards with the modification proposed by Hunt, Richards and Brighton to include the effects of stability, and the reformulated solution of Weng for the near-surface region. Those theoretical results are compared with results from simulations using a non-hydrostatic microscale-mesoscale two-dimensional numerical model, and with field observations for different values of stability. These comparisons show a considerable improvement in the behaviour of the theoretical model when the friction velocity is calculated using the method proposed here, leading to a consistent variation of the boundary-layer structure with stability, and better agreement with observational and numerical data.
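For context on the quantity being estimated above, a generic stability-corrected log-law gives the friction velocity from a mean wind measurement under stable conditions. This is the standard Monin-Obukhov form, shown only as background; it is not the specific calculation method proposed in the paper.

```python
# Generic log-linear (stable) profile estimate of friction velocity u*;
# standard boundary-layer form, NOT the method proposed in the paper.
import numpy as np

def friction_velocity(U, z, z0, L, kappa=0.4, beta=5.0):
    """u* from mean wind U at height z, roughness length z0 and Obukhov length L > 0,
    using U = (u*/kappa) * (ln(z/z0) + beta * z / L)."""
    return kappa * U / (np.log(z / z0) + beta * z / L)

print(friction_velocity(U=5.0, z=10.0, z0=0.1, L=100.0))   # illustrative values
```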
Abstract:
Synoptic climatology relates the atmospheric circulation with the surface environment. The aim of this study is to examine the variability of the surface meteorological patterns which develop under different synoptic-scale categories over a suburban area with complex topography. Multivariate data analysis techniques were applied to a data set of surface meteorological elements. Three principal components were found, related to the thermodynamic status of the surface environment and to the two components of the wind speed. The variability of the surface flows was related to atmospheric circulation categories by applying Correspondence Analysis. Similar surface thermodynamic fields develop under cyclonic categories, in contrast with the anticyclonic category. A strong, steady wind flow characterized by high shear values develops under the cyclonic Closed Low and the anticyclonic H–L categories, in contrast to the variable weak flow under the anticyclonic Open Anticyclone category.
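The first analysis step described above, extracting a few principal components from surface meteorological elements, can be sketched as follows. The synthetic variables stand in for the station records used in the study; the choice of elements and correlations is hypothetical.

```python
# Sketch of the PCA step on surface meteorological elements; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
temp = rng.normal(20, 5, n)
rh = 70 - 1.5 * temp + rng.normal(0, 5, n)   # humidity loosely anti-correlated with temperature
u_wind = rng.normal(2, 1, n)
v_wind = rng.normal(0, 1, n)
met = np.column_stack([temp, rh, u_wind, v_wind])

Z = StandardScaler().fit_transform(met)
pca = PCA(n_components=3)                    # the study retained three components
scores = pca.fit_transform(Z)
print(pca.explained_variance_ratio_)
```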
Abstract:
The nature of private commercial real estate markets presents difficulties for monitoring market performance. Assets are heterogeneous and spatially dispersed, trading is infrequent and there is no central marketplace in which prices and cash flows of properties can be easily observed. Appraisal-based indices represent one response to these issues. However, these have been criticised on a number of grounds: that they may understate volatility, lag turning points and be affected by client influence issues. Thus, this paper reports econometrically derived transaction-based indices of the UK commercial real estate market using Investment Property Databank (IPD) data, comparing them with published appraisal-based indices. The method is similar to that presented by Fisher, Geltner, and Pollakowski (2007) and used by the Massachusetts Institute of Technology (MIT) on National Council of Real Estate Investment Fiduciaries (NCREIF) data, although it employs value rather than equal weighting. The results show stronger growth from the transaction-based indices in the run-up to the peak in the UK market in 2007. They also show that returns from these series are more volatile and less autocorrelated than their appraisal-based counterparts, but, surprisingly, differences in turning points were not found. The conclusion then discusses the applications and limitations of these series as measures of market performance.
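One common way to build a transaction-based index of this general kind is to regress log sale prices on a prior (log) appraisal plus time dummies, reading the index off the dummy coefficients. The sketch below is a stylised, synthetic-data illustration of that generic idea only; it is not the IPD/MIT specification used in the paper, and the growth path and noise levels are hypothetical.

```python
# Stylised transaction-based index: regress log sale price on prior log appraisal
# plus quarterly time dummies; dummy coefficients give the log index vs. quarter 0.
# Generic illustration with synthetic data, not the specification used in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
quarter = rng.integers(0, 8, n)                         # 8 quarters of sales
log_appraisal = rng.normal(15, 1, n)
true_index = np.linspace(0.0, 0.15, 8)                  # hypothetical market growth path
log_price = log_appraisal + true_index[quarter] + rng.normal(0, 0.05, n)

X = pd.get_dummies(quarter, prefix="q", drop_first=True, dtype=float)
X["log_appraisal"] = log_appraisal
fit = sm.OLS(log_price, sm.add_constant(X)).fit()
print(fit.params.filter(like="q_"))                     # estimated log index by quarter
```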