Abstract:
Eddy-covariance measurements of carbon dioxide fluxes were taken semi-continuously between October 2006 and May 2008 at 190 m height in central London (UK) to quantify emissions and study their controls. Inner London, with a population of 8.2 million (~5000 inhabitants per km²), is heavily built up, with 8% vegetation cover within the central boroughs. CO2 emissions were found to be controlled mainly by fossil fuel combustion (e.g. traffic, commercial and domestic heating). The measurement period allowed investigation of both diurnal patterns and seasonal trends. Diurnal averages of CO2 fluxes were found to be highly correlated with traffic. However, it was changes in heating-related natural gas consumption and, to a lesser extent, photosynthetic activity that controlled the seasonal variability. Despite measurements being taken at ca. 22 times the mean building height, coupling with street level was adequate, especially during daytime. At night, stable or neutral stratification occurred more often, especially in autumn and winter, which resulted in data loss in post-processing. No significant difference was found between the annual estimate of net CO2 exchange for the expected measurement footprint and the values derived from the National Atmospheric Emissions Inventory (NAEI), with daytime fluxes differing by only 3%. This agreement with NAEI data also supported the use of the simple flux footprint model applied to the London site, and suggests that individual roughness elements did not significantly affect the measurements, owing to the large ratio of measurement height to mean building height.
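The core eddy-covariance computation behind fluxes like these is the covariance of vertical-wind and CO2 fluctuations over an averaging period. A minimal sketch in Python; the function name and the synthetic numbers are illustrative, not values from the study:

```python
import numpy as np

def co2_flux(w, c):
    """Eddy-covariance flux: mean product of fluctuations about the block mean.

    w : vertical wind velocity samples (m s-1)
    c : CO2 concentration samples (umol m-3)
    Returns the flux in umol m-2 s-1 (positive = net upward emission).
    """
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    w_prime = w - w.mean()          # fluctuations about the averaging-period mean
    c_prime = c - c.mean()
    return float(np.mean(w_prime * c_prime))

# Synthetic half-hour at 20 Hz: updrafts correlated with CO2 excess give a
# positive (upward) flux, as over a fossil-fuel-dominated city surface
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 36000)
c = 15000.0 + 5.0 * w + rng.normal(0.0, 1.0, 36000)
flux = co2_flux(w, c)
```

In practice coordinate rotation, despiking, and density corrections precede this step; the covariance itself is the heart of the method.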
Abstract:
The High Resolution Dynamics Limb Sounder is described, with particular reference to the atmospheric measurements to be made and the rationale behind the measurement strategy. The demands this strategy places on the filters to be used in the instrument, and the designs to which this leads, are described. A second set of filters at an intermediate image plane, introduced to reduce "ghost imaging", is discussed together with their required spectral properties. A method is described for combining the spectral characteristics of the primary and secondary filters in each channel with the spectral response of the detectors and other optical elements, to obtain the system spectral response weighted appropriately for the Planck function and atmospheric limb absorption. This method is used to determine whether the out-of-band spectral blocking requirement for a channel is being met, and an example calculation shows how the blocking is built up for a representative channel. Finally, the techniques used to produce filters of the necessary sub-millimetre sizes, together with the testing methods and procedures used to assess environmental durability and establish space-flight quality, are discussed.
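The combination step described here, multiplying the per-element spectral curves together and weighting by the Planck function, can be sketched as follows. All filter shapes, the temperature and the passband limits are invented for illustration; they are not HIRDLS design values:

```python
import numpy as np

H_PLANCK = 6.626e-34   # J s
C_LIGHT = 2.998e8      # m s-1
K_B = 1.381e-23        # J K-1

def planck_weight(nu_cm, T=250.0):
    """Relative Planck function at wavenumber nu_cm (cm^-1); overall constants
    cancel once the response is normalised, so only the spectral shape matters."""
    nu_hz = nu_cm * 100.0 * C_LIGHT
    return nu_hz**3 / (np.exp(H_PLANCK * nu_hz / (K_B * T)) - 1.0)

def system_response(nu_cm, primary_t, secondary_t, detector_r, T=250.0):
    """Combine per-element spectral curves into a normalised, Planck-weighted
    channel response (uniform wavenumber grid assumed)."""
    w = primary_t * secondary_t * detector_r * planck_weight(nu_cm, T)
    dnu = nu_cm[1] - nu_cm[0]
    return w / (w.sum() * dnu)

def out_of_band_fraction(nu_cm, response, band):
    """Fraction of the normalised response lying outside the design passband,
    the quantity checked against an out-of-band blocking requirement."""
    lo, hi = band
    dnu = nu_cm[1] - nu_cm[0]
    mask = (nu_cm < lo) | (nu_cm > hi)
    return float(response[mask].sum() * dnu)

# Hypothetical channel: Gaussian primary and secondary passbands at 1000 cm^-1
# with a small flat leakage floor on each filter
nu = np.linspace(500.0, 1500.0, 2001)
primary = np.exp(-0.5 * ((nu - 1000.0) / 25.0) ** 2) + 1e-4
secondary = np.exp(-0.5 * ((nu - 1000.0) / 40.0) ** 2) + 1e-4
detector = np.ones_like(nu)
resp = system_response(nu, primary, secondary, detector)
oob = out_of_band_fraction(nu, resp, band=(950.0, 1050.0))
```

Cascading the second filter suppresses each filter's leakage floor multiplicatively, which is why the blocking builds up across the channel's elements.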
Abstract:
A number of experiments have shown that neural networks can be used for a phonetic typewriter. The algorithms involved can be viewed as producing self-organizing feature maps whose nodes correspond to phonemes. In the Chinese language the utterance of a Chinese character consists of a very simple string of Chinese phonemes. With this as a starting point, a neural network feature map for Chinese phonemes can be built up. In this paper, feature map structures for Chinese phonemes are discussed and tested. This research on a Chinese phonetic feature map is important both for Chinese speech recognition and for building a Chinese phonetic typewriter.
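The self-organizing feature maps referred to are typically trained with Kohonen's algorithm. A minimal, generic sketch follows; the grid size, decay schedules and the synthetic two-cluster data are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen self-organizing map.

    data: (n_samples, n_features) feature vectors (e.g. acoustic features).
    Returns the trained weight array of shape (grid[0], grid[1], n_features).
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(0.0, 0.1, (h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            # best-matching unit: node whose weight vector is nearest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # linearly decaying learning rate and neighbourhood radius
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            # Gaussian neighbourhood pulls nearby nodes toward x
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            nb = np.exp(-grid_d2 / (2.0 * sigma**2))[..., None]
            weights += lr * nb * (x - weights)
            step += 1
    return weights

# Two well-separated synthetic "phoneme" clusters should map to distinct nodes
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.2, (100, 2)),
                  rng.normal(5.0, 0.2, (100, 2))])
wts = train_som(data)

def bmu_of(x):
    d = np.linalg.norm(wts - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, distinct phoneme classes occupy distinct, topologically neighbouring regions of the grid, which is what makes the map usable as a typewriter front end.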
Abstract:
We present simulations of London's meteorology using the Met Office Unified Model with a new, sophisticated surface energy-balance scheme to represent the urban surfaces, called MORUSES. Simulations are performed with the urban surfaces represented and with the urban surfaces replaced with grass in order to calculate the urban increment on the local meteorology. The local urban effects were moderated to some extent by the passage of an onshore flow that propagated up the Thames estuary and across the city, cooling London slightly in the afternoon. Validations of screen-level temperature show encouraging agreement to within 1–2 K, at times when the urban increment is up to 5 K. The model results are then used to examine factors shaping the spatial and temporal structure of London's atmospheric boundary layer. The simulations reconcile the differences in the temporal evolution of the urban heat island (UHI) shown in various studies and demonstrate that the variation of UHI with time depends strongly on the urban fetch. The UHI at a location downwind of the city centre decreases during the night, while the UHI at the city centre stays constant. Finally, the UHI at a location upwind of the city centre increases continuously. The magnitude of the UHI by the time of the evening transition increases with urban fetch. The urban increments are largest at night, when the boundary layer is shallow. The boundary layer experiences continued warming after sunset, as the heat from the urban fabric is released, and a weakly convective boundary layer develops across the city. The urban land-use fraction is the dominant control on the spatial structure in the sensible heat flux and the resulting urban increment, although even the weak advection present in this case study is sufficient to advect the peak temperature increments downwind of the most built-up areas. Copyright © 2011 Royal Meteorological Society and British Crown Copyright, the Met Office
Abstract:
In terrestrial television transmission, multiple paths of various lengths can occur between the transmitter and the receiver. Such paths occur because of reflections from objects outside the direct transmission path. The multipath signals arriving at the receiver are all detected along with the intended signal, causing time-displaced replicas called 'ghosts' to appear on the television picture. With an increasing number of people living within built-up areas, ghosting is becoming commonplace and deghosting therefore increasingly important. This thesis uses a deterministic time-domain approach to deghosting, resulting in a simple solution to the problem of removing ghosts. A new video detector is presented which reduces the synchronous detector local oscillator phase error, caused by any practical size of ghost, to a lower level than has previously been achieved. With the new detector, dispersion of the video signal is minimised, and a known closed-form time-domain description of the individual ghost components within the detected video is subsequently obtained. Developed from mathematical descriptions of the detected video, a new deghoster filter structure is presented which is capable of removing both the in-phase (I) and the phase-quadrature (Q) ghost signals that arise from VSB operation. The new deghoster filter requires much less hardware than any previous deghoster capable of removing both I and Q ghost components. A new channel identification algorithm, based upon simple correlation techniques, was also developed to find the delay and complex amplitude characteristics of individual ghosts. The result of the channel identification is then passed to the new I and Q deghoster filter for ghost cancellation. Five papers have been published from the research work performed for this thesis.
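Correlation-based channel identification of the kind described can be illustrated as follows: for a known training waveform, scanning candidate delays for a normalised correlation peak recovers a ghost's delay and amplitude. This sketch assumes a single real-amplitude ghost and a white-noise reference; the thesis's method handles complex (I and Q) amplitudes:

```python
import numpy as np

def identify_ghost(reference, received, max_delay):
    """Find the strongest echo by scanning candidate delays.

    reference : known training waveform (white-noise-like, so its
                autocorrelation is impulsive)
    received  : reference plus a scaled, delayed copy of itself
    Returns (delay_samples, relative_amplitude) of the correlation peak,
    excluding the direct path at delay 0.
    """
    auto0 = float(np.dot(reference, reference))
    best = (0, 0.0)
    for d in range(1, max_delay + 1):
        # correlate the received signal against a d-sample-shifted reference
        c = float(np.dot(received[d:], reference[:len(received) - d])) / auto0
        if abs(c) > abs(best[1]):
            best = (d, c)
    return best

# One ghost: delay 40 samples, relative amplitude 0.3
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 4096)
received = ref.copy()
received[40:] += 0.3 * ref[:-40]
delay, amp = identify_ghost(ref, received, max_delay=100)
```

The estimated delay and amplitude would then parameterise a cancellation filter that subtracts the reconstructed echo from the received signal.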
Abstract:
The authors present an active vision system which performs a surveillance task in everyday dynamic scenes. The system is based around simple, rapid motion processors and a control strategy which uses both position and velocity information. The surveillance task is defined in terms of two separate behavioral subsystems, saccade and smooth pursuit, which are demonstrated individually on the system. It is shown how these and other elementary responses to 2D motion can be built up into behavior sequences, and how judicious close cooperation between vision and control results in smooth transitions between the behaviors. These ideas are demonstrated by an implementation of a saccade to smooth pursuit surveillance system on a high-performance robotic hand/eye platform.
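A thresholded switch between the two behavioural subsystems can be sketched as below; the threshold value and the simple proportional pursuit law are assumptions for illustration, not the system's actual control strategy:

```python
def gaze_command(offset_deg, target_vel_dps, saccade_thresh=5.0):
    """Return a (mode, command) pair for one control step.

    offset_deg     : angular error between current gaze and target (deg)
    target_vel_dps : estimated target angular velocity (deg/s)

    A large position error triggers a ballistic saccade that cancels the
    error; otherwise smooth pursuit tracks the target velocity with a small
    proportional position correction, giving a smooth handover between the
    two behaviours.
    """
    if abs(offset_deg) > saccade_thresh:
        return "saccade", offset_deg
    return "pursuit", target_vel_dps + 0.5 * offset_deg
```

Using both position and velocity in the pursuit command is what lets the transition from saccade to pursuit happen without a velocity discontinuity.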
Abstract:
This project is concerned with the way that illustrations, photographs, diagrams and graphs, and typographic elements interact to convey ideas on the book page. A framework for graphic description is proposed to elucidate this graphic language of ‘complex texts’. The model is built up from three main areas of study, with reference to a corpus of contemporary children’s science books. First, a historical survey puts the subjects for study in context. Then a multidisciplinary discussion of graphic communication provides a theoretical underpinning for the model; this leads to various proposals, such as the central importance of ratios and relationships among parts in creating meaning in graphic communication. Lastly, a series of trials in description contributes to the structure of the model itself. At the heart of the framework is an organising principle that integrates descriptive models from the fields of design, literary criticism, art history, and linguistics, among others, as well as novel categories designed specifically for book design. Broadly, design features are described in terms of elemental component parts (micro-level), larger groupings of these (macro-level), and finally in terms of overarching, ‘whole book’ qualities (meta-level). Various features of book design emerge at different levels; for instance, the presence of nested discursive structures, a form of graphic recursion in editorial design, is proposed at the macro-level. Across these three levels are the intersecting categories of ‘rule’ and ‘context’, offering different perspectives with which to describe graphic characteristics. Context-based features are contingent on social and cultural environment, the reader’s previous knowledge, and the actual conditions of reading; rule-based features relate to the systematic or codified aspects of graphic language.
The model aims to be a frame of reference for graphic description, of use in different forms of qualitative or quantitative research and as a heuristic tool in practice and teaching.
Abstract:
Very high-resolution Synthetic Aperture Radar sensors represent an alternative to aerial photography for delineating floods in built-up environments where flood risk is highest. However, even with currently available SAR image resolutions of 3 m and higher, signal returns from man-made structures hamper the accurate mapping of flooded areas. Enhanced image processing algorithms and a better exploitation of image archives are required to facilitate the use of microwave remote sensing data for monitoring flood dynamics in urban areas. In this study a hybrid methodology combining radiometric thresholding, region growing and change detection is introduced as an approach enabling automated, objective and reliable flood extent extraction from very high-resolution urban SAR images. The method is based on the calibration of a statistical distribution of “open water” backscatter values inferred from SAR images of floods. SAR images acquired during dry conditions enable the identification of areas i) that are not “visible” to the sensor (i.e. regions affected by ‘layover’ and ‘shadow’) and ii) that systematically behave as specular reflectors (e.g. smooth tarmac, permanent water bodies). Change detection with respect to a pre- or post-flood reference image thereby reduces over-detection of inundated areas. A case study of the July 2007 Severn River flood (UK) observed by the very high-resolution SAR sensor on board TerraSAR-X, as well as airborne photography, highlights advantages and limitations of the proposed method. We conclude that even though the fully automated SAR-based flood mapping technique overcomes some limitations of previous methods, further technological and methodological improvements are necessary for SAR-based flood detection in urban areas to match the flood mapping capability of high quality aerial photography.
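The thresholding-plus-region-growing core of such a method can be sketched as follows. The thresholds, the synthetic image and the four-connected growth rule are illustrative assumptions; the actual method calibrates a statistical distribution of open-water backscatter and adds change detection against a dry-conditions reference:

```python
import numpy as np
from collections import deque

def flood_map(backscatter, seed_thresh, grow_thresh, exclude=None):
    """Radiometric thresholding plus four-connected region growing.

    backscatter : 2-D array of SAR backscatter (dB); open water returns low
    seed_thresh : pixels at or below this value seed flood regions
    grow_thresh : looser threshold applied while growing from the seeds
    exclude     : boolean mask of layover/shadow/permanent-water pixels
                  (derived from a dry reference image), never labelled flood
    """
    h, w = backscatter.shape
    if exclude is None:
        exclude = np.zeros((h, w), dtype=bool)
    flooded = np.zeros((h, w), dtype=bool)
    seeds = np.argwhere((backscatter <= seed_thresh) & ~exclude)
    flooded[tuple(seeds.T)] = True
    queue = deque(map(tuple, seeds))
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w and not flooded[ni, nj]
                    and not exclude[ni, nj]
                    and backscatter[ni, nj] <= grow_thresh):
                flooded[ni, nj] = True
                queue.append((ni, nj))
    return flooded

# Synthetic scene: bright land (-8 dB), a dark flood core (-15 dB) and a
# fringe (-11 dB) that only the looser growing threshold picks up
img = np.full((50, 50), -8.0)
img[10:30, 10:30] = -15.0
img[30:35, 10:30] = -11.0
mask = flood_map(img, seed_thresh=-13.0, grow_thresh=-10.0)
```

Seeding with a strict threshold and growing with a looser one captures fringe pixels connected to confident open water while rejecting isolated dark speckle.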
Abstract:
Currently there are few observations of the urban wind field at heights other than rooftop level. Remote sensing instruments such as Doppler lidars provide wind speed data at many heights, which would be useful in determining wind loadings of tall buildings and predicting local air quality. Studies comparing remote sensing with traditional anemometers carried out in flat, homogeneous terrain often use scan patterns which take several minutes. In an urban context the flow changes quickly in space and time, so faster scans are required to ensure little change in the flow over the scan period. We compare 3993 h of wind speed data collected using a three-beam Doppler lidar wind profiling method with data from a sonic anemometer at 190 m. Both instruments are located in central London, UK, a highly built-up area. Based on wind profile measurements every 2 min, the uncertainty in the hourly mean wind speed due to the sampling frequency is 0.05–0.11 m s⁻¹. The lidar tended to overestimate the wind speed by ≈0.5 m s⁻¹ for wind speeds below 20 m s⁻¹. Accuracy may be improved by increasing the scanning frequency of the lidar. This method is considered suitable for use in urban areas.
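In a three-beam retrieval, each beam measures the projection of the wind vector onto its pointing direction, so three radial velocities determine (u, v, w) through a 3 × 3 linear system. A sketch with an assumed beam geometry, not necessarily the configuration used in the study:

```python
import numpy as np

def beam_vector(az_deg, el_deg):
    """Unit pointing vector for a beam: azimuth from north, elevation from
    horizontal; components are (east, north, up)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def retrieve_wind(azimuths, elevations, radial_velocities):
    """Solve b_i . (u, v, w) = vr_i for the three-beam geometry."""
    B = np.array([beam_vector(a, e) for a, e in zip(azimuths, elevations)])
    return np.linalg.solve(B, np.asarray(radial_velocities, dtype=float))

# Assumed geometry: one vertical beam plus two slanted beams 90 deg apart
az = [0.0, 0.0, 90.0]
el = [90.0, 70.0, 70.0]
true_wind = np.array([3.0, 4.0, 0.1])   # (u east, v north, w up) in m/s
vr = [beam_vector(a, e) @ true_wind for a, e in zip(az, el)]
u, v, w = retrieve_wind(az, el, vr)
```

Because only three beams are needed per profile, a full scan completes in seconds, which is why the method suits rapidly varying urban flow.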
Abstract:
Urbanization, the expansion of built-up areas, is an important yet less-studied aspect of land use/land cover change in climate science. To date, most global climate models used to evaluate effects of land use/land cover change on climate do not include an urban parameterization. Here, the authors describe the formulation and evaluation of a parameterization of urban areas that is incorporated into the Community Land Model, the land surface component of the Community Climate System Model. The model is designed to be simple enough to be compatible with structural and computational constraints of a land surface model coupled to a global climate model, yet complex enough to explore physically based processes known to be important in determining urban climatology. The city representation is based upon the “urban canyon” concept, which consists of roofs, sunlit and shaded walls, and canyon floor. The canyon floor is divided into pervious (e.g., residential lawns, parks) and impervious (e.g., roads, parking lots, sidewalks) fractions. Trapping of longwave radiation by canyon surfaces, and the absorption and reflection of solar radiation, are determined by accounting for multiple reflections. Separate energy balances and surface temperatures are determined for each canyon facet. A one-dimensional heat conduction equation is solved numerically for a 10-layer column to determine conduction fluxes into and out of canyon surfaces. Model performance is evaluated against measured fluxes and temperatures from two urban sites. Results indicate the model does a reasonable job of simulating the energy balance of cities.
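The 10-layer heat-conduction component can be illustrated with an explicit finite-difference step; the material properties, time step and boundary conditions below are generic assumptions for illustration, not the parameterization's actual numerics:

```python
import numpy as np

def conduct(T, k, rho_c, dz, dt, T_surf):
    """One explicit time step of 1-D heat conduction through a layered facet.

    T      : layer temperatures (K), index 0 nearest the surface
    k      : thermal conductivity (W m-1 K-1)
    rho_c  : volumetric heat capacity (J m-3 K-1)
    dz     : layer thickness (m)
    dt     : time step (s); needs dt < rho_c * dz**2 / (2 * k) for stability
    T_surf : imposed skin temperature boundary condition (K)

    Fluxes are positive downward; the bottom boundary is insulated.
    """
    F = np.empty(len(T) + 1)
    F[0] = -k * (T[0] - T_surf) / dz        # conduction into the top layer
    F[1:-1] = -k * np.diff(T) / dz          # interior interface fluxes
    F[-1] = 0.0                             # zero-flux bottom boundary
    return T + dt * (F[:-1] - F[1:]) / (rho_c * dz)

# Drive a 10-layer concrete-like column from 290 K toward a 300 K skin
# temperature (property values are generic assumptions)
T = np.full(10, 290.0)
for _ in range(20000):                      # ~70 days of 300 s steps
    T = conduct(T, k=1.0, rho_c=2.0e6, dz=0.1, dt=300.0, T_surf=300.0)
```

The delayed release of heat stored in such columns is exactly the mechanism behind the continued post-sunset warming of urban boundary layers noted elsewhere in these abstracts.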
Abstract:
We examine the recovery of Arctic sea ice from prescribed ice-free summer conditions in simulations of 21st century climate in an atmosphere–ocean general circulation model. We find that ice extent recovers typically within two years. The excess oceanic heat that had built up during the ice-free summer is rapidly returned to the atmosphere during the following autumn and winter, and then leaves the Arctic partly through increased longwave emission at the top of the atmosphere and partly through reduced atmospheric heat advection from lower latitudes. Oceanic heat transport does not contribute significantly to the loss of the excess heat. Our results suggest that anomalous loss of Arctic sea ice during a single summer is reversible, as the ice–albedo feedback is alleviated by large-scale recovery mechanisms. Hence, hysteretic threshold behavior (or a “tipping point”) is unlikely to occur during the decline of Arctic summer sea-ice cover in the 21st century.
Abstract:
The CHARMe project enables the annotation of climate data with key pieces of supporting information that we term “commentary”. Commentary reflects the experience that has built up in the user community, and can help new or less-expert users (such as consultants, SMEs, experts in other fields) to understand and interpret complex data. In the context of global climate services, the CHARMe system will record, retain and disseminate this commentary on climate datasets, and provide a means for feeding back this experience to the data providers. Based on novel linked data techniques and standards, the project has developed a core system, data model and suite of open-source tools to enable this information to be shared, discovered and exploited by the community.
Abstract:
Scintillometry, a form of ground-based remote sensing, provides the capability to estimate surface heat fluxes over scales of a few hundred metres to kilometres. Measurements are spatial averages, making this technique particularly valuable over areas with moderate heterogeneity such as mixed agricultural or urban environments. In this study, we present the structure parameters of temperature and humidity, which can be related to the sensible and latent heat fluxes through similarity theory, for a suburban area in the UK. The fluxes are provided in the second paper of this two-part series. A millimetre-wave scintillometer was combined with an infrared scintillometer along a 5.5 km path over northern Swindon. The pairing of these two wavelengths offers sensitivity to both temperature and humidity fluctuations, and the correlation between wavelengths is also used to retrieve the path-averaged temperature–humidity correlation. Comparison is made with structure parameters calculated from an eddy covariance station located close to the centre of the scintillometer path. The performance of the measurement techniques under different conditions is discussed. Similar behaviour is seen between the two data sets at sub-daily timescales. For the two summer-to-winter periods presented here, similar evolution is displayed across the seasons. A higher vegetation fraction within the scintillometer source area is consistent with the lower Bowen ratio observed (midday Bowen ratio < 1) compared with more built-up areas around the eddy covariance station. The energy partitioning is further explored in the companion paper.
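To give a flavour of the similarity-theory link mentioned here, a free-convection-limit estimate relates the sensible heat flux to the temperature structure parameter dimensionally as H ∝ ρ·cp·z·(g/T)^½·(C_T²)^¾. The proportionality coefficient below is an assumed, indicative value only; the full method uses stability functions of z/L and both structure parameters:

```python
import numpy as np

RHO = 1.2       # air density (kg m-3)
CP = 1005.0     # specific heat of air at constant pressure (J kg-1 K-1)
G = 9.81        # gravitational acceleration (m s-2)
B_FC = 0.57     # free-convection coefficient: an assumed, indicative value

def sensible_heat_flux_fc(ct2, z_eff, t_mean):
    """Free-convection-limit estimate of sensible heat flux from CT2.

    ct2    : temperature structure parameter (K^2 m^-2/3)
    z_eff  : effective measurement height above the displacement height (m)
    t_mean : mean air temperature (K)
    Returns H in W m-2; valid only as an order-of-magnitude sketch under
    strongly unstable (free-convective) conditions.
    """
    return RHO * CP * B_FC * z_eff * np.sqrt(G / t_mean) * ct2 ** 0.75

# Example with plausible midday suburban values (illustrative numbers)
h_example = sensible_heat_flux_fc(ct2=0.05, z_eff=25.0, t_mean=290.0)
```

Pairing the millimetre-wave and infrared scintillometers adds the humidity structure parameter, and hence the latent heat flux, through the analogous relation.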
Abstract:
Long-term meteorological records (> 100 years) from stations associated with villages are generally classified as rural and assumed to have no urban influence. Using networks installed in two European villages, the local and microclimatic variations around two of these rural-village sites are examined. Annual average temperature differences (ΔT) of 0.6 and 0.4 K were observed between the built-up village area and the current meteorological station in Geisenheim (Germany) and Haparanda (Sweden), respectively. Considerably larger values were recorded for the minimum temperatures and during summer. The spatial variations in temperature within the villages are of the same order as the trends recorded over the past 100+ years in these villages (0.06 to 0.17 K per decade). This suggests that the potential biases in the long records of rural villages warrant careful consideration, like the effects of the more commonly studied large urban areas.
Abstract:
Whey proteins are becoming an increasingly popular functional food ingredient. There are, however, sensory properties associated with whey protein beverages that may hinder the consumption of quantities sufficient to gain the desired nutritional benefits. One such property is mouth drying. The influence of protein structure on the mouthfeel properties of milk proteins has been previously reported. This paper investigates the effect of thermal denaturation of whey proteins on physicochemical properties (viscosity, particle size, zeta-potential, pH) and relates this to the observed sensory properties measured by qualitative descriptive analysis and sequential profiling. Mouthcoating, drying and chalky attributes built up over repeated consumption, with higher intensities for samples subjected to longer heating times (p < 0.05). Viscosity, pH and zeta-potential were found to be similar for all samples; however, particle size increased with longer heating times. As the pH of all samples was close to neutral, this implies that neither the precipitation of whey proteins at low pH, nor their acidity, as reported in previous literature, can be the drying mechanism in this case. The increase in mouth drying with increased heating time suggests that protein denaturation is a contributing factor, and a possible mucoadhesive mechanism is discussed.