201 results for ENZYMATIC RESOLUTION
Abstract:
The anthropogenic heat emissions generated by human activities in London are analysed in detail for 2005–2008 and considered in the context of long-term past and future trends (1970–2025). Emissions from buildings, road traffic and human metabolism are finely resolved in space (200 × 200 m2) and time (30 min). Software to compute and visualize the results is provided. The annual mean anthropogenic heat flux for Greater London is 10.9 W m−2 for 2005–2008, with the highest peaks in the central activities zone (CAZ), associated with extensive service-industry activities. Towards the outskirts of the city, emissions from the domestic sector and road traffic dominate. Anthropogenic heat is mostly emitted as sensible heat, with a latent heat fraction of 7.3% and a heat-to-wastewater fraction of 12%; the implications of the use of evaporative cooling towers are briefly addressed. Projections indicate a further increase of heat emissions within the CAZ in the next two decades, related to further intensification of activities in this area.
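At heart, estimates like this map annual energy emissions per grid cell onto a mean flux density. The unit conversion can be sketched as follows; the grid size, sector magnitudes and random values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical per-sector annual heat emissions on a 200 m x 200 m grid,
# in GJ per grid cell per year; magnitudes are invented, not the study's data.
rng = np.random.default_rng(0)
grid = (50, 50)
buildings = rng.gamma(2.0, 4000.0, grid)    # mean ~8000 GJ per cell
traffic = rng.gamma(2.0, 2500.0, grid)      # mean ~5000 GJ per cell
metabolism = rng.gamma(2.0, 400.0, grid)    # mean ~800 GJ per cell

cell_area_m2 = 200.0 * 200.0
seconds_per_year = 365.25 * 24.0 * 3600.0

# Convert annual energy per cell (GJ) to a mean flux density (W m-2):
# Q_F = E / (area * time)
total_gj = buildings + traffic + metabolism
qf_wm2 = total_gj * 1e9 / (cell_area_m2 * seconds_per_year)

mean_qf = float(qf_wm2.mean())
```

Sector totals of this order of magnitude put the grid average near the ~11 W m−2 scale quoted for Greater London, which is the point of the conversion.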
Abstract:
In this study, we examine the seasonal and geographical variability of the marine aerosol fine-mode fraction (fm) and its impact on deriving the anthropogenic component of aerosol optical depth (ta) and direct radiative forcing from multispectral satellite measurements. A proxy of fm, empirically derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5 data, shows large seasonal and geographical variations that are consistent with the Goddard Chemistry Aerosol Radiation Transport (GOCART) and Global Modeling Initiative (GMI) model simulations. The seasonally and spatially varying fm so derived is then implemented into a method of estimating ta and direct radiative forcing from the MODIS measurements. It is found that using a constant value for fm, as in previous studies, would have overestimated ta by about 20% over the global ocean, with the overestimation up to ~45% in some regions and seasons. The 7-year (2001–2007) global ocean average ta is 0.035, with the yearly average ranging from 0.031 to 0.039. Future improvement in measurements is needed to better separate anthropogenic aerosols from natural ones and to narrow the wide range of estimated aerosol direct radiative forcing.
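In outline, the anthropogenic component is the fine-mode part of the total optical depth minus an assumed natural fine-mode contribution. A minimal sketch of that bookkeeping, with invented values and a single constant natural term standing in for the model-constrained corrections actually used:

```python
import numpy as np

# Illustrative seasonal values of total AOD (tau) and fine-mode fraction (fm)
# over an ocean region; all numbers are invented for the sketch.
tau = np.array([0.12, 0.18, 0.25, 0.15])
fm = np.array([0.45, 0.60, 0.70, 0.50])    # seasonally varying, as in the study

# Assumed natural fine-mode AOD (marine and fine-dust aerosol); in the actual
# method this term is constrained with model simulations such as GOCART/GMI.
tau_fine_natural = 0.03

tau_fine = fm * tau                               # fine-mode optical depth
tau_anth = np.clip(tau_fine - tau_fine_natural, 0.0, None)
```

Holding fm constant in the first line is exactly the simplification the study quantifies; letting it vary by season and region changes tau_anth accordingly.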
Abstract:
This paper presents a new method to calculate sky view factors (SVFs) from high-resolution urban digital elevation models using a shadow-casting algorithm. By using weighted annuli to derive the SVF from hemispherical images, the light-source positions can be predefined and spread uniformly over the whole hemisphere, whereas an alternative method applies a random set of light-source positions with a cosine-weighted distribution of sun altitude angles. The two methods give similar results over a large number of SVF images. However, when comparing pixel-level variations between an image generated with the new method presented in this paper and an image from the random method, anisotropic patterns occur. The absolute mean difference between the two methods is 0.002, ranging up to 0.040. The maximum difference can be as much as 0.122. Since the SVF is a geometrically derived parameter, the anisotropic errors created by the random method must be considered significant.
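The annulus weighting itself is simple to write down: divide the hemisphere into equal-width zenith annuli, weight each by the discretised sin(θ)cos(θ) contribution of that annulus to the view factor, and sum the open-sky fraction in each. A sketch under that reading (the actual method works on shadow-cast hemispherical images; the open fractions here are synthetic):

```python
import numpy as np

def sky_view_factor(open_fraction):
    """SVF from the open-sky fraction of n equal-width zenith annuli.

    Each annulus i (centred at zenith angle theta_i, width pi/2n) is weighted
    by sin(2*theta_i)*d_theta, which discretises the integral of
    sin(theta)*cos(theta) over the hemisphere; a fully open sky gives SVF ~ 1.
    """
    open_fraction = np.asarray(open_fraction, dtype=float)
    n = open_fraction.size
    d_theta = (np.pi / 2.0) / n
    theta = (np.arange(n) + 0.5) * d_theta          # annulus centres
    weights = np.sin(2.0 * theta) * d_theta
    return float(np.sum(weights * open_fraction))

svf_open = sky_view_factor(np.ones(90))                  # unobstructed sky
svf_street = sky_view_factor(np.linspace(1.0, 0.2, 90))  # canyon-like horizon
```

Because the weights are fixed in advance, every pixel is evaluated against the same light-source geometry, which is what removes the anisotropic noise of the random-sampling approach.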
Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good-quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data scored better than the DP data when evaluated against the forecast for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.
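The hit rate and FAR used here are standard contingency-table scores. A minimal sketch with invented counts (note that "FAR" denotes the false alarm *ratio* in some communities and the false alarm *rate* in others; the ratio form is shown):

```python
# Contingency-table counts for a flood warning system (invented numbers):
hits = 18          # event forecast and observed
false_alarms = 4   # event forecast but not observed
misses = 6         # event observed but not forecast

hit_rate = hits / (hits + misses)           # probability of detection
far = false_alarms / (hits + false_alarms)  # false alarm ratio
```

A dataset that lowers false_alarms while leaving hits and misses unchanged, as the HP data did here, improves FAR without changing the hit rate.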
Abstract:
The dependence of the annual mean tropical precipitation on horizontal resolution is investigated in the atmospheric version of the Hadley Centre General Environment Model (HadGEM1). Reducing the grid spacing from about 350 km to 110 km improves the precipitation distribution in most of the tropics. In particular, characteristic dry biases over South and Southeast Asia including the Maritime Continent as well as wet biases over the western tropical oceans are reduced. The annual-mean precipitation bias is reduced by about one third over the Maritime Continent and the neighbouring ocean basins associated with it via the Walker circulation. Sensitivity experiments show that much of the improvement with resolution in the Maritime Continent region is due to the specification of better resolved surface boundary conditions (land fraction, soil and vegetation parameters) at the higher resolution. It is shown that in particular the formulation of the coastal tiling scheme may cause resolution sensitivity of the mean simulated climate. The improvement in the tropical mean precipitation in this region is not primarily associated with the better representation of orography at the higher resolution, nor with changes in the eddy transport of moisture. Sizeable sensitivity to changes in the surface fields may be one of the reasons for the large variation of the mean tropical precipitation distribution seen across climate models.
Abstract:
We describe a fluorometric assay for heme synthetase, the enzyme that is genetically deficient in erythropoietic protoporphyria. The method, which can readily detect activity in 1 microliter of packed human lymphocytes, is based on the formation of zinc protoheme from protoporphyrin IX. That zinc chelatase and ferrochelatase activities reside in the same enzyme was shown by the competitive action of ferrous ions and the inhibitory effect of N-methyl protoporphyrin (a specific inhibitor of heme synthetase) on zinc chelatase. The Km for zinc was 11 micrograms and that for protoporphyrin IX was 6 microM. The Ki for ferrous ions was 14 microM. Zinc chelatase was reduced to 15.3% of the mean control activity in lymphocytes obtained from patients with protoporphyria, thus confirming the defect of heme biosynthesis in this disorder. The assay should prove useful for determining heme synthetase in tissues with low specific activity and for investigating further the enzymatic defect in protoporphyria.
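Constants such as the Km and Ki quoted above come from fits of the Michaelis-Menten rate law v = Vmax·S/(Km + S). A sketch of recovering Km and Vmax from rate data via the Lineweaver-Burk linearisation, using synthetic noise-free measurements generated from assumed true values:

```python
import numpy as np

# Synthetic substrate concentrations S (microM) and reaction rates v,
# generated from assumed true parameters Km = 6.0 microM, Vmax = 2.0.
Km_true, Vmax_true = 6.0, 2.0
S = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
v = Vmax_true * S / (Km_true + S)

# Lineweaver-Burk linearisation: 1/v = (Km/Vmax)*(1/S) + 1/Vmax,
# so a straight-line fit of 1/v against 1/S yields both constants.
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_est = 1.0 / intercept
Km_est = slope * Vmax_est
```

With real, noisy data a direct nonlinear least-squares fit of the rate law is usually preferred, since the double-reciprocal transform inflates errors at low S.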
Abstract:
Erythropoietic protoporphyria (EPP) is associated with a deficiency of protohaem ferrolyase. We have used a novel assay for this enzyme, based on its ability to utilize zinc as a substrate, to investigate the inheritance of EPP in nine affected families. Zinc chelatase activity was markedly reduced in peripheral blood mononuclear cells from 14 EPP patients (mean, 3.3 nmol Zn protohaem/h/mg protein; range, 0.3-8.0) compared with 41 controls (16.8 +/- 3.6; P < 0.01). In three families with parent-to-child transmission of disease, the asymptomatic parent had enzymatic activity within the normal range. In three pedigrees where the parents were asymptomatic, enzymatic activities were below the 95% confidence limits in both. Zinc chelatase activity was below the mean control value in 17 of the 18 parents in the nine affected pedigrees, and in six of seven asymptomatic offspring of patients with protoporphyria. The findings suggest that EPP is not transmitted as a simple dominant trait and that inheritance of more than one gene may be required for disease expression.
Abstract:
Flooding is a particular hazard in urban areas worldwide due to the increased risks to life and property in these regions. Synthetic Aperture Radar (SAR) sensors are often used to image flooding because of their all-weather day-night capability, and now possess sufficient resolution to image urban flooding. The flood extents extracted from the images may be used for flood relief management and improved urban flood inundation modelling. A difficulty with using SAR for urban flood detection is that, due to its side-looking nature, substantial areas of urban ground surface may not be visible to the SAR due to radar layover and shadow caused by buildings and taller vegetation. This paper investigates whether urban flooding can be detected in layover regions (where flooding may not normally be apparent) using double scattering between the (possibly flooded) ground surface and the walls of adjacent buildings. The method estimates double scattering strengths using a SAR image in conjunction with a high resolution LiDAR (Light Detection and Ranging) height map of the urban area. A SAR simulator is applied to the LiDAR data to generate maps of layover and shadow, and estimate the positions of double scattering curves in the SAR image. Observations of double scattering strengths were compared to the predictions from an electromagnetic scattering model, for both the case of a single image containing flooding, and a change detection case in which the flooded image was compared to an un-flooded image of the same area acquired with the same radar parameters. The method proved successful in detecting double scattering due to flooding in the single-image case, for which flooded double scattering curves were detected with 100% classification accuracy (albeit using a small sample set) and un-flooded curves with 91% classification accuracy. The same measures of success were achieved using change detection between flooded and un-flooded images. 
Depending on the particular flooding situation, the method could lead to improved detection of flooding in urban areas.
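The layover and shadow extents that hide ground surface from a side-looking SAR follow from simple geometry: for a wall of height h viewed at local incidence angle θ, the ground-range layover extent in front of the wall is roughly h/tan θ and the radar shadow behind it roughly h·tan θ. A sketch (the building height and angle are arbitrary examples, not values from the study):

```python
import math

def layover_and_shadow(h_m, incidence_deg):
    """Approximate ground-range extents (m) of layover and shadow for a
    vertical wall of height h_m at the given local incidence angle."""
    t = math.tan(math.radians(incidence_deg))
    return h_m / t, h_m * t

# A 20 m building at 35 degrees incidence hides tens of metres of ground:
layover, shadow = layover_and_shadow(20.0, 35.0)
```

This is why the paper turns to double scattering: the ground in the layover strip is exactly where direct flood returns are unavailable, but the ground-wall double bounce still carries information about it.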
Abstract:
Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses, with a goal of spatial grid resolutions as fine as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated "true" SST data and subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of the spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing inherent to both analysis procedures. Comparisons of the two analyses, which have comparable grid sizes, show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
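The attenuation of high-wavenumber variance by analysis smoothing is easy to reproduce in one dimension: smooth a synthetic "true" field, compare power spectra, and the upper wavenumbers lose most of their variance. A sketch with a random red-noise transect standing in for SST and a running mean standing in for the analysis procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D "true" SST transect and a smoothed "analysis" of it.
n = 1024
true_sst = np.cumsum(rng.normal(size=n))   # red-noise-like field
kernel = np.ones(9) / 9.0                  # 9-point running-mean smoother
analysis = np.convolve(true_sst, kernel, mode="same")

def power_spectrum(x):
    """Power spectrum of a demeaned real series via the real FFT."""
    x = x - x.mean()
    return np.abs(np.fft.rfft(x)) ** 2

p_true = power_spectrum(true_sst)
p_anal = power_spectrum(analysis)

# Fraction of "true" variance retained in the upper half of the wavenumber
# range: smoothing strongly attenuates it, as in the study.
high = slice(p_true.size // 2, None)
attenuation = float(p_anal[high].sum() / p_true[high].sum())
```

Cross-spectral analysis between truth and analysis, as used in the paper, additionally distinguishes attenuated real features from spurious small-scale noise, which an auto-spectrum alone cannot do.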
Abstract:
We perform simulations of several convective events over the southern UK with the Met Office Unified Model (UM) at horizontal grid lengths ranging from 1.5 km to 200 m. Comparing the simulated storms on these days with the Met Office rainfall radar network allows us to apply a statistical approach to evaluate the properties and evolution of the simulated storms over a range of conditions. Here we present results comparing the storm morphology in the model and reality which show that the simulated storms become smaller as grid length decreases and that the grid length that fits the observations best changes with the size of the observed cells. We investigate the sensitivity of storm morphology in the model to the mixing length used in the subgrid turbulence scheme. As the subgrid mixing length is decreased, the number of small storms with high area-averaged rain rates increases. We show that by changing the mixing length we can produce a lower resolution simulation that produces similar morphologies to a higher resolution simulation.
Evaluation of high-resolution surface wind products at global and regional scales
Abstract:
High-resolution surface wind fields covering the global ocean, estimated from remotely sensed wind data and ECMWF wind analyses, have been available since 2005 with a spatial resolution of 0.25 degrees in longitude and latitude and a temporal resolution of 6 h. Their quality is investigated through comparisons with surface wind vectors from 190 buoys moored in various oceanic basins, from research vessels, and from QuikSCAT scatterometer data taken during 2005-2006. The NCEP/NCAR and NCDC blended wind products are also considered. The comparisons performed during January-December 2005 show that speeds and directions compare well with the in-situ observations, from both moored buoys and ships, as well as with the remotely sensed data. The root-mean-squared differences of wind speed and direction for the new blended wind data are lower than 2 m/s and 30 degrees, respectively. These values are similar to those estimated in comparisons of hourly buoy measurements and QuikSCAT near-real-time retrievals. At the global scale, the new products compare well with the wind speed and wind vector components observed by QuikSCAT. No significant dependencies on the QuikSCAT wind speed or on the oceanic region considered are evident.
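Computing root-mean-squared differences for wind direction needs one precaution that speed does not: direction differences must be wrapped to ±180° before squaring. A sketch with a handful of invented product/buoy pairs:

```python
import numpy as np

# Illustrative paired wind observations: blended product vs buoy (invented).
speed_prod = np.array([5.2, 7.8, 10.1, 3.4])   # m/s
speed_buoy = np.array([5.0, 8.3, 9.5, 3.9])
dir_prod = np.array([350.0, 10.0, 180.0, 95.0])  # degrees
dir_buoy = np.array([5.0, 355.0, 170.0, 110.0])

rmse_speed = float(np.sqrt(np.mean((speed_prod - speed_buoy) ** 2)))

# Wrap direction differences to [-180, 180] before squaring; otherwise
# 350 vs 5 degrees would count as a 345-degree error instead of 15.
d = (dir_prod - dir_buoy + 180.0) % 360.0 - 180.0
rmse_dir = float(np.sqrt(np.mean(d ** 2)))
```

With these invented values both scores fall inside the thresholds quoted in the abstract (2 m/s and 30 degrees), but only because of the wrapping step.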
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to look at the small-scale dynamical processes in specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks for the optimum configuration to obtain a distribution and intensity of the precipitation field that match observations. It uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions from the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialised at varying times before the peak precipitation is observed, to test the importance of the initialisation and boundary conditions and of how long the simulation can be run. The results are verified against rain-gauge data and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
In plankton ecology, it is a fundamental question how such a large number of competing phytoplankton species coexist in marine ecosystems under a seemingly limited variety of resources. This evergreen question was first posed by Hutchinson [Hutchinson, G.E., 1961. The paradox of the plankton. Am. Nat. 95, 137–145] as 'the paradox of the plankton'. Starting from Hutchinson's work, over more than four decades several investigators have put forward a variety of mechanisms for the extreme diversity of phytoplankton species. In this article, within the boundary of our knowledge, we review the literature on the proposed solutions and give a brief overview of the mechanisms proposed so far. The mechanisms that we discuss mainly include spatial and temporal heterogeneity in the physical and biological environment; externally imposed or self-generated spatial segregation; horizontal mesoscale ocean turbulence characterized by coherent vortices; oscillations and chaos generated by several internal and external causes; stable coexistence and compensatory dynamics under fluctuating temperature in resource competition; and, finally, the role of toxin-producing phytoplankton in maintaining the coexistence and biodiversity of the overall plankton population, which we have proposed recently. We find that, although the different mechanisms proposed so far are potentially applicable to specific ecosystems, a universally accepted theory for explaining plankton diversity in natural waters remains an unachieved goal.
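The baseline expectation that the paradox contradicts is competitive exclusion: on a single limiting resource, the species with the lowest break-even resource level R* = D·K/(μmax − D) excludes all others at equilibrium. A chemostat sketch with two species and invented parameters illustrates that baseline:

```python
# Competitive exclusion on a single limiting resource (the R* rule): two
# species with Monod growth compete in a chemostat; the one that subsists at
# the lower resource level wins. All parameter values are illustrative.

def monod(mu_max, K, R):
    """Monod (Michaelis-Menten) growth rate at resource concentration R."""
    return mu_max * R / (K + R)

D, S = 0.1, 10.0                   # dilution rate and resource supply
mu, K = (0.5, 0.5), (2.0, 4.0)     # species A and B: same mu_max, different K
# Break-even resource levels R* = D*K/(mu_max - D); lower R* should win.
r_star = [D * K[i] / (mu[i] - D) for i in range(2)]

R, N = S, [0.1, 0.1]               # initial resource and population densities
dt = 0.05
for _ in range(60000):             # ~3000 time units of forward Euler
    growth = [monod(mu[i], K[i], R) for i in range(2)]
    uptake = sum(growth[i] * N[i] for i in range(2))
    R += dt * (D * (S - R) - uptake)
    N = [N[i] + dt * N[i] * (growth[i] - D) for i in range(2)]
```

Species A (lower R*) drives the resource down to its own break-even level and species B collapses; the mechanisms reviewed in the article are the various ways real plankton communities escape this outcome.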