32 results for Metrics (Quantitative assessment).


Relevance: 30.00%

Abstract:

This paper presents a multicriteria decision-making model for lifespan energy efficiency assessment of intelligent buildings (IBs). The decision-making model, called IBAssessor, is developed using an analytic network process (ANP) method and a set of lifespan performance indicators for IBs selected by a new quantitative approach called the energy-time consumption index (ETI). In order to improve the quality of decision-making, the authors draw on previous research achievements including a lifespan sustainable business model, the Asian IB Index, and a number of relevant publications. Practitioners can use the IBAssessor ANP model at different stages of an IB lifespan for either engineering- or business-oriented assessments. Finally, this paper presents an experimental case study to demonstrate how to use the IBAssessor ANP model to solve real-world design tasks.
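The IBAssessor model itself is specific to this paper, but the core ANP weighting step it builds on is standard: priority weights are derived from pairwise comparison matrices via the principal eigenvector. A minimal sketch of that step, with illustrative indicator names and judgements that are not taken from the paper:

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 scale for three illustrative
# lifespan indicators (hypothetical judgements, not the paper's data):
# energy use vs. occupant comfort vs. maintainability.
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])

# The ANP/AHP weighting step: the principal right eigenvector of A,
# normalized to sum to 1, gives the indicator priorities.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w /= w.sum()
print(w.round(2))  # relative priorities of the three indicators
```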

Relevance: 30.00%

Abstract:

Purpose – For many academics in UK universities the nature and orientation of their research is overwhelmingly determined by considerations of how that work will be graded in research assessment exercises (RAEs). The grades awarded to work in a particular subject area can have a considerable impact on the individual and their university. There is a need to better understand those factors which may influence these grades. The paper seeks to address this issue.

Design/methodology/approach – The paper considers relationships between the grades awarded and the quantitative information provided to the assessment panels for the 1996 and 2001 RAEs for two subject areas, built environment and town and country planning, and for three other subject areas, civil engineering, geography and archaeology, in the 2001 RAE.

Findings – A simple model demonstrating strong and consistent relationships is established. RAE performance relates to numbers of research active staff, the production of books and journal papers, numbers of research studentships and graduations, and research income. Important differences between subject areas are identified.

Research limitations/implications – Important issues are raised about the extent to which the new assessment methodology to be adopted for the 2008 RAE will capture the essence of good quality research in architecture and built environment.

Originality/value – The findings provide a developmental perspective of RAEs and show how, despite a changed methodology, various research activities might be valued in the 2008 RAE. The basis for a methodology for reviewing the credibility of the judgements of panels is proposed.
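The abstract does not give the form of the "simple model", but the kind of fit it describes (grades regressed on quantitative submission data) can be sketched as an ordinary least-squares problem. The inputs below are hypothetical stand-ins for the variables named, not the paper's data:

```python
import numpy as np

# Hypothetical per-department inputs: research-active staff, journal
# papers, and research income (in thousands of pounds).
X = np.array([
    [12,  40, 250],
    [30, 110, 900],
    [22,  75, 600],
    [ 8,  20, 120],
], dtype=float)
grades = np.array([3.0, 5.0, 4.0, 2.0])  # hypothetical RAE grades

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, grades, rcond=None)
print(coef.round(3))  # intercept plus one weight per indicator
```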

Relevance: 30.00%

Abstract:

The transport sector emits a wide variety of gases and aerosols, with distinctly different characteristics which influence climate directly and indirectly via chemical and physical processes. Tools that allow these emissions to be placed on some kind of common scale in terms of their impact on climate have a number of possible uses, such as: in agreements and emission trading schemes; when considering potential trade-offs between changes in emissions resulting from technological or operational developments; and for comparing different environmental impacts of transport activities. Many of the non-CO2 emissions from the transport sector are short-lived substances, not currently covered by the Kyoto Protocol. There are formidable difficulties in developing metrics, and these are particularly acute for such short-lived species. One difficulty concerns the choice of an appropriate structure for the metric (which may depend on, for example, the design of any climate policy it is intended to serve) and the associated value judgements on the appropriate time periods to consider; these choices affect the perception of the relative importance of short- and long-lived species. A second difficulty is the quantification of input parameters (due to underlying uncertainty in atmospheric processes). In addition, for some transport-related emissions, the values of metrics (unlike the gases included in the Kyoto Protocol) depend on where and when the emissions are introduced into the atmosphere – both the regional distribution and, for aircraft, the distribution as a function of altitude, are important. In this assessment of such metrics, we present Global Warming Potentials (GWPs) as these have traditionally been used in the implementation of climate policy. We also present Global Temperature Change Potentials (GTPs) as an alternative metric, as this, or a similar metric, may be more appropriate for use in some circumstances. We use radiative forcings and lifetimes from the literature to derive GWPs and GTPs for the main transport-related emissions, and discuss the uncertainties in these estimates. We find large variations in metric (GWP and GTP) values for NOx, mainly due to the dependence on location of emissions but also because of inter-model differences and differences in experimental design. For aerosols we give only global-mean values due to an inconsistent picture amongst available studies regarding regional dependence. The uncertainty in the presented metric values reflects the current state of understanding; the ranking of the various components with respect to our confidence in the given metric values is also given. While the focus is mostly on metrics for comparing the climate impact of emissions, many of the issues are equally relevant for stratospheric ozone depletion metrics, which are also discussed.
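The GWP construction referred to is standard: the radiative forcing of a pulse emission, integrated to a time horizon H, relative to the same quantity for CO2. For a species with radiative efficiency A and a single exponential lifetime τ, the absolute GWP is A·τ·(1 − e^(−H/τ)); GTPs additionally require a temperature impulse-response function. A minimal sketch, with the CO2 reference treated as an assumed constant (in practice it comes from a carbon-cycle model):

```python
import numpy as np

def agwp(A, tau, H):
    """Absolute GWP (W m-2 yr per kg) of a species with radiative
    efficiency A (W m-2 per kg) and exponential lifetime tau (years),
    integrated to horizon H (years)."""
    return A * tau * (1.0 - np.exp(-H / tau))

# Illustrative numbers only; real values come from the literature the
# paper surveys.
AGWP_CO2_100 = 9.2e-14        # assumed CO2 reference, W m-2 yr per kg
A_x, tau_x = 2.0e-13, 12.0    # hypothetical short-lived species
print(agwp(A_x, tau_x, 100.0) / AGWP_CO2_100)  # its GWP(100)
```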

Relevance: 30.00%

Abstract:

We report on the consistency of water vapour line intensities in selected spectral regions between 800 and 12,000 cm−1 under atmospheric conditions, using sun-pointing Fourier transform infrared spectroscopy. Measurements were made across a number of days at both a low- and a high-altitude field site, sampling a relatively moist and a relatively dry atmosphere. Our data suggest that across most of the 800–12,000 cm−1 spectral region, water vapour line intensities in recent spectral line databases are generally consistent with what was observed. However, we find that HITRAN-2008 water vapour line intensities are systematically lower by up to 20% in the 8000–9200 cm−1 spectral interval relative to other spectral regions. This discrepancy is essentially removed when two new linelists (UCL08, a compilation of linelists and ab initio calculations, and one based on recent laboratory measurements by Oudot et al. (2010) [10] in the 8000–9200 cm−1 spectral region) are used. This strongly suggests that the H2O line strengths in the HITRAN-2008 database are indeed underestimated in this spectral region and in need of revision. The calculated global-mean clear-sky absorption of solar radiation is increased by about 0.3 W m−2 when using either the UCL08 or Oudot line parameters in the 8000–9200 cm−1 region, instead of HITRAN-2008. We also find that the effect of isotopic fractionation of HDO is evident in the 2500–2900 cm−1 region of the observations.
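The practical weight of a 20% intensity change can be seen from Beer-Lambert scaling: in the optically thin limit, optical depth (and hence absorption) is linear in the line intensity S. A toy illustration with made-up numbers, assuming the line's absorption is spread over a 0.1 cm−1 effective width:

```python
import numpy as np

def transmittance(S, n_column, width=0.1):
    """Monochromatic Beer-Lambert transmittance for a line of integrated
    intensity S (cm-1 per molec cm-2), smeared over an assumed effective
    width (cm-1), for a column of n_column molecules per cm2."""
    tau = (S / width) * n_column
    return np.exp(-tau)

n = 1.0e22               # hypothetical water vapour column
S_db = 1.0e-24           # hypothetical database line intensity
S_revised = 1.2 * S_db   # 20% stronger, as the comparison suggests
print(transmittance(S_db, n), transmittance(S_revised, n))
```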

Relevance: 30.00%

Abstract:

Impact Assessments (IAs) were introduced at the EU level under the rhetorical façade of ‘better regulation’. The actual aim was to improve not only the quality but also the reputation of EU regulation among stakeholders. However, evidence brought forward by a number of evaluations pointed out that IAs have yet to achieve acceptable quality standards. The paper offers an overview of different disciplinary approaches to studying IAs. It suggests that risk regulation provides the theoretical foundations for understanding the role of IAs in the EU decision-making process. The analysis of 60 preliminary IAs from the early days of the system provides empirical evidence regarding policy alternatives, consultation methodology and the use of quantitative techniques. Findings suggest that these early IAs were used mainly to provide some empirical justification for regulatory intervention before stakeholders. The paper concludes with expectations about the future role of IAs at the EU level.

Relevance: 30.00%

Abstract:

The requirement to forecast volcanic ash concentrations was amplified in response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, that is, the emission of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between individual simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
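The inversion described pairs a linear source-receptor (dispersion) model with satellite ash columns; in its simplest Tikhonov form, the emissions x per time-height bin minimize a misfit to the observations y plus a penalty toward an a priori estimate. A minimal sketch under that assumption (the operational algorithm also weights by observation and a priori error covariances):

```python
import numpy as np

def invert(M, y, x_a, eps):
    """Minimize ||M x - y||^2 + eps * ||x - x_a||^2 for emissions x,
    where M maps emissions (time-height bins) to observed ash columns."""
    n = len(x_a)
    return np.linalg.solve(M.T @ M + eps * np.eye(n), M.T @ y + eps * x_a)

rng = np.random.default_rng(0)
M = rng.random((50, 10))             # hypothetical source-receptor matrix
x_true = rng.random(10)              # synthetic "true" emissions
y = M @ x_true + 0.01 * rng.standard_normal(50)
x_hat = invert(M, y, x_a=np.zeros(10), eps=0.1)
print(np.abs(x_hat - x_true).max())  # recovery error on synthetic data
```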

Relevance: 30.00%

Abstract:

Measuring the retention, or residence time, of dosage forms on biological tissue is commonly a qualitative assessment, in which no numerical values describing the retention can be recorded. The result is an assessment that depends upon the user's interpretation of visual observation. This research paper outlines the development of a methodology to quantitatively measure, both by image analysis and by spectrophotometric techniques, the retention of material on biological tissues, using the retention of polymer solutions on ocular tissue as an example. Both methods have been shown to be repeatable, with the spectrophotometric measurement generating data reliably and quickly for further analysis.
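The spectrophotometric route implies a conventional calibration step: absorbance is linear in concentration over the working range (Beer-Lambert), so the polymer recovered from the tissue can be read off a fitted line. A generic sketch with hypothetical standards, not the paper's protocol:

```python
import numpy as np

# Hypothetical calibration standards: polymer concentration (mg/mL)
# against measured absorbance.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absorbance = np.array([0.01, 0.11, 0.22, 0.43, 0.88])

slope, intercept = np.polyfit(conc, absorbance, 1)

# Concentration recovered from a tissue-rinse sample with absorbance 0.30.
sample_abs = 0.30
print((sample_abs - intercept) / slope)  # retained polymer, mg/mL
```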

Relevance: 30.00%

Abstract:

The estimation of prediction quality is important because without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue predictions are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. Thus, we have developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using: simple linear combinations, multiple linear regression and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall’s τ, Spearman’s ρ and Pearson’s r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD, when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site prediction quality, in the absence of experimental data.
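BDT is specific to the assessment literature, but MCC is a fixed formula over the per-residue confusion matrix (binding-site vs. non-binding-site) and can be stated directly; a minimal implementation with hypothetical counts:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient for binary residue predictions."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical prediction on a 100-residue protein: 8 of 10 true binding
# residues found, with 3 false positives.
print(round(mcc(tp=8, tn=87, fp=3, fn=2), 2))  # approx. 0.74
```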

Relevance: 30.00%

Abstract:

There is evidence that consumption of fish, especially oily fish, has substantial beneficial effects on health. In particular, an inverse relationship of oily fish intake to coronary heart disease incidence has been established. These beneficial effects are ascribed to fish oil components including long chain ω-3 polyunsaturated fatty acids. On the other hand, it should be noted that oily fish also contains hazardous substances such as dioxins, PCBs and methylmercury. Soy consumption has been associated with potential beneficial and adverse effects. The claimed benefits include reduced risk of cardiovascular disease, osteoporosis, and breast and prostate cancer, whereas potential adverse effects include impaired thyroid function, disruption of sex hormone levels, changes in reproductive function and increased breast cancer risk. The two cases of natural foods highlight the need to consider both risks and benefits in order to establish the net health impact associated with the consumption of specific food products. Within the Sixth Framework Programme of the European Commission, the BRAFO project was funded to develop a framework that allows for the quantitative comparison of human health risks and benefits in relation to foods and food compounds. This paper describes the application of the developed framework to two natural foods, farmed salmon and soy protein. We conclude that the BRAFO methodology is highly applicable to natural foods. It will help benefit-risk managers in selecting the appropriate dietary recommendations for the population.

Relevance: 30.00%

Abstract:

Canopy leaf area index (LAI), defined as the single-sided leaf area per unit ground area, is a quantitative measure of canopy foliar area. LAI is a controlling biophysical property of vegetation function, and quantifying LAI is thus vital for understanding energy, carbon and water fluxes between the land surface and the atmosphere. LAI is routinely available from Earth Observation (EO) instruments such as MODIS. However, EO-derived estimates of LAI require validation before they are utilised by the ecosystem modelling community. Previous validation work on the MODIS collection 4 (c4) product suggested considerable error, especially in forested biomes, and as a result significant modification of the MODIS LAI algorithm has been made for the most recent collection 5 (c5). As a result of these changes, the current MODIS LAI product has not been widely validated. We present a validation of the MODIS c5 LAI product over a 121 km2 area of mixed coniferous forest in Oregon, USA, based on detailed ground measurements which we have upscaled using high resolution EO data. Our analysis suggests that c5 shows a much more realistic temporal LAI dynamic than c4 for the site we examined. We find improved spatial consistency between the MODIS c5 LAI product and upscaled in situ measurements. However, results also suggest that the c5 LAI product underestimates the upper range of upscaled in situ LAI measurements.
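The comparison itself reduces to summary statistics between MODIS pixels and upscaled in situ LAI on a common grid, typically bias and RMSE. A sketch with placeholder values (illustrative of a conifer site, not the paper's data):

```python
import numpy as np

# Placeholder arrays: MODIS c5 LAI and upscaled in situ LAI for the
# same grid cells.
modis = np.array([3.1, 4.0, 2.5, 5.2, 4.4])
in_situ = np.array([3.4, 4.3, 2.6, 6.5, 5.0])

bias = np.mean(modis - in_situ)
rmse = np.sqrt(np.mean((modis - in_situ) ** 2))
print(bias, rmse)  # a negative bias indicates MODIS underestimation
```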

Relevance: 30.00%

Abstract:

A favoured method of assimilating information from state-of-the-art climate models into integrated assessment models of climate impacts is to use the transient climate response (TCR) of the climate models as an input, sometimes accompanied by a pattern matching approach to provide spatial information. More recent approaches to the problem use TCR with another independent piece of climate model output: the land-sea surface warming ratio (φ). In this paper we show why the use of φ in addition to TCR has such utility. Multiple linear regressions of surface temperature change onto TCR and φ in 22 climate models from the CMIP3 multi-model database show that the inclusion of φ explains a much greater fraction of the inter-model variance than using TCR alone. The improvement is particularly pronounced in North America and Eurasia in the boreal summer season, and in the Amazon all year round. The use of φ as the second metric is beneficial for three reasons: firstly, it is uncorrelated with TCR in state-of-the-art climate models and can therefore be considered as an independent metric; secondly, because of its projected time-invariance, the magnitude of φ is better constrained than TCR in the immediate future; thirdly, the use of two variables is much simpler than approaches such as pattern scaling from climate models. Finally we show how using the latest estimates of φ from climate models, with a mean value of 1.6 (as opposed to previously reported values of 1.4), can significantly increase the mean time-integrated discounted damage projections in a state-of-the-art integrated assessment model by about 15%. When compared to damages calculated without the inclusion of the land-sea warming ratio, this figure rises to 65%, equivalent to almost 200 trillion dollars over 200 years.
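The regression in question is grid-point-wise: local warming across models is fitted as a linear function of each model's TCR and φ. A sketch for a single grid point, with hypothetical model values standing in for the CMIP3 ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)
n_models = 22
tcr = rng.normal(1.8, 0.4, n_models)   # hypothetical TCR spread (K)
phi = rng.normal(1.6, 0.15, n_models)  # hypothetical land-sea warming ratios

# Synthetic local warming built from both predictors plus noise.
dT = 0.9 * tcr + 1.1 * phi + 0.1 * rng.standard_normal(n_models)

# Multiple linear regression of dT onto TCR and phi with an intercept.
X = np.column_stack([np.ones(n_models), tcr, phi])
coef, *_ = np.linalg.lstsq(X, dT, rcond=None)
print(coef.round(2))  # intercept and sensitivities to TCR and phi
```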

Relevance: 30.00%

Abstract:

The performance of 18 coupled Chemistry Climate Models (CCMs) in the Tropical Tropopause Layer (TTL) is evaluated using qualitative and quantitative diagnostics. Trends in tropopause quantities in the tropics and the extratropical Upper Troposphere and Lower Stratosphere (UTLS) are analyzed. A quantitative grading methodology for evaluating CCMs is extended to include variability and used to develop four different grades for tropical tropopause temperature and pressure, water vapor and ozone. Four of the 18 models and the multi-model mean meet quantitative and qualitative standards for reproducing key processes in the TTL. Several diagnostics are performed on a subset of the models analyzing the Tropopause Inversion Layer (TIL), Lagrangian cold point and TTL transit time. Historical decreases in tropical tropopause pressure and decreases in water vapor are simulated, lending confidence to future projections. The models simulate continued decreases in tropopause pressure in the 21st century, along with increases of ∼1 K per century in cold point tropopause temperature and of 0.5–1 ppmv per century in water vapor above the tropical tropopause. TTL water vapor increases below the cold point. In two models, these trends are associated with 35% increases in TTL cloud fraction. These changes indicate significant perturbations to TTL processes, specifically to deep convective heating and humidity transport. Ozone in the extratropical lowermost stratosphere has significant and hemispherically asymmetric trends. O3 is projected to increase by nearly 30% due to ozone recovery in the Southern Hemisphere (SH) and due to enhancements in the stratospheric circulation. These UTLS ozone trends may have significant effects in the TTL and the troposphere.
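The grading methodology is not spelled out in the abstract; a common form in the CCM-validation literature scores a model's climatological mean against observations in units of observed variability, e.g. g = max(0, 1 − |μ_model − μ_obs| / (3σ_obs)). A sketch under that assumption:

```python
def grade(model_mean, obs_mean, obs_std, n_sigma=3.0):
    """Grade in [0, 1]: 1 for a perfect climatological mean, 0 when the
    model is n_sigma observed standard deviations from the observations."""
    g = 1.0 - abs(model_mean - obs_mean) / (n_sigma * obs_std)
    return max(0.0, g)

# Hypothetical cold-point tropopause temperatures (K).
print(grade(model_mean=192.5, obs_mean=191.0, obs_std=1.0))  # 0.5
```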

Relevance: 30.00%

Abstract:

The impact of stratospheric ozone on the tropospheric general circulation of the Southern Hemisphere (SH) is examined with a set of chemistry‐climate models participating in the Stratospheric Processes and their Role in Climate (SPARC)/Chemistry‐Climate Model Validation project phase 2 (CCMVal‐2). Model integrations of both the past and future climates reveal the crucial role of stratospheric ozone in driving SH circulation change: stronger ozone depletion in late spring generally leads to greater poleward displacement and intensification of the tropospheric midlatitude jet, and greater expansion of the SH Hadley cell in the summer. These circulation changes are systematic as poleward displacement of the jet is typically accompanied by intensification of the jet and expansion of the Hadley cell. Overall results are compared with coupled models participating in the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4), and possible mechanisms are discussed. While the tropospheric circulation response appears quasi‐linearly related to stratospheric ozone changes, the quantitative response to a given forcing varies considerably from one model to another. This scatter partly results from differences in model climatology. It is shown that poleward intensification of the westerly jet is generally stronger in models whose climatological jet is biased toward lower latitudes. This result is discussed in the context of quasi‐geostrophic zonal mean dynamics.

Relevance: 30.00%

Abstract:

Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr−1 in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m−2 with 90% uncertainty bounds of (+0.08, +1.27) W m−2. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m−2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m−2 with 90% uncertainty bounds of +0.17 to +2.1 W m−2. Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m−2, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m−2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m−2 with 90% uncertainty bounds of −1.45 to +1.29 W m−2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently at different stages with regard to the feasibility of near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
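Asymmetric bounds like +1.1 (+0.17 to +2.1) W m−2 are characteristic of Monte Carlo combination of skewed per-term uncertainty distributions rather than simple addition in quadrature. A toy sketch of that approach; the distributions below are illustrative and not the assessment's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Illustrative skewed draws for three black-carbon forcing terms (W m-2).
direct = rng.lognormal(mean=np.log(0.7), sigma=0.5, size=n)
cloud = rng.normal(0.2, 0.3, size=n)
snow = rng.lognormal(mean=np.log(0.1), sigma=0.6, size=n)

total = direct + cloud + snow
lo, mid, hi = np.percentile(total, [5, 50, 95])
print(lo.round(2), mid.round(2), hi.round(2))  # asymmetric 90% bounds
```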

Relevance: 30.00%

Abstract:

The Bollène-2002 Experiment was aimed at developing the use of a radar volume-scanning strategy for conducting radar rainfall estimations in the mountainous regions of France. A developmental radar processing system, called Traitements Régionalisés et Adaptatifs de Données Radar pour l’Hydrologie (Regionalized and Adaptive Radar Data Processing for Hydrological Applications), has been built and several algorithms were specifically produced as part of this project. These algorithms include 1) a clutter identification technique based on the pulse-to-pulse variability of reflectivity Z for noncoherent radar, 2) a coupled procedure for determining a rain partition between convective and widespread rainfall R and the associated normalized vertical profiles of reflectivity, and 3) a method for calculating reflectivity at ground level from reflectivities measured aloft. Several radar processing strategies, including nonadaptive, time-adaptive, and space–time-adaptive variants, have been implemented to assess the performance of these new algorithms. Reference rainfall data were derived from a careful analysis of rain gauge datasets furnished by the Cévennes–Vivarais Mediterranean Hydrometeorological Observatory. Assessment over five intense and long-lasting Mediterranean rain events has shown that good quantitative precipitation estimates can be obtained from radar data alone within 100-km range by using well-sited, well-maintained radar systems and sophisticated, physically based data-processing systems. The basic requirements entail performing accurate electronic calibration and stability verification, determining the radar detection domain, achieving efficient clutter elimination, and capturing the vertical structure(s) of reflectivity for the target event. Radar performance was shown to depend on the type of rainfall, with better results obtained for deep convective rain systems (Nash coefficients of roughly 0.90 for point radar–rain gauge comparisons at the event time step) than for shallow convective and frontal rain systems (Nash coefficients in the 0.6–0.8 range). In comparison with time-adaptive strategies, the space–time-adaptive strategy yields a very significant reduction in the radar–rain gauge bias while the level of scatter remains basically unchanged. Because the Z–R relationships have not been optimized in this study, these results are attributed to an improved processing of spatial variations in the vertical profile of reflectivity. The two main recommendations for future work consist of adapting the rain separation method for radar network operations and documenting Z–R relationships conditional on rainfall type.
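Two quantitative elements here are standard and can be stated directly: the Z–R power law Z = aR^b used to convert reflectivity to rain rate (the study notes the relationship was not optimized), and the Nash (Nash-Sutcliffe) coefficient used for the radar-rain gauge comparisons. A sketch with Marshall-Palmer-style coefficients and hypothetical gauge data:

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b to get rain rate R (mm/h) from reflectivity
    in dBZ, using Marshall-Palmer-style coefficients."""
    z = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)

def nash(obs, sim):
    """Nash-Sutcliffe efficiency between gauge and radar rainfall."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

gauge = np.array([2.0, 10.0, 25.0, 5.0])   # hypothetical gauge rates, mm/h
radar = rain_rate(np.array([28.0, 40.0, 46.0, 34.0]))
print(radar.round(1), round(nash(gauge, radar), 2))
```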