27 results for Quantitative-evaluation
in CentAUR: Central Archive University of Reading - UK
Abstract:
The member countries of the World Health Organization have endorsed its Global Strategy on Diet, Physical Activity, and Health. We assess the potential consumption impacts of these norms in the United States, France, and the United Kingdom using a mathematical programming approach. We find that adherence would involve large reductions in the consumption of fats and oils, accompanied by large rises in the consumption of fruits, vegetables, and cereals. Further, in the United Kingdom and the United States, but not France, sugar intakes would have to shrink considerably. Focusing on sub-populations within each country, we find that the least educated, not necessarily the poorest, would have to bear the highest burden of adjustment.
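The abstract does not detail the model, but the general shape of such a minimal-departure mathematical programme can be sketched: find the smallest adjustment to an observed diet that satisfies WHO-style limits on the energy shares of fat and sugar. All foods, intakes and nutrient coefficients below are invented for illustration, not taken from the paper.

```python
# Minimal-departure diet LP sketch (illustrative data only).
import numpy as np
from scipy.optimize import linprog

foods = ["fats_oils", "fruit_veg", "cereals", "sugar"]
x0 = np.array([90.0, 250.0, 200.0, 60.0])   # observed intake, g/day (made up)

energy = np.array([9.0, 0.5, 3.5, 4.0])     # kcal/g per food (made up)
fat_e  = np.array([9.0, 0.0, 0.3, 0.0])     # kcal/g of fat energy
sug_e  = np.array([0.0, 0.3, 0.1, 4.0])     # kcal/g of sugar energy

n = len(foods)
# Decision vector z = [x (new intakes), d (absolute deviations from x0)];
# minimise the total deviation sum(d).
c = np.concatenate([np.zeros(n), np.ones(n)])

# Linearised absolute value: d >= x - x0 and d >= x0 - x.
A_dev = np.block([[np.eye(n), -np.eye(n)],
                  [-np.eye(n), -np.eye(n)]])
b_dev = np.concatenate([x0, -x0])

# WHO-like norms: fat energy <= 30% and sugar energy <= 10% of total energy.
A_nut = np.vstack([np.concatenate([fat_e - 0.30 * energy, np.zeros(n)]),
                   np.concatenate([sug_e - 0.10 * energy, np.zeros(n)])])
b_nut = np.zeros(2)

res = linprog(c, A_ub=np.vstack([A_dev, A_nut]),
              b_ub=np.concatenate([b_dev, b_nut]),
              bounds=[(0, None)] * (2 * n), method="highs")
for f, old, new in zip(foods, x0, res.x[:n]):
    print(f"{f:10s} {old:6.1f} -> {new:6.1f} g/day")
```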
Abstract:
Reconfigurable computing is becoming an important new alternative for implementing computations. Field programmable gate arrays (FPGAs) are the ideal integrated circuit technology to experiment with the potential benefits of using different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the tradeoffs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
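Functional density is typically defined as operations delivered per unit of area-time; for a run-time-reconfigured design, the reconfiguration time is amortised over the operations executed between configurations. A minimal sketch of that accounting, with made-up numbers (the paper's exact definition and figures may differ):

```python
# Area-time "functional density" sketch: D = ops / (area * time), with the
# reconfiguration time charged pro rata to the ops run per configuration.
def functional_density(ops, area_luts, t_exec_s, t_config_s=0.0, n_per_config=1):
    """Operations per LUT-second, including an amortised reconfiguration share."""
    t_total = t_exec_s + t_config_s / n_per_config
    return ops / (area_luts * t_total)

# A specialised (smaller, reconfigured) design vs. a static general one:
d_static = functional_density(ops=1_000, area_luts=400, t_exec_s=1e-3)
d_reconf = functional_density(ops=1_000, area_luts=250, t_exec_s=1e-3,
                              t_config_s=5e-3, n_per_config=100)
print(d_static, d_reconf)  # specialisation wins only if amortised enough
```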
Abstract:
This paper presents a quantitative evaluation of a tracking system on the PETS 2015 Challenge datasets using well-established performance measures. Using existing tools, the tracking system implements an end-to-end pipeline that includes object detection, tracking, and post-processing stages. The evaluation results are presented on the provided sequences of both the ARENA and P5 datasets of the PETS 2015 Challenge. The results show encouraging performance of the tracker in terms of accuracy, but a marked susceptibility to cardinality errors and ID changes on both datasets. Moreover, the analysis shows better performance of the tracker on visible imagery than on thermal imagery.
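The measures themselves are not enumerated in the abstract; as one plausible example of a well-established tracking measure that is sensitive to exactly these failure modes, here is a minimal MOTA (CLEAR-MOT accuracy) sketch, assuming per-sequence error counts are already available:

```python
# CLEAR-MOT accuracy sketch: MOTA = 1 - (FN + FP + IDSW) / GT.
# The paper's exact metric set may differ; counts below are illustrative.
def mota(misses, false_positives, id_switches, num_gt_objects):
    """1.0 is perfect; heavy FP/FN (cardinality) errors or ID switches can
    drive the score down, even below zero."""
    if num_gt_objects == 0:
        raise ValueError("no ground-truth objects")
    errors = misses + false_positives + id_switches
    return 1.0 - errors / num_gt_objects

print(mota(misses=120, false_positives=45, id_switches=18, num_gt_objects=900))
```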
Abstract:
This article introduces a quantitative approach to e-commerce system evaluation based on the theory of process simulation. The general concept of e-commerce system simulation is motivated by limitations in e-commerce system development: the large initial investment of time and money, and the long period from business planning through system development, testing, and operation before any return is realised. In other words, currently used system analysis and development methods cannot answer key questions for investors, such as how good their e-commerce system could be, what return on investment they could expect, and which areas of the initial business plan they should improve. To examine the value and potential effects of an e-commerce business plan, a quantitative evaluation approach is necessary, and the authors of this article believe that process simulation is an appropriate option. The overall objective of this article is to apply the theory of process simulation to e-commerce system evaluation; the authors achieve this through an experimental study of a business plan for online construction and demolition waste exchange. The methodologies adopted include literature review, system analysis and development, simulation modelling and analysis, and a case study. The results include the concept of e-commerce system simulation, a comprehensive review of simulation methods adopted in e-commerce system evaluation, and a real case study applying simulation to e-commerce system evaluation. The authors hope that the adoption and implementation of the process simulation approach can effectively support business decision-making and improve the efficiency of e-commerce systems.
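As a toy illustration of the process-simulation idea (not the article's model), the sketch below simulates a single-resource order-handling process and derives a crude payback estimate. All rates, margins and the investment figure are invented:

```python
# Discrete-event sketch: orders arrive at random, a single fulfilment
# resource handles them, and throughput feeds a payback estimate.
import heapq, random

random.seed(1)
SIM_HOURS, ARRIVALS_PER_H, HANDLE_H = 24 * 30, 3.0, 0.25   # one month
MARGIN_PER_ORDER, INITIAL_INVESTMENT = 12.0, 20_000.0      # invented figures

events, server_free_at, completed = [], 0.0, 0
heapq.heappush(events, (random.expovariate(ARRIVALS_PER_H), "arrival"))
while events:
    t, kind = heapq.heappop(events)
    if t > SIM_HOURS:
        break
    if kind == "arrival":
        start = max(t, server_free_at)           # FIFO single server
        server_free_at = start + random.expovariate(1 / HANDLE_H)
        heapq.heappush(events, (server_free_at, "done"))
        heapq.heappush(events, (t + random.expovariate(ARRIVALS_PER_H), "arrival"))
    else:
        completed += 1

revenue = completed * MARGIN_PER_ORDER
print(f"orders/month: {completed}, months to pay back: "
      f"{INITIAL_INVESTMENT / revenue:.1f}")
```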
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulated net primary production (NPP) is too high relative to independent measurements. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
Abstract:
Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information using various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum. The support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. Using two newly defined features as variables, i.e. the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are proposed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from both visual inspection and quantitative evaluation indicate the superiority of the proposed method over its counterparts in the fusion of infrared polarization and intensity images.
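The paper tunes the exact breakpoints of the membership functions on its two difference features; a generic trapezoidal membership function of the kind described looks like this (breakpoints here are illustrative):

```python
# Trapezoidal fuzzy membership function sketch (illustrative breakpoints).
import numpy as np

def trapezoid(x, a, b, c, d):
    """Membership rises linearly on [a, b], is 1 on [b, c], falls on [c, d]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

# e.g. membership of a low-frequency difference feature in a class such as
# "large difference", which would weight one sub-combination rule more:
feature = np.array([0.05, 0.2, 0.5, 0.9])
print(trapezoid(feature, a=0.1, b=0.4, c=0.7, d=1.0))
```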
Abstract:
Despite strong prospective epidemiology and mechanistic evidence for the benefits of certain micronutrients in preventing CVD, neutral and negative outcomes from secondary intervention trials have undermined the case for supplemental nutrition in preventing CVD. In contrast, evidence for the positive impact of specific diets in CVD prevention, such as the Dietary Approaches to Stop Hypertension (DASH) diet, has focused attention on the potential benefits of whole diets and specific dietary patterns. These patterns have been scored on the basis of current guidelines for the prevention of CVD, to provide a quantitative evaluation of the relationship between diet and disease. Using this approach, large prospective studies have reported reductions in CVD risk ranging from 10 to 60% in groups whose diets can be variously classified as 'Healthy', 'Prudent', 'Mediterranean' or 'DASH compliant'. Evaluation of the relationship between dietary score and risk biomarkers has also been informative with respect to underlying mechanisms. However, although this analysis may appear to validate whole-diet approaches to disease prevention, it must be remembered that the classification of dietary scores is based on current understanding of diet-disease relationships, which may be incomplete or erroneous. Of particular concern is the limited number of high-quality intervention studies of whole diets that include disease endpoints as the primary outcome. The aims of this review are to highlight the limitations of dietary guidelines based on nutrient-specific data, and the persuasive evidence for the benefits of whole dietary patterns on CVD risk. It also makes a plea for more randomised controlled trials designed to support food-based and whole-diet approaches to preventing CVD.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover; composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
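The abstract describes scoring a model against both the observational mean and a bootstrap "random" model; a minimal sketch of that comparison, using a normalised-mean-error metric as one plausible choice (the benchmark system's actual metric set may differ):

```python
# Benchmark-scoring sketch: score a model against observations, then put
# the score in context with (i) a "mean" model that always predicts the
# observed mean and (ii) a "random" model from bootstrap resampling.
import numpy as np

def nme(sim, obs):
    """Normalised mean error: 0 is perfect; 1 matches the mean model."""
    return np.mean(np.abs(sim - obs)) / np.mean(np.abs(obs - obs.mean()))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 300.0, size=500)          # e.g. site NPP, gC/m2/yr (made up)
sim = obs * 1.2 + rng.normal(0, 80, size=500)  # a model with a high bias

mean_score = nme(np.full_like(obs, obs.mean()), obs)   # == 1 by construction
rand_score = np.mean([nme(rng.choice(obs, obs.size), obs)
                      for _ in range(1000)])           # bootstrap baseline
print(f"model {nme(sim, obs):.2f}  mean {mean_score:.2f}  random {rand_score:.2f}")
# A useful model should score well below both baselines.
```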
Abstract:
Retreating ice fronts (as a result of a warming climate) expose large expanses of deglaciated forefield, which become colonized by microbes and plants. There has been increasing interest in characterizing the biogeochemical development of these ecosystems using a chronosequence approach. Prior to the establishment of plants, microbes use autochthonously produced and allochthonously delivered nutrients for growth. The microbial community composition is largely made up of heterotrophic microbes (both bacteria and fungi), autotrophic microbes and nitrogen-fixing diazotrophs. Microbial activity is thought to be responsible for the initial build-up of labile nutrient pools, facilitating the growth of higher order plant life in developed soils. However, it is unclear to what extent these ecosystems rely on external sources of nutrients such as ancient carbon pools and periodic nitrogen deposition. Furthermore, the seasonal variation of chronosequence dynamics and the effect of winter are largely unexplored. Modelling this ecosystem will provide a quantitative evaluation of the key processes and could guide the focus of future research. Year-round datasets combined with novel metagenomic techniques will help answer some of the pressing questions in this relatively new but rapidly expanding field, which is of growing interest in the context of future large-scale ice retreat.
Abstract:
Two so-called "integrated" polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes from the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under these ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as, if not better than, the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h^-1 and ZPHI is the best one above that threshold. A perturbation analysis was conducted to assess the sensitivity of the various estimators to biases on ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23% overestimation with ZZDR. Additionally, a +0.2 dB positive bias on ZDR results in a typical rain rate underestimation of 15% by ZZDR.
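The quoted +14% sensitivity can be sanity-checked directly from the power law: a +1 dB bias multiplies linear Z by 10^0.1, so with Z = 282R^1.66 the retrieved rain rate scales by 10^(0.1/1.66), about +15%, close to the reported figure (which also reflects the full processing chain, not just the power law). A short, illustrative check:

```python
# Sensitivity of the conventional Z-R estimator to a +1 dB reflectivity bias.
def rain_rate(z_linear, a=282.0, b=1.66):
    """Invert Z = a * R**b for R (Z in mm^6 m^-3, R in mm/h)."""
    return (z_linear / a) ** (1.0 / b)

z = 10 ** (35.0 / 10.0)                      # 35 dBZ in linear units
r_ref = rain_rate(z)
r_hot = rain_rate(z * 10 ** (1.0 / 10.0))    # radar 1 dB "too hot"
print(f"R = {r_ref:.2f} mm/h, biased R = {r_hot:.2f} mm/h "
      f"(+{100 * (r_hot / r_ref - 1):.0f}%)")
```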
Abstract:
A traditional plate count method and real-time PCR systems based on SYBR Green I and TaqMan technologies, using a specific primer pair and probe for amplification of the iap gene, were used for the quantitative assay of Listeria monocytogenes in seven-step decimal serial dilution series of nutrient broth and milk samples containing 1.58 to 1.58×10^7 cfu/ml, and the real-time PCR methods were compared with the plate count method with respect to accuracy and sensitivity. In this study, the plate count method was performed by surface-plating 0.1 ml of each sample on Palcam Agar. The lowest detectable level for this method was 1.58×10 cfu/ml for both nutrient broth and milk samples. Using purified DNA as a template for the generation of standard curves, as few as four copies of the iap gene could be detected per reaction with both real-time PCR assays, indicating that they were highly sensitive. When these real-time PCR assays were applied to the quantification of L. monocytogenes in the decimal serial dilution series of nutrient broth and milk samples, 3.16×10 to 3.16×10^5 copies per reaction (equal to 1.58×10^3 to 1.58×10^7 cfu/ml of L. monocytogenes) were detectable. Expressed as logarithmic cycles, the quantitative results of the detectable steps were similar to the inoculation levels for the plate count and both molecular assays.
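Quantification with such assays rests on inverting a standard curve Ct = slope·log10(copies) + intercept built from the purified-DNA dilutions; a minimal sketch with illustrative calibration values (a slope near -3.32 cycles per decade corresponds to 100% amplification efficiency):

```python
# Standard-curve inversion sketch for real-time PCR quantification.
# Slope and intercept are illustrative, not the paper's calibration.
def copies_from_ct(ct, slope=-3.32, intercept=37.0):
    """Invert the standard curve Ct = slope*log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

for ct in (35.0, 28.4, 21.7):
    print(f"Ct {ct:5.1f} -> {copies_from_ct(ct):,.0f} copies/reaction")
# With these values, Ct 35 maps to roughly 4 copies per reaction, the order
# of the detection limit quoted above.
```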
Abstract:
Water vapour continuum absorption is an important component of molecular absorption of radiation in the atmosphere. However, the present uncertainty in the value of the continuum absorption can reach 100% in some spectral regions, leading to errors in flux calculations of up to 3-5 W/m^2 in the global mean. This work uses line-by-line calculations to identify the best spectral intervals for experimental verification of the CKD water vapour continuum models in the currently least-studied near-infrared spectral region. Possible sources of error in the continuum retrieval taken into account in the simulation include the sensitivity of laboratory spectrometers and uncertainties in the spectral line parameters in the HITRAN-2004 and Schwenke-Partridge databases. It is shown that a number of micro-windows in the near-IR can currently be used for laboratory detection of the water vapour continuum, with estimated accuracies ranging from 30% down to 5%.
Abstract:
As a continuing effort to establish the structure-activity relationships (SARs) within the series of angiotensin II antagonists (sartans), a pharmacophoric model was built using novel TOPP 3D descriptors. Statistical values were satisfactory (PC4: r^2 = 0.96, q^2 (5 random groups) = 0.84; SDEP = 0.26) and encouraged the synthesis and subsequent biological evaluation of a series of new pyrrolidine derivatives. SAR together with a combined 3D quantitative SAR and high-throughput virtual screening showed that the newly synthesized 1-acyl-N-(biphenyl-4-ylmethyl)pyrrolidine-2-carboxamides may represent an interesting starting point for the design of new antihypertensive agents. In particular, biological tests performed on CHO-hAT1 cells stably expressing the human AT1 receptor showed that the length of the acyl chain is crucial for the receptor interaction and that the valeric chain is the optimal one.
Abstract:
Thermal non-destructive testing (NDT) is commonly used for assessing aircraft structures. This research work evaluates the potential of pulsed (transient) thermography for locating fixtures beneath aircraft skins in order to facilitate accurate automated assembly operations. Representative aluminium and carbon-fibre aircraft skin-fixture assemblies were modelled using thermal modelling software. The assemblies were also experimentally investigated with an integrated pulsed thermographic evaluation system, as well as with a custom-built system incorporating a miniature uncooled camera. Modelling showed that the presence of an air gap between skin and fixture significantly reduced the thermal contrast developed, especially in aluminium. Experimental results show that fixtures can be located to accuracies of 0.5 mm.
Abstract:
Information technology in construction (ITC) has been gaining wide acceptance and is being implemented in construction research domains as a tool to assist decision makers. Most of the research into visualization technologies (VT) has focused on the wide range of 3D and simulation applications suitable for construction processes. Despite developments in interoperability and the standardization of products, VT usage has remained very low when it comes to communicating with and addressing the needs of building end-users (BEU). This paper argues that building end-users are a source of experience and expertise that can be brought into the briefing stage for the evaluation of design proposals. It also suggests that the end-user is a source of new ideas promoting innovation. In this research, a positivistic methodology that includes the comparison of 3D models with traditional 2D methods is proposed. It will help to identify how much, if anything, a non-spatial specialist can gain in terms of "understanding" of a particular design proposal presented using both methods.