Abstract:
We evaluate the ability of process-based models to reproduce observed global mean sea-level change. When the models are forced by changes in natural and anthropogenic radiative forcing of the climate system and anthropogenic changes in land-water storage, the average of the modelled sea-level change for the periods 1900–2010, 1961–2010 and 1990–2010 is about 80%, 85% and 90% of the observed rise. The modelled rate of rise is over 1 mm yr⁻¹ prior to 1950, decreases to less than 0.5 mm yr⁻¹ in the 1960s, and increases to 3 mm yr⁻¹ by 2000. When observed regional climate changes are used to drive a glacier model and an allowance is included for an ongoing adjustment of the ice sheets, the modelled sea-level rise is about 2 mm yr⁻¹ prior to 1950, similar to the observations. The model results encompass the observed rise and the model average is within 20% of the observations, about 10% when the observed ice sheet contributions since 1993 are added, increasing confidence in future projections for the 21st century. The increased rate of rise since 1990 is not part of a natural cycle but a direct response to increased radiative forcing (both anthropogenic and natural), which will continue to grow with ongoing greenhouse gas emissions.
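As a rough illustration of how percentages like those quoted above can be derived (this is not the authors' code or data), the modelled fraction of the observed rise over a period is simply the ratio of the two linear trends; the sketch below uses placeholder time series purely for demonstration.

```python
# Illustrative sketch: ratio of modelled to observed sea-level trend over fixed periods.
import numpy as np

def trend_mm_per_yr(years, gmsl_mm):
    """Least-squares linear trend of a sea-level series, in mm/yr."""
    slope, _ = np.polyfit(years, gmsl_mm, 1)
    return slope

def modelled_fraction(years, modelled_mm, observed_mm, start, end):
    """Ratio of modelled to observed linear trend over [start, end]."""
    m = (years >= start) & (years <= end)
    return trend_mm_per_yr(years[m], modelled_mm[m]) / trend_mm_per_yr(years[m], observed_mm[m])

# Placeholder series, not observations or model output.
years = np.arange(1900, 2011)
observed = 1.7 * (years - 1900) + np.random.default_rng(0).normal(0, 2, years.size)
modelled = 0.85 * 1.7 * (years - 1900)

for period in [(1900, 2010), (1961, 2010), (1990, 2010)]:
    print(period, round(100 * modelled_fraction(years, modelled, observed, *period)), "% of observed rise")
```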
Abstract:
We utilize energy budget diagnostics from the Coupled Model Intercomparison Project phase 5 (CMIP5) to evaluate the models' climate forcing since preindustrial times, employing an established regression technique. The climate forcing evaluated this way, termed the adjusted forcing (AF), includes a rapid adjustment term associated with cloud changes and other tropospheric and land-surface changes. We estimate a 2010 total anthropogenic and natural AF from CMIP5 models of 1.9 ± 0.9 W m⁻² (5–95% range). The projected AFs of the Representative Concentration Pathway simulations are lower than their expected radiative forcing (RF) in 2095 but agree well with efficacy-weighted forcings from integrated assessment models. The smaller AF, compared to RF, is likely due to cloud adjustment. Multimodel time series of temperature change and AF from 1850 to 2100 have large intermodel spreads throughout the period. The intermodel spread of temperature change is principally driven by forcing differences in the present day and climate feedback differences in 2095, although forcing differences are still important for model spread at 2095. We find no significant relationship between the equilibrium climate sensitivity (ECS) of a model and its 2003 AF, in contrast to that found in older models, where higher-ECS models generally had less forcing. Given the large present-day model spread, there is no indication of any tendency by modelling groups to adjust their aerosol forcing in order to produce observed trends. Instead, some CMIP5 models have a relatively large positive forcing and overestimate the observed temperature change.
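The "established regression technique" is commonly a Gregory-style regression of top-of-atmosphere imbalance against surface temperature change, N = F − λΔT, whose intercept estimates the forcing F; the sketch below illustrates that idea with synthetic data and is not CMIP5 output.

```python
# Minimal sketch of a Gregory-style regression to diagnose an adjusted forcing
# from annual-mean energy budget diagnostics (N = F - lambda * dT).
import numpy as np

def adjusted_forcing(delta_T, net_toa_imbalance):
    """Regress TOA imbalance on temperature change; the intercept estimates the
    forcing F and the negated slope the feedback parameter lambda."""
    slope, intercept = np.polyfit(delta_T, net_toa_imbalance, 1)
    return intercept, -slope  # (F in W m^-2, lambda in W m^-2 K^-1)

# Synthetic 150-year abrupt-forcing experiment, for demonstration only.
rng = np.random.default_rng(1)
F_true, lam_true = 3.7, 1.2
dT = 3.0 * (1 - np.exp(-np.arange(150) / 30.0)) + rng.normal(0, 0.1, 150)
N = F_true - lam_true * dT + rng.normal(0, 0.3, 150)

F_est, lam_est = adjusted_forcing(dT, N)
print(f"adjusted forcing ~ {F_est:.2f} W m^-2, feedback ~ {lam_est:.2f} W m^-2 K^-1")
```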
Abstract:
The techno-economic performance of a small wind turbine is very sensitive to the available wind resource. However, due to financial and practical constraints, installers rely on low-resolution wind speed databases to assess a potential site. This study investigates whether the two site assessment tools currently used in the UK, NOABL and the Energy Saving Trust wind speed estimator, are accurate enough to estimate the techno-economic performance of a small wind turbine. Both tools tend to overestimate the wind speed, with a mean error of 23% and 18% for the NOABL and Energy Saving Trust tools respectively. A techno-economic assessment of 33 small wind turbines at each site has shown that these errors can have a significant impact on the estimated load factor of an installation. Consequently, site/turbine combinations which are not economically viable can be predicted to be viable. Furthermore, both models tend to underestimate the wind resource at relatively high wind speed sites; this can lead to missed opportunities, as economically viable turbine/site combinations are predicted to be non-viable. These results show that a better understanding of the local wind resource is required to make small wind turbines a viable technology in the UK.
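To see why an overestimated mean wind speed inflates the predicted load factor, consider a minimal sketch assuming a Rayleigh wind-speed distribution and a generic cubic power curve; the turbine parameters are hypothetical, not those of the 33 turbines assessed in the study.

```python
# Hedged sketch: sensitivity of a small wind turbine's load factor to mean wind speed,
# assuming a Rayleigh speed distribution and a simplified power curve.
import numpy as np

def power_curve(v, cut_in=3.0, rated_speed=11.0, cut_out=25.0, rated_power=5.0):
    """Simplified power curve in kW: cubic ramp between cut-in and rated speed."""
    p = np.where((v >= cut_in) & (v < rated_speed),
                 rated_power * ((v - cut_in) / (rated_speed - cut_in)) ** 3, 0.0)
    return np.where((v >= rated_speed) & (v <= cut_out), rated_power, p)

def load_factor(mean_speed, rated_power=5.0):
    """Expected output / rated output under a Rayleigh distribution of wind speeds."""
    v = np.linspace(0.0, 30.0, 3001)
    dv = v[1] - v[0]
    scale = mean_speed / np.sqrt(np.pi / 2)            # Rayleigh scale parameter
    pdf = (v / scale**2) * np.exp(-(v**2) / (2 * scale**2))
    return float(np.sum(power_curve(v, rated_power=rated_power) * pdf) * dv / rated_power)

# e.g. a ~20% overestimate of a true 5 m/s site noticeably inflates the load factor
for v_mean in (4.0, 5.0, 6.0):
    print(f"mean wind speed {v_mean} m/s -> load factor {load_factor(v_mean):.2f}")
```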
Abstract:
The present study compares the impact of thermal and high-pressure high-temperature (HPHT) processing on the volatile profile (via non-targeted headspace fingerprinting) and on structural and nutritional quality parameters (via targeted approaches) of orange and yellow carrot purees. The effect of oil enrichment was also considered. Since oil enrichment affects compound volatility, the effect of oil was not studied when comparing the volatile fraction. For the targeted part, as yellow carrot purees were shown to contain a very low amount of carotenoids, focus was given to orange carrot purees. The results of the non-targeted approach demonstrated that HPHT processing exerts a distinct effect on the volatile fraction compared to thermal processing. In addition, different colored carrot varieties are characterized by distinct headspace fingerprints. From a structural point of view, limited or no difference could be observed between orange carrot purees treated with HPHT or HT processes, both for samples without and with oil. From a nutritional point of view, significant isomerisation of all-trans-β-carotene occurred during both processes, but only in samples with oil. Overall, for this type of product and for the selected conditions, HPHT processing seems to have a different impact on the volatile profile but a rather similar impact on the structural and nutritional attributes compared to thermal processing.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work, loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
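A minimal sketch of the benchmark-driven idea, under the assumption that the per-step cost is the sum of a compute term and a halo-exchange term interpolated from a few benchmarked problem sizes; all timing numbers below are placeholders, not Cray XE6 measurements.

```python
# Benchmark-driven prediction sketch: measured costs for the two work types
# (array updates and halo exchanges) at a few sizes, interpolated to a new size.
import numpy as np

# Local grid points per task -> measured seconds per timestep, for one deployment
# scenario (e.g. fully populated nodes, default task-to-core mapping). Placeholder data.
bench_sizes  = np.array([64**2, 128**2, 256**2, 512**2])
compute_time = np.array([1.1e-4, 4.0e-4, 1.7e-3, 7.5e-3])
halo_time    = np.array([4.0e-5, 7.0e-5, 1.3e-4, 2.6e-4])

def predict_step_time(local_points):
    """Interpolate compute and halo-exchange costs separately and sum them."""
    c = np.interp(local_points, bench_sizes, compute_time)
    h = np.interp(local_points, bench_sizes, halo_time)
    return c + h

# Predict the cost of an unmeasured 384x384 local domain under the same scenario.
print(f"predicted time per step: {predict_step_time(384**2):.2e} s")
```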
Abstract:
This commentary seeks to prompt new discussion about the place of urban planning history in the era of contemporary globalisation. Given the deep historic engagement of urban planning thought and practice with 'place' shaping, and thus with the constitution of society, culture and politics, we ask how relevant planning's legacy is to the shaping of present-day cities. Late twentieth-century urban sociology, cultural and economic geography have demonstrated the increasing significance of intercity relations and the functional porosity of metropolitan boundaries in the network society; however, statutory urban planning systems remain tied to the administrative geographies of states. This 'territorial fixing' of practice constrains the operational space of planning and, we argue, also limits its vision to geopolitical scales and agendas that have receding relevance for emerging urban relations. We propose that a re-evaluation of planning history could have an important part to play in addressing this spatial conundrum.
Abstract:
Research evaluating perceptual responses to music has identified many structural features as correlates that might be incorporated in computer music systems for affectively charged algorithmic composition and/or expressive music performance. In order to investigate the possible integration of isolated musical features into such a system, a discrete feature known to correlate with emotional responses – rhythmic density – was selected from a literature review and incorporated into a prototype system. This system produces variation in rhythmic density via a transformative process. A stimulus set created using this system was then subjected to a perceptual evaluation. Pairwise comparisons were used to scale differences between 48 stimuli. Listener responses were analysed with multidimensional scaling (MDS). The two-dimensional solution was then rotated to place the stimuli with the largest range of variation across the horizontal plane. Stimuli with variation in rhythmic density were placed further from the source material than stimuli that were generated by random permutation. This, combined with the striking similarity between the MDS scaling and that of the two-dimensional emotional model used by some affective algorithmic composition systems, suggests that isolated musical feature manipulation can now be used to parametrically control affectively charged automated composition in a larger system.
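The pairwise-comparison analysis can be sketched as follows, assuming a precomputed dissimilarity matrix and scikit-learn's metric MDS (not necessarily the exact software the authors used); the random matrix merely stands in for the listener data.

```python
# Sketch: embed stimuli in two dimensions from a pairwise dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(42)
n_stimuli = 8                                   # the study scaled 48 stimuli
d = rng.uniform(0.1, 1.0, size=(n_stimuli, n_stimuli))
dissim = (d + d.T) / 2                          # symmetrise the placeholder matrix
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)              # 2-D coordinates, one row per stimulus
print(coords)
```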
Abstract:
Using lessons from idealised predictability experiments, we discuss some issues and perspectives on the design of operational seasonal to inter-annual Arctic sea-ice prediction systems. We first review the opportunities to use a hierarchy of different types of experiment to learn about the predictability of Arctic climate. We also examine key issues for ensemble system design, such as measuring skill, the role of ensemble size, and the generation of ensemble members. When assessing the potential skill of a set of prediction experiments, using more than one metric is essential, as different choices can significantly alter conclusions about the presence or lack of skill. We find that increasing both the number of hindcasts and the ensemble size is important for reliably assessing the correlation and expected error in forecasts. For other metrics, such as dispersion, increasing ensemble size is most important. Probabilistic measures of skill can also provide useful information about the reliability of forecasts. In addition, various methods for generating the different ensemble members are tested. The range of techniques can produce surprisingly different ensemble spread characteristics. The lessons learnt should help inform the design of future operational prediction systems.
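A small sketch of two of the deterministic measures mentioned above (ensemble-mean correlation and RMSE) together with ensemble spread, using synthetic hindcasts rather than real forecasts.

```python
# Sketch of basic ensemble skill and dispersion measures over a set of hindcasts.
import numpy as np

rng = np.random.default_rng(3)
n_hindcasts, n_members = 20, 10
obs = rng.normal(0, 1, n_hindcasts)                          # observed anomalies (placeholder)
ens = obs[:, None] + rng.normal(0, 0.7, (n_hindcasts, n_members))  # synthetic ensemble

ens_mean = ens.mean(axis=1)
correlation = np.corrcoef(ens_mean, obs)[0, 1]               # skill of the ensemble mean
rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))               # expected error
spread = np.sqrt(np.mean(ens.var(axis=1, ddof=1)))           # mean intra-ensemble dispersion

print(f"correlation {correlation:.2f}, RMSE {rmse:.2f}, spread {spread:.2f}")
```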
Abstract:
The present study aimed to identify key parameters influencing N utilization and to develop prediction equations for manure N output (MN), feces N output (FN), and urine N output (UN). Data were obtained from a series of digestibility trials with nonpregnant dry cows fed fresh grass at maintenance level. Grass was cut from 8 different ryegrass swards measured from early to late maturity in 2007 and 2008 (2 primary growth, 3 first regrowth, and 3 second regrowth) and from 2 primary growth early maturity swards in 2009. Each grass was offered to a group of 4 cows, and 2 groups were used in each of the 8 swards in 2007 and 2008 for daily measurements over 6 wk; the first group (first 3 wk) and the second group (last 3 wk) assessed early and late maturity grass, respectively. Average values of continuous 3-d data of N intake (NI) and output for individual cows (n = 464) and grass nutrient contents (n = 116) were used in the statistical analysis. Grass N content was positively related to GE and ME contents but negatively related to grass water-soluble carbohydrate (WSC), NDF, and ADF contents (P < 0.01), indicating that accounting for nutrient interrelations is a crucial aspect of N mitigation. Significantly greater ratios of UN:FN, UN:MN, and UN:NI were found with increased grass WSC contents and ratios of N:WSC, N:digestible OM in total DM (DOMD), and N:ME (P < 0.01). Greater NI, animal BW, and grass N contents and lower grass WSC, NDF, ADF, DOMD, and ME concentrations were significantly associated with greater MN, FN, and UN (P < 0.05). The present study highlighted that using grass lower in N and greater in fermentable energy, in animals fed solely fresh grass at maintenance level, can improve N utilization, reduce N outputs, and shift part of N excretion toward feces rather than urine. These outcomes are highly desirable in mitigation strategies to reduce nitrous oxide emissions from livestock. Equations predicting N output from BW and grass N content explained a similar amount of variability as using NI and grass chemical composition (excluding DOMD and ME), implying that parameters easily measurable in practice could be used for estimating N outputs. In a research environment, where grass DOMD and ME are likely to be available, their use to predict N outputs is highly recommended because they strongly improved the accuracy of the prediction equations in the current study.
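The style of prediction equation described above (e.g. manure N output from body weight and grass N content) amounts to an ordinary least-squares fit; the sketch below uses placeholder data and coefficients, not the equations reported in the study.

```python
# Sketch of fitting a prediction equation MN ~ BW + grass N content by least squares.
import numpy as np

rng = np.random.default_rng(7)
n = 60
body_weight = rng.normal(620, 40, n)            # kg (placeholder values)
grass_n     = rng.normal(25, 4, n)              # g N / kg DM (placeholder values)
manure_n    = 0.15 * body_weight + 4.0 * grass_n + rng.normal(0, 8, n)  # g/d, synthetic

X = np.column_stack([np.ones(n), body_weight, grass_n])
coef, *_ = np.linalg.lstsq(X, manure_n, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((manure_n - pred) ** 2) / np.sum((manure_n - manure_n.mean()) ** 2)

print("intercept, BW and grass N coefficients:", np.round(coef, 3), " R^2:", round(r2, 2))
```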
Abstract:
This paper presents a summary of the work done within the European Union's Seventh Framework Programme project ECLIPSE (Evaluating the Climate and Air Quality Impacts of Short-Lived Pollutants). ECLIPSE had a unique systematic concept for designing a realistic and effective mitigation scenario for short-lived climate pollutants (SLCPs; methane, aerosols and ozone, and their precursor species) and quantifying its climate and air quality impacts, and this paper presents the results in the context of this overarching strategy. The first step in ECLIPSE was to create a new emission inventory based on current legislation (CLE) for the recent past and until 2050. Substantial progress compared to previous work was made by including previously unaccounted types of sources such as flaring of gas associated with oil production, and wick lamps. These emission data were used for present-day reference simulations with four advanced Earth system models (ESMs) and six chemistry transport models (CTMs). The model simulations were compared with a variety of ground-based and satellite observational data sets from Asia, Europe and the Arctic. It was found that the models still underestimate the measured seasonality of aerosols in the Arctic but to a lesser extent than in previous studies. Problems likely related to the emissions were identified for northern Russia and India, in particular. To estimate the climate impacts of SLCPs, ECLIPSE followed two paths of research: the first path calculated radiative forcing (RF) values for a large matrix of SLCP species emissions, for different seasons and regions independently. Based on these RF calculations, the Global Temperature change Potential metric for a time horizon of 20 years (GTP20) was calculated for each SLCP emission type. This climate metric was then used in an integrated assessment model to identify all emission mitigation measures with a beneficial air quality and short-term (20-year) climate impact. These measures together defined a SLCP mitigation (MIT) scenario. Compared to CLE, the MIT scenario would reduce global methane (CH4) and black carbon (BC) emissions by about 50 and 80 %, respectively. For CH4, measures on shale gas production, waste management and coal mines were most important. For non-CH4 SLCPs, elimination of high-emitting vehicles and wick lamps, as well as reducing emissions from gas flaring, coal and biomass stoves, agricultural waste, solvents and diesel engines were most important. These measures lead to large reductions in calculated surface concentrations of ozone and particulate matter. We estimate that in the EU, the loss of statistical life expectancy due to air pollution was 7.5 months in 2010, which will be reduced to 5.2 months by 2030 in the CLE scenario. The MIT scenario would reduce this value by another 0.9 to 4.3 months. Substantially larger reductions due to the mitigation are found for China (1.8 months) and India (11–12 months). The climate metrics cannot fully quantify the climate response. Therefore, a second research path was taken. Transient climate ensemble simulations with the four ESMs were run for the CLE and MIT scenarios, to determine the climate impacts of the mitigation. In these simulations, the CLE scenario resulted in a surface temperature increase of 0.70 ± 0.14 K between the years 2006 and 2050. 
For the decade 2041–2050, the warming was reduced by 0.22 ± 0.07 K in the MIT scenario, in almost exact agreement with the response calculated from the emission metrics (reduced warming of 0.22 ± 0.09 K). The metrics calculations suggest that non-CH4 SLCPs contribute ~22 % of this response and CH4 the remaining 78 %. This could not be fully confirmed by the transient simulations, which attributed about 90 % of the temperature response to CH4 reductions. Attribution of the simulated temperature response to non-CH4 SLCP emission reductions, and to BC specifically, is hampered in the transient simulations by the small forcing and the co-emitted species of the chosen emission basket. Nevertheless, an important conclusion is that our mitigation basket as a whole would lead to clear benefits for both air quality and climate. The climate response from BC reductions in our study is smaller than reported previously, possibly because our study is one of the first to use fully coupled climate models, in which unforced variability and sea-ice responses cause relatively strong temperature fluctuations that may counteract (and thus mask) the impacts of small emission reductions. The temperature responses to the mitigation were generally stronger over the continents than over the oceans, with the largest warming reduction, 0.44 (0.39–0.49) K, over the Arctic. Our calculations suggest particularly beneficial climate responses in southern Europe, where surface warming was reduced by about 0.3 K and precipitation rates increased by about 15 (6–21) mm yr⁻¹ (more than 4 % of total precipitation) from spring to autumn. Thus, the mitigation could help to alleviate expected future drought and water shortages in the Mediterranean area. We also report other important results of the ECLIPSE project.
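The metric-based estimate of avoided warming is, in essence, a sum of emission reductions weighted by each species' GTP20; the sketch below shows the bookkeeping with hypothetical values, not the ECLIPSE numbers.

```python
# Sketch of a metric-based temperature response estimate: avoided warming as the sum
# of per-species emission reductions weighted by GTP20. All numbers are placeholders.
species = {
    # name: (emission reduction in Tg/yr, GTP20 in K per Tg/yr of sustained reduction)
    "CH4": (150.0, 1.0e-3),
    "BC":  (4.0, 1.5e-2),
    "OC":  (8.0, -3.0e-3),   # co-emitted cooling species can offset BC benefits
}

contributions = {name: de * gtp for name, (de, gtp) in species.items()}
total = sum(contributions.values())

for name, dt in contributions.items():
    print(f"{name}: {dt:+.3f} K ({100 * dt / total:.0f}% of total)")
print(f"estimated avoided warming: {total:.2f} K")
```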
Abstract:
While several privacy protection techniques are presented in the literature, they are not complemented with an established objective evaluation method for their assessment and comparison. This paper proposes an annotation-free evaluation method that assesses the two key aspects of privacy protection: privacy and utility. Unlike some existing methods, the proposed method does not rely on subjective judgements and does not assume a specific target type in the image data. The privacy aspect is quantified as an appearance similarity and the utility aspect is measured as a structural similarity between the original raw image data and the privacy-protected image data. We performed extensive experimentation using six challenging datasets (including two new ones) to demonstrate the effectiveness of the evaluation method by providing a performance comparison of four state-of-the-art privacy protection techniques.
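The utility side of such an evaluation can be sketched with an off-the-shelf structural similarity index (scikit-image's SSIM here, which may differ from the exact structural measure the authors use); Gaussian blurring stands in for a privacy-protection filter.

```python
# Sketch: structural similarity between an original frame and its privacy-protected version.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
original = rng.random((128, 128))                 # placeholder grayscale frame in [0, 1]
protected = gaussian_filter(original, sigma=3)    # blurring as a stand-in "protection"

utility = structural_similarity(original, protected, data_range=1.0)
print(f"utility (SSIM): {utility:.3f}")           # drops as protection gets stronger
```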
Abstract:
Human body thermoregulation models have been widely used in the fields of human physiology and thermal comfort research. However, there are few studies on evaluation methods for these models. This paper summarises the existing evaluation methods and critically analyses their flaws. Based on that analysis, a method for evaluating the accuracy of human body thermoregulation models is proposed. The new evaluation method contributes to the development of human body thermoregulation models and validates their accuracy both statistically and empirically. The accuracy of different models can be compared with the new method. Furthermore, the new method is not only suitable for evaluating human body thermoregulation models, but can in principle also be applied to evaluating the accuracy of population-based models in other research fields.
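A minimal sketch of the statistical part of such an evaluation, comparing predicted and measured skin temperatures via bias and RMSE; the numbers are placeholders, not output of any particular thermoregulation model.

```python
# Sketch: basic statistical comparison of model predictions against measurements.
import numpy as np

measured  = np.array([33.1, 33.4, 33.9, 34.2, 34.6, 34.9])   # deg C (placeholder data)
predicted = np.array([33.3, 33.5, 33.7, 34.4, 34.9, 35.1])   # deg C (placeholder data)

bias = np.mean(predicted - measured)                          # systematic over/under-prediction
rmse = np.sqrt(np.mean((predicted - measured) ** 2))          # overall accuracy
print(f"bias {bias:+.2f} C, RMSE {rmse:.2f} C")
```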
Abstract:
Aims: To study the biotechnological production of lipids rich in the medically and nutritionally important γ-linolenic acid (GLA) during cultivation of the zygomycete Thamnidium elegans on mixtures of glucose and xylose, abundant sugars of lignocellulosic biomass. Methods and Results: Glucose and xylose were utilized as carbon sources, solely or in mixtures, under nitrogen-limited conditions, in batch-flask or bioreactor cultures. On glucose, T. elegans produced 31.9 g/L of biomass containing 15.0 g/L of lipid with a significantly high GLA content (1014 mg/L). Xylose proved to be an adequate substrate for growth and lipid production. Additionally, xylitol secretion occurred when xylose was utilized as a carbon source, solely or in mixtures with glucose. Batch-bioreactor trials on glucose yielded satisfactory lipid production, with rapid substrate consumption rates. Analysis of intracellular lipids showed that the highest GLA content was observed in the early stationary growth phase, while the phospholipid fraction was the most unsaturated fraction of T. elegans. Conclusions: Thamnidium elegans represents a promising fungus for the successful valorization of sugar-based lignocellulosic residues into microbial lipids of high nutritional and pharmaceutical interest.