88 results for Best-Worst Scaling
Abstract:
We investigate the scaling between precipitation and temperature changes in warm and cold climates using six models that have simulated the response to both increased CO2 and Last Glacial Maximum (LGM) boundary conditions. Globally, precipitation increases in warm climates and decreases in cold climates by between 1.5%/°C and 3%/°C. Precipitation sensitivity to temperature changes is lower over the land than over the ocean and lower over the tropical land than over the extratropical land, reflecting the constraint of water availability. The wet tropics get wetter in warm climates and drier in cold climates, but the changes in dry areas differ among models. Seasonal changes of tropical precipitation in a warmer world also reflect this “rich get richer” syndrome. Precipitation seasonality is decreased in the cold-climate state. The simulated changes in precipitation per degree temperature change are comparable to the observed changes in both the historical period and the LGM.
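A minimal sketch of the quantity discussed above, the percentage change in global-mean precipitation per degree of global-mean temperature change; the input values are hypothetical global means, not results from the six models used in the study.

    # Hedged sketch: hydrological sensitivity in %/degC; inputs are hypothetical.
    def hydrological_sensitivity(p_ctrl, p_pert, t_ctrl, t_pert):
        """Percentage change in precipitation per degree of temperature change."""
        dp_percent = 100.0 * (p_pert - p_ctrl) / p_ctrl
        dt = t_pert - t_ctrl
        return dp_percent / dt

    # A 5% precipitation increase for 2.5 degC of warming gives 2 %/degC,
    # inside the 1.5-3 %/degC range quoted in the abstract.
    print(hydrological_sensitivity(2.80, 2.94, 14.0, 16.5))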
Abstract:
The leaf carbon isotope ratio (δ13C) of C3 plants is inversely related to the drawdown of CO2 concentration during photosynthesis, which increases towards drier environments. We aimed to discriminate between the hypothesis of universal scaling, which predicts between-species responses of δ13C to aridity similar to within-species responses, and biotic homoeostasis, which predicts offsets in the δ13C of species occupying adjacent ranges. The Northeast China Transect spans 130–900 mm annual precipitation within a narrow latitude and temperature range. Leaves of 171 species were sampled at 33 sites along the transect (18 at ≥ 5 sites) for dry matter, carbon (C) and nitrogen (N) content, specific leaf area (SLA) and δ13C. The δ13C of species generally followed a common relationship with the climatic moisture index (MI). Offsets between adjacent species were not observed. Trees and forbs diverged slightly at high MI. In C3 plants, δ13C predicted N per unit leaf area (Narea) better than MI. The δ13C of C4 plants was invariant with MI. SLA declined and Narea increased towards low MI in both C3 and C4 plants. The data are consistent with optimal stomatal regulation with respect to atmospheric dryness. They provide evidence for universal scaling of CO2 drawdown with aridity in C3 plants.
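For readers unfamiliar with the inverse relationship mentioned above, the sketch below uses the standard simplified (Farquhar-type) discrimination model to show how a stronger CO2 drawdown (lower ci/ca, as in drier sites) gives a less negative leaf δ13C; the formula and constants are textbook approximations and are not taken from this study.

    # Hedged sketch: approximate Farquhar-type model linking leaf d13C to ci/ca.
    # a (diffusion) and b (carboxylation) are standard approximate fractionations (permil).
    def leaf_d13c(ci_over_ca, d13c_air=-8.0, a=4.4, b=27.0):
        discrimination = a + (b - a) * ci_over_ca   # photosynthetic 13C discrimination (permil)
        return d13c_air - discrimination            # approximate leaf d13C (permil)

    # Drier site (stronger CO2 drawdown, lower ci/ca) -> less negative d13C
    print(leaf_d13c(0.60), leaf_d13c(0.80))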
Abstract:
Background: Expression microarrays are increasingly used to obtain large scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intrarray variability is small (only around 2% of the mean log signal), while interarray variability from replicate array measurements has a standard deviation (SD) of around 0.5 log(2) units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison with expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identifies an underlying structure which reflects some of the key biological variables which define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and interarray variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
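The original pipeline is implemented as an R function; the Python sketch below only illustrates, with simulated signals, the kind of log transformation, per-array normalisation and multidimensional scaling steps the abstract describes.

    # Hedged sketch with simulated data; not the paper's R pipeline.
    import numpy as np
    from sklearn.manifold import MDS

    signal = np.random.default_rng(0).lognormal(mean=6.0, sigma=1.5, size=(12, 2000))  # arrays x genes
    log2_signal = np.log2(signal)
    # simple per-array median centring as a stand-in for the normalisation step
    normalised = log2_signal - np.median(log2_signal, axis=1, keepdims=True)
    # multidimensional scaling of between-array distances to reveal structure in the samples
    coords = MDS(n_components=2, random_state=0).fit_transform(normalised)
    print(coords.shape)  # (12, 2): one point per array, ready for plotting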
Abstract:
The aim of this article is to improve the communication of the probabilistic flood forecasts generated by hydrological ensemble prediction systems (HEPS) by understanding perceptions of different methods of visualizing probabilistic forecast information. This study focuses on interexpert communication and accounts for differences in visualization requirements based on the information content necessary for individual users. The perceptions of the expert group addressed in this study are important because they are the designers and primary users of existing HEPS. Nevertheless, they have sometimes resisted the release of uncertainty information to the general public because of doubts about whether it can be successfully communicated in ways that are readily understood by nonexperts. In this article, we explore the strengths and weaknesses of existing HEPS visualization methods and thereby formulate some wider recommendations about best practice for HEPS visualization and communication. We suggest that specific training on probabilistic forecasting would foster the use of probabilistic forecasts across a wider range of applications. The results of a case study exercise showed that there is no overarching agreement among experts on how to display probabilistic forecasts or on what they consider the essential information that should accompany plots and diagrams. In this article, we propose a list of minimum properties that, if consistently displayed with probabilistic forecasts, would make the products more easily understandable. Copyright © 2012 John Wiley & Sons, Ltd.
Abstract:
This paper explores the nature of private social and environmental reporting (SER). From interviews with UK institutional investors, we show that both investors and investees employ Goffmanesque, staged impression management as a means of creating and disseminating a dual myth of social and environmental accountability. The interviewees’ utterances unveil private meetings imbued with theatrical verbal and physical impression management. Most of the time, the investors’ shared awareness of reality belongs to a Goffmanesque frame whereby they accept no intentionality, misrepresentation or fabrication, believing instead that the ‘performers’ (investees) are not intending to deceive them. A shared perception that social and environmental considerations are subordinated to financial issues renders private SER an empty encounter characterised as a relationship-building exercise with seldom any impact on investment decision-making. Investors spoke of occasional instances of fabrication but these were insufficient to break the frame of dual myth creation. They only identified a handful of instances where intentional misrepresentation had been significant enough to alter their reality and behaviour. Only in the most extreme cases of fabrication and lying did the staged meeting break frame and become a genuine occasion of accountability, where investors demanded greater transparency, further meetings and at the extreme, divested shares. We conclude that the frontstage, ritualistic impression management in private SER is inconsistent with backstage activities within financial institutions where private financial reporting is prioritised. The investors appeared to be in a double bind whereby they devoted resources to private SER but were simultaneously aware that these efforts may be at best subordinated, at worst ignored, rendering private SER a predominantly cosmetic, theatrical and empty exercise.
Abstract:
The complexity of current and emerging high performance architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven performance modelling approach is outlined that is appropriate for modern multicore architectures. The approach is demonstrated by constructing a model of a simple shallow water code on a Cray XE6 system, from application-specific benchmarks that illustrate precisely how architectural characteristics impact performance. The model is found to recreate observed scaling behaviour up to 16K cores, and is used to predict optimal rank-core affinity strategies, exemplifying the type of problem such a model can be used for.
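As a rough illustration of what a benchmark-driven performance model of this kind looks like, the sketch below combines hypothetical per-core compute and halo-exchange communication costs (as if measured by micro-benchmarks) to predict runtime at increasing core counts; the functional form and the numbers are assumptions, not the paper's model.

    # Hedged sketch: runtime predicted from benchmark-style compute and communication terms.
    def predicted_runtime(cores, work_per_step, compute_rate, halo_bytes, bandwidth, latency):
        compute = work_per_step / (cores * compute_rate)   # perfectly parallel compute time (s)
        comm = latency + halo_bytes / bandwidth            # per-step halo-exchange cost (s)
        return compute + comm

    for cores in (256, 1024, 4096, 16384):
        t = predicted_runtime(cores, work_per_step=1e12, compute_rate=1e9,
                              halo_bytes=1e6, bandwidth=5e9, latency=2e-6)
        print(cores, round(t, 4))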
Abstract:
To optimise the placement of small wind turbines in urban areas, a detailed understanding of the spatial variability of the wind resource is required. At present, due to a lack of observations, the NOABL wind speed database is frequently used to estimate the wind resource at a potential site. However, recent work has shown that this tends to overestimate the wind speed in urban areas. This paper suggests a method for adjusting the predictions of the NOABL in urban areas by considering the impact of the underlying surface on a neighbourhood scale, in which the nature of the surface is characterised at a 1 km² resolution using an urban morphology database. The model was then used to estimate the variability of the annual mean wind speed across Greater London at a height typical of current small wind turbine installations. Initial validation of the results suggests that the predicted wind speeds are considerably more accurate than the NOABL values. The derived wind map therefore currently provides the best opportunity to identify the neighbourhoods in Greater London at which small wind turbines would yield their highest energy production. The model does not consider street-scale processes; however, previously derived scaling factors can be applied to relate the neighbourhood wind speed to a value at a specific rooftop site. The results showed that the wind speed predicted across London is relatively low, exceeding 4 m s-1 at only 27% of the neighbourhoods in the city. Of these sites, fewer than 10% are within 10 km of the city centre, with the majority over 20 km from the city centre. Consequently, it is predicted that small wind turbines tend to perform better towards the outskirts of the city; therefore, for cities which fit the Burgess concentric ring model, such as Greater London, ‘distance from city centre’ is a useful parameter for siting small wind turbines. However, there are a number of neighbourhoods close to the city centre at which the wind speed is relatively high, and these sites can only be identified with a detailed representation of the urban surface, such as that developed in this study.
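The street-scale adjustment mentioned above can be pictured as applying a previously derived scaling factor to the neighbourhood-scale mean wind speed; the simple multiplicative form and the numbers below are illustrative assumptions, not the paper's model.

    # Hedged sketch: relating a neighbourhood mean wind speed to a specific rooftop site.
    def rooftop_wind_speed(neighbourhood_speed_ms, rooftop_scaling_factor):
        return neighbourhood_speed_ms * rooftop_scaling_factor

    # e.g. a neighbourhood mean of 4.2 m/s and an assumed rooftop factor of 0.85
    print(rooftop_wind_speed(4.2, 0.85))  # 3.57 m/s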
Abstract:
Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed-phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr-1 in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m-2 with 90% uncertainty bounds of (+0.08, +1.27) W m-2. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m-2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m-2 with 90% uncertainty bounds of +0.17 to +2.1 W m-2. Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m-2, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (-0.50 to +1.08) W m-2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (-0.06 W m-2 with 90% uncertainty bounds of -1.45 to +1.29 W m-2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility of near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
Abstract:
This paper reports part of a qualitative study into evolving practice in the implementation of the Dispute Adjudication Board (DAB) construction dispute resolution technique, a variant of the Dispute Review Board (DRB) concept used in the US and Canada. Data was collected through a focus group interview of 20 highly experienced dispute resolution practitioners from engineering and the law. The group was assembled from members of FIDIC-NET with direct experience of project DABs. The part reported here concerns practice and procedure for establishing DABs. The main findings are that: constitution of the DABs is often delayed because of either project owners' ignorance of the DAB process or deterrence by the cost of the DABs; such owners also tend to insist on appointing DAB members from local engineers and lawyers without sufficient understanding of the DAB process; rates of remuneration of DAB members vary widely; the training provision for DAB membership and advocacy skills is inadequate; the process of selecting candidates for DAB membership and negotiating the tripartite agreement between each member and the contractual parties needs to be navigated with great care to avoid raising ethical problems. The research contribution is threefold. First, it highlights the importance of realistic fees for DAB members within a standard framework in achieving timely establishment of a board that works well as a team. Second, it illustrates the use of qualitative focus group interview to study the impact of new contract terms from multiple stakeholder perspectives. Finally, it identifies areas where further research is needed.
Abstract:
A glance along the finance shelves at any bookshop reveals a large number of books that seek to show readers how to ‘make a million’ or ‘beat the market’ with allegedly highly profitable equity trading strategies. This paper investigates whether useful trading strategies can be derived from popular books of investment strategy, with What Works on Wall Street by James P. O'Shaughnessy used as an example. Specifically, we test whether this strategy would have produced in the UK context a performance similar to the spectacular results demonstrated by the author for the US market. As part of our investigation, we highlight a general methodology for determining whether the observed superior performance of a trading rule could be attributed, in part or in its entirety, to data mining. Overall, we find that the O'Shaughnessy rule performs reasonably well in the UK equity market, yielding higher returns than the FTSE All-Share Index, but lower returns than an equally weighted benchmark.
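One common way to check whether a rule's apparent outperformance could be a data-mining artefact is to compare it with many randomly selected portfolios of the same size; the sketch below illustrates that idea with simulated returns and is not the methodology used in the paper.

    # Hedged sketch: permutation-style check against random portfolios (simulated data).
    import numpy as np

    rng = np.random.default_rng(0)
    stock_returns = rng.normal(0.08, 0.25, size=(30, 500))      # years x stocks, simulated
    rule_picks = rng.choice(500, size=50, replace=False)         # stand-in for the rule's portfolio
    rule_mean = stock_returns[:, rule_picks].mean()
    random_means = np.array([stock_returns[:, rng.choice(500, 50, replace=False)].mean()
                             for _ in range(1000)])
    p_value = (random_means >= rule_mean).mean()                 # share of random portfolios doing as well
    print(rule_mean, p_value)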
Abstract:
Artificial diagenesis of the intra-crystalline proteins isolated from Patella vulgata was induced by isothermal heating at 140 °C, 110 °C and 80 °C. Protein breakdown was quantified for multiple amino acids, measuring the extent of peptide bond hydrolysis, amino acid racemisation and decomposition. The patterns of diagenesis are complex; therefore the kinetic parameters of the main reactions were estimated by two different methods: 1) a well-established approach based on fitting mathematical expressions to the experimental data, e.g. first-order rate equations for hydrolysis and power-transformed first-order rate equations for racemisation; and 2) an alternative model-free approach, developed by estimating a “scaling” factor for the independent variable (time) that produces the best alignment of the experimental data. This method allows the calculation of the relative reaction rates for the different temperatures of isothermal heating. High-temperature data were compared with the extent of degradation detected in sub-fossil Patella specimens of known age, and we evaluated the ability of kinetic experiments to mimic diagenesis at burial temperature. The results highlighted a difference between patterns of degradation at low and high temperatures; we therefore recommend caution when extrapolating protein breakdown rates to low burial temperatures for geochronological purposes on the basis of kinetic data alone.
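The two approaches named above can be sketched as follows with simulated data: fitting a first-order rate equation to the extent of hydrolysis, and a model-free alignment that rescales time for a lower-temperature series until it collapses onto the high-temperature curve, the scaling factor then giving the relative reaction rate. The rate constants and time units below are hypothetical.

    # Hedged sketch: (1) first-order fit, (2) model-free time-scaling alignment (simulated data).
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, k):
        return 1.0 - np.exp(-k * t)                   # fraction of peptide bonds hydrolysed

    t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])     # heating time, arbitrary units
    frac_140 = first_order(t, 0.30) + 0.01 * np.random.default_rng(1).normal(size=t.size)
    k_140, _ = curve_fit(first_order, t, frac_140, p0=[0.1])
    print("fitted k at 140 C:", k_140[0])

    # Model-free idea: find the time-scaling factor s that best aligns the 110 C series
    # with the 140 C series; s is then the relative reaction rate k110/k140.
    frac_110 = first_order(t, 0.05)
    def misalignment(s):
        return np.sum((np.interp(s * t, t, frac_140) - frac_110) ** 2)
    scales = np.linspace(0.01, 1.0, 500)
    best_s = scales[np.argmin([misalignment(s) for s in scales])]
    print("relative rate (110 C vs 140 C):", best_s)  # close to 0.05 / 0.30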
Abstract:
Recent studies have shown that features extracted from brain MRIs can discriminate well between Alzheimer’s disease and Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies were then used to solve a multi-class problem via the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the predictive power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machines-based wrapper improves the accuracy of binary classification.
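A minimal sketch of the combination described above, using synthetic data in place of the MRI-derived features: a Random Forest importance-based filter, a linear-SVM wrapper (recursive feature elimination), and a one-versus-one SVM for the multi-class problem. The pipeline structure and parameters are illustrative assumptions, not the paper's exact configuration.

    # Hedged sketch: RF-based filter + SVM-based wrapper + one-vs-one SVM (synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel, RFE
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    pipe = make_pipeline(
        SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),  # filter step
        RFE(SVC(kernel="linear"), n_features_to_select=10),                         # wrapper step
        SVC(kernel="linear", decision_function_shape="ovo"),                        # one-vs-one classifier
    )
    print(cross_val_score(pipe, X, y, cv=5).mean())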
Abstract:
This paper identifies characteristics of knowledge-intensive processes and a method to improve their performance, based on an analysis of investment banking front office processes. The inability to improve these processes using standard process improvement techniques confirmed that much of the process was not codified and depended on tacit knowledge and skills. This led to the use of a semi-structured analysis of the characteristics of the processes, via a questionnaire, to identify the characteristics of knowledge-intensive processes, adding to existing theory. Further work identified innovative process analysis and change techniques that could generate improvements, based on an analysis of their properties and the issue drivers. An improvement methodology was developed to harness a number of techniques that were found to be effective in resolving the issue drivers and improving these knowledge-intensive processes.