59 results for information quality
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In any enterprise, decisions need to be made about the management of information throughout its life cycle. This requires information evaluation to take place, a little-understood process. For evaluation support to be both effective and resource efficient, some sort of automatic or semi-automatic evaluation method would be invaluable. Such a method would require an understanding of the diversity of the contexts in which evaluation takes place, so that evaluation support can have the necessary context-sensitivity. This paper identifies the dimensions influencing the information evaluation process and defines the elements that characterise them, thus providing the foundations for a context-sensitive evaluation framework.
Abstract:
We examine the impact of accounting quality, used as a proxy for information risk, on the behavior of equity implied volatility around quarterly earnings announcements. Using US data during 1996–2010, we observe that lower (higher) accounting quality significantly relates to higher (lower) levels of implied volatility (IV) around announcements. Worse accounting quality is further associated with a significant increase in IV before announcements, and is found to relate to a larger resolution in IV after the announcement has taken place. We interpret our findings as indicative of information risk having a significant impact on implied volatility behavior around earnings announcements.
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented, in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte-Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined. (C) 2003 Elsevier Science B.V. All rights reserved.
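The kind of Monte-Carlo identifiability screening described above can be sketched roughly as follows. This is a generic regionalised-sensitivity illustration, not the analysis toolbox used in the paper, and the model, parameter names and data are entirely hypothetical.

```python
# Minimal sketch of Monte Carlo identifiability screening for water quality
# parameters, in the spirit of a regionalised sensitivity analysis.
# `run_model` is a hypothetical stand-in for the actual river model.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def run_model(params, t):
    """Hypothetical placeholder for the water quality simulation."""
    nitrif, sediment, algal = params
    # The algal parameter deliberately has no effect, to mimic an insensitive parameter.
    return 8.0 - nitrif * t * 0.1 - sediment * t * 0.05 + algal * 0.0

param_ranges = {"nitrification": (0.0, 1.0),
                "sedimentation": (0.0, 1.0),
                "algal_activity": (0.0, 1.0)}

t = np.linspace(0, 10, 50)
observed = 8.0 - 0.03 * t + rng.normal(0, 0.05, t.size)   # synthetic "observations"

n_samples = 2000
samples = np.column_stack([rng.uniform(lo, hi, n_samples)
                           for lo, hi in param_ranges.values()])
rmse = np.array([np.sqrt(np.mean((run_model(p, t) - observed) ** 2))
                 for p in samples])

# Split samples into "behavioural" (good fit) and "non-behavioural" sets.
behavioural = rmse < np.percentile(rmse, 20)
for i, name in enumerate(param_ranges):
    stat, pval = ks_2samp(samples[behavioural, i], samples[~behavioural, i])
    # Near-identical distributions (high p-value) suggest the parameter is
    # insensitive, hence non-identifiable from these data.
    print(f"{name:15s}  KS={stat:.3f}  p={pval:.3f}")
```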
Abstract:
With both climate change and air quality on political and social agendas from local to global scale, the links between these hitherto separate fields are becoming more apparent. Black carbon, largely from combustion processes, scatters and absorbs incoming solar radiation, contributes to poor air quality and induces respiratory and cardiovascular problems. Uncertainties in the amount, location, size and shape of atmospheric black carbon cause large uncertainty in both climate change estimates and toxicology studies alike. Increased research has led to new effects and areas of uncertainty being uncovered. Here we draw together recent results and explore the increasing opportunities for synergistic research that will lead to improved confidence in the impact of black carbon on climate change, air quality and human health. Topics of mutual interest include better information on spatial distribution, size, mixing state and measuring and monitoring. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
This study suggests a statistical strategy for explaining how food purchasing intentions are influenced by different levels of risk perception and trust in food safety information. The modelling process is based on Ajzen's Theory of Planned Behaviour and includes trust and risk perception as additional explanatory factors. Interaction and endogeneity across these determinants are explored through a system of simultaneous equations, while the SPARTA equation is estimated through an ordered probit model. Furthermore, parameters are allowed to vary as a function of socio-demographic variables. The application explores chicken purchasing intentions both in a standard situation and conditional on a hypothetical salmonella scare. Data were collected through a nationally representative UK-wide survey of 533 respondents in face-to-face, in-home interviews. Empirical findings show that interactions exist among the determinants of planned behaviour and that socio-demographic variables improve the model's performance. Attitudes emerge as the key determinant of intention to purchase chicken, while trust in food safety information provided by the media reduces the likelihood of purchase. (C) 2006 Elsevier Ltd. All rights reserved.
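As a rough illustration of the ordered probit component mentioned above (not the paper's full SPARTA system of simultaneous equations), the following sketch fits an ordered probit to a synthetic intention-to-purchase outcome by maximum likelihood; the explanatory variables and data are invented.

```python
# Minimal ordered probit sketch (maximum likelihood) for an intention-to-purchase
# outcome on a 1..K scale. Data and variable names are purely illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, K = 500, 5
X = np.column_stack([rng.normal(size=n),          # attitude
                     rng.normal(size=n),          # trust in food safety information
                     rng.integers(0, 2, n)])      # e.g. a socio-demographic dummy
beta_true = np.array([1.0, -0.5, 0.3])
latent = X @ beta_true + rng.normal(size=n)
cuts_true = np.array([-1.5, -0.5, 0.5, 1.5])
y = np.digitize(latent, cuts_true)                # ordered categories 0..K-1

def neg_loglik(theta):
    beta, raw = theta[:3], theta[3:]
    # Parameterise cutpoints so they stay strictly increasing.
    cuts = np.cumsum(np.concatenate([[raw[0]], np.exp(raw[1:])]))
    xb = X @ beta
    upper = np.append(cuts, np.inf)[y]
    lower = np.insert(cuts, 0, -np.inf)[y]
    p = norm.cdf(upper - xb) - norm.cdf(lower - xb)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

theta0 = np.zeros(3 + K - 1)
res = minimize(neg_loglik, theta0, method="BFGS")
print("estimated coefficients:", np.round(res.x[:3], 2))
```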
Abstract:
As the ideal method of assessing the nutritive value of a feedstuff, namely offering it to the appropriate class of animal and recording the production response obtained, is neither practical nor cost effective, a range of feed evaluation techniques has been developed. Each of these balances some degree of compromise with the practical situation against data generation. However, due to the impact of animal-feed interactions over and above that of feed composition, the target animal remains the ultimate arbiter of nutritional value. In this review, current in vitro feed evaluation techniques are examined according to the degree of animal-feed interaction. Chemical analysis provides absolute values and therefore differs from the majority of in vitro methods, which simply rank feeds. However, with no host animal involvement, estimates of nutritional value are inferred by statistical association. In addition, given the costs involved, the practical value of many analyses conducted should be reviewed. The in sacco technique has made a substantial contribution both to understanding rumen microbial degradative processes and to the rapid evaluation of feeds, especially in developing countries. However, the numerous shortfalls of the technique (common to many in vitro methods), the desire to eliminate the use of surgically modified animals from routine feed evaluation and parallel improvements in in vitro techniques will see it increasingly replaced. The majority of in vitro systems use substrate disappearance to assess degradation; however, this provides no information regarding the quantity of derived end-products available to the host animal. As measurement of volatile fatty acids or microbial biomass production greatly increases analytical costs, fermentation gas release, a simple and non-destructive measurement, has been used as an alternative. However, as gas release alone is of little use, gas-based systems, in which both degradation and fermentation gas release are measured simultaneously, are attracting considerable interest. Alternative microbial inocula are being considered, as is the potential of using multi-enzyme systems to examine degradation dynamics. It is concluded that while chemical analysis will continue to form an indispensable part of feed evaluation, enhanced use will be made of increasingly complex in vitro systems. It is vital, however, that the function and limitations of each methodology are fully understood and that the temptation to over-interpret the data is avoided, so that the appropriate conclusions are drawn. With careful selection and correct application, in vitro systems offer powerful research tools with which to evaluate feedstuffs. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis integrating the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated using a meta-analysis investigating the relationship between hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time, and different results were reported. In 1992 a meta-analysis by Sillero-Arenas et al. concluded that there was a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published, indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted estimate of effect of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
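For readers unfamiliar with how heterogeneity shifts a pooled estimate, the following is a minimal sketch of standard DerSimonian-Laird random-effects pooling on the log relative-risk scale. It is a much simpler stand-in for the meta-likelihood and nonparametric mixed-model approach described above, and the study estimates in it are invented.

```python
# Illustrative DerSimonian-Laird random-effects pooling on the log relative-risk
# scale. The study relative risks and standard errors below are made up.
import numpy as np

rr = np.array([0.9, 1.1, 1.3, 1.0, 1.5, 0.8])       # hypothetical study relative risks
se = np.array([0.10, 0.12, 0.15, 0.08, 0.20, 0.18])  # standard errors of log(RR)

y, v = np.log(rr), se ** 2
w_fixed = 1.0 / v
mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)

# DerSimonian-Laird estimate of between-study variance tau^2
Q = np.sum(w_fixed * (y - mu_fixed) ** 2)
df = len(y) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooling down-weights precise studies when heterogeneity is large.
w_rand = 1.0 / (v + tau2)
mu_rand = np.sum(w_rand * y) / np.sum(w_rand)
se_rand = np.sqrt(1.0 / np.sum(w_rand))

print(f"tau^2 (heterogeneity): {tau2:.3f}")
print(f"pooled RR, fixed effect : {np.exp(mu_fixed):.2f}")
print(f"pooled RR, random effect: {np.exp(mu_rand):.2f} "
      f"(95% CI {np.exp(mu_rand - 1.96*se_rand):.2f}-{np.exp(mu_rand + 1.96*se_rand):.2f})")
```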
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can separate the highest accuracy models from the lowest consistently. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
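A minimal sketch of the consensus idea follows: standardise each individual MQAP's scores across the candidate models and average them to re-rank. This is not the ModFOLD neural-network combination, only an illustration of why pooling several imperfect scores can improve model selection; the scores and method names are hypothetical.

```python
# Simplified illustration of consensus model quality assessment: z-score each
# individual MQAP's scores across the candidate models and average them.
import numpy as np

# rows = candidate 3D models, columns = hypothetical per-method quality scores
scores = np.array([
    [0.62, 0.55, 0.70],
    [0.58, 0.60, 0.65],
    [0.71, 0.52, 0.61],
    [0.40, 0.45, 0.50],
])
method_names = ["MQAP_A", "MQAP_B", "MQAP_C"]   # placeholder method names

# Standardise each method's scores so they are comparable, then average.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
consensus = z.mean(axis=1)

ranking = np.argsort(-consensus)
for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: model {idx}  consensus score {consensus[idx]:+.2f}")
```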
Abstract:
We present here an indicator of soil quality that evaluates soil ecosystem services through a set of five subindicators, and further combines them into a single General Indicator of Soil Quality (GISQ). We used information derived from 54 properties commonly used to describe the multifaceted aspects of soil quality. The design and calculation of the indicators were based on sequences of multivariate analyses. Subindicators evaluated the physical quality, chemical fertility, organic matter stocks, aggregation and morphology of the upper 5 cm of soil and the biodiversity of soil macrofauna. A GISQ combined the different subindicators, providing a global assessment of soil quality. Research was conducted in two hillside regions of Colombia and Nicaragua, with similar types of land use and socio-economic context. However, soil and climatic conditions differed significantly. In Nicaragua, soil quality was assessed at 61 points distributed 200 m apart on a regular grid across the landscape. In Colombia, 8 plots representing different types of land use were arbitrarily chosen in the landscape and intensively sampled. Indicators that were designed at the Nicaraguan site were further applied to the Colombian site to test their applicability. In Nicaragua, coffee plantations, fallows, pastures and forest had the highest GISQ values (1.00; 0.80; 0.78 and 0.77, respectively) while maize crops and eroded soils (0.19 and 0.10) had the lowest values. Examination of subindicator values allowed the separate evaluation of different aspects of soil quality: subindicators of organic matter, aggregation and morphology, and biodiversity of macrofauna had the maximum values in coffee plantations (0.89; 0.72 and 0.56, respectively, on average) while eroded soils had the lowest values for these indicators (0.10; 0.31 and 0.33, respectively). Indicator formulae derived from information gained at the Nicaraguan sites were not applicable to the Colombian situation, and site-specific constants were calculated. This indicator allows the evaluation of soil quality and facilitates the identification of problem areas through the individual values of each subindicator. It allows monitoring of change through time and can guide the implementation of soil restoration technologies. Although the GISQ formulae computed on one data set were only valid at a regional scale, the methodology used to create these indices can be applied everywhere.
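The mechanical step of combining rescaled subindicators into a single index can be sketched as below. In the study the weights come from sequences of multivariate analyses and proved to be site-specific, so the weights, ranges and plot values used here are purely illustrative.

```python
# Mechanical sketch of combining soil quality subindicators into a single
# General Indicator of Soil Quality. Weights, ranges and values are invented.

def rescale(x, lo, hi, new_lo=0.1, new_hi=1.0):
    """Map a raw subindicator score from its observed range onto 0.1-1.0."""
    return new_lo + (new_hi - new_lo) * (x - lo) / (hi - lo)

raw_scores = {                 # hypothetical raw subindicator values for one plot
    "physical_quality": 3.2,
    "chemical_fertility": 4.1,
    "organic_matter": 5.6,
    "aggregation_morphology": 2.9,
    "macrofauna_biodiversity": 1.8,
}
observed_ranges = {k: (0.0, 6.0) for k in raw_scores}   # illustrative min/max per subindicator
weights = {k: 0.2 for k in raw_scores}                   # equal weights, for illustration only

subindicators = {k: rescale(v, *observed_ranges[k]) for k, v in raw_scores.items()}
gisq = sum(weights[k] * subindicators[k] for k in subindicators)

print({k: round(v, 2) for k, v in subindicators.items()})
print(f"GISQ (illustrative): {gisq:.2f}")
```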
Abstract:
There is a concerted global effort to digitize biodiversity occurrence data from herbarium and museum collections that together offer an unparalleled archive of life on Earth over the past few centuries. The Global Biodiversity Information Facility provides the largest single gateway to these data. Since 2004 it has provided a single point of access to specimen data from databases of biological surveys and collections. Biologists now have rapid access to more than 120 million observations, for use in many biological analyses. We investigate the quality and coverage of data digitally available, from the perspective of a biologist seeking distribution data for spatial analysis on a global scale. We present an example of automatic verification of geographic data using distributions from the International Legume Database and Information Service to test empirically issues of geographic coverage and accuracy. There are over half a million records covering 31% of all legume species, and 84% of these records pass geographic validation. These data are not yet a global biodiversity resource for all species or all countries. A user will encounter many biases and gaps in these data which should be understood before data are used or analyzed. The data are notably deficient in many of the world's biodiversity hotspots. The deficiencies in data coverage can be resolved by an increased application of resources to digitize and publish data throughout these most diverse regions. But in the push to provide ever more data online, we should not forget that consistent data quality is of paramount importance if the data are to be useful in capturing a meaningful picture of life on Earth.
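A minimal sketch of the kind of automatic geographic validation described above might look as follows; the species name and bounding-box ranges are placeholders rather than ILDIS data, and a real check would use full range polygons rather than boxes.

```python
# Minimal sketch of automatic geographic validation of occurrence records:
# reject records with missing or out-of-range coordinates, or coordinates that
# fall outside the species' documented range. Range data here are placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    species: str
    lat: Optional[float]
    lon: Optional[float]

# Hypothetical known ranges: species -> (min_lat, max_lat, min_lon, max_lon)
known_ranges = {"Acacia exampleii": (-35.0, -10.0, 112.0, 154.0)}

def passes_validation(rec: Record) -> bool:
    if rec.lat is None or rec.lon is None:
        return False                       # missing coordinates
    if not (-90.0 <= rec.lat <= 90.0 and -180.0 <= rec.lon <= 180.0):
        return False                       # impossible coordinates
    box = known_ranges.get(rec.species)
    if box is None:
        return False                       # no documented range to check against
    min_lat, max_lat, min_lon, max_lon = box
    return min_lat <= rec.lat <= max_lat and min_lon <= rec.lon <= max_lon

records = [Record("Acacia exampleii", -25.3, 131.0),
           Record("Acacia exampleii", 48.1, 11.6),   # outside documented range
           Record("Acacia exampleii", None, None)]   # missing coordinates
print(sum(passes_validation(r) for r in records), "of", len(records), "records pass")
```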
Abstract:
Groundwater is an important resource in the UK, with 45% of public water supplies in the Thames Water region derived from subterranean sources. In urban areas, groundwater has been affected by anthropogenic activities over a long period of time and from a multitude of sources. At present, groundwater quality is assessed using a range of chemical species to determine the extent of contamination. However, analysing a complex mixture of chemicals is time-consuming and expensive, whereas the use of an ecotoxicity test provides information on (a) the degree of pollution present in the groundwater and (b) the potential effect of that pollution. Microtox (TM), Eclox (TM) and Daphnia magna microtests were used in conjunction with standard chemical protocols to assess the contamination of groundwaters from sites throughout the London Borough of Hounslow and nearby Heathrow Airport. Because of their precision, range of responses and ease of use, the Daphnia magna and Microtox (TM) tests are the bioassays that appear to be most effective for assessing groundwater toxicity. However, neither test is ideal, because it is also essential to monitor water hardness. Eclox (TM) does not appear to be suitable for use in groundwater-quality assessment in this area, because it is adversely affected by high total dissolved solids and electrical conductivity.
Abstract:
The method of distributing the outdoor air in classrooms has a major impact on indoor air quality and the thermal comfort of pupils. In previous studies ([11] Karimipanah T, Sandberg M, Awbi HB. A comparative study of different air distribution systems in a classroom. In: Proceedings of Roomvent 2000, vol. II, Reading, UK, 2000. p. 1013-18; [13] Karimipanah T, Sandberg M, Awbi HB, Blomqvist C. Effectiveness of confluent jets ventilation system for classrooms. In: Indoor Air 2005, Beijing, China, 2005 (to be presented)), results were presented for four and two types of air distribution systems tested in a purpose-built classroom with simulated occupancy, as well as computational fluid dynamics (CFD) modelling. In this paper, the same experimental setup has been used to investigate the indoor environment in the classroom using confluent jet ventilation; see also ([12] Cho YJ, Awbi HB, Karimipanah T. The characteristics of wall confluent jets for ventilated enclosures. In: Proceedings of Roomvent 2004, Coimbra, Portugal, 2004). Measurements of air speed, air temperature and tracer gas concentrations have been carried out for different thermal conditions. In addition, 56 cases of CFD simulations have been carried out to provide additional information on the indoor air quality and comfort conditions throughout the classroom, such as ventilation effectiveness, air exchange effectiveness, the effect of flow rate, the effect of radiation, the effect of supply temperature, etc., and these are compared with measured data.
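For reference, two of the indices mentioned above can be computed from tracer gas concentrations and age-of-air measurements using their commonly quoted definitions; the sketch below uses invented values and is only a reminder of those definitions, not the paper's analysis.

```python
# Commonly used definitions, sketched for illustration, of two ventilation
# indices. Concentrations and ages of air below are made up.

def contaminant_removal_effectiveness(c_exhaust, c_supply, c_occupied_mean):
    """Ventilation effectiveness from tracer gas concentrations."""
    return (c_exhaust - c_supply) / (c_occupied_mean - c_supply)

def air_change_efficiency(nominal_time_constant, room_mean_age_of_air):
    """Air change efficiency (%); 50% corresponds to fully mixed conditions."""
    return 100.0 * nominal_time_constant / (2.0 * room_mean_age_of_air)

print(contaminant_removal_effectiveness(c_exhaust=450.0, c_supply=400.0,
                                        c_occupied_mean=440.0))   # ~1.25
print(air_change_efficiency(nominal_time_constant=600.0,
                            room_mean_age_of_air=550.0))          # ~54.5%
```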
Abstract:
Purpose – For many academics in UK universities the nature and orientation of their research is overwhelmingly determined by considerations of how that work will be graded in research assessment exercises (RAEs). The grades awarded to work in a particular subject area can have a considerable impact on the individual and their university. There is a need to better understand those factors which may influence these grades. The paper seeks to address this issue. Design/methodology/approach – The paper considers relationships between the grades awarded and the quantitative information provided to the assessment panels for the 1996 and 2001 RAEs for two subject areas, built environment and town and country planning, and for three other subject areas, civil engineering, geography and archaeology, in the 2001 RAE. Findings – A simple model demonstrating strong and consistent relationships is established. RAE performance relates to numbers of research active staff, the production of books and journal papers, numbers of research studentships and graduations, and research income. Important differences between subject areas are identified. Research limitations/implications – Important issues are raised about the extent to which the new assessment methodology to be adopted for the 2008 RAE will capture the essence of good quality research in architecture and built environment. Originality/value – The findings provide a developmental perspective of RAEs and show how, despite a changed methodology, various research activities might be valued in the 2008 RAE. The basis for a methodology for reviewing the credibility of the judgements of panels is proposed.
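The kind of simple model referred to in the Findings can be illustrated as an ordinary least-squares fit of grade against quantitative submission data; the figures below are invented purely to show the mechanics, not results from the actual exercises.

```python
# Sketch of a simple model relating an RAE-style grade to quantitative
# submission data via ordinary least squares. All numbers are invented.
import numpy as np

# columns: research-active staff, journal papers, research studentships, income (in 100k GBP)
X = np.array([[12.0,  40.0,  6.0,  3.0],
              [25.0,  90.0, 14.0,  8.0],
              [ 8.0,  22.0,  3.0,  1.5],
              [30.0, 120.0, 20.0, 12.0],
              [18.0,  65.0,  9.0,  5.0],
              [15.0,  50.0,  7.0,  4.0]])
grade = np.array([3.0, 4.0, 2.0, 5.0, 4.0, 3.0])   # hypothetical grades

X1 = np.column_stack([np.ones(len(grade)), X])      # add intercept column
coef, *_ = np.linalg.lstsq(X1, grade, rcond=None)
predicted = X1 @ coef

print("coefficients:", np.round(coef, 3))
print("predicted grades:", np.round(predicted, 2))
```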
Abstract:
The management of information in engineering organisations faces a particular challenge in the ever-increasing volume of information. It has been recognised that an effective methodology is required to evaluate information in order to avoid information overload and to retain the right information for reuse. Using, as a starting point, a number of the current tools and techniques which attempt to obtain 'the value' of information, it is proposed that an assessment or filter mechanism for information needs to be developed. This paper addresses this issue firstly by briefly reviewing the information overload problem, the definition of value, and related research work on the value of information in various areas. Then a characteristic-based framework of information evaluation is introduced, using the key characteristics identified from related work as an example. A Bayesian Network diagram method is introduced into the framework to build the linkage between the characteristics and information value, in order to quantitatively calculate the quality and value of information. The training and verification process for the model is then described using 60 real engineering documents as a sample. The model gives reasonably accurate results; the differences between the model calculations and the training judgements are summarised and the potential causes are discussed. Finally, several further issues are raised, including the challenges facing the framework and the implementation of this evaluation method.
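As a loose illustration of how characteristics might be linked probabilistically to information value, the sketch below uses a naive-Bayes style combination rather than the paper's trained Bayesian Network; the characteristic names, priors and likelihoods are hypothetical.

```python
# A much simplified, naive-Bayes style stand-in for linking document
# characteristics to information value. All names and probabilities are invented.
prior_valuable = 0.5

# P(characteristic present | document is valuable), P(... | not valuable)
likelihoods = {
    "up_to_date":           (0.80, 0.30),
    "cited_elsewhere":      (0.70, 0.20),
    "has_design_rationale": (0.60, 0.25),
}

def p_valuable(observed: dict) -> float:
    """Posterior probability that a document is worth retaining, assuming the
    characteristics are conditionally independent given the value class."""
    p_v, p_nv = prior_valuable, 1.0 - prior_valuable
    for name, present in observed.items():
        l_v, l_nv = likelihoods[name]
        p_v *= l_v if present else (1.0 - l_v)
        p_nv *= l_nv if present else (1.0 - l_nv)
    return p_v / (p_v + p_nv)

doc = {"up_to_date": True, "cited_elsewhere": False, "has_design_rationale": True}
print(f"P(valuable | characteristics) = {p_valuable(doc):.2f}")
```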
Abstract:
Objective: To determine whether the use of verbal descriptors suggested by the European Union (EU), such as "common" (1-10% frequency) and "rare" (0.01-0.1%), effectively conveys the level of risk of side effects to people taking a medicine. Design: Randomised controlled study with unconcealed allocation. Participants: 120 adults taking simvastatin or atorvastatin after cardiac surgery or myocardial infarction. Setting: Cardiac rehabilitation clinics at two hospitals in Leeds, UK. Intervention: A written statement about one of the side effects of the medicine (either constipation or pancreatitis). Within each side-effect condition, half the patients were given the information in verbal form and half in numerical form (for constipation, "common" or 2.5%; for pancreatitis, "rare" or 0.04%). Main outcome measure: The estimated likelihood of the side effect occurring. Other outcome measures related to the perceived severity of the side effect, its risk to health, and its effect on decisions about whether to take the medicine. Results: The mean likelihood estimate given for the constipation side effect was 34.2% in the verbal group and 8.1% in the numerical group; for pancreatitis it was 18% in the verbal group and 2.1% in the numerical group. The verbal descriptors were associated with more negative perceptions of the medicine than their equivalent numerical descriptors. Conclusions: Patients want and need understandable information about medicines and their risks and benefits. This is essential if they are to become partners in medicine taking. The use of verbal descriptors to improve the level of information about side effect risk leads to overestimation of the level of harm and may lead patients to make inappropriate decisions about whether or not they take the medicine.