Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling of cyanobacteria in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of bloom events in lakes and rivers. A new deterministic mathematical model was developed, which simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A technique called generalised sensitivity analysis was applied to the model to identify the critical parameter uncertainties and to investigate the interactions between the chosen parameters. The results of the analysis suggested that 8 of the 12 parameters were significant in reproducing the observed cyanobacterial behaviour in a simulation, and revealed a high degree of correlation between the half-saturation rate constants used in the model.
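The abstract names generalised sensitivity analysis but does not reproduce the model equations. As a minimal sketch of the idea, assuming a toy Monod-type (half-saturation) growth function in place of the full bloom model, hypothetical parameter ranges and a hypothetical "behavioural" criterion, one can sample parameters, split the runs into behavioural and non-behavioural sets and compare the parameter distributions with a Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def growth_rate(mu_max, k_light, k_nutrient, light=300.0, nutrient=0.05):
    """Illustrative specific growth rate (1/day) with Monod (half-saturation)
    limitation terms for light and nutrient; parameter names are hypothetical."""
    return mu_max * light / (k_light + light) * nutrient / (k_nutrient + nutrient)

# Monte Carlo sample of the uncertain parameters (uniform priors, hypothetical ranges)
n = 5000
samples = {
    "mu_max":     rng.uniform(0.2, 2.0, n),
    "k_light":    rng.uniform(20.0, 200.0, n),
    "k_nutrient": rng.uniform(0.005, 0.10, n),
}
rates = growth_rate(samples["mu_max"], samples["k_light"], samples["k_nutrient"])

# Classify each run as "behavioural" if it reproduces the observed behaviour
# (here: a growth rate between 0.3 and 0.8 per day; this range is hypothetical).
behavioural = (rates > 0.3) & (rates < 0.8)

# A parameter is deemed significant if its distribution differs between the
# behavioural and non-behavioural runs (two-sample Kolmogorov-Smirnov test).
for name, values in samples.items():
    d, p = stats.ks_2samp(values[behavioural], values[~behavioural])
    print(f"{name:12s}  KS distance = {d:.3f}  p = {p:.2e}")
```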
Abstract:
More data will be produced in the next five years than in the entire history of humankind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed, which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report, which can be found at http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy and which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive, and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond with agility to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy. The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06), since emulated internationally, pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals.
To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders. A coherent strategy is essential in order to establish and sustain the UK as an international leader in well-curated national data assets and computational infrastructure, which are expertly used to shape policy, support decisions, empower researchers and roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening of access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation, plus the skills developed, will launch significant advances in research, in business, in professional practice and in government, with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.
Abstract:
BACKGROUND: Serial Analysis of Gene Expression (SAGE) is a powerful tool for genome-wide transcription studies. Unlike microarrays, it has the ability to detect novel forms of RNA such as alternatively spliced and antisense transcripts, without the need for prior knowledge of their existence. One limitation of using SAGE on an organism with a complex genome and lacking detailed sequence information, such as the hexaploid bread wheat Triticum aestivum, is accurate annotation of the tags generated. Without accurate annotation it is impossible to fully understand the dynamic processes involved in such complex polyploid organisms. Hence we have developed and utilised novel procedures to characterise, in detail, SAGE tags generated from the whole grain transcriptome of hexaploid wheat. RESULTS: Examination of 71,930 Long SAGE tags generated from six libraries derived from two wheat genotypes grown under two different conditions suggested that SAGE is a reliable and reproducible technique for use in studying the hexaploid wheat transcriptome. However, our results also showed that in poorly annotated and/or poorly sequenced genomes, such as hexaploid wheat, considerably more information can be extracted from SAGE data by carrying out a systematic analysis of both perfect and "fuzzy" (partially matched) tags. This detailed analysis of the SAGE data shows, first, that while there is evidence of alternative polyadenylation, it appears to occur exclusively within the 3' untranslated regions. Secondly, we found no strong evidence for widespread alternative splicing in the developing wheat grain transcriptome. However, analysis of our SAGE data shows that antisense transcripts are probably widespread within the transcriptome and appear to be derived from numerous locations within the genome. Examination of antisense transcripts showing sequence similarity to the Puroindoline a and Puroindoline b genes suggests that such antisense transcripts might have a role in the regulation of gene expression. CONCLUSION: Our results indicate that detailed analysis of transcriptome data, such as SAGE tags, is essential to understand fully the factors that regulate gene expression, and that such analysis of the wheat grain transcriptome reveals that antisense transcripts may be widespread and hence probably play a significant role in the regulation of gene expression during grain development.
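The paper's tag-annotation pipeline is not given in the abstract; the sketch below only illustrates the distinction it draws between perfect and "fuzzy" (partially matched) tags, here taken to mean a best match with at most one substitution against a reference sequence (the sequences and the one-mismatch threshold are illustrative assumptions, not the authors' criteria):

```python
def classify_tag(tag, references, max_mismatches=1):
    """Classify a LongSAGE tag as 'perfect', 'fuzzy' or 'unmatched' against a
    set of reference sequences. A fuzzy match allows up to `max_mismatches`
    substitutions (the threshold is an illustrative assumption)."""
    best = None
    for ref_id, ref_seq in references.items():
        for i in range(len(ref_seq) - len(tag) + 1):
            window = ref_seq[i:i + len(tag)]
            mismatches = sum(a != b for a, b in zip(tag, window))
            if best is None or mismatches < best[0]:
                best = (mismatches, ref_id, i)
    if best is None:
        return "unmatched", None
    mismatches, ref_id, pos = best
    if mismatches == 0:
        return "perfect", (ref_id, pos)
    if mismatches <= max_mismatches:
        return "fuzzy", (ref_id, pos)
    return "unmatched", None

# Toy example with made-up reference sequences
refs = {"contig_1": "CATGTTACCGGATTCAGGAAT", "contig_2": "CATGAAACCGGTTTCAGGCCT"}
print(classify_tag("TTACCGGATTCAGGAAT", refs))   # perfect match
print(classify_tag("TTACCGGATTCAGGAAA", refs))   # fuzzy match (one mismatch)
```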
Abstract:
A range of funding schemes and policy instruments exist to enhance the landscapes and habitats of the UK. While a number of assessments of these mechanisms have been conducted, little research has been undertaken to compare their relative effectiveness, both quantitatively and qualitatively, across a range of criteria. It is argued that few tools are available for such a multi-faceted evaluation of effectiveness. A form of Multiple Criteria Decision Analysis (MCDA) is justified and utilized as a framework in which to evaluate the effectiveness of nine mechanisms in relation to the protection of existing areas of chalk grassland and the creation of new areas in the South Downs of England. These include established schemes, such as the Countryside Stewardship and Environmentally Sensitive Area Schemes, along with other less common mechanisms, for example land purchase and tender schemes. The steps involved in applying an MCDA to evaluate such mechanisms are identified and the process is described. Quantitative results from the comparison of the effectiveness of the different mechanisms are presented, although the broader aim of the paper is to demonstrate the performance of MCDA as a tool for measuring the effectiveness of mechanisms aimed at landscape and habitat enhancement.
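The paper does not state its scoring model; a common MCDA form, assumed here purely for illustration, is a weighted sum of criterion scores for each mechanism (the criteria, weights and scores below are hypothetical, and the mechanism names are taken from the abstract only as labels):

```python
# Illustrative weighted-sum MCDA: mechanisms scored 0-10 against criteria,
# with criterion weights summing to 1. All weights and scores are hypothetical.
criteria_weights = {"habitat protection": 0.4, "habitat creation": 0.3,
                    "cost effectiveness": 0.2, "uptake by landowners": 0.1}

scores = {
    "Countryside Stewardship":        {"habitat protection": 7, "habitat creation": 6,
                                       "cost effectiveness": 6, "uptake by landowners": 8},
    "Environmentally Sensitive Area": {"habitat protection": 8, "habitat creation": 5,
                                       "cost effectiveness": 5, "uptake by landowners": 7},
    "Land purchase":                  {"habitat protection": 9, "habitat creation": 8,
                                       "cost effectiveness": 3, "uptake by landowners": 4},
}

for mechanism, s in scores.items():
    overall = sum(criteria_weights[c] * s[c] for c in criteria_weights)
    print(f"{mechanism:32s} overall score = {overall:.2f}")
```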
Abstract:
Despite the wide use of Landscape Character Assessment (LCA) as a tool for landscape planning in NW Europe, there are few examples of its application in the Mediterranean. This paper reports the results of developing a typology for LCA in a study area of northern Sardinia, Italy, to provide a spatial framework for the analysis of current patterns of cork oak distribution and the future restoration of this habitat. Landscape units were derived from a visual interpretation of map data stored within a GIS describing the physical and cultural characteristics of the study area. The units were subsequently grouped into Landscape Types according to the similarity of shared attributes using Two Way Indicator Species Analysis (TWINSPAN). The preliminary results showed that the methodology classified distinct Landscape Types but, based on field observations, there is a need for further refinement of the classification. The distribution and properties of the two main cork oak habitat types, namely woodlands and wood pastures, were then examined within the identified Landscape Types using Patch Analyst. The results show a clear correspondence between the distribution of cork oak pastures and cork oak woodland and the Landscape Types. This forms the basis for the development of strategies for the maintenance, restoration and re-creation of these habitat types within the study area, and ultimately for the whole island of Sardinia. Future work is required to improve the landscape characterisation, particularly with respect to cultural factors, and to determine the validity of the landscape spatial framework for the analysis of cork oak distribution as part of a programme of habitat restoration and re-creation.
Abstract:
Motivated by a matched case-control study to investigate potential risk factors for meningococcal disease amongst adolescents, we consider the analysis of matched case-control studies where disease incidence, and possibly other risk factors, vary with time of year. For the cases, the time of infection may be recorded. For controls, however, the recorded time is simply the time of data collection, which is shortly after the time of infection for the matched case and so depends on the latter. We show that the effect of risk factors and interactions may be adjusted for the time of year effect in a standard conditional logistic regression analysis without introducing any bias. We also show that, if the time delay between data collection for cases and controls is constant, estimates of the time of year effect are approximately unbiased, provided this delay is not very short. If the length of the delay varies over time, the estimate of the time of year effect is biased, and we obtain an approximate expression for the degree of bias in this case.
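The paper's derivations are not reproduced here, but the setting can be sketched: for 1:1 matched pairs the conditional logistic likelihood depends only on within-pair covariate differences, and a time of year effect can be coded with sine/cosine harmonics. The example below is a minimal sketch under those assumptions, with simulated (hypothetical) data and a fixed two-week delay between case and control collection times:

```python
import numpy as np
from scipy.optimize import minimize

def seasonal_terms(day_of_year):
    """Code time of year as first-harmonic sine/cosine terms (an assumed form)."""
    angle = 2 * np.pi * day_of_year / 365.25
    return np.column_stack([np.sin(angle), np.cos(angle)])

def fit_conditional_logit(x_case, x_control):
    """Conditional logistic regression for 1:1 matched pairs.

    For 1:1 matching the conditional likelihood of each pair depends only on the
    within-pair covariate difference, so we maximise
        sum_i log sigmoid(beta' (x_case_i - x_control_i)).
    """
    d = x_case - x_control
    def neg_loglik(beta):
        eta = d @ beta
        return np.sum(np.logaddexp(0.0, -eta))   # -log sigmoid(eta)
    res = minimize(neg_loglik, np.zeros(d.shape[1]), method="BFGS")
    return res.x

# Toy data: one binary risk factor plus seasonal terms (all values hypothetical)
rng = np.random.default_rng(1)
n_pairs = 500
day_case = rng.integers(0, 365, n_pairs)
day_control = day_case + 14                      # controls sampled ~2 weeks later
risk_case = rng.binomial(1, 0.4, n_pairs)
risk_control = rng.binomial(1, 0.3, n_pairs)

x_case = np.column_stack([risk_case, seasonal_terms(day_case)])
x_control = np.column_stack([risk_control, seasonal_terms(day_control)])
print(fit_conditional_logit(x_case, x_control))  # [beta_risk, beta_sin, beta_cos]
```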
Abstract:
The technique of rapid acidification and alkylation can be used to characterise the redox status of oxidoreductases, and to determine the number of free cysteine residues within substrate proteins. We have previously used this method to analyse interacting components of the MHC class I pathway, namely ERp57 and tapasin. Here, we have applied rapid acidification and alkylation as a novel approach to analysing the redox status of MHC class I molecules. This analysis of the redox status of the MHC class I molecules HLA-A2 and HLA-B27, the latter of which is strongly associated with a group of inflammatory arthritic disorders referred to as the spondyloarthropathies, revealed structural and conformational information. We propose that this assay provides a useful tool in the study of in vivo MHC class I structure.
Abstract:
Motivation: There is a frequent need to apply a large range of local or remote prediction and annotation tools to one or more sequences. We have created a tool able to dispatch one or more sequences to assorted services by defining a consistent XML format for data and annotations. Results: By analyzing annotation tools, we have determined that annotations can be described using one or more of six forms of data: numeric or textual annotation of residues, domains (residue ranges) or whole sequences. With this in mind, XML DTDs have been designed to store the input and output of any server. Plug-in wrappers to a number of services have been written, which are called from a master script. The resulting APATML is then formatted for display in HTML. Alternatively, further tools may be written to perform post-analysis.
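The APATML DTDs themselves are not given in the abstract. As a purely illustrative sketch of how the six forms of annotation described (numeric or textual values attached to residues, domains or whole sequences) might be serialised, the following uses Python's standard xml.etree.ElementTree; every element and attribute name here is an assumption, not the real APATML schema:

```python
import xml.etree.ElementTree as ET

def build_annotation(sequence_id, annotations):
    """Serialise annotations of one sequence. Each annotation is a dict with a
    'scope' (residue | domain | sequence), a 'type' (numeric | text), a 'source'
    and a value. All names here are hypothetical, not the real APATML DTD."""
    root = ET.Element("annotatedSequence", id=sequence_id)
    for ann in annotations:
        el = ET.SubElement(root, "annotation",
                           scope=ann["scope"], type=ann["type"], source=ann["source"])
        if ann["scope"] == "residue":
            el.set("position", str(ann["position"]))
        elif ann["scope"] == "domain":
            el.set("start", str(ann["start"]))
            el.set("end", str(ann["end"]))
        el.text = str(ann["value"])
    return ET.tostring(root, encoding="unicode")

print(build_annotation("P12345", [
    {"scope": "residue", "type": "numeric", "source": "disorder_predictor",
     "position": 42, "value": 0.87},
    {"scope": "domain", "type": "text", "source": "domain_scanner",
     "start": 10, "end": 120, "value": "kinase-like"},
    {"scope": "sequence", "type": "text", "source": "localisation_predictor",
     "value": "nucleus"},
]))
```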
Abstract:
Risk management (RM) comprises the risk identification, risk analysis, response planning, monitoring and action planning tasks that are carried out throughout the life cycle of a project in order to ensure that project objectives are met. Although the methodological aspects of RM are well defined, its philosophical background is rather vague. In this paper, a learning-based approach is proposed. In order to implement this approach in practice, a tool has been developed to facilitate the construction of a lessons-learned database that contains risk-related information and risk assessments made throughout the life cycle of a project. The tool was tested on a real construction project. The case study findings demonstrate that it can be used for storing and updating risk-related information and, finally, for carrying out a post-project appraisal. The major weaknesses of the tool are the subjectivity of the risk rating process and the unwillingness of people to enter information about the reasons for failure.
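The structure of the lessons-learned database is not described in the abstract; the sketch below is only a hypothetical illustration of the kind of record such a database might hold (risk description, subjective probability and impact ratings, planned response and post-project outcome), using SQLite:

```python
import sqlite3

# Hypothetical schema for a lessons-learned risk database; the field names and
# example record are illustrative, not those of the tool described in the paper.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE risk_lessons (
        project        TEXT,
        phase          TEXT,      -- life-cycle phase when the risk was logged
        description    TEXT,
        probability    REAL,      -- subjective rating, 0-1
        impact         REAL,      -- subjective rating, 0-1
        response       TEXT,      -- planned mitigation / response action
        outcome        TEXT       -- post-project appraisal of what actually happened
    )
""")
conn.execute(
    "INSERT INTO risk_lessons VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Example viaduct project", "construction", "Late delivery of precast segments",
     0.6, 0.8, "Dual sourcing of precast supplier",
     "Delay absorbed by re-sequencing the works"),
)
for row in conn.execute(
        "SELECT description, probability * impact AS exposure FROM risk_lessons"):
    print(row)
```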
Abstract:
Aims: To develop a quantitative equation [the prebiotic index (PI)] to aid the analysis of prebiotic fermentation of commercially available and novel prebiotic carbohydrates in vitro, using previously published fermentation data. Methods: The PI equation is based on the changes in key bacterial groups during fermentation. The bacterial groups incorporated into the PI equation were bifidobacteria, lactobacilli, clostridia and bacteroides. The changes in these bacterial groups from previous studies were entered into the PI equation in order to determine a quantitative PI score. PI scores were then compared with the qualitative conclusions made in these publications; in general the PI scores agreed with the qualitative conclusions drawn while providing a quantitative measure. Conclusions: The PI allows the magnitude of prebiotic effects to be quantified rather than evaluations being solely qualitative. Significance and Impact of the Study: The PI equation may be of great use in quantifying prebiotic effects in vitro. It is expected that this will facilitate more rational food product development and the development of more potent prebiotics with activity at lower doses.
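The PI equation itself is not reproduced in the abstract; the sketch below assumes only the form it implies, in which growth of bifidobacteria and lactobacilli contributes positively and growth of clostridia and bacteroides negatively, each relative to the change in total bacterial numbers (the exact weighting in the published equation may differ):

```python
def prebiotic_index(counts_0h, counts_t, total_0h, total_t):
    """Illustrative prebiotic index: positive contributions from growth of
    bifidobacteria and lactobacilli, negative from clostridia and bacteroides,
    each scaled by the change in total bacterial numbers. This is an assumed
    form; the published equation may weight the terms differently."""
    total_change = total_t / total_0h
    def rel(group):
        return (counts_t[group] / counts_0h[group]) / total_change
    return (rel("bifidobacteria") + rel("lactobacilli")
            - rel("clostridia") - rel("bacteroides"))

# Toy fermentation data (cells/mL at inoculation and at sample time), hypothetical
counts_0h = {"bifidobacteria": 1e8, "lactobacilli": 5e7,
             "clostridia": 2e7, "bacteroides": 8e7}
counts_t  = {"bifidobacteria": 6e8, "lactobacilli": 1.5e8,
             "clostridia": 2.5e7, "bacteroides": 9e7}
print(prebiotic_index(counts_0h, counts_t, total_0h=5e8, total_t=1.2e9))
```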
Abstract:
This review focuses on methodological approaches used to study the composition of human faecal microbiota. Gene sequencing is the most accurate tool for revealing the phylogenetic relationships between bacteria. The main application of fluorescence in situ hybridization (FISH) in both microscopy and flow cytometry is to enumerate faecal bacteria. While flow cytometry is a very fast method, FISH microscopy still has a considerably lower detection limit.
Abstract:
Background: Epidemiological studies suggest that soy consumption contributes to the prevention of coronary heart disease. The proposed anti-atherogenic effects of soy appear to be carried by the soy isoflavones, with genistein as the most abundant compound. Aim of the study: To identify proteins or pathways by which genistein might exert its protective activities on atherosclerosis, we analyzed the proteomic response of primary human umbilical vein endothelial cells (HUVEC) that were exposed to the pro-atherosclerotic stressors homocysteine or oxidized low-density lipoprotein (ox-LDL). Methods: HUVEC were incubated with physiological concentrations of homocysteine or ox-LDL in the absence and presence of genistein at concentrations that can be reached in human plasma by a diet rich in soy products (2.5 µM) or by pharmacological intervention (25 µM). Proteins from HUVEC were separated by two-dimensional polyacrylamide gel electrophoresis and those that showed altered expression levels upon genistein treatment were identified by peptide mass fingerprints derived from tryptic digests of the protein spots. Results: Several proteins were found to be differentially affected by genistein. The most interesting proteins that were potently decreased by homocysteine treatment were annexin V and lamin A. Annexin V is an antithrombotic molecule, and mutations in nuclear lamin A have been found to result in perturbations of plasma lipids associated with hypertension. Genistein at low and high concentrations reversed the stressor-induced decrease of these anti-atherogenic proteins. Ox-LDL treatment of HUVEC resulted in an increase in ubiquitin conjugating enzyme 12, a protein involved in foam cell formation. Treatment with genistein at both doses reversed this effect. Conclusions: Proteome analysis allows the identification of potential interactions of dietary components in the molecular process of atherosclerosis and consequently provides a powerful tool to define biomarkers of response.
Abstract:
Introduction: A high saturated fatty acid intake is a well recognized risk factor for coronary heart disease development. More recently, a high intake of n-6 polyunsaturated fatty acids (PUFA) in combination with a low intake of the long-chain n-3 PUFA eicosapentaenoic acid and docosahexaenoic acid has also been implicated as an important risk factor. Aim: To compare total dietary fat and fatty acid intake measured by chemical analysis of duplicate diets with nutritional database analysis of estimated dietary records collected over the same 3-day study period. Methods: Total fat was analysed using Soxhlet extraction and the individual fatty acid content of the diet was subsequently determined by gas chromatography. Estimated dietary records were analysed using a nutrient database which was supplemented with a selection of dishes commonly consumed by study participants. Results: Bland & Altman statistical analysis demonstrated a lack of agreement between the two dietary assessment techniques for determining dietary fat and fatty acid intake. Conclusion: The lack of agreement observed between the dietary evaluation techniques may be attributed to inadequacies in either or both assessment techniques. This study highlights the difficulties that may be encountered when attempting to accurately evaluate dietary fat intake in the population.
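Bland & Altman's limits-of-agreement analysis compares two measurement techniques by examining the differences between paired measurements: the mean difference estimates the bias, and the bias ± 1.96 standard deviations of the differences gives the 95% limits of agreement. The sketch below shows the calculation on hypothetical paired fat-intake values (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical paired measurements of total fat intake (g/day) for 12 subjects:
# chemical analysis of duplicate diets vs nutrient-database analysis of records.
duplicate_diet = np.array([78, 92, 65, 110, 88, 73, 95, 102, 69, 85, 91, 77], float)
diet_record    = np.array([70, 99, 60, 125, 80, 81, 88, 115, 62, 90, 84, 85], float)

# Bland & Altman limits of agreement: mean bias +/- 1.96 SD of the differences.
diff = duplicate_diet - diet_record
mean_pair = (duplicate_diet + diet_record) / 2    # x-axis of a Bland-Altman plot
bias = diff.mean()
sd = diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.1f} g/day, 95% limits of agreement = ({lower:.1f}, {upper:.1f})")
```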