24 results for Opportunity discovery and exploitation
in CentAUR: Central Archive University of Reading - UK
Abstract:
Excavations at the Pre-Pottery Neolithic site of WF16 in the Southern Levant produced an archaeobotanical assemblage comprising plant macro-fossils and wood charcoal. As with all such assemblages, its species composition most likely provides a biased reflection of the Neolithic woodland that was exploited, owing to cultural selection and differential preservation. As a means of facilitating its interpretation, a survey was undertaken of a relatively undisturbed patch of gallery woodland associated with a permanent watercourse at Hammam Adethni, approximately four kilometres south-east of WF16. The substantial overlap between the species within this woodland and those in the archaeobotanical assemblage suggests that this present-day woodland provides an analogue for that of the Neolithic, and may therefore indicate what other plant resources the inhabitants of WF16 may have exploited but which have left no archaeological trace. The interpretation of the results is supported by a comparative study of wood charcoal from present-day Bedouin hearths in Wadi Faynan.
Abstract:
The first haploid angiosperm, a dwarf form of cotton with half the normal chromosome complement, was discovered in 1920, and in the ninety years since then such plants have been identified in many other species. They can occur spontaneously, or can be induced by modified pollination methods in vivo or by in vitro culture of immature male or female gametophytes. Haploids represent an immediate, one-stage route to homozygous diploids and thence to F(1) hybrid production. The commercial exploitation of heterosis in such F(1) hybrids led to the development of hybrid seed companies and subsequently to the GM revolution in agriculture. This review describes the range of techniques available for the isolation or induction of haploids and discusses their value in a range of areas, from fundamental research on mutant isolation and transformation, through to applied aspects of quantitative genetics and plant breeding. It also focuses on how molecular methods have recently been used to explore some of the underlying aspects of this fascinating developmental phenomenon.
Abstract:
Diabetes, like many diseases and biological processes, is not mono-causal. On the one hand, multifactorial studies with complex experimental designs are required for its comprehensive analysis. On the other hand, the data from these studies often include a substantial amount of redundancy, such as proteins that are typically represented by a multitude of peptides. Coping simultaneously with both complexities (experimental and technological) makes data analysis a challenge for bioinformatics.
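One common way to handle this peptide-level redundancy is to collapse peptide measurements to a single value per protein before statistical analysis. A minimal sketch in Python, using hypothetical protein/peptide names and intensities and median aggregation (one of several reasonable choices, not the method of any particular study):

```python
from statistics import median

# Hypothetical peptide-level intensities: each protein is measured
# via several peptides, introducing redundancy into the data set.
peptide_intensities = {
    ("INS", "pep1"): 10.2,
    ("INS", "pep2"): 9.8,
    ("INS", "pep3"): 10.5,
    ("GCG", "pepA"): 4.1,
    ("GCG", "pepB"): 4.4,
}

def aggregate_to_proteins(peptides):
    """Collapse peptide measurements to one value per protein (median)."""
    by_protein = {}
    for (protein, _pep), value in peptides.items():
        by_protein.setdefault(protein, []).append(value)
    return {protein: median(values) for protein, values in by_protein.items()}

print(aggregate_to_proteins(peptide_intensities))
# -> {'INS': 10.2, 'GCG': 4.25}
```

The aggregated table then has one row per protein, which simplifies the downstream multifactorial analysis.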
Abstract:
There are three key components for developing a metadata system: a container structure laying out the key semantic issues of interest and their relationships; an extensible controlled vocabulary providing possible content; and tools to create and manipulate that content. While metadata systems must allow users to enter their own information, the use of a controlled vocabulary both imposes consistency of definition and ensures comparability of the objects described. Here we describe the controlled vocabulary (CV) and metadata creation tool built by the METAFOR project for use in describing the climate models, simulations and experiments of the fifth Coupled Model Intercomparison Project (CMIP5). The CV and resulting tool chain introduced here are designed for extensibility and reuse and should find applicability in many more projects.
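The consistency benefit of a controlled vocabulary can be illustrated with a tiny validation sketch. The vocabulary entries below are hypothetical placeholders (the METAFOR CV itself is far richer and hierarchical); the point is only the pattern of checking user-entered metadata against allowed terms:

```python
# Hypothetical controlled vocabulary: each metadata field maps to the
# set of terms that are allowed as its content.
CONTROLLED_VOCAB = {
    "component_type": {"Atmosphere", "Ocean", "SeaIce", "LandSurface"},
    "experiment": {"historical", "piControl", "rcp45", "rcp85"},
}

def validate(record):
    """Return a list of (field, value) pairs that violate the CV."""
    errors = []
    for field, value in record.items():
        allowed = CONTROLLED_VOCAB.get(field)
        if allowed is not None and value not in allowed:
            errors.append((field, value))
    return errors

record = {"component_type": "Ocean", "experiment": "histroical"}  # typo
print(validate(record))  # the misspelled experiment term is flagged
```

Because every tool in the chain validates against the same CV, descriptions produced by different users remain directly comparable.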
Abstract:
The notion that wetlands are among the most productive environments in the world is widely quoted, but its relationship with the exploitation of wetland ecosystems during the prehistoric and early historic periods has been the subject of few investigations. The current paper discusses the primary production of different wetland habitats and its relationship to the resource potential of these habitats and their actual exploitation, using recent results from the Humber Wetlands Survey. It is argued that during the early Holocene, wetland landscapes were central to the subsistence economy and that a clear association exists between the primary productivity of wetlands and the intensity of exploitation. With the introduction of agriculture, however, wetland habitats became increasingly peripheral to the economy.
Abstract:
This paper examines the determinants of cross-platform arbitrage profits. We develop a structural model that enables us to decompose the likelihood of an arbitrage opportunity into three distinct factors: the fixed cost to trade the opportunity, the extent to which one of the platforms delays a price update, and the impact of the order flow on the quoted prices (inventory and asymmetric information effects). We then investigate the predictions of the theoretical model for the European bond market by estimating a probit model. Our main finding is that the empirical results strongly corroborate the predictions of the structural model. The occurrence of a cross-market arbitrage opportunity has a certain degree of predictability: the optimal ex ante scenario combines a low level of spreads on both platforms, a time of day close to the end of trading hours, and a high volume of trade.
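The probit setup described above can be sketched numerically. The coefficients and covariate values below are purely hypothetical (the paper's actual estimates are not reproduced here); the sketch only shows how the predicted probability of an arbitrage opportunity is computed as the standard normal CDF of a linear index:

```python
from math import erf, sqrt

def probit_prob(x, beta):
    """P(arbitrage opportunity) = Phi(x . beta), with Phi the standard normal CDF."""
    xb = sum(xi * bi for xi, bi in zip(x, beta))
    return 0.5 * (1.0 + erf(xb / sqrt(2.0)))

# Hypothetical coefficients: intercept, spread level (negative effect),
# hours remaining until the close (negative: being near the close raises
# the probability), and trade volume (positive effect).
beta = [-1.0, -0.8, -0.2, 0.5]
x_low_spread_late_day = [1.0, 0.2, 0.5, 1.5]
x_high_spread_morning = [1.0, 2.0, 6.0, 0.3]

p1 = probit_prob(x_low_spread_late_day, beta)
p2 = probit_prob(x_high_spread_morning, beta)
print(p1 > p2)  # the ex ante favourable scenario has the higher probability
```

With these illustrative signs, the "low spreads, near the close, high volume" scenario yields the higher predicted probability, matching the ex ante scenario described in the abstract.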
Abstract:
More data will be produced in the next five years than in the entire history of humankind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report, which can be found at http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy and which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive, and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond agilely to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy.
The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06)—since emulated internationally—pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals. To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders.
A coherent strategy is essential in order to establish and sustain the UK as an international leader of well-curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and to roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation plus the skills developed will launch significant advances in research, in business, in professional practice and in government with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.
Abstract:
This paper documents the extent of inequality of educational opportunity in India spanning the period 1983–2004 using National Sample Surveys. We build on recent developments in the literature that have operationalized concepts of inequality of opportunity theory and construct several indices of inequality of educational opportunity for an adult sample. Kerala stands out as the least opportunity-unequal state. Rajasthan, Gujarat, and Uttar Pradesh experienced large-scale falls in the ranking of inequality of opportunities. By contrast, West Bengal and Orissa made significant progress in reducing inequality of opportunity. We also examine the links between progress toward equality of opportunity and a selection of pro-poor policies.
Abstract:
This paper examines the time-varying nature of price discovery in eighteenth-century cross-listed stocks. Specifically, we investigate how quickly news is reflected in prices for two of the great moneyed companies, the Bank of England and the East India Company, over the period 1723 to 1794. These British companies were cross-listed on the London and Amsterdam stock exchanges, and news between the capitals flowed mainly via the boats that transported mail. We examine in detail the historical context surrounding the defining events of the period, and use these as a guide to how the data should be analysed. We show that both trading venues contributed to price discovery, and although the London venue was more important for these stocks, its importance varies over time.
Abstract:
Measurements of affinity and efficacy are fundamental for work on agonists, both in drug discovery and in basic studies on receptors. In this review I consider methods for measuring affinity and efficacy at G protein coupled receptors (GPCRs). Agonist affinity may be estimated in terms of the dissociation constant for agonist binding to a receptor using ligand binding or functional assays. It has, however, been suggested that measurements of affinity are always contaminated by efficacy, so that it is impossible to separate the two parameters. Here I show that for many GPCRs, if receptor/G protein coupling is suppressed, experimental measurements of agonist affinity using ligand binding (Kobs) provide quite accurate measures of the agonist microscopic dissociation constant (KA). In pharmacological functional studies, too, good estimates of agonist dissociation constants are possible. Efficacy can be quantitated in several ways based on functional data: the maximal effect of the agonist (Emax), the ratio of the agonist dissociation constant to the concentration of agonist giving a half-maximal effect in a functional assay (Kobs/EC50), and a combined parameter Emax·Kobs/EC50. Here I show that Emax·Kobs/EC50 provides the best assessment of efficacy for a range of agonists across the full range of efficacy, from full to partial agonists. Considerable evidence now suggests that ligand efficacy may be dependent on the pathway used to assess it. The efficacy of a ligand may, therefore, be multidimensional. It is still, however, necessary to have accurate measures of efficacy in different pathways.
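The combined parameter can be illustrated with a short worked calculation. The numbers below are hypothetical (chosen only so that the full agonist has a receptor reserve, i.e. EC50 well below Kobs, while the partial agonist has a lower Emax and EC50 near Kobs); they are not values from the review:

```python
def efficacy_index(emax, kobs, ec50):
    """Combined efficacy parameter Emax * Kobs / EC50 (illustrative units)."""
    return emax * kobs / ec50

# Hypothetical values: a full agonist often shows EC50 well below Kobs
# (receptor reserve); a partial agonist has lower Emax and EC50 near Kobs.
full_agonist = efficacy_index(emax=1.0, kobs=1e-6, ec50=1e-8)     # -> 100.0
partial_agonist = efficacy_index(emax=0.4, kobs=1e-6, ec50=8e-7)  # -> 0.5
print(full_agonist, partial_agonist)
```

Note how the combined index separates the two ligands far more sharply than Emax alone (100.0 versus 0.5, rather than 1.0 versus 0.4), which is the motivation for preferring it across the full efficacy range.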
Abstract:
Information provision to address changing requirements can be best supported by content management. Current information technology enables information to be stored in and provided from various distributed sources. Identifying and retrieving relevant information requires effective mechanisms for information discovery and assembly. This paper presents a method which enables the design of such mechanisms, with a set of techniques for articulating and profiling users' requirements, formulating information provision specifications, realising management of information content in repositories, and facilitating response to the user's requirements dynamically during the process of knowledge construction. These functions are represented in an ontology which integrates the capability of the mechanisms. The ontological modelling in this paper has adopted semiotics principles with embedded norms to ensure a coherent course of action represented in these mechanisms. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Web Services for Remote Portlets (WSRP) is gaining attention among portal developers and vendors to enable easy development, increased richness in functionality, pluggability, and flexibility of deployment. Whilst currently not supporting all WSRP functionalities, open-source portal frameworks could in future use WSRP Consumers to access remote portlets found from a WSRP Producer registry service. This implies that we need a central registry for the remote portlets and a more expressive WSRP Consumer interface to implement the remote portlet functions. This paper reports on an investigation into a new system architecture, which includes a Web Services repository, registry, and client interface. The Web Services repository holds portlets as remote resource producers. A new data structure for expressing remote portlets is found and published by populating a Universal Description, Discovery and Integration (UDDI) registry. A remote portlet publish and search engine for UDDI has also been developed. Finally, a remote portlet client interface was developed as a Web application. The client interface supports remote portlet features, as well as window status and mode functions. Copyright (c) 2007 John Wiley & Sons, Ltd.
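The publish-and-search pattern described above can be sketched with a minimal in-memory registry. A real deployment would use a UDDI registry and WSRP SOAP interfaces, so everything here — the record structure, field names and functions — is a hypothetical analogue, not the WSRP or UDDI API:

```python
# Minimal in-memory analogue of a remote-portlet registry with publish
# and keyword search; all names and fields here are illustrative.
registry = []

def publish(name, producer_url, modes, window_states):
    """Register a remote portlet description with its Producer endpoint."""
    registry.append({
        "name": name,
        "producer_url": producer_url,
        "modes": modes,                 # e.g. view / edit / help
        "window_states": window_states, # e.g. normal / minimized / maximized
    })

def search(keyword):
    """Find registered portlets whose name contains the keyword."""
    return [p for p in registry if keyword.lower() in p["name"].lower()]

publish("WeatherPortlet", "http://example.org/wsrp", ["view", "edit"],
        ["normal", "maximized"])
print([p["name"] for p in search("weather")])
```

The mode and window-state fields mirror the portlet features the client interface must support; storing them in the registry record is what lets a Consumer discover not just where a portlet lives but what it can do.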
Abstract:
This study describes the discovery and characterisation of a novel aminopeptidase A from the venom of B. g. rhinoceros and highlights its potential biological importance. Like mammalian aminopeptidases, rhiminopeptidase A may play a role in altering the blood pressure and brain function of victims. Furthermore, it could have additional effects on the biological functions of other host proteins by cleaving their N-terminal amino acids. This study points towards the importance of complete analysis of the individual components of snake venom in order to develop effective therapies for snake bites.