30 results for Information extraction strategies
Abstract:
A new robust neurofuzzy model construction algorithm has been introduced for modeling a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method has been introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces so as to enhance model transparency, with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
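The selection mechanism can be pictured with a short sketch: a forward Gram-Schmidt (orthogonal least squares) loop that scores each candidate rule regressor by a regularized error-reduction ratio plus a weighted log term standing in for the D-optimality criterion. This is a minimal illustration under those assumptions, not the authors' exact algorithm; the function name `ols_doptimality_select` and the parameters `beta` and `lam` are hypothetical.

```python
import numpy as np

def ols_doptimality_select(P, y, beta=1e-3, lam=1e-4, max_terms=10):
    """Forward Gram-Schmidt (orthogonal least squares) selection of candidate
    rule regressors, scored by a regularized error-reduction ratio plus a
    weighted D-optimality (log) term.  Illustrative sketch only."""
    P = np.asarray(P, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = P.shape
    selected, W = [], []                        # chosen indices, orthogonalized columns
    for _ in range(min(max_terms, m)):
        best_j, best_score, best_w = None, -np.inf, None
        for j in range(m):
            if j in selected:
                continue
            w = P[:, j].copy()
            for wq in W:                        # orthogonalize against chosen terms
                w -= (wq @ P[:, j]) / (wq @ wq) * wq
            wTw = float(w @ w)
            if wTw < 1e-12:                     # numerically dependent column
                continue
            g = (w @ y) / (wTw + lam)           # locally regularized coefficient
            err = g * g * wTw / (y @ y)         # error-reduction ratio
            score = err + beta * np.log(wTw)    # add weighted D-optimality term
            if score > best_score:
                best_j, best_score, best_w = j, score, w
        if best_j is None or best_score <= 0:   # weighted cost makes stopping automatic
            break
        selected.append(best_j)
        W.append(best_w)
    return selected
```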
Abstract:
Remote sensing can potentially provide information useful in improving pollution transport modelling in agricultural catchments. Realisation of this potential will depend on the availability of the raw data, the development of information extraction techniques, and the impact of assimilating the derived information into models. High spatial resolution hyperspectral imagery of a farm near Hereford, UK, is analysed. A technique is described to automatically identify the soil and vegetation endmembers within a field, enabling vegetation fractional cover estimation. Aerially acquired laser altimetry is used to produce digital elevation models of the site. At the subfield scale, the hypothesis that higher-resolution topography will make a substantial difference to contaminant transport is tested using the AGricultural Non-Point Source (AGNPS) model. Slope aspect and direction information are extracted from the topography at different resolutions to study the effects on soil erosion, deposition, runoff and nutrient losses. Field-scale models are often used to model drainage water, nitrate and runoff/sediment loss, but the demanding input data requirements make scaling up to catchment level difficult. By determining the input range of spatial variables gathered from EO data, and comparing the response of models to the range of variation measured, the critical model inputs can be identified. Response surfaces for variation in these inputs are presented and constrain uncertainty in model predictions. Although optical earth observation analysis can provide fractional vegetation cover, cloud cover and semi-random weather patterns can hinder data acquisition in Northern Europe. A spring and autumn cloud-cover analysis is carried out over seven UK sites close to agricultural districts, using historic satellite image metadata, climate modelling and historic ground weather observations. Results are assessed in terms of acquisition probability and implications for future earth observation missions. (C) 2003 Elsevier Ltd. All rights reserved.
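The fractional-cover step rests on the standard two-endmember linear mixing model, sketched below under the assumption that the soil and vegetation endmember spectra are already known (identifying them automatically per field is the paper's contribution and is not reproduced here); `fractional_cover` and its argument names are illustrative.

```python
import numpy as np

def fractional_cover(pixels, veg_end, soil_end):
    """Per-pixel vegetation fractional cover from a two-endmember linear
    mixing model: pixel = f * veg_end + (1 - f) * soil_end.

    pixels   : (n_pixels, n_bands) reflectance spectra
    veg_end  : (n_bands,) vegetation endmember spectrum
    soil_end : (n_bands,) soil endmember spectrum
    """
    d = veg_end - soil_end                      # mixing direction in spectral space
    # Least-squares projection of (pixel - soil) onto the veg-soil axis
    f = (pixels - soil_end) @ d / (d @ d)
    return np.clip(f, 0.0, 1.0)                 # physical fractions lie in [0, 1]

# Example with synthetic 4-band spectra
soil = np.array([0.20, 0.25, 0.30, 0.32])
veg  = np.array([0.05, 0.08, 0.04, 0.45])
mixed = 0.6 * veg + 0.4 * soil
print(fractional_cover(mixed[None, :], veg, soil))   # ~[0.6]
```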
Abstract:
This paper explores methodological issues related to research into second language listening strategies. We argue that a number of central questions regarding research methodology in this line of enquiry are underexamined, and we engage in the discussion of three key methodological questions: (1) To what extent is a verbal report a valid and reliable way of eliciting information about strategies? (2) Should we control for learners' level of linguistic knowledge when examining their listening strategy use? and (3) What are the problems surrounding the analysis of data gained through verbal reports? We discuss each of these three methodological issues within the framework of a research project investigating listening strategies deployed by learners of French in secondary schools in England. Implications from these findings for future research are discussed.
Abstract:
Explaining the diversity of languages across the world is one of the central aims of typological, historical, and evolutionary linguistics. We consider the effect of language contact (the number of non-native speakers a language has) on the way languages change and evolve. By analysing hundreds of languages within and across language families, regions, and text types, we show that languages with greater levels of contact typically employ fewer word forms to encode the same information content (a property we refer to as lexical diversity). Based on three types of statistical analyses, we demonstrate that this variance can in part be explained by the impact of non-native speakers on information encoding strategies. Finally, we argue that languages are information encoding systems shaped by the varying needs of their speakers. Language evolution and change should be modeled as the co-evolution of multiple intertwined adaptive systems: on the one hand, the structure of human societies and human learning capabilities, and on the other, the structure of language.
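The abstract does not give the exact operationalisation of lexical diversity, but a common proxy is the number of distinct word forms observed in fixed-size text samples. The sketch below uses that proxy purely for illustration; it is not the study's own measure, and all names in it are assumptions.

```python
import random

def lexical_diversity(tokens, sample_size=1000, n_samples=100, seed=0):
    """Mean number of distinct word forms (types) per fixed-size sample of
    tokens -- a hypothetical proxy for 'fewer word forms encoding the same
    information content'; the study's actual measure may differ."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_samples):
        start = rng.randrange(0, max(1, len(tokens) - sample_size))
        window = tokens[start:start + sample_size]
        counts.append(len(set(window)))         # distinct word forms in the window
    return sum(counts) / len(counts)

text_a = "the dog saw the dog and the dog ran home".split()
text_b = "a fox watched an owl while one hare fled west".split()
print(lexical_diversity(text_a, sample_size=5, n_samples=20))   # lower diversity
print(lexical_diversity(text_b, sample_size=5, n_samples=20))   # higher diversity
```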
Abstract:
Children's eye movements during reading. In this chapter, we evaluate the literature to date on children's eye movements during reading. We describe the basic developmental changes that occur in eye movement behaviour during reading, discuss age-related changes in the extent and time course of information extraction during fixations in reading, and compare the effects of visual and linguistic manipulations in the text on children's eye movement behaviour in relation to skilled adult readers. We argue that future research will benefit from examining how eye movement behaviour during reading develops in relation to language and literacy skills, and that the use of computational modelling with children's eye movement data may improve our understanding of the mechanisms that underlie the progression from beginning to skilled reader.
Abstract:
The aim of this paper is to study the impact of channel state information on the design of cooperative transmission protocols. This is motivated by the fact that the performance gain achieved by cooperative diversity comes at the price of extra bandwidth resource consumption. Several opportunistic relaying strategies are developed to fully utilize the different types of a priori channel information. Information-theoretic measures such as the outage probability and the diversity-multiplexing tradeoff are derived for the proposed protocols. The analytical and numerical results demonstrate that the use of such a priori information increases the spectral efficiency of cooperative diversity, especially at low signal-to-noise ratio.
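As a rough illustration of the kind of information-theoretic evaluation described, the sketch below estimates by Monte Carlo the outage probability of a generic opportunistic decode-and-forward scheme over Rayleigh fading, where the relay with the best bottleneck channel is selected. This is a textbook setting, not the paper's specific protocols, and all parameter names are assumptions.

```python
import numpy as np

def outage_probability(num_relays, snr_db, rate=1.0, trials=200_000, seed=0):
    """Monte Carlo outage probability for opportunistic relay selection over
    Rayleigh fading: the relay with the largest min(source-relay,
    relay-destination) channel gain is chosen.  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    h_sr = rng.exponential(1.0, (trials, num_relays))   # |h|^2 source -> relay
    h_rd = rng.exponential(1.0, (trials, num_relays))   # |h|^2 relay -> destination
    best = np.max(np.minimum(h_sr, h_rd), axis=1)       # opportunistic selection
    # Half-duplex relaying consumes two channel uses, hence the factor 1/2
    capacity = 0.5 * np.log2(1.0 + snr * best)
    return float(np.mean(capacity < rate))

for k in (1, 2, 4):
    print(k, outage_probability(k, snr_db=10))          # outage drops as relays are added
```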
Abstract:
Information costs play a key role in determining the relative efficiency of alternative organisational structures. The choice of locations at which information is stored in a firm is an important determinant of its information costs. A specific example of information use is modelled in order to explore what factors determine whether information should be stored centrally or locally and if it should be replicated at different sites. This provides insights into why firms are structured hierarchically, with some decisions and tasks being performed centrally and others at different levels of decentralisation. The effects of new information technologies are also discussed. These can radically alter the patterns and levels of information costs within a firm and so can cause substantial changes in organisational structure.
Abstract:
A future goal in nuclear fuel reprocessing is the conversion or transmutation of the long-lived radioisotopes of minor actinides, such as americium, into short-lived isotopes by irradiation with neutrons. In order to achieve this transmutation, it is necessary to separate the minor actinides(III) [An(III)] from the lanthanides(III) [Ln(III)] by solvent extraction (partitioning), because the lanthanides absorb neutrons too effectively and hence limit neutron capture by the transmutable actinides. Partitioning using ligands containing only carbon, hydrogen, nitrogen and oxygen atoms is desirable because they are completely incinerable and thus the final volume of waste is minimised [1]. Nitric acid media will be used in the extraction experiments because it is envisaged that the An(III)/Ln(III) separation process could take place after the PUREX process. There is no doubt that the correct design of a molecule that is capable of acting as a ligand or extraction reagent is required for the effective separation of metal ions such as actinides(III) from lanthanides. Recent attention has been directed towards heterocyclic ligands for the preferential separation of the minor actinides. Although such molecules have a rich chemistry, this is only now becoming sufficiently well understood in relation to the partitioning process [2]. The molecules shown in Figures 1 and 2 will be the principal focus of this study. Although the examples chosen here are rather specific, the guidelines can be extended to other areas such as the separation of precious metals [3].
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with the preservation of the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively good online representation of genocide denial groups and to changes in the technological strategies of Holocaust denial and other far-right groups. It draws on the author's work in providing IT support for a UK-based Non-Governmental Organization supporting survivors of genocide in Rwanda.
Abstract:
With the increasing production and consumption of potato and its products, glycoalkaloid (GA) formation and toxicity are likely to become an important focus for food safety researchers and public health agencies. Not only the presence of GA, particularly in the form of α-solanine and α-chaconine, but also the changes occurring as a result of various post-harvest handling practices and storage, are critical issues influencing the quality of stored potatoes. Studies on various factors (pre-harvest, during harvest and post-harvest) affecting GA have been carried out from time to time, but it is difficult to compare the results of one study with another due to wide variation in the parameters chosen. This review aims to develop a clear understanding of these issues. Published information on the types of GA, their effects on health, their typical concentrations in potatoes, their formation mechanisms, and how their levels can be controlled by following appropriate post-harvest practices and storage regimes is critically analysed. The levels of GA in potato can be controlled effectively by adopting appropriate post-harvest practices. Further studies are necessary, however, to investigate best practices which either completely prevent or substantially retard their formation. (C) 2008 Society of Chemical Industry.
Abstract:
There is a growing body of information on sex differences in callitrichid behaviour, including the animals' performance in food tasks. For example, both reproductive and non-reproductive adult females have been found to be more successful than adult males in solving food tasks. In this study, ten adult male and ten adult female common marmosets (Callithrix jacchus), housed individually, were tested with an unfamiliar task that involved the extraction of an embedded food. The task was to open a plastic canister that contained a raisin; the open end was covered with parchment paper. Each marmoset was given 15 trials in three blocks of 5 consecutive days. We measured the latency for each animal to open the lid and get the raisin by one of five strategies that spontaneously emerged. The females learned the task faster and more efficiently than the males; all the females opened the canister on day 1, for instance, in contrast to seven of the males on the same day. Females also progressively decreased the time that they took to open the tube. The final latency on day 15, for instance, was significantly shorter for the females. These results are consistent with the relevant literature for callitrichids and cannot be accounted for in terms of differences in mental abilities, strength, hand morphology, or energy requirements. Further investigation is necessary to clarify the reasons for these differences.
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.
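Ontology-driven indexing of the kind DREAM describes can be illustrated, very loosely, as expanding a query concept through an ontology and matching the expanded concept set against clips' semantic tags. The toy data structures and functions below are purely hypothetical and do not reflect the DREAM implementation.

```python
# Purely illustrative ontology-driven retrieval; not the DREAM system's API.
ontology = {                       # concept -> narrower concepts (hypothetical)
    "explosion": ["fireball", "debris", "shockwave"],
    "weather":   ["rain", "fog", "lightning"],
}

clip_index = {                     # clip id -> semantic tags (hypothetical)
    "clip_001": {"fireball", "slow-motion"},
    "clip_002": {"rain", "night"},
    "clip_003": {"debris", "daylight"},
}

def expand(concept):
    """Expand a query concept with its narrower concepts from the ontology."""
    return {concept, *ontology.get(concept, [])}

def retrieve(concept):
    """Return clips whose semantic tags intersect the expanded concept set."""
    terms = expand(concept)
    return [cid for cid, tags in clip_index.items() if tags & terms]

print(retrieve("explosion"))       # -> ['clip_001', 'clip_003']
```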
Abstract:
When competing strategies for development programs, clinical trial designs, or data analysis methods exist, the alternatives need to be evaluated in a systematic way to facilitate informed decision making. Here we describe a refinement of the recently proposed clinical scenario evaluation framework for the assessment of competing strategies. The refinement is achieved by subdividing the key elements previously proposed into new categories, distinguishing between quantities that can be estimated from preexisting data and those that cannot, and between aspects under the control of the decision maker and those determined by external constraints. The refined framework is illustrated by an application to a design project for an adaptive seamless design for a clinical trial in progressive multiple sclerosis.
Abstract:
In the emerging digital economy, the management of information in aerospace and construction organisations faces a particular challenge due to the ever-increasing volume of information and the extensive use of information and communication technologies (ICTs). This paper addresses the problems of information overload and the value of information in both industries by providing some cross-disciplinary insights. In particular, it identifies major issues and challenges in current information evaluation practice in these two industries. Interviews were conducted to obtain a spectrum of industrial perspectives (director/strategic, project management and ICT/document management) on these issues, in particular on information storage and retrieval strategies and the contrasting approaches to knowledge and information management of personalisation and codification. Industry feedback was collected through a follow-up workshop to strengthen the findings of the research. An information-handling agenda is outlined for the development of a future Information Evaluation Methodology (IEM), which could facilitate the practice of codifying high-value information in order to support through-life knowledge and information management (K&IM) practice.