6 results for elaboration likelihood model

in Aston University Research Archive


Relevance:

80.00%

Publisher:

Abstract:

The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
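The sampling step described in this abstract can be illustrated with a minimal Metropolis-Hastings sketch on a toy bimodal target (wind-retrieval posteriors are multimodal because of directional ambiguity in the scatterometer signal). The target density, step size, and all names below are invented for illustration; this is not the authors' model or code.

```python
import math
import random

def log_posterior(theta):
    # Toy bimodal posterior standing in for the multimodal wind posterior:
    # two Gaussian modes at theta = +2 and theta = -2.
    val = math.exp(-0.5 * (theta - 2.0) ** 2) + math.exp(-0.5 * (theta + 2.0) ** 2)
    return math.log(max(val, 1e-300))  # floor guards against log(0) underflow

def metropolis(n_samples, step=1.5, seed=0):
    rng = random.Random(seed)
    theta = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(theta)).
        if math.log(rng.random()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis(5000)
# With a large enough step size the chain hops between both modes;
# a sparse sequential GP approximation (as in the paper) would replace
# sampling with a cheaper deterministic posterior estimate.
```

The enhanced MCMC scheme and the sparse, sequential Bayes algorithm in the paper address exactly the two weaknesses visible even in this toy: slow mixing across modes, and cost that grows with the data set size.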

Relevance:

40.00%

Publisher:

Abstract:

This chapter examines the contexts in which people will process more deeply, and therefore be more influenced by, a position that is supported by either a numerical majority or minority. The chapter reviews the major theories of majority and minority influence with reference to which source condition is associated with most message processing (and where relevant, the contexts under which this occurs) and experimental research examining these predictions. The chapter then presents a new theoretical model (the source-context-elaboration model, SCEM) that aims to integrate the disparate research findings. The model specifies the processes underlying majority and minority influence, the contexts under which these processes occur and the consequences for attitudes changed by majority and minority influence. The chapter then describes a series of experiments that address each of the aspects of the theoretical model. Finally, a range of research-related issues are discussed and future issues for the research area as a whole are considered.

Relevance:

30.00%

Publisher:

Abstract:

Adherence of pathogenic Escherichia coli and Salmonella spp. to host cells is in part mediated by curli fimbriae which, along with other virulence determinants, are positively regulated by RpoS. Interested in the role and regulation of curli (SEF17) fimbriae of Salmonella enteritidis in poultry infection, we tested the virulence of naturally occurring S. enteritidis PT4 strains 27655R and 27655S which displayed constitutive and null expression of curli (SEF17) fimbriae, respectively, in a chick invasion assay and analysed their rpoS alleles. Both strains were shown to be equally invasive and as invasive as a wild-type phage type 4 strain and an isogenic derivative defective for the elaboration of curli. We showed that the rpoS allele of 27655S was intact even though this strain was non-curliated and we confirmed that a S. enteritidis rpoS::strr null mutant was unable to express curli, as anticipated. Strain 27655R, constitutively curliated, possessed a frameshift mutation at position 697 of the rpoS coding sequence which resulted in a truncated product and remained curliated even when transduced to rpoS::strr. Additionally, rpoS mutants are known to be cold-sensitive, a phenotype confirmed for strain 27655R. Collectively, these data indicated that curliation was not a significant factor for pathogenesis of S. enteritidis in this model and that curliation of strains 27655R and 27655S was independent of RpoS. Significantly, strain 27655R possessed a defective rpoS allele and remained virulent. Here was evidence that supported the concept that different naturally occurring rpoS alleles may generate varying virulence phenotypic traits.

Relevance:

30.00%

Publisher:

Abstract:

The primary objective of this research was to understand what kinds of knowledge and skills people use in "extracting" relevant information from text and to assess the extent to which expert systems techniques could be applied to automate the process of abstracting. The approach adopted in this thesis is based on research in cognitive science, information science, psycholinguistics and textlinguistics. The study addressed the significance of domain knowledge and heuristic rules by developing an information extraction system, called INFORMEX. This system, which was implemented partly in SPITBOL and partly in PROLOG, used a set of heuristic rules to analyse five scientific papers of expository type, to interpret the content in relation to the key abstract elements and to extract a set of sentences recognised as relevant for abstracting purposes. The analysis of these extracts revealed that an adequate abstract could be generated. Furthermore, INFORMEX showed that a rule-based system was a suitable computational model to represent experts' knowledge and strategies. This computational technique provided the basis for a new approach to the modelling of cognition. It showed how experts tackle the task of abstracting by integrating formal knowledge as well as experiential learning. This thesis demonstrated that empirical and theoretical knowledge can be effectively combined in expert systems technology to provide a valuable starting approach to automatic abstracting.
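INFORMEX itself was written in SPITBOL and PROLOG; as a rough illustration of the heuristic, cue-phrase style of sentence extraction the abstract describes, here is a minimal Python sketch. The cue phrases and weights are invented for illustration and are far simpler than the thesis's actual rules.

```python
import re

# Illustrative cue phrases with weights; the real INFORMEX rule set was
# richer and informed by domain knowledge of expository scientific text.
CUE_WEIGHTS = {
    "we show": 2, "results": 2, "in conclusion": 3,
    "this paper": 2, "we propose": 2, "significant": 1,
}

def extract(text, top_n=2):
    """Score each sentence by cue-phrase hits; keep the top_n in document order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scored = []
    for s in sentences:
        low = s.lower()
        score = sum(w for cue, w in CUE_WEIGHTS.items() if cue in low)
        scored.append((score, s))
    best = sorted(scored, key=lambda p: -p[0])[:top_n]
    chosen = {s for _, s in best}
    return [s for s in sentences if s in chosen]

doc = ("This paper studies abstracting. The weather was fine. "
       "We show that heuristic rules suffice. In conclusion, extraction works.")
selected = extract(doc, top_n=2)
```

Returning the winners in their original order preserves the discourse flow of the source, which matters when the extracted sentences are read together as an abstract.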

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we discuss how discriminative training can be applied to the hidden vector state (HVS) model in different task domains. The HVS model is a discrete hidden Markov model (HMM) in which each HMM state represents the state of a push-down automaton with a finite stack size. In previous applications, maximum-likelihood estimation (MLE) was used to derive the parameters of the HVS model. However, MLE makes a number of assumptions and unfortunately some of these assumptions do not hold. Discriminative training, without making such assumptions, can improve the performance of the HVS model by discriminating the correct hypothesis from the competing hypotheses. Experiments have been conducted in two domains: the travel domain for the semantic parsing task using the DARPA Communicator data and the Air Travel Information Services (ATIS) data, and the bioinformatics domain for the information extraction task using the GENIA corpus. The results demonstrate modest improvements in the performance of the HVS model using discriminative training. In the travel domain, discriminative training of the HVS model gives a relative error reduction rate of 31 percent in F-measure when compared with MLE on the DARPA Communicator data and 9 percent on the ATIS data. In the bioinformatics domain, a relative error reduction rate of 4 percent in F-measure is achieved on the GENIA corpus.
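The reported figures are relative reductions in F-measure error, i.e. reductions in (1 - F). As a worked sketch with invented F-measures (the paper reports only the reduction rates, not the underlying scores), an F-measure moving from 0.90 to 0.931 is a 31 percent relative error reduction:

```python
def relative_error_reduction(f_baseline, f_new):
    """Relative reduction in error (1 - F) going from baseline to new model."""
    err_base = 1.0 - f_baseline
    err_new = 1.0 - f_new
    return (err_base - err_new) / err_base

# Invented numbers for illustration: error shrinks from 0.100 to 0.069,
# a 31% relative reduction, matching the figure reported for the
# DARPA Communicator data.
reduction = relative_error_reduction(0.90, 0.931)
```

This metric rewards closing the remaining gap to perfect F-measure, so the same absolute gain counts for more when the baseline is already strong.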

Relevance:

30.00%

Publisher:

Abstract:

eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with equal properties. Inputs to the WPS, typically thematic geospatial "layers", can be discovered using standardised catalogues, and the outputs tailored to specific end-user needs. Because these layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of different types and levels of uncertainties. Potentially chained to other services to perform ecological forecasting, for example, eHabitat would be an additional component further propagating uncertainties from a potentially long chain of model services. This integration of complex resources increases the challenges in dealing with uncertainty. For such a system, as envisaged by initiatives such as the "Model Web" from the Group on Earth Observations, to be used for policy or decision making, users must be provided with information on the quality of the outputs, since all system components will be subject to uncertainty. UncertWeb will create the Uncertainty-Enabled Model Web by promoting interoperability between data and models with quantified uncertainty, building on existing open, international standards. It is the objective of this paper to illustrate a few key ideas behind UncertWeb, using eHabitat to discuss the main types of uncertainties the WPS has to deal with and to present the benefits of the use of the UncertWeb framework.
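The propagation of quantified input uncertainty through a chained model service can be sketched (very roughly) with Monte Carlo simulation: sample each uncertain input layer, push the samples through the model, and summarise the output distribution. The habitat model and the input distributions below are invented stand-ins, not eHabitat's actual algorithm or layers.

```python
import random
import statistics

def habitat_similarity(ndvi, rainfall):
    # Invented stand-in for an eHabitat-style similarity component:
    # closeness to a reference NDVI of 0.6 and rainfall index of 1.0.
    return max(0.0, 1.0 - abs(ndvi - 0.6) - 0.1 * abs(rainfall - 1.0))

def propagate(n=10000, seed=1):
    """Monte Carlo propagation of Gaussian input uncertainty through the model."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        # Each input layer carries quantified uncertainty (here Gaussian).
        ndvi = rng.gauss(0.6, 0.05)
        rainfall = rng.gauss(1.0, 0.2)
        outputs.append(habitat_similarity(ndvi, rainfall))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, sd = propagate()
# The (mean, sd) pair is the kind of output-quality information a
# decision maker needs alongside the similarity value itself.
```

In a chained Model Web setting, the output samples of one service would become the input samples of the next, so the uncertainty budget accumulates along the chain, which is precisely why each component must report it.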