864 results for Information search – models
Abstract:
Upper-mantle seismic anisotropy has been extensively used to infer both present and past deformation processes at lithospheric and asthenospheric depths. Analysis of shear-wave splitting (mainly from core-refracted SKS phases) provides information regarding upper-mantle anisotropy. We present average measurements of fast-polarization directions at 21 new sites in poorly sampled regions of intra-plate South America, such as northern and northeastern Brazil. Despite sparse data coverage for the South American stable platform, consistent orientations are observed over hundreds of kilometers. Over most of the continent, the fast-polarization direction tends to be close to the absolute plate motion direction given by the hotspot reference model HS3-NUVEL-1A. A previous global comparison of the SKS fast-polarization directions with flow models of the upper mantle showed relatively poor correlation on the continents, which was interpreted as evidence for a large contribution of "frozen" anisotropy in the lithosphere. For the South American plate, our data indicate that one of the reasons for the poor correlation may have been the relatively coarse model of lithospheric thicknesses. We suggest that improved models of upper-mantle flow that are based on more detailed lithospheric thicknesses in South America may help to explain most of the observed anisotropy patterns.
Abstract:
Identifying the correct sense of a word in context is crucial for many tasks in natural language processing (machine translation is an example). State-of-the-art methods for Word Sense Disambiguation (WSD) build models using hand-crafted features that usually capture shallow linguistic information. Complex background knowledge, such as semantic relationships, is typically either not used or used in a specialised manner, owing to the limitations of the feature-based modelling techniques employed. On the other hand, empirical results from the use of Inductive Logic Programming (ILP) systems have repeatedly shown that they can use diverse sources of background knowledge when constructing models. In this paper, we investigate whether this ability of ILP systems can be used to improve the predictive accuracy of models for WSD. Specifically, we examine the use of a general-purpose ILP system as a method to construct a set of features using semantic, syntactic and lexical information. This feature set is then used by a common modelling technique in the field (a support vector machine) to construct a classifier for predicting the sense of a word. In our investigation we examine one-shot and incremental approaches to feature-set construction applied to monolingual and bilingual WSD tasks. The monolingual tasks use 32 verbs and 85 verbs and nouns (in English) from the SENSEVAL-3 and SemEval-2007 benchmarks, while the bilingual WSD task consists of 7 highly ambiguous verbs in translating from English to Portuguese. The results are encouraging: the ILP-assisted models show substantial improvements over those that simply use shallow features. In addition, incremental feature-set construction appears to identify smaller and better sets of features. Taken together, the results suggest that the use of ILP with diverse sources of background knowledge provides a way of making substantial progress in the field of WSD.
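For readers unfamiliar with the modelling step described above, the sketch below shows how a table of boolean features (such as those an ILP system might construct from semantic, syntactic and lexical background knowledge) can be passed to a support vector machine. scikit-learn, the toy feature matrix and the sense labels are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch (not the paper's pipeline): boolean features, e.g. produced by an ILP
# system from background knowledge, fed to a support vector machine.
# scikit-learn and the toy data below are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Rows = occurrences of an ambiguous word; columns = ILP-constructed boolean
# features (each column answers "does clause f_j cover this example?").
X = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])
y = np.array([0, 0, 1, 1, 0, 1])  # sense labels for the target word

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=3)  # rough accuracy estimate
print("mean CV accuracy:", scores.mean())
```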
Abstract:
In this paper we use stochastic volatility models to analyse the behaviour of a series of weekly average ozone measurements. The models considered here have previously been used in problems related to financial time series. Two models are considered and their parameters are estimated using a Bayesian approach based on Markov chain Monte Carlo (MCMC) methods. Both models are applied to the data provided by the monitoring network of the Metropolitan Area of Mexico City. The selection of the best model for that specific data set is performed using the Deviance Information Criterion and the Conditional Predictive Ordinate method.
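As a pointer to how the model-selection step above can be implemented, the following sketch computes the Deviance Information Criterion from posterior draws. The log-likelihood function and the draws are assumed to come from an MCMC run and are placeholders; the sketch is not tied to the paper's specific volatility models.

```python
# Minimal DIC sketch (assumes posterior draws from an MCMC run and a
# log-likelihood function are available).
import numpy as np

def dic(log_lik, posterior_draws, data):
    """DIC = mean deviance + effective number of parameters p_D,
    with deviance D(theta) = -2 * log L(theta | data)."""
    deviances = np.array([-2.0 * log_lik(theta, data) for theta in posterior_draws])
    mean_deviance = deviances.mean()
    theta_bar = np.mean(posterior_draws, axis=0)        # posterior mean
    deviance_at_mean = -2.0 * log_lik(theta_bar, data)
    p_d = mean_deviance - deviance_at_mean              # effective parameters
    return mean_deviance + p_d                          # smaller is better
```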
Abstract:
We consider bipartitions of one-dimensional extended systems whose probability distribution functions describe stationary states of stochastic models. We define estimators of the information shared between the two subsystems. If the correlation length is finite, the estimators stay finite for large system sizes. If the correlation length diverges, so do the estimators. The definition of the estimators is inspired by information theory. We look at several models and compare the behaviors of the estimators in the finite-size scaling limit. Analytical and numerical methods as well as Monte Carlo simulations are used. We show how the finite-size scaling functions change for various phase transitions, including the case where one has conformal invariance.
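As an illustration of the information-theoretic quantity involved, the sketch below is a plug-in estimator of the mutual information between the two halves of sampled configurations. It is a generic example, not the specific estimators defined in the paper.

```python
# Plug-in estimator of the information shared between two halves of a system,
# computed from sampled configurations. Generic illustration only.
from collections import Counter
from math import log

def mutual_information(samples, cut):
    """samples: iterable of configurations (sequences); cut: bipartition index."""
    n = 0
    joint, left, right = Counter(), Counter(), Counter()
    for config in samples:
        a, b = tuple(config[:cut]), tuple(config[cut:])
        joint[(a, b)] += 1
        left[a] += 1
        right[b] += 1
        n += 1
    # I = sum over (a, b) of p(a, b) * log[ p(a, b) / (p(a) * p(b)) ]
    return sum((c / n) * log(c * n / (left[a] * right[b]))
               for (a, b), c in joint.items())
```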
Abstract:
Three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were performed for a series of analgesic cyclic imides using the CoMFA and CoMSIA methods. Significant correlation coefficients (CoMFA, r² = 0.95 and q² = 0.72; CoMSIA, r² = 0.96 and q² = 0.76) were obtained, and the generated models were externally validated using test sets. The final QSAR models, as well as the information gathered from the 3D contour maps, should be useful for the design of novel cyclic imides with improved analgesic activity.
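The q² values quoted above are leave-one-out cross-validated correlation coefficients. The sketch below shows the generic computation for a regression model; scikit-learn and PLS regression are assumptions here, standing in for the field-based CoMFA/CoMSIA models.

```python
# Generic sketch of the leave-one-out cross-validated q^2 used to judge
# QSAR models (scikit-learn PLS regression is an assumption here).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def q_squared(X, y, n_components=2):
    """q^2 = 1 - PRESS / total sum of squares of y about its mean."""
    press = 0.0
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], y[train])
        press += float((model.predict(X[test]).ravel()[0] - y[test][0]) ** 2)
    return 1.0 - press / np.sum((y - y.mean()) ** 2)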
Abstract:
Alzheimer's disease is an ultimately fatal neurodegenerative disease, and BACE-1 has become an attractive, validated target for its therapy, with more than a hundred crystal structures deposited in the PDB. In the present study, we present a new methodology that integrates ligand-based methods with structural information derived from the receptor. 128 BACE-1 inhibitors recently disclosed by GlaxoSmithKline R&D were selected specifically because the crystal structures of 9 of these compounds complexed with BACE-1, as well as of five closely related analogs, have been made available. A new fragment-guided approach was designed to incorporate this wealth of structural information into a CoMFA study, and the methodology was systematically compared to other popular approaches, such as docking, for generating a molecular alignment. The influence of the partial-charge calculation method was also analyzed. Several consistent and predictive models are reported, including one with r² = 0.88, q² = 0.69 and r²pred = 0.72. The models obtained with the new methodology performed consistently better than those obtained by other methodologies, particularly in terms of external predictive power. Visual analyses of the contour maps in the context of the enzyme drew attention to a number of possible opportunities for the development of analogs with improved potency. These results suggest that 3D-QSAR studies may benefit from the additional structural information added by the presented methodology.
Abstract:
Brazil's State of Sao Paulo Research Foundation
Abstract:
We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count the number of all the maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm to generate the first maximal clique in O(log p) communication rounds with O(nm/p) local computation, and to generate each one of the subsequent maximal cliques this algorithm requires O(log p) communication rounds with O(m/p) local computation. The maximal cliques generation algorithm is based on generating all maximal paths in a directed acyclic graph, and we present an algorithm for this problem that uses O(log p) communication rounds with O(m/p) local computation for each maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
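The clique-generation algorithm above reduces to enumerating all maximal paths of a directed acyclic graph. The sketch below solves that subproblem sequentially (a maximal path runs from a source to a sink and cannot be extended); it does not reproduce the BSP/CGM parallelisation.

```python
# Sequential sketch of the subproblem used by the clique-generation algorithm:
# enumerate all maximal paths of a DAG (paths from a source to a sink, i.e.
# paths that cannot be extended). The parallel BSP/CGM version is not shown.
def maximal_paths(adj):
    """adj: dict mapping each vertex to a list of its successors (a DAG)."""
    vertices = set(adj) | {w for ws in adj.values() for w in ws}
    has_pred = {w for ws in adj.values() for w in ws}
    sources = [v for v in vertices if v not in has_pred]

    def extend(path):
        successors = adj.get(path[-1], [])
        if not successors:                 # sink reached: path is maximal
            yield list(path)
            return
        for w in successors:
            yield from extend(path + [w])

    for s in sources:
        yield from extend([s])

# Example: two maximal paths, [a, b, d] and [a, c, d].
print(list(maximal_paths({"a": ["b", "c"], "b": ["d"], "c": ["d"]})))
```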
Abstract:
We introduce in this paper the class of linear models with first-order autoregressive elliptical errors. The score functions and the Fisher information matrices are derived for the parameters of interest, and an iterative process is proposed for parameter estimation. Some robustness aspects of the maximum likelihood estimates are discussed. The normal curvatures of local influence are also derived for some usual perturbation schemes, and diagnostic graphics to assess the sensitivity of the maximum likelihood estimates are proposed. The methodology is applied to analyse the daily log excess returns on Microsoft stock, whose empirical distribution appears to exhibit AR(1) and heavy-tailed errors.
Abstract:
We give a list of all possible schemes for performing amino acid and codon assignments in algebraic models for the genetic code, which are consistent with a few simple symmetry principles, in accordance with the spirit of the algebraic approach to the evolution of the genetic code proposed by Hornos and Hornos. Our results are complete in the sense of covering all the algebraic models that arise within this approach, whether based on Lie groups/Lie algebras, on Lie superalgebras or on finite groups.
Abstract:
In this paper we discuss bias-corrected estimators for the regression and dispersion parameters in an extended class of dispersion models (Jorgensen, 1997b). This class extends the regular dispersion models by letting the dispersion parameter vary across observations, and it contains the dispersion models as a particular case. General formulae for the O(n⁻¹) biases are obtained explicitly for dispersion models with dispersion covariates, generalizing previous results obtained by Botter and Cordeiro (1998), Cordeiro and McCullagh (1991), Cordeiro and Vasconcellos (1999), and Paula (1992). The practical use of the formulae is that we can derive closed-form expressions for the O(n⁻¹) biases of the maximum likelihood estimators of the regression and dispersion parameters whenever the information matrix has a closed form. Various expressions for the O(n⁻¹) biases are given for special models. The formulae have advantages for numerical purposes because they require only a supplementary weighted linear regression. We also compare these bias-corrected estimators with two alternative estimators, based on bootstrap methods, that are also bias-free to order O(n⁻¹). These estimators are compared by simulation.
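The bootstrap-based comparison estimators mentioned above follow the standard bootstrap bias-correction recipe. The sketch below shows that generic recipe only, not the analytic O(n⁻¹) formulae derived in the paper; numpy and the toy estimator are assumptions.

```python
# Generic bootstrap bias correction of an estimator (standard recipe only;
# not the paper's analytic O(n^-1) corrections).
import numpy as np

def bootstrap_bias_corrected(estimator, data, n_boot=999, seed=None):
    """Return estimate minus estimated bias, i.e. 2*theta_hat - mean(theta_boot)."""
    rng = np.random.default_rng(seed)
    theta_hat = estimator(data)
    n = len(data)
    boot = np.array([estimator(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    return 2.0 * theta_hat - boot.mean(axis=0)

# Example: correcting the (downward-biased) MLE of a normal variance.
data = np.random.default_rng(0).normal(size=30)
print(bootstrap_bias_corrected(lambda x: x.var(), data))
```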
Abstract:
This paper provides general matrix formulas for computing the score function, the (expected and observed) Fisher information and the A matrices (required for the assessment of local influence) for a quite general model that includes the one proposed by Russo et al. (2009). Additionally, we present an expression for the generalized leverage on fixed and random effects. The matrix formulation has notational advantages since, despite the complexity of the postulated model, all general formulas are compact, clear and take convenient forms.
Abstract:
Cytochrome P450 (CYP450) is a class of enzymes for which substrate identification is particularly important: it would help medicinal chemists to design drugs with fewer side effects due to drug-drug interactions and to extensive genetic polymorphism. Herein, we discuss the application of 2D- and 3D-similarity searches to identifying reference structures with a higher capacity to retrieve substrates of three important CYP enzymes (CYP2C9, CYP2D6, and CYP3A4). On the basis of the complementarity of multiple reference structures selected by different similarity search methods, we propose the fusion of their individual Tanimoto scores into a consensus Tanimoto score (T(consensus)). Using this new score, true-positive rates of 63% (CYP2C9) and 81% (CYP2D6) were achieved with false-positive rates of 4% for the CYP2C9-CYP2D6 data set. Extended similarity searches were carried out on a validation data set, and the results showed that by using the T(consensus) score, not only did the area under the ROC curve increase, but also more substrates were recovered at the beginning of a ranked list.
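As an illustration of the score fusion described above, the sketch below computes Tanimoto similarities on binary fingerprints and fuses them across several reference structures. Fusing by the maximum is one common data-fusion rule and is an assumption here, not necessarily the exact T(consensus) definition used in the paper.

```python
# Sketch of a consensus Tanimoto score over several reference structures.
# Fingerprints are modelled as sets of "on" bits; fusing by the maximum is an
# assumption, not necessarily the paper's T(consensus) definition.
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two binary fingerprints given as sets of bits."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def consensus_tanimoto(candidate_fp, reference_fps):
    """Fuse the scores against all reference structures (here: maximum)."""
    return max(tanimoto(candidate_fp, ref) for ref in reference_fps)

# Toy example: rank two candidates against two reference substrates.
refs = [{1, 4, 7, 9}, {2, 4, 8}]
candidates = {"mol_A": {1, 4, 7}, "mol_B": {3, 5, 6}}
ranked = sorted(candidates, key=lambda m: consensus_tanimoto(candidates[m], refs),
                reverse=True)
print(ranked)  # mol_A first: it is the closer match to a reference
```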
Abstract:
This presentation was offered as part of the CUNY Library Assessment Conference, Reinventing Libraries: Reinventing Assessment, held at the City University of New York in June 2014.
Information and education in the nursing care of patients with cardiovascular disease: a literature review
Abstract:
Aim: The aim of this literature review was to examine how information reaches patients with cardiovascular disease. Method: Systematic literature review. Fifteen articles, both qualitative and quantitative, were reviewed for the results section after searches in the databases Academic Search Elite, CINAHL and PubMed, and the search engine Elin@Dalarna. The articles used date from 2002 onwards. Their quality was rated as medium or high after appraisal using quality-assessment templates. Main results: Patients and their relatives needed information about the patient's condition and about problems to come. Risk perception, risk processing, emotions, personal values, social pressure, environment and financial circumstances acted as barriers preventing information from reaching the patients. The patients stated that improvements were needed regarding information and education. There were many areas in which information and education about cardiovascular disease could be improved. Sometimes the information that patients feel they need is not given.