947 results for inductive inference


Relevance: 20.00%

Abstract:

Idiosyncratic markers are features of genes and genomes that are so unusual that it is unlikely that they evolved more than once in a lineage of organisms. Here we explore further the potential of idiosyncratic markers, and of changes to typically conserved tRNA sequences, for phylogenetic inference. Hard ticks were chosen as the model group because their phylogeny has been studied extensively. Fifty-eight candidate markers from hard ticks (family Ixodidae) and 22 markers from the subfamily Rhipicephalinae sensu lato were mapped onto phylogenies of these groups. Two of the most interesting markers, features of the secondary structure of two different tRNAs, gave strong support to the hypothesis that species of the Prostriata (Ixodes spp.) are monophyletic. Previous analyses of genes and morphology did not strongly support this relationship, instead suggesting that the Prostriata is paraphyletic with respect to the Metastriata (the rest of the hard ticks). Parallel or convergent evolution was not found in the arrangements of mitochondrial genes in ticks, nor were there any reversals to the ancestral arthropod character state. Many of the markers identified were phylogenetically informative, whereas others should become informative with the study of additional taxa. Idiosyncratic markers and phylogenetically informative changes to typically conserved nucleotides in tRNAs were common in this data set, and thus these types of markers might be found in other organisms.
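
As an illustration of the mapping step, the sketch below checks whether the taxa carrying a shared idiosyncratic marker form a monophyletic clade on a rooted tree. It is a minimal Python sketch with a made-up toy tree, not the Ixodidae phylogeny or the authors' actual pipeline.

```python
# Minimal sketch: does a set of marker-bearing taxa form a clade on a tree?
# The tree below is a hypothetical toy example, not the Ixodidae phylogeny.
from itertools import chain

# A rooted tree as nested tuples; leaves are strings.
tree = ((("Ixodes_A", "Ixodes_B"), "Ixodes_C"), ("Metastriata_X", "Metastriata_Y"))

def leaves(node):
    """Return the set of leaf names under a node."""
    if isinstance(node, str):
        return {node}
    return set(chain.from_iterable(leaves(child) for child in node))

def clades(node):
    """Yield the leaf set of every internal node (every clade) in the tree."""
    if not isinstance(node, str):
        yield leaves(node)
        for child in node:
            yield from clades(child)

def is_monophyletic(tree, taxa_with_marker):
    """A marker supports monophyly if some clade equals exactly the marked taxa."""
    return any(c == set(taxa_with_marker) for c in clades(tree))

print(is_monophyletic(tree, ["Ixodes_A", "Ixodes_B", "Ixodes_C"]))  # True
print(is_monophyletic(tree, ["Ixodes_A", "Metastriata_X"]))         # False
```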

Relevance: 20.00%

Abstract:

Background: With the decrease in DNA sequencing costs, sequence-based typing methods are rapidly becoming the gold standard for epidemiological surveillance. These methods provide the reproducible and comparable results needed for global-scale bacterial population analysis, while retaining their usefulness for local epidemiological surveys. Online databases that collect the generated allelic profiles and associated epidemiological data are available, but this wealth of data remains underused and is frequently poorly annotated, since no user-friendly tool exists to analyze and explore it. Results: PHYLOViZ is platform-independent Java software that allows the integrated analysis of sequence-based typing methods, including SNP data generated from whole-genome sequencing approaches, together with associated epidemiological data. goeBURST and its Minimum Spanning Tree expansion are used for visualizing the possible evolutionary relationships between isolates. The results can be displayed as an annotated graph, overlaid with the query results for any other available epidemiological data. Conclusions: PHYLOViZ is user-friendly software that allows the combined analysis of multiple data sources for microbial epidemiological and population studies. It is freely available at http://www.phyloviz.net.
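
The sketch below illustrates the core idea behind this kind of analysis: computing allelic (Hamming) distances between typing profiles and linking them with a minimum spanning tree. The profiles are invented MLST-like data and the algorithm is plain Prim's, without goeBURST's tie-breaking rules.

```python
# Hedged sketch of MST-based typing analysis over allelic profiles.
# Profiles are made-up seven-locus MLST-like data.
profiles = {
    "ST1": (1, 1, 1, 1, 1, 1, 1),
    "ST2": (1, 1, 1, 1, 1, 1, 2),  # single-locus variant of ST1
    "ST3": (1, 1, 1, 1, 1, 2, 2),
    "ST4": (3, 1, 1, 1, 1, 2, 2),
}

def hamming(a, b):
    """Number of loci at which two allelic profiles differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_spanning_tree(profiles):
    """Prim's algorithm over the complete graph of profiles."""
    names = list(profiles)
    in_tree, edges = {names[0]}, []
    while len(in_tree) < len(names):
        u, v = min(
            ((a, b) for a in in_tree for b in names if b not in in_tree),
            key=lambda e: hamming(profiles[e[0]], profiles[e[1]]),
        )
        edges.append((u, v, hamming(profiles[u], profiles[v])))
        in_tree.add(v)
    return edges

for u, v, d in minimum_spanning_tree(profiles):
    print(f"{u} -- {v} (distance {d})")
```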

Relevance: 20.00%

Abstract:

The increased integration of wind power into the electric grid, as is now the case in Portugal, poses new challenges due to its intermittency and volatility. Hence, good forecasting tools play a key role in tackling these challenges. In this paper, an adaptive neuro-fuzzy inference approach is proposed for short-term wind power forecasting. Results from a real-world case study are presented. A thorough comparison is carried out, taking into account the results obtained with other approaches. Numerical results are presented and conclusions are duly drawn. (C) 2011 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
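
A minimal sketch of the Sugeno-type fuzzy inference at the heart of such an adaptive neuro-fuzzy (ANFIS) forecaster is given below. The membership-function parameters and rule consequents are illustrative placeholders; in a real ANFIS they would be learned from historical wind data by hybrid least-squares/backpropagation training.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def anfis_predict(x):
    """Two-rule first-order Sugeno model: y_i = a_i * x + b_i per rule."""
    w = np.array([gauss(x, 0.2, 0.15), gauss(x, 0.7, 0.15)])  # firing strengths
    w = w / w.sum()                                           # normalise
    y = np.array([0.9 * x + 0.05,                             # rule 1 consequent
                  1.1 * x - 0.10])                            # rule 2 consequent
    return float(w @ y)                                       # weighted output

# One-step-ahead forecast from the previous normalised wind power value:
print(anfis_predict(0.55))
```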

Relevance: 20.00%

Abstract:

The Maxwell equations play a fundamental role in electromagnetic theory and lead to models useful in physics and engineering. This formalism involves integer-order differential calculus, but electromagnetic diffusion points towards the adoption of a fractional calculus approach. This study addresses the skin effect and develops a new method for implementing fractional-order inductive elements. Two genetic algorithms are adopted, one for the numerical evaluation of the system and another for parameter identification, both with good results.
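
For intuition, the sketch below evaluates the frequency response of a fractional-order inductive element, Z(jw) = L (jw)^alpha. The skin effect motivates orders near alpha = 0.5; the specific L and alpha values are illustrative, not identified parameters from the study.

```python
import numpy as np

# Fractional-order inductive element: Z(jw) = L * (jw)**alpha.
L, alpha = 1.0, 0.5                 # illustrative values, not fitted parameters
w = np.logspace(0, 5, 6)            # angular frequencies (rad/s)
Z = L * (1j * w) ** alpha

for wi, zi in zip(w, Z):
    # A fractional inductor shows a 20*alpha dB/decade magnitude slope and a
    # constant phase of alpha*90 degrees (45 degrees for alpha = 0.5).
    print(f"w={wi:9.1f}  |Z|={abs(zi):9.3f}  phase={np.degrees(np.angle(zi)):5.1f} deg")
```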

Relevance: 20.00%

Abstract:

Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

Relevance: 20.00%

Abstract:

Dissertation presented to obtain the degree of Master in Electrical and Computer Engineering from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance: 20.00%

Abstract:

We intend to study the algebraic structure of simple orthogonal models in order to use them, through binary operations, as building blocks in the construction of more complex orthogonal models. We start by presenting some matrix results concerning Commutative Jordan Algebras of symmetric matrices, CJAs. Next, we use these results to study the algebraic structure of orthogonal models obtained by crossing and nesting simpler ones. Then, we study normal models with Orthogonal Block Structure, OBS, which can also be orthogonal models; for these normal models, NOBS (Normal Orthogonal Block Structure), we obtain conditions for having complete and sufficient statistics and hence UMVUEs, that is, unbiased estimators with minimal covariance matrices whatever the variance components. Lastly, following [Pereira et al. (2014)], we study the algebraic structure of orthogonal models: mixed models whose variance-covariance matrices are all positive semi-definite linear combinations of known pairwise orthogonal orthogonal projection matrices, OPOPM, and whose least squares estimators, LSE, of estimable vectors are best linear unbiased estimators, BLUE, whatever the variance components, so that they are uniformly BLUE, UBLUE. From these results on the algebraic structure we obtain explicit expressions for the LSE of these models.
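
A small numerical illustration of the OPOPM structure mentioned above is given below: symmetric idempotent matrices that are pairwise orthogonal, with variance-covariance matrices formed as their nonnegative linear combinations. The 4x4 example is ours, not one of the paper's designs.

```python
import numpy as np

# Two pairwise orthogonal orthogonal projection matrices on R^4:
n = 4
J = np.ones((n, n)) / n          # projection onto the constant vector
Q1 = J
Q2 = np.eye(n) - J               # projection onto its orthogonal complement

for Q in (Q1, Q2):
    assert np.allclose(Q, Q.T)       # symmetric
    assert np.allclose(Q @ Q, Q)     # idempotent
assert np.allclose(Q1 @ Q2, 0)       # pairwise orthogonal

# A variance-covariance matrix as a positive linear combination of the OPOPMs:
gamma1, gamma2 = 3.0, 1.5            # variance components (illustrative)
V = gamma1 * Q1 + gamma2 * Q2
print(np.linalg.eigvalsh(V))         # eigenvalues are exactly the gammas
```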

Relevance: 20.00%

Abstract:

We present experimental and theoretical analyses of the data requirements of haplotype inference algorithms. Our experiments cover a broad range of problem sizes under two standard models of tree distribution and were designed to yield statistically robust results despite the size of the sample space. Our results validate Gusfield's conjecture that a population size of n log n is required to give (with high probability) sufficient information to deduce the n haplotypes and their complete evolutionary history. The experimental results motivated our subsequent theoretical analysis, which establishes bounds on the required population size. We also analyze the population size required to deduce some fixed fraction of the evolutionary history of a set of n haplotypes and establish linear bounds on the required sample size, experimentally as well as theoretically.
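
The n log n requirement has a coupon-collector flavour: roughly n ln n uniform draws are needed to observe all n haplotypes. The simulation below checks that intuition under uniform sampling; the paper's tree-distribution setting is more involved, so treat this as an illustration only.

```python
import math, random

def prob_all_seen(n, sample_size, trials=2000):
    """Estimate P(a sample of given size contains all n haplotypes)."""
    hits = 0
    for _ in range(trials):
        seen = {random.randrange(n) for _ in range(sample_size)}
        hits += (len(seen) == n)
    return hits / trials

n = 50
for mult in (0.5, 1.0, 2.0):
    m = int(mult * n * math.log(n))
    print(f"sample size {m:4d} (~{mult} n ln n): P(all {n} haplotypes) = "
          f"{prob_all_seen(n, m):.3f}")
```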

Relevance: 20.00%

Abstract:

Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
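
A minimal sketch of replicate-based error estimation follows: the two genotype calls for the same individual are compared, and mismatch rates are reported per locus and per allele. The calls are toy data, and this is not the Stacks parameter-optimisation workflow itself.

```python
# Replicate genotype calls for one individual; None marks a missing call.
replicate_1 = {"loc1": ("A", "A"), "loc2": ("A", "G"), "loc3": ("C", "T"), "loc4": None}
replicate_2 = {"loc1": ("A", "A"), "loc2": ("A", "A"), "loc3": ("C", "T"), "loc4": ("G", "G")}

# Only loci genotyped in both replicates are comparable.
shared = [l for l in replicate_1 if replicate_1[l] and replicate_2[l]]

# Locus error: the genotypes disagree at a locus (order of alleles ignored).
locus_errors = sum(sorted(replicate_1[l]) != sorted(replicate_2[l]) for l in shared)
# Allele error: count disagreeing alleles within each locus.
allele_errors = sum(
    sum(a != b for a, b in zip(sorted(replicate_1[l]), sorted(replicate_2[l])))
    for l in shared
)

print(f"locus error rate:  {locus_errors / len(shared):.2f}")          # 1/3
print(f"allele error rate: {allele_errors / (2 * len(shared)):.2f}")   # 1/6
```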

Relevance: 20.00%

Abstract:

There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration, despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state-space representations to model the evolution of parameters. We show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead, we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random-walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
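
For readers unfamiliar with the state-space mechanics, the sketch below simulates the standard TVP building block, a coefficient that evolves as a random walk, on a toy AR(1). The paper's contribution, letting the cointegrating space itself evolve this way, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
beta = np.empty(T)
y = np.zeros(T)

beta[0] = 0.9
for t in range(1, T):
    beta[t] = beta[t - 1] + rng.normal(scale=0.01)     # random-walk state equation
    y[t] = beta[t] * y[t - 1] + rng.normal(scale=0.1)  # measurement equation

print(f"beta drifted from {beta[0]:.2f} to {beta[-1]:.2f}")
```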

Relevance: 20.00%

Abstract:

Consider a model with parameter phi, and an auxiliary model with parameter theta. Let phi be randomly sampled from a given density over the known parameter space. Monte Carlo methods can be used to draw simulated data and compute the corresponding estimate of theta, say theta_tilde. A large set of tuples (phi, theta_tilde) can be generated in this manner. Nonparametric methods may be used to fit the function E(phi | theta_tilde = a) from these tuples. It is proposed to estimate phi by the fitted E(phi | theta_tilde = theta_hat), where theta_hat is the auxiliary estimate computed from the real sample data. This estimator is consistent and asymptotically normally distributed under certain assumptions. Monte Carlo results for dynamic panel data and vector autoregressions show that it can have very attractive small-sample properties. Confidence intervals can be constructed using the quantiles of the phi for which theta_tilde is close to theta_hat; such confidence intervals are found to have very accurate coverage.
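
The procedure lends itself to a compact implementation. The sketch below transcribes it for a deliberately simple toy model (observations N(phi, 1), auxiliary statistic the sample mean), with a k-nearest-neighbour fit standing in for the nonparametric regression; the model and tuning constants are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_obs, k = 20000, 50, 200

phi = rng.uniform(-2.0, 2.0, n_sim)                  # draws from the given density
theta_tilde = np.array([rng.normal(p, 1.0, n_obs).mean() for p in phi])

# "Real" sample and its auxiliary estimate theta_hat:
y = rng.normal(0.7, 1.0, n_obs)
theta_hat = y.mean()

# Nonparametric fit of E(phi | theta_tilde = theta_hat): average phi over the
# k simulations whose theta_tilde is closest to theta_hat.
nearest = np.argsort(np.abs(theta_tilde - theta_hat))[:k]
phi_hat = phi[nearest].mean()
ci = np.quantile(phi[nearest], [0.025, 0.975])       # quantile-based interval

print(f"theta_hat = {theta_hat:.3f}, phi_hat = {phi_hat:.3f}, 95% CI = {ci.round(3)}")
```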

Relevance: 20.00%

Abstract:

Continuing developments in science and technology mean that the amount of information forensic scientists are able to provide for criminal investigations is ever increasing. The commensurate increase in complexity creates difficulties for scientists and lawyers with regard to evaluation and interpretation, notably with respect to issues of inference and decision. Probability theory, implemented through graphical methods, and specifically Bayesian networks, provides powerful methods to deal with this complexity. Extensions of these methods to elements of decision theory provide further support and assistance to the judicial system. Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science provides a unique and comprehensive introduction to the use of Bayesian decision networks for the evaluation and interpretation of scientific findings in forensic science, and for the support of decision-makers in their scientific and legal tasks. The book:

- Includes self-contained introductions to probability and decision theory.
- Develops the characteristics of Bayesian networks, object-oriented Bayesian networks and their extension to decision models.
- Features implementation of the methodology with reference to commercial and academically available software.
- Presents standard networks and their extensions that can be easily implemented and that can assist in the reader's own analysis of real cases.
- Provides a technique for structuring problems and organizing data based on methods and principles of scientific reasoning.
- Contains a method for the construction of coherent and defensible arguments for the analysis and evaluation of scientific findings and for decisions based on them.
- Is written in a lucid style, suitable for forensic scientists and lawyers with minimal mathematical background.
- Includes a foreword by Ian Evett.

The clear and accessible style of this second edition makes this book ideal for all forensic scientists, applied statisticians and graduate students wishing to evaluate forensic findings from the perspective of probability and decision analysis. It will also appeal to lawyers and other scientists and professionals interested in the evaluation and interpretation of forensic findings, including decision making based on scientific information.
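
As a taste of the underlying reasoning, the sketch below performs the simplest forensic Bayesian update a network of this kind encodes: combining a likelihood ratio with prior odds for a two-hypothesis (prosecution vs. defence) structure. All probabilities are invented for illustration.

```python
# Hypothesis node H (prosecution vs. defence) and evidence node E.
p_H = 0.01                 # prior probability of the prosecution hypothesis
p_E_given_H = 0.99         # P(evidence | H_p), e.g. a matching trace
p_E_given_notH = 0.001     # P(evidence | H_d), e.g. a random-match probability

likelihood_ratio = p_E_given_H / p_E_given_notH
prior_odds = p_H / (1 - p_H)
posterior_odds = likelihood_ratio * prior_odds          # odds form of Bayes' rule
posterior = posterior_odds / (1 + posterior_odds)

print(f"LR = {likelihood_ratio:.0f}, posterior P(H | E) = {posterior:.3f}")
```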

Relevance: 20.00%

Abstract:

Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. The traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of the variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, to be interpreted as the probability of the model being the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and permits evaluating, to a certain extent, the model selection uncertainty, which is seldom mentioned in current practice.
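
The sketch below reproduces the basic Burnham-Anderson machinery described here: AIC differences, Akaike weights and a model-averaged prediction. The AIC values and per-model predictions are invented for illustration.

```python
import numpy as np

aic = np.array([102.3, 104.1, 110.8])        # AIC of each candidate model (invented)
pred = np.array([1.45, 1.60, 1.20])          # each model's predicted log-exposure

delta = aic - aic.min()                      # AIC differences Delta_i
weights = np.exp(-delta / 2)
weights /= weights.sum()                     # Akaike weights, summing to 1

averaged = weights @ pred                    # multimodel-averaged prediction
evidence_ratio = weights[0] / weights[1]     # support for model 1 over model 2

print(f"weights = {weights.round(3)}, averaged prediction = {averaged:.3f}, "
      f"evidence ratio (M1 vs M2) = {evidence_ratio:.2f}")
```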