18 results for nonparametric inference

in Archivo Digital para la Docencia y la Investigación (Institutional Repository of the Universidad del País Vasco)


Relevance:

20.00%

Publisher:

Abstract:

This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule using U.S. and Eurozone data. The estimation procedure implemented is a classical method based on the indirect inference principle, with an unrestricted VAR as the auxiliary model. On the one hand, the proposed estimation method overcomes some of the shortcomings of using a structural VAR as the auxiliary model in order to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a maximum-likelihood Bayesian approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results for the U.S. show that the fit of the NKM model under an optimal monetary plan is much worse than its fit under a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small relative to assuming either a forward-looking Taylor rule or an optimal plan.
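
The indirect inference estimator described here can be sketched generically (a minimal illustration with hypothetical function names, not the authors' code): choose the structural parameters so that an unrestricted VAR estimated on data simulated from the structural model matches, as closely as possible, the VAR estimated on observed data.

# Minimal indirect-inference sketch (illustrative only; names are hypothetical).
import numpy as np
from scipy.optimize import minimize

def fit_var(data, lags=1):
    """Estimate an unrestricted VAR(lags) by OLS; return stacked coefficients."""
    Y = data[lags:]
    X = np.hstack([data[lags - i - 1:-i - 1 or None] for i in range(lags)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta.ravel()

def indirect_inference(observed, simulate, theta0, weight=None):
    """simulate(theta) must return artificial data from the structural model."""
    beta_obs = fit_var(observed)
    W = np.eye(beta_obs.size) if weight is None else weight

    def loss(theta):
        beta_sim = fit_var(simulate(theta))  # auxiliary params on simulated data
        diff = beta_obs - beta_sim
        return diff @ W @ diff               # minimum-distance criterion

    return minimize(loss, theta0, method="Nelder-Mead")

In practice the random draws used inside simulate are held fixed across evaluations of the loss, so that the criterion varies only with the parameters and not with the simulation noise.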

Relevance:

20.00%

Publisher:

Abstract:

Background: Protein inference from peptide identifications in shotgun proteomics must deal with ambiguities that arise due to the presence of peptides shared between different proteins, which is common in higher eukaryotes. Recently, data independent acquisition (DIA) approaches have emerged as an alternative to the traditional data dependent acquisition (DDA) in shotgun proteomics experiments. MSE is the term used for one of the DIA approaches used in QTOF instruments. MSE data require specialized software to process acquired spectra and to perform peptide and protein identifications. However, the software currently available does not group the identified proteins in a transparent way by taking into account peptide evidence categories. Furthermore, inspecting, comparing and reporting the obtained results require tedious manual intervention. Here we report a software tool that addresses these limitations for MSE data.

Results: In this paper we present PAnalyzer, a software tool focused on the protein inference process in shotgun proteomics. Our approach considers all the identified proteins and groups them when necessary, indicating their confidence using different evidence categories. PAnalyzer can read protein identification files in the XML output format of the ProteinLynx Global Server (PLGS) software provided by Waters Corporation for their MSE data, and also in the mzIdentML format recently standardized by HUPO-PSI. Multiple files can also be read simultaneously and are treated as technical replicates. Results are saved to CSV, HTML and mzIdentML (in the case of a single mzIdentML input file) files. An MSE analysis of a real sample is presented to compare the results of PAnalyzer and ProteinLynx Global Server.

Conclusions: We present a software tool that deals with the ambiguities that arise in the protein inference process. Its key contributions are support for MSE data analysed with ProteinLynx Global Server and the integration of technical replicates. PAnalyzer is an easy-to-use, multiplatform, free software tool.
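
PAnalyzer's own grouping rules are described in the paper; as a generic illustration of evidence-based protein grouping (hypothetical data layout, not PAnalyzer's actual algorithm), indistinguishable proteins with identical peptide sets can be merged, and groups whose every peptide is shared with other groups can be flagged as non-conclusive:

# Generic sketch of protein grouping by peptide evidence (not PAnalyzer's code).
from collections import defaultdict

def group_proteins(protein_peptides):
    """protein_peptides: dict mapping protein id -> set of identified peptides."""
    # Merge indistinguishable proteins (identical peptide evidence).
    by_evidence = defaultdict(list)
    for prot, peps in protein_peptides.items():
        by_evidence[frozenset(peps)].append(prot)

    groups = []
    for peps, prots in by_evidence.items():
        # Peptides also claimed by any other group.
        shared = set().union(*(q for q in by_evidence if q != peps))
        category = "non-conclusive" if peps <= shared else "conclusive"
        groups.append((sorted(prots), category))
    return groups

print(group_proteins({"P1": {"a", "b"}, "P2": {"a", "b"}, "P3": {"a"}}))
# [(['P1', 'P2'], 'conclusive'), (['P3'], 'non-conclusive')]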

Relevance:

20.00%

Publisher:

Abstract:

Blowflies are insects of forensic interest as they may indicate characteristics of the environment where a body has been lying prior to its discovery. In order to estimate changes in the community related to landscape, and to assess whether blowfly species can be used as indicators of the landscape where a corpse has been decaying, we studied the blowfly community and how it is affected by landscape in a 7,000 km² region over a whole year. Using baited traps deployed monthly, we collected 28,507 individuals of 10 calliphorid species, 7 of them well represented and distributed across the study area. Multivariate analysis of variance found changes in abundance between seasons in all 7 analyzed species, and changes related to land use in 4 of them (Calliphora vomitoria, Lucilia ampullacea, L. caesar and L. illustris). Generalised linear model analyses of the abundance of these species against landscape descriptors at different scales found a clear significant relationship only between the summer abundance of C. vomitoria and distance to urban areas and degree of urbanisation. This relationship explained more deviance when landscape composition was considered at larger geographical scales (up to 2,500 m around the sampling site). For the other species, no clear relationship between land use and abundance was found, so the observed changes in their abundance patterns could be the result of other variables, probably small changes in temperature. Our results suggest that blowfly community composition cannot be used to infer the kind of landscape in which a corpse has decayed, at least in highly fragmented habitats, the only exception being the summer abundance of C. vomitoria.
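
The kind of GLM used to relate abundance to landscape descriptors can be sketched as follows (hypothetical file and column names; the paper's exact model specification may differ):

# Sketch of a GLM relating trap counts to landscape descriptors
# (hypothetical column names; not the paper's exact specification).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

traps = pd.read_csv("blowfly_traps.csv")  # hypothetical data file

# Poisson family is the usual starting point for count data such as
# individuals per trap; a negative binomial would handle overdispersion.
model = smf.glm(
    "abundance ~ dist_urban + urban_cover_2500m + season",
    data=traps,
    family=sm.families.Poisson(),
).fit()
print(model.summary())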

Relevance:

10.00%

Publisher:

Abstract:

This paper uses a structural approach based on the indirect inference principle to estimate a standard version of the New Keynesian monetary (NKM) model augmented with term structure, using both revised and real-time data. The estimation results show that the term spread and policy inertia are both important determinants of the estimated U.S. monetary policy rule, whereas the persistence of shocks plays a small but significant role when revised and real-time data on output and inflation are both considered. More importantly, the relative importance of the term spread and persistent shocks in the policy rule, as well as the shock transmission mechanism, change drastically once it is taken into account that real-time data are not well behaved.
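
Although the abstract does not reproduce the estimated rule, a representative inertial Taylor-type rule augmented with the term spread and a persistent shock (an illustrative specification, not necessarily the paper's exact one) takes the form

i_t = \rho\, i_{t-1} + (1-\rho)\,(\psi_\pi \pi_t + \psi_y y_t + \psi_s s_t) + v_t, \qquad v_t = \rho_v v_{t-1} + \varepsilon_t,

where i_t is the policy rate, s_t the term spread, \rho the policy-inertia parameter and \rho_v the persistence of the policy shock.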

Relevance:

10.00%

Publisher:

Abstract:

This paper estimates a standard version of the New Keynesian Monetary (NKM) model augmented with financial variables in order to analyze the relative importance of stock market returns and term spread in the estimated U.S. monetary policy rule. The estimation procedure implemented is a classical structural method based on the indirect inference principle. The empirical results show that the Fed seems to respond to the macroeconomic outlook and to the stock market return but does not seem to respond to the term spread. Moreover, policy inertia and persistent policy shocks are also significant features of the estimated policy rule.

Relevance:

10.00%

Publisher:

Abstract:

Published as an article in: Spanish Economic Review, 2008, vol. 10, issue 4, pages 251-277.

Relevance:

10.00%

Publisher:

Abstract:

Also published as: Documento de Trabajo Banco de España 0504/2005.

Relevance:

10.00%

Publisher:

Abstract:

Published as an article in: Studies in Nonlinear Dynamics & Econometrics, 2004, vol. 8, issue 3, article 6.

Relevance:

10.00%

Publisher:

Abstract:

Published as an article in: Investigaciones Economicas, 2005, vol. 29, issue 3, pages 483-523.

Relevance:

10.00%

Publisher:

Abstract:

Published as an article in: The Quarterly Review of Economics and Finance, 2004, vol. 44, issue 2, pages 224-236.

Relevance:

10.00%

Publisher:

Abstract:

The digital management of collections in museums, archives, libraries and galleries is an increasingly important part of cultural heritage studies. This paper describes a representation for folk song metadata, based on the Web Ontology Language (OWL) implementation of the CIDOC Conceptual Reference Model. The OWL representation facilitates encoding and reasoning over a genre ontology, while the CIDOC model enables a representation of complex spatial containment and proximity relations among geographic regions. It is shown how complex queries of folk song metadata, relying on inference and not only retrieval, can be expressed in OWL and solved using a description logic reasoner.
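
As a generic illustration of querying such metadata (a sketch using rdflib and a made-up vocabulary, not the paper's actual ontology), a SPARQL query can retrieve songs by genre and region once a description logic reasoner such as HermiT or Pellet has materialized the OWL inferences, since rdflib itself only performs retrieval:

# Sketch: querying folk song metadata with rdflib (vocabulary is made up;
# inferred triples are assumed to have been materialized by a DL reasoner).
from rdflib import Graph

g = Graph()
g.parse("folksongs_inferred.ttl", format="turtle")  # hypothetical file

results = g.query("""
    PREFIX ex: <http://example.org/folksong#>
    SELECT ?song ?region WHERE {
        ?song a ex:Ballad ;            # ex:Ballad subsumed under ex:NarrativeSong
              ex:collectedIn ?region .
        ?region ex:within ex:BasqueCountry .
    }
""")
for song, region in results:
    print(song, region)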

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes an extended version of the basic New Keynesian monetary (NKM) model which incorporates revision processes for output and inflation data in order to assess the importance of data revisions for the estimated monetary policy rule parameters and the transmission of policy shocks. Our empirical evidence, based on a structural econometric approach, suggests that although the initial announcements of output and inflation are not rational forecasts of revised output and inflation data, ignoring the presence of revision processes that are not well behaved may not be a serious drawback in the analysis of monetary policy in this framework. However, the transmission of inflation-push shocks is strongly affected by the treatment of data revisions. This is especially true when the nominal stickiness parameter is estimated taking data revision processes into account.
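
The notion of a well-behaved revision process can be made concrete with the standard rationality regression (a textbook formulation, not necessarily the exact test used in the paper). If the initial announcement y^i_t is a rational forecast of the revised figure y^f_t, the revision must be unpredictable from the announcement itself:

y^f_t - y^i_t = \alpha + \beta\, y^i_t + u_t, \qquad H_0:\ \alpha = \beta = 0.

Rejecting H_0 means revisions are forecastable from the initial announcement, i.e. the revision process is not well behaved, which is what the paper reports for the output and inflation announcements.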

Relevance:

10.00%

Publisher:

Abstract:

Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the newly generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain in efficiency. Other methods, like Gibbs sampling, use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and on model-based template recombination. We show that the application of different variants of inference methods can increase the EDAs' convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
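
A schematic of the general EDA loop into which such sampling methods plug is sketched below (a minimal UMDA-style baseline with a univariate model; the paper's inference-based samplers would replace the sampling step, as noted in the comments):

# Schematic EDA loop (illustrative baseline, not the paper's algorithm).
import numpy as np

def eda(objective, n_vars, pop_size=100, n_select=50, generations=200, rng=None):
    rng = rng or np.random.default_rng()
    pop = rng.integers(0, 2, size=(pop_size, n_vars))      # binary solutions
    for _ in range(generations):
        fitness = np.apply_along_axis(objective, 1, pop)
        elite = pop[np.argsort(fitness)[-n_select:]]       # select the best
        probs = elite.mean(axis=0)                         # univariate model;
        # a Markov-network EDA would learn pairwise/clique factors here and
        # generate solutions via Gibbs sampling, most-probable-configuration
        # inference, or model-based template recombination instead of:
        pop = (rng.random((pop_size, n_vars)) < probs).astype(int)
    return pop[np.argmax(np.apply_along_axis(objective, 1, pop))]

For example, eda(lambda x: x.sum(), n_vars=20) maximizes the OneMax function.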

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition of eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework, and we present a clausal temporal resolution method for PLTL. Finally, we use this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages.

The key element in deduction systems for temporal logic is dealing with eventualities and with the hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned; in particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires additional bookkeeping in order to detect cyclic branches that contain unfulfilled eventualities. In traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled with inference rules (mainly invariant-based rules or infinitary rules) that complicate their automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional bookkeeping in the tableau framework, and either invariant-based or infinitary rules in the sequent framework, is that temporal logic fails to preserve the classical correspondence between tableaux and sequents.

In this thesis, we first provide a one-pass tableau method TTM that, instead of a graph, obtains a cyclic tree to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. For satisfiable sets of formulas, TTM builds tableaux in which each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. It is directly associated with a one-sided sequent calculus called TTC. Since TTM is free from all the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from this kind of structural rules. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these freeness properties and is finitary, sound and complete for PLTL. We therefore show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic.

The most fruitful approach in the literature on resolution methods for temporal logic, started by the seminal paper of M. Fisher, deals with PLTL and requires generating invariants for performing resolution on eventualities. In this thesis, we present a new approach to resolution for PLTL whose main novelty is that it does not generate invariants for performing resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. It involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability, and hence gives rise to a new decision procedure for PLTL.

In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property of PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is given up and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that combines the temporal and disjunctive paradigms in logic programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads as well as in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this degree of expressiveness. Since the tableau method presented in this thesis detects that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
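
For context, the customary inductive characterization of eventualities that the thesis departs from is the standard fixpoint unfolding of the until connective in PLTL:

\varphi\, \mathcal{U}\, \psi \;\equiv\; \psi \,\vee\, (\varphi \wedge \circ(\varphi\, \mathcal{U}\, \psi)),

where \circ is the next-time operator. Systems built on this unfolding must separately guarantee that \psi is eventually reached, which is exactly the fulfillment problem that the two-pass, extra-bookkeeping and invariant-based treatments described above are designed to solve.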

Relevance:

10.00%

Publisher:

Abstract:

Probability models on permutations associate a probability value to each of the permutations on n items. This paper considers two popular probability models, the Mallows model and the Generalized Mallows model. We describe methods for making inference on, sampling from and learning such distributions, some of which are novel in the literature. The paper also describes operations on permutations, with special attention to those related to the Kendall and Cayley distances and to the random generation of permutations. These operations are of key importance for the efficient computation of operations on the distributions. The algorithms are implemented in the associated R package, whose internal code is written in C++.
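
For reference, the Mallows model mentioned here is the standard exponential model over the symmetric group (the Generalized Mallows model replaces the single spread parameter by one parameter per position):

P(\sigma) = \frac{1}{\psi(\theta)} \exp(-\theta\, d(\sigma, \sigma_0)),

where \sigma_0 is the central permutation, \theta \ge 0 the spread parameter, d a distance on permutations such as Kendall's tau or Cayley, and \psi(\theta) the normalization constant.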