15 results for Inference.
in Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco
Abstract:
This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule using U.S. and Eurozone data. The estimation procedure implemented is a classical method based on the indirect inference principle. An unrestricted VAR is considered as the auxiliary model. On the one hand, the estimation method proposed overcomes some of the shortcomings of using a structural VAR as the auxiliary model in order to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a maximum-likelihood Bayesian approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results in the U.S. show that the fit of the NKM model under an optimal monetary plan is much worse than the fit of the NKM model assuming a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small with respect to assuming either a forward-looking Taylor rule or an optimal plan.
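The indirect-inference strategy described above can be illustrated with a minimal sketch: simulate the structural model at candidate parameter values, fit the same unrestricted VAR to observed and simulated data, and minimize the distance between the two sets of auxiliary estimates. The two-equation simulator below is a toy stand-in, not the NKM model of the paper, and the VAR(1) auxiliary model and identity weighting matrix are simplifying assumptions.

```python
# Sketch of indirect-inference estimation with an unrestricted VAR(1) as the
# auxiliary model. The "structural" simulator is a toy stand-in for the NKM
# model, and an identity weighting matrix is used for simplicity.
import numpy as np
from scipy.optimize import minimize

def fit_var1(y):
    """OLS fit of the auxiliary VAR(1) y_t = A y_{t-1} + e_t; returns the stacked coefficients."""
    X, Y = y[:-1], y[1:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef.ravel()

def simulate_toy_model(theta, T, rng):
    """Toy two-variable structural model standing in for the NKM model."""
    a, b = theta
    y = np.zeros((T, 2))
    eps = rng.normal(scale=0.5, size=(T, 2))
    for t in range(1, T):
        y[t, 0] = a * y[t - 1, 0] + 0.1 * y[t - 1, 1] + eps[t, 0]
        y[t, 1] = 0.1 * y[t - 1, 0] + b * y[t - 1, 1] + eps[t, 1]
    return y

def ii_distance(theta, beta_obs, T, n_sim):
    """Distance between auxiliary-VAR estimates on observed and simulated data."""
    sim_rng = np.random.default_rng(123)  # common random numbers across evaluations
    betas = [fit_var1(simulate_toy_model(theta, T, sim_rng)) for _ in range(n_sim)]
    diff = np.mean(betas, axis=0) - beta_obs
    return diff @ diff

rng = np.random.default_rng(0)
observed = simulate_toy_model((0.7, 0.4), 400, rng)   # stands in for the observed data
beta_obs = fit_var1(observed)
res = minimize(ii_distance, x0=[0.5, 0.5], args=(beta_obs, 400, 20),
               method="Nelder-Mead")
print("indirect-inference estimates:", res.x)
```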
Abstract:
Background: Protein inference from peptide identifications in shotgun proteomics must deal with ambiguities that arise due to the presence of peptides shared between different proteins, which is common in higher eukaryotes. Recently, data-independent acquisition (DIA) approaches have emerged as an alternative to the traditional data-dependent acquisition (DDA) in shotgun proteomics experiments. MSE is the term used to name one of the DIA approaches used in QTOF instruments. MSE data require specialized software to process acquired spectra and to perform peptide and protein identifications. However, the software available at the moment does not group the identified proteins in a transparent way by taking into account peptide evidence categories. Furthermore, the inspection, comparison and reporting of the obtained results require tedious manual intervention. Here we report a software tool to address these limitations for MSE data. Results: In this paper we present PAnalyzer, a software tool focused on the protein inference process of shotgun proteomics. Our approach considers all the identified proteins and groups them when necessary, indicating their confidence using different evidence categories. PAnalyzer can read protein identification files in the XML output format of the ProteinLynx Global Server (PLGS) software provided by Waters Corporation for their MSE data, and also in the mzIdentML format recently standardized by HUPO-PSI. Multiple files can also be read simultaneously and are considered as technical replicates. Results are saved to CSV, HTML and mzIdentML (in the case of a single mzIdentML input file) files. An MSE analysis of a real sample is presented to compare the results of PAnalyzer and ProteinLynx Global Server. Conclusions: We present a software tool to deal with the ambiguities that arise in the protein inference process. Key contributions are support for MSE data analysed with ProteinLynx Global Server and the integration of technical replicates. PAnalyzer is an easy-to-use, multiplatform, free software tool.
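As an illustration of the protein inference problem the tool addresses, the sketch below groups proteins by their peptide evidence in a generic way; the category labels are simplified placeholders and are not necessarily the exact evidence categories reported by PAnalyzer.

```python
# Generic protein grouping by shared peptide evidence (illustrative sketch;
# the category names are simplified and may differ from PAnalyzer's).
from collections import defaultdict

def group_proteins(protein_peptides):
    """protein_peptides: dict mapping protein id -> set of identified peptide sequences."""
    # 1. Proteins with identical peptide sets cannot be told apart.
    by_peptide_set = defaultdict(list)
    for prot, peps in protein_peptides.items():
        by_peptide_set[frozenset(peps)].append(prot)

    groups = []
    for peps, prots in by_peptide_set.items():
        # 2. A group whose peptides are all contained in another group's set
        #    contributes no distinct evidence of its own.
        subset_of_other = any(peps < other for other in by_peptide_set if other != peps)
        if subset_of_other:
            category = "non-conclusive"      # only shared / subsumed peptides
        elif len(prots) > 1:
            category = "indistinguishable"   # same evidence supports several proteins
        else:
            category = "conclusive"          # evidence points to a single protein
        groups.append({"proteins": prots, "peptides": set(peps), "category": category})
    return groups

example = {
    "P1": {"AAK", "LLR"},    # has evidence of its own -> conclusive
    "P2": {"LLR"},           # subset of P1's evidence -> non-conclusive
    "P3": {"GGF", "TTY"},    # identical to P4 -> indistinguishable
    "P4": {"GGF", "TTY"},
}
for group in group_proteins(example):
    print(group)
```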
Abstract:
Blowflies are insects of forensic interest, as they may indicate characteristics of the environment where a body has been lying prior to its discovery. In order to estimate changes in the community related to landscape, and to assess whether blowfly species can be used as indicators of the landscape where a corpse has been decaying, we studied the blowfly community and how it is affected by landscape in a 7,000 km² region during a whole year. Using baited traps deployed monthly, we collected 28,507 individuals of 10 calliphorid species, 7 of them well represented and distributed in the study area. Multiple Analysis of Variance found changes in abundance between seasons in the 7 analyzed species, and changes related to land use in 4 of them (Calliphora vomitoria, Lucilia ampullacea, L. caesar and L. illustris). Generalised Linear Model analyses of the abundance of these species compared with landscape descriptors at different scales found a clear significant relationship only between the summer abundance of C. vomitoria and distance to urban areas and degree of urbanisation. This relationship explained more deviance when the landscape composition at larger geographical scales (up to 2,500 m around the sampling site) was considered. For the other species, no clear relationship between land uses and abundance was found, and therefore the observed changes in their abundance patterns could be the result of other variables, probably small changes in temperature. Our results suggest that blowfly community composition cannot be used to infer in what kind of landscape a corpse has decayed, at least in highly fragmented habitats, the only exception being the summer abundance of C. vomitoria.
Abstract:
This paper uses a structural approach based on the indirect inference principle to estimate a standard version of the new Keynesian monetary (NKM) model augmented with term structure using both revised and real-time data. The estimation results show that the term spread and policy inertia are both important determinants of the U.S. estimated monetary policy rule whereas the persistence of shocks plays a small but significant role when revised and real-time data of output and inflation are both considered. More importantly, the relative importance of term spread and persistent shocks in the policy rule and the shock transmission mechanism drastically change when it is taken into account that real-time data are not well behaved.
Abstract:
This paper estimates a standard version of the New Keynesian Monetary (NKM) model augmented with financial variables in order to analyze the relative importance of stock market returns and term spread in the estimated U.S. monetary policy rule. The estimation procedure implemented is a classical structural method based on the indirect inference principle. The empirical results show that the Fed seems to respond to the macroeconomic outlook and to the stock market return but does not seem to respond to the term spread. Moreover, policy inertia and persistent policy shocks are also significant features of the estimated policy rule.
Abstract:
Published as an article in: Spanish Economic Review, 2008, vol. 10, issue 4, pages 251-277.
Abstract:
Published as an article in: The Quarterly Review of Economics and Finance, 2004, vol. 44, issue 2, pages 224-236.
Abstract:
The digital management of collections in museums, archives, libraries and galleries is an increasingly important part of cultural heritage studies. This paper describes a representation for folk song metadata, based on the Web Ontology Language (OWL) implementation of the CIDOC Conceptual Reference Model. The OWL representation facilitates encoding and reasoning over a genre ontology, while the CIDOC model enables a representation of complex spatial containment and proximity relations among geographic regions. It is shown how complex queries of folk song metadata, relying on inference and not only retrieval, can be expressed in OWL and solved using a description logic reasoner.
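A minimal sketch of the kind of inference-backed querying described here, using the owlready2 Python library with an invented toy ontology; the namespace, class and property names below are hypothetical and are not those of the paper's CIDOC-based representation, and running the reasoner requires a Java runtime because owlready2 invokes HermiT.

```python
# Toy ontology and description-logic reasoning with owlready2 (illustrative only).
from owlready2 import Thing, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/folksong.owl")  # hypothetical namespace

with onto:
    class Song(Thing): pass
    class Ballad(Song): pass                      # a genre subclass
    class Region(Thing): pass
    class collectedIn(Song >> Region): pass       # object property Song -> Region

    basque_country = Region("basque_country")

    class BasqueSong(Song):
        # Defined class: any Song collected in the Basque Country.
        equivalent_to = [Song & collectedIn.value(basque_country)]

    greensleeves = Ballad("greensleeves")
    greensleeves.collectedIn = [basque_country]

    sync_reasoner()  # HermiT classification; requires Java

# The reasoner infers membership in the defined class BasqueSong even though
# it was never asserted directly - retrieval alone would not find it.
print(greensleeves.is_a)
```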
Abstract:
This paper proposes an extended version of the basic New Keynesian monetary (NKM) model which contemplates revision processes of output and inflation data in order to assess the importance of data revisions for the estimated monetary policy rule parameters and the transmission of policy shocks. Our empirical evidence, based on a structural econometric approach, suggests that although the initial announcements of output and inflation are not rational forecasts of revised output and inflation data, ignoring the presence of revision processes that are not well behaved may not be a serious drawback in the analysis of monetary policy in this framework. However, the transmission of inflation-push shocks is largely affected by considering data revisions; this is especially true when the nominal stickiness parameter is estimated taking data revision processes into account.
Abstract:
Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the new generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain in efficiency. Other methods like Gibbs sampling use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and model-based template recombination. We show that the application of different variants of inference methods can increase the EDAs’ convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
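As a rough illustration of the idea, and not the paper's algorithm, the sketch below computes the most probable configuration of a chain-structured pairwise Markov network with max-product (Viterbi) message passing and then uses it as a template that is recombined with an existing solution; the chain structure and random potentials are simplifying assumptions.

```python
# Most probable configuration of a chain-structured Markov network plus
# model-based template recombination (illustrative sketch only).
import numpy as np

def most_probable_configuration(unary, pairwise):
    """unary: (n, k) log-potentials; pairwise: list of n-1 (k, k) log-potentials
    for consecutive variables. Returns the MAP assignment of the chain."""
    n, k = unary.shape
    msg = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    msg[0] = unary[0]
    for i in range(1, n):
        # scores[s, j] = best score ending with x_{i-1} = s, x_i = j
        scores = msg[i - 1][:, None] + pairwise[i - 1] + unary[i][None, :]
        back[i] = scores.argmax(axis=0)
        msg[i] = scores.max(axis=0)
    x = np.zeros(n, dtype=int)
    x[-1] = msg[-1].argmax()
    for i in range(n - 1, 0, -1):   # backtrack
        x[i - 1] = back[i][x[i]]
    return x

def template_recombination(template, parent, rng, p=0.5):
    """Combine the MAP template with a parent solution variable by variable."""
    mask = rng.random(len(parent)) < p
    return np.where(mask, template, parent)

rng = np.random.default_rng(0)
n, k = 6, 2
unary = rng.normal(size=(n, k))
pairwise = [rng.normal(size=(k, k)) for _ in range(n - 1)]
map_x = most_probable_configuration(unary, pairwise)
child = template_recombination(map_x, rng.integers(0, k, size=n), rng)
print(map_x, child)
```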
Abstract:
In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition for eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework, and we present a clausal temporal resolution method for PLTL. Finally, we make use of this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages.
The key element in deduction systems for temporal logic is dealing with eventualities and with the hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned; in particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires additional handling of information in order to detect cyclic branches that contain unfulfilled eventualities. Regarding traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled by making use of a kind of inference rules (mainly, invariant-based rules or infinitary rules) that complicates their automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional handling of information in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that temporal logic fails to preserve the classical correspondence between tableaux and sequents.
In this thesis, we first provide a one-pass tableau method TTM that, instead of a graph, obtains a cyclic tree to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. In the case of satisfiable sets of formulas, TTM builds tableaux where each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated to a one-sided sequent calculus called TTC. Since TTM is free from all the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from this kind of structural rules. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these freeness properties and is finitary, sound and complete for PLTL. We thereby show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, started with the seminal paper of M. Fisher, deals with PLTL and requires generating invariants for performing resolution on eventualities.
In this thesis, we present a new approach to resolution for PLTL. The main novelty of our approach is that we do not generate invariants for performing resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. Our resolution method involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability, and hence it gives rise to a new decision procedure for PLTL.
In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property for PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is given up and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that is a combination of the temporal and disjunctive paradigms in Logic Programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads and also in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this high degree of expressiveness.
Since the tableau method presented in this thesis is able to detect that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
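For context, the customary inductive characterization of eventualities that this work departs from is the standard PLTL fixpoint unfolding, e.g. ◊φ ≡ φ ∨ ◯◊φ and φ U ψ ≡ ψ ∨ (φ ∧ ◯(φ U ψ)). Under that unfolding an eventuality can be postponed indefinitely along a branch, which is precisely why traditional tableau and sequent systems need a second pass, extra bookkeeping or invariant-based rules to check that postponed eventualities are eventually fulfilled.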
Abstract:
[EN] Probability models on permutations associate a probability value to each of the permutations on n items. This paper considers two popular probability models, the Mallows model and the Generalized Mallows model. We describe methods for making inference, sampling and learning with such distributions, some of which are novel in the literature. This paper also describes operations on permutations, with special attention to those related to the Kendall and Cayley distances and to the random generation of permutations. These operations are of key importance for the efficient computation of the operations on distributions. These algorithms are implemented in the associated R package. Moreover, the internal code is written in C++.
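A small Python sketch of one standard construction behind such models (purely illustrative, and not the code of the associated R/C++ package): exact sampling from a Mallows model under the Kendall distance, obtained by drawing the entries of the inversion vector independently from truncated geometric distributions.

```python
# Exact sampling from a Mallows model P(sigma) ∝ exp(-theta * d_kendall(sigma, sigma0)),
# using the decomposition of the Kendall distance into independent inversion counts.
import numpy as np

def sample_mallows(theta, sigma0, rng):
    """Draw one permutation from a Mallows model centred at sigma0 (a sequence of items)."""
    n = len(sigma0)
    # The Kendall distance decomposes as a sum of independent V_j in {0, ..., n-j}.
    v = []
    for j in range(1, n):
        support = np.arange(n - j + 1)
        probs = np.exp(-theta * support)
        probs /= probs.sum()                    # truncated geometric distribution
        v.append(rng.choice(support, p=probs))
    # Build a permutation of positions with exactly sum(v) inversions, then
    # relabel through sigma0 so the sample is centred at sigma0.
    remaining = list(range(n))
    pi = [remaining.pop(vj) for vj in v] + remaining
    return [sigma0[i] for i in pi]

rng = np.random.default_rng(0)
print(sample_mallows(theta=1.0, sigma0=[0, 1, 2, 3, 4], rng=rng))
```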
Abstract:
[ES] This paper proposes a fuzzy control model to help filter and select the grant applications that a public institution may receive under a programme promoting the creation and development of new business initiatives. We believe that the use of fuzzy logic offers advantages over ordinary procedures, since we operate in a complex and vague decision setting. Fuzzy control introduces expert knowledge in a very natural way, through linguistic variables and inference processes close to ordinary language, which facilitates decision making in complex situations. Our model considers, on the one hand, the business idea and, on the other, the applicant. The indicators and criteria that the experts consider relevant for the evaluation of the grant are modelled as linguistic variables and treated as antecedents and consequents of a fuzzy inference engine, whose output provides the final assessment of the application. At the end of the paper we solve a simple practical case to illustrate the procedure.
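A minimal sketch of a Mamdani-style fuzzy inference step of the kind described (purely illustrative: the linguistic variables, membership functions and rules below are invented placeholders, not the indicators elicited from the experts in the paper).

```python
# Toy Mamdani fuzzy inference for scoring an application (illustrative only).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def evaluate(idea_quality, applicant_profile):
    """Inputs in [0, 10]; returns a crisp score in [0, 10] via centroid defuzzification."""
    # Fuzzification of the two antecedents (invented linguistic terms).
    idea_good = tri(idea_quality, 5, 10, 15)
    idea_poor = tri(idea_quality, -5, 0, 5)
    person_strong = tri(applicant_profile, 5, 10, 15)
    person_weak = tri(applicant_profile, -5, 0, 5)

    # Rule base: min for AND, max for OR; each rule clips an output fuzzy set.
    support_high = min(idea_good, person_strong)  # IF idea good AND applicant strong THEN high
    support_low = max(idea_poor, person_weak)     # IF idea poor OR applicant weak THEN low

    # Aggregate the clipped output sets and defuzzify by centroid.
    z = np.linspace(0, 10, 101)
    high_set = np.minimum(tri(z, 5, 10, 15), support_high)
    low_set = np.minimum(tri(z, -5, 0, 5), support_low)
    agg = np.maximum(high_set, low_set)
    return float((z * agg).sum() / agg.sum()) if agg.sum() > 0 else 5.0

print(round(evaluate(8.0, 6.0), 2))
```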
Abstract:
Most wearable activity recognition systems assume a predefined sensor deployment that remains unchanged during runtime. However, this assumption does not reflect real-life conditions. During the normal use of such systems, users may place the sensors in a position different from the predefined sensor placement. Also, sensors may move from their original location to a different one due to a loose attachment. Activity recognition systems trained on activity patterns characteristic of a given sensor deployment are therefore likely to fail under sensor displacement. In this work, we innovatively explore the effects of sensor displacement induced both by the intentional misplacement of sensors and by self-placement by the user. The effects of sensor displacement are analyzed for standard activity recognition techniques, as well as for an alternative robust sensor fusion method proposed in a previous work. While classical recognition models show little tolerance to sensor displacement, the proposed method is shown to have notable capabilities to assimilate the changes introduced in the sensor position due to self-placement and provides considerable improvements for large misplacements.
Abstract:
In this work we show the results obtained by applying a Unified Dark Matter (UDM) model with a fast transition to a set of cosmological data. Two different functions to model the transition are tested, and the feasibility of both models is explored using CMB shift data from Planck [1], Galaxy Clustering data from [2] and [3], and Union2.1 SNe Ia [4]. These new models are also statistically compared with the ΛCDM and quiessence models using the Bayes factor computed from the evidence. Bayesian inference does not discard the UDM models in favor of ΛCDM.
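For reference, the Bayes factor used in such comparisons is the ratio of the models' evidences, B_ij = Z_i / Z_j with Z = ∫ L(D|θ, M) π(θ|M) dθ; the statement that Bayesian inference does not discard the UDM models means, roughly, that the evidence ratio in favour of ΛCDM is not decisively larger than one on the usual interpretive scales (e.g. the Jeffreys scale).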