44 results for Information Models
Abstract:
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first to the generalized linear model setting and then by localizing. Distances between individuals are the only predictor information needed to fit these models, so they are applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
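The core idea, that inter-individual distances alone can drive a linear predictor, can be sketched via classical multidimensional scaling: embed the distance matrix into latent Euclidean coordinates and regress on those. This is a minimal numpy illustration under that interpretation, not the dbstats algorithm itself; `db_fit` and all names are hypothetical.

```python
import numpy as np

def db_fit(D, y, tol=1e-8):
    """Illustrative distance-based fit: classical MDS embedding of
    the distance matrix D, then ordinary least squares on the
    latent coordinates (a sketch, not the dbstats method)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ (D ** 2) @ J              # doubly centred Gram matrix
    w, V = np.linalg.eigh(G)
    keep = w > tol                           # positive-eigenvalue directions
    X = V[:, keep] * np.sqrt(w[keep])        # latent Euclidean coordinates
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return Xd @ beta                         # fitted values

# With Euclidean distances this reproduces OLS on the raw coordinates.
rng = np.random.default_rng(0)
Z = rng.normal(size=(30, 2))
y = 1.0 + Z @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=30)
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
fitted = db_fit(D, y)
ols = np.column_stack([np.ones(30), Z])
ref = ols @ np.linalg.lstsq(ols, y, rcond=None)[0]
```

Because only D enters `db_fit`, the same code applies unchanged when distances come from mixed or functional data, which is the point the abstract makes.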
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks that follows the object structure. Matching parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning, due to the simplicity of the graphs involved, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using too myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking the whole hierarchy into account. To preserve tractable inference, we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
Abstract:
Currently, Spain is implementing the Bologna Plan in order to join the European Higher Education Area (EHEA). One of the EHEA's main objectives is to homogenise degree programmes and, in particular, the competences acquired by any student regardless of where they completed their studies. To this end, there are European initiatives (such as the Tuning project) working to define competences for all university degrees. This project presents an analysis of twenty universities from different continents, carried out in order to identify teaching-learning models for non-technical competences. The research additionally focuses on written communication competence. The main data source has been the information provided on the universities' websites, and especially their curricula.
Abstract:
This working paper seeks to establish a new field of research at the crossroads between migration flows and information and communication flows. Several factors make this perspective worth adopting. The central point is that contemporary international migration is embedded in the dynamics of the information society, following common patterns and interconnected dynamics. Consequently, information flows are beginning to be identified as key issues in migration policies. Moreover, there is a lack of empirical knowledge on the design of information networks and the use of information and communication technologies in migration contexts. This working paper also aims to be a source of hypotheses for further research.
Abstract:
In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object which have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which makes it possible to refine the frame-wise initial solution and also to recover the non-rigid 3D model. We show results on synthetic and real data that demonstrate the performance of the proposed method even when there is no rigid motion in the original sequence.
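The frame-wise shape registration step can be illustrated with a standard least-squares rigid alignment (Kabsch/Procrustes via SVD). This is only a sketch of that single step; the paper additionally wraps alignment in RANSAC to reject deforming points, and `rigid_align` is a hypothetical name.

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t minimising ||R @ P + t - Q||
    for 3 x N point clouds P and Q (Kabsch/Procrustes)."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(1)
P = rng.normal(size=(3, 20))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [-2.0], [0.5]])
Q = R_true @ P + t_true
R, t = rigid_align(P, Q)
```

In the paper's setting, only the points selected as rigid would be fed to such an alignment before the bundle-adjustment refinement.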
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, which are necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in the so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal generated by them passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to formulations involving stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck SDE and its variations.
A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor, but it could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of the technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done at a high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
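A mean-reversion rule of the kind described is F_t-measurable by construction when the signal at time t uses only the rolling window up to t. The following toy pairs-trading z-score signal illustrates this in Python (the thesis itself used MATLAB); the function name, window and threshold are purely illustrative choices.

```python
import numpy as np

def pairs_signal(spread, window=20, entry=2.0):
    """Toy market-neutral signal: short the spread when its rolling
    z-score exceeds +entry, long it below -entry. Only past data up
    to t enters the decision at t (the signal is F_t-measurable)."""
    n = len(spread)
    pos = np.zeros(n)
    for t in range(window, n):
        hist = spread[t - window:t]
        sd = hist.std()
        z = (spread[t] - hist.mean()) / sd if sd > 0 else 0.0
        if z > entry:
            pos[t] = -1.0        # spread too wide: bet on reversion down
        elif z < -entry:
            pos[t] = 1.0         # spread too narrow: bet on reversion up
    return pos

# Discretised Ornstein-Uhlenbeck-style spread as synthetic input.
rng = np.random.default_rng(2)
spread = np.zeros(500)
for t in range(1, 500):
    spread[t] = 0.9 * spread[t - 1] + rng.normal(scale=0.1)
pos = pairs_signal(spread)
```

A realistic backtest would additionally recalibrate `window` and `entry` over time, which is the calibration sensitivity the abstract highlights.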
Abstract:
Background: Single nucleotide polymorphisms (SNPs) are the most frequent type of sequence variation between individuals, and represent a promising tool for finding genetic determinants of complex diseases and understanding differences in drug response. In this regard, it is of particular interest to study the effect of non-synonymous SNPs in the context of biological networks such as cell signalling pathways. UniProt provides curated information about the functional and phenotypic effects of sequence variation, including SNPs, as well as about mutations of protein sequences. However, no strategy has been developed to integrate this information with biological networks, with the ultimate goal of studying the impact of the functional effect of SNPs on the structure and dynamics of biological networks. Results: First, we identified the different challenges posed by the integration of the phenotypic effect of sequence variants and mutations with biological networks. Second, we developed a strategy for the combination of data extracted from public resources, such as UniProt, NCBI dbSNP, Reactome and BioModels. We generated attribute files containing phenotypic and genotypic annotations for the nodes of biological networks, which can be imported into network visualization tools such as Cytoscape. These resources allow the mapping and visualization of mutations and natural variations of human proteins and their phenotypic effect on biological networks (e.g. signalling pathways, protein-protein interaction networks, dynamic models). Finally, an example of the use of sequence variation data in the dynamics of a network model is presented. Conclusion: In this paper we present a general strategy for the integration of pathway and sequence variation data for visualization, analysis and modelling purposes, including the study of the functional impact of protein sequence variations on the dynamics of signalling pathways.
This is of particular interest when the SNP or mutation is known to be associated with disease. We expect that this approach will help in the study of the functional impact of disease-associated SNPs on the behaviour of cell signalling pathways, which ultimately will lead to a better understanding of the mechanisms underlying complex diseases.
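An attribute file of the kind described can be sketched as a tab-separated node table keyed on protein accessions, which network tools such as Cytoscape can import. The records, column names and key choice below are hypothetical illustrations, not output of the actual pipeline.

```python
import csv
import io

# Hypothetical variant records (accession, variant, phenotype); in
# practice these would be parsed from resources such as UniProt and
# NCBI dbSNP rather than hard-coded.
variants = [
    ("P04637", "p.Arg175His", "Li-Fraumeni syndrome"),
    ("P00533", "p.Leu858Arg", "lung adenocarcinoma"),
]

buf = io.StringIO()
w = csv.writer(buf, delimiter="\t", lineterminator="\n")
# First column holds the node identifier the network tool joins on;
# the remaining columns become node attributes.
w.writerow(["accession", "variant", "phenotype"])
for acc, var, phen in variants:
    w.writerow([acc, var, phen])
table = buf.getvalue()
```

Such a table attaches one annotation row per node, so mapping variant phenotypes onto a pathway reduces to a join on the accession column.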
Abstract:
The purpose of this paper is to examine (1) some of the models commonly used to represent fading, and (2) the information-theoretic metrics most commonly used to evaluate performance over those models. We raise the question of whether these models and metrics remain adequate in light of the advances that wireless systems have undergone over the last two decades. Weaknesses are pointed out, and ideas on possible fixes are put forth.
Abstract:
This paper proposes a method to conduct inference in panel VAR models with cross-unit interdependencies and time variation in the coefficients. The approach can be used to obtain multi-unit forecasts and leading indicators and to conduct policy analysis in multi-unit setups. The framework of analysis is Bayesian, and MCMC methods are used to estimate the posterior distribution of the features of interest. The model is reparametrized to resemble an observable index model, and specification searches are discussed. As an example, we construct leading indicators for inflation and GDP growth in the Euro area using G-7 information.
Abstract:
Protectionism enjoys surprising popular support, in spite of deadweight losses. At the same time, trade barriers appear to decline with public information about protection. This paper develops an electoral model with heterogeneously informed voters which explains both facts and predicts the pattern of trade policy across industries. In the model, each agent endogenously acquires more information about his sector of employment. As a result, voters support protectionism, because they learn more about the trade barriers that help them as producers than about those that hurt them as consumers. In equilibrium, asymmetric information induces a universal protectionist bias. The structure of protection is Pareto inefficient, in contrast to existing models. The model predicts a Dracula effect: trade policy for a sector is less protectionist when there is more public information about it. Using a measure of newspaper coverage across industries, I find that cross-sector evidence from the United States bears out my theoretical predictions.
Abstract:
Two-stage game models of information acquisition in stochastic oligopolies require the unrealistic assumption that firms observe the precision of information chosen by their competitors before determining quantities. This paper analyzes secret information acquisition as a one-stage game. Relative to the two-stage game, firms are shown to acquire less information. Policy implications based on the two-stage game therefore yield too high taxes or too low subsidies for research activities. For the case of heterogeneous duopoly, it is shown that comparative statics results partly depend on the observability assumption.
Abstract:
We show that unconditionally efficient returns neither achieve the maximum unconditional Sharpe ratio nor display zero unconditional Jensen's alphas when returns are predictable. Next, we define a new type of efficient returns that is characterized by those unconditional properties. We also study a different type of efficient returns that is rationalized by standard mean-variance preferences and motivates new Sharpe ratios and Jensen's alphas. We revisit the testable implications of asset pricing models from the perspective of the three sets of efficient returns. We also revisit the empirical evidence on the conditional variants of the CAPM and the Fama-French model from a portfolio perspective.
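For reference, the unconditional Sharpe ratio and Jensen's alpha discussed here can be computed directly from excess-return series. The following sketch uses synthetic data with known alpha and beta; all numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic excess returns: market factor plus an asset with known
# (hypothetical) alpha and beta, sampled over 2500 periods.
mkt = rng.normal(0.0005, 0.01, size=2500)
alpha_true, beta_true = 0.0002, 1.2
r = alpha_true + beta_true * mkt + rng.normal(0.0, 0.005, size=2500)

# Unconditional Sharpe ratio of the asset's excess returns.
sharpe = r.mean() / r.std()

# Jensen's alpha from the unconditional market regression
# r = alpha + beta * mkt + eps.
cov = np.cov(r, mkt)
beta_hat = cov[0, 1] / cov[1, 1]
alpha_hat = r.mean() - beta_hat * mkt.mean()
```

The paper's point is precisely that when returns are predictable, conditional efficiency does not pin down these unconditional quantities.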
Abstract:
Until recently, farm management made little use of accounting, and agriculture has been largely excluded from the scope of accounting standards. This article examines the current use of accounting in agriculture and points to the need to establish accounting standards for agriculture. Empirical evidence shows that accounting can make a significant contribution to agricultural management and farm viability, and could also be important for other agents involved in agricultural decision making. Existing literature on failure-prediction models and farm viability prediction studies provides the starting point for our research, in which two dichotomous logit models were applied to subsamples of viable and unviable farms in Catalonia, Spain. The first model considered only non-financial variables, while the other also considered financial ones. When accounting variables were added to the model, a significant reduction in deviance was observed.
Abstract:
We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment each subject receives a noisy signal about the true payoffs. This game has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (and thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. The behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that subjects do not apply the iterated deletion of dominated strategies through "a priori" reasoning. Instead, they adapt to the responses of other players. Thus, the length of the learning phase clearly varies across the different signals. We also test behavior in a game without uncertainty as a benchmark case. The game with uncertainty is inspired by the "global" games of Carlsson and Van Damme (1993).
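The iterative deletion of strictly dominated strategies invoked here can be sketched for pure strategies in a two-player matrix game. This simplified routine (hypothetical names, prisoner's-dilemma payoffs) only handles domination by pure strategies, not the full global-game analysis of the paper.

```python
import numpy as np

def iterated_deletion(A, B):
    """Iteratively delete strictly dominated pure strategies.
    A[i, j]: row player's payoff; B[i, j]: column player's payoff.
    Returns the surviving row and column strategy indices."""
    rows = list(range(A.shape[0]))
    cols = list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for i in list(rows):   # row i is dominated by some other row j
            if any(all(A[j, c] > A[i, c] for c in cols)
                   for j in rows if j != i):
                rows.remove(i)
                changed = True
        for c in list(cols):   # column c is dominated by some other column d
            if any(all(B[r, d] > B[r, c] for r in rows)
                   for d in cols if d != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Prisoner's dilemma: defection (index 1) survives for both players.
A = np.array([[3, 0],
              [5, 1]])
B = A.T
rows, cols = iterated_deletion(A, B)
```

In the experiment's game the same logic, extended to the signal-contingent strategy space, singles out the unique surviving profile.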
Abstract:
We use subjects' actions in modified dictator games to perform a within-subject classification of individuals into four different types of interdependent preferences: Selfish, Social Welfare maximizers, Inequity Averse and Competitive. We elicit beliefs about other subjects' actions in the same modified dictator games to test how much of the existing heterogeneity in others' actions is known to subjects. We find that subjects with different interdependent preferences in fact have different beliefs about others' actions. In particular, Selfish individuals cannot conceive of others being non-Selfish, while Social Welfare maximizers are closest to the actual distribution of others' actions. We finally provide subjects with information on other subjects' actions and re-classify individuals according to their (new) actions in the same modified dictator games. We find that social information does not affect Selfish individuals, but that individuals with interdependent preferences are more likely to change their behavior and tend to behave more selfishly.