992 results for Incomplete model


Relevance:

60.00%

Publisher:

Abstract:

This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov Chain Monte Carlo (MCMC) algorithms. This is done by considering the 1995 Ebola outbreak in the Democratic Republic of Congo, former Zaïre, as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs in order to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over other existing methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate the parameters using the LSQ and MCMC methods. We sample the parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the posterior chain, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to account for the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters in the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function differ; (2) the model error covariance matrix is different from zero, i.e. considerable stochasticity is introduced into the Ebola model; this corresponds to the situation where the model is known not to be exact. As a result, we obtain parameter posteriors with larger variances. Consequently, the model predictions show larger uncertainties, in accordance with the assumption of an incomplete model.
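
A minimal sketch of the deterministic step described above, assuming an illustrative SEIR parameterisation and synthetic onset data (none of the numbers below come from the thesis): the likelihood is the Gaussian sum-of-squared-residuals criterion and the posterior is explored with a plain random-walk Metropolis sampler, from which the basic reproduction number R0 = beta/gamma can be computed.

    import numpy as np
    from scipy.integrate import odeint

    # Hypothetical sketch: SEIR model fitted to onset counts with a Gaussian
    # (sum-of-squared-residuals) likelihood and a random-walk Metropolis sampler.
    # Parameter values, priors and data are illustrative, not the thesis's.

    def seir(x, t, beta, kappa, gamma):
        s, e, i, r = x
        n = s + e + i + r
        return [-beta * s * i / n,
                beta * s * i / n - kappa * e,
                kappa * e - gamma * i,
                gamma * i]

    def model_onsets(theta, t, x0):
        beta, kappa, gamma = theta
        sol = odeint(seir, x0, t, args=(beta, kappa, gamma))
        return kappa * sol[:, 1]                    # onset rate ~ kappa * E(t)

    def log_post(theta, t, x0, data, sigma2=25.0):
        if np.any(theta <= 0):                      # flat prior on positive values
            return -np.inf
        resid = data - model_onsets(theta, t, x0)
        return -0.5 * np.sum(resid ** 2) / sigma2   # SSQ (Gaussian) likelihood

    # Synthetic "observed" onset data, for illustration only.
    t = np.arange(0.0, 120.0, 1.0)
    x0 = [500_000.0, 0.0, 3.0, 0.0]
    true = np.array([0.33, 0.19, 0.16])
    data = model_onsets(true, t, x0) + np.random.default_rng(1).normal(0, 2, t.size)

    # Random-walk Metropolis.
    theta = np.array([0.3, 0.2, 0.2])
    lp = log_post(theta, t, x0, data)
    chain = []
    rng = np.random.default_rng(2)
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.01, 3)
        lp_prop = log_post(prop, t, x0, data)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain)
    beta, kappa, gamma = chain[2500:].mean(axis=0)
    print("posterior mean R0 ~", beta / gamma)      # basic reproduction number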

Relevance:

60.00%

Publisher:

Abstract:

In this thesis, I am interested in the partial identification of treatment effects in various discrete choice models with endogenous treatments. Treatment effect models aim to measure the impact of certain interventions on certain outcome variables. The type of treatment and the outcome variable can be defined in general terms so as to apply to many different contexts. There are many examples of treatments in labour economics, health, education, or industrial organization, such as job training programmes, medical procedures, investment in research and development, or union membership. The decision to be treated or not is generally not random but is based on individual choices and preferences. In such a context, measuring the treatment effect becomes problematic because selection bias must be taken into account. Several parametric versions of these models have been widely studied in the literature; however, in models with discrete variation, the parametrization is an important source of identification. In such a context, it is therefore difficult to know whether the empirical results obtained are driven by the data or by the parametrization imposed on the model. Since the parametric forms proposed for these types of models generally have no economic foundation, in this thesis I propose to study the nonparametric version of these models, which allows more robust economic policies to be proposed. The main difficulty in the nonparametric identification of structural functions is that the suggested structure does not identify a unique data-generating process, either because of the presence of multiple equilibria or because of constraints on the observables. In such situations, traditional identification methods become inapplicable, hence the recent development of the literature on identification in incomplete models. This literature pays particular attention to identifying the set of structural functions of interest that are compatible with the true distribution of the data; this set is called the identified set. Accordingly, in the first chapter of the thesis, I characterize the identified set for treatment effects in the binary triangular model. In the second chapter, I consider the discrete Roy model and characterize the identified set for treatment effects in a model of sector choice when the outcome variable is discrete. The sector-selection assumptions include the simple, extended, and generalized Roy selection models. In the last chapter, I consider a binary dependent variable model with several dimensions of heterogeneity, such as entry or participation games, and characterize the identified set for the firms' profit functions in a two-firm game with complete information. In all chapters, the identified sets of the functions of interest are written in the form of bounds and are simple enough to be estimated with existing inference methods.
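
As a simple illustration of the bounding logic behind an identified set (this is the textbook Manski worst-case bound for a binary outcome, not the triangular, Roy, or entry-game models analysed in the thesis), the average treatment effect can only be bounded when the counterfactual outcomes are left unrestricted:

    import numpy as np

    # Manski-style worst-case bounds on the average treatment effect
    # E[Y(1)] - E[Y(0)] for a binary outcome Y and binary treatment D.
    # Synthetic data; illustrates partial identification in general only.

    rng = np.random.default_rng(0)
    d = rng.integers(0, 2, size=1000)           # observed treatment
    y = rng.integers(0, 2, size=1000)           # observed binary outcome

    p_d1 = d.mean()
    p_y1_d1 = y[d == 1].mean()                  # P(Y=1 | D=1)
    p_y1_d0 = y[d == 0].mean()                  # P(Y=1 | D=0)

    # E[Y(1)] is only bounded because Y(1) is unobserved (but in [0,1]) when D=0.
    ey1_lo = p_y1_d1 * p_d1
    ey1_hi = p_y1_d1 * p_d1 + (1 - p_d1)
    # Symmetric bounds for E[Y(0)].
    ey0_lo = p_y1_d0 * (1 - p_d1)
    ey0_hi = p_y1_d0 * (1 - p_d1) + p_d1

    ate_bounds = (ey1_lo - ey0_hi, ey1_hi - ey0_lo)
    print("worst-case ATE bounds:", ate_bounds)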

Relevance:

60.00%

Publisher:

Abstract:

The object of this doctoral thesis is the analysis of the political and administrative purpose given to the reform process of a vital sector of State powers within the framework of delegative democracy, namely the administration of Justice. It also asks whether State reform in a diminished or non-liberal environment increases or improves the conditions of democracy in a given situation, based on the constitutional “what should be”, or whether what occurs is instead a process of “seizure” of the functions of the State, which becomes an institutional risk. Finally, we examine the real and effective existence of a process of horizontal accountability through the use of institutional resources, which would evidence the existence of an incomplete model of democracy. This analysis involves the relationship between two institutions within public administration: State reform, as an act of change in State structure intended to improve qualitatively the outcomes and outputs of public policies and, in sum, to make the system work better. This, as will be examined later, was the case in Latin America, where reform came as the State's response to three crises: the fiscal crisis, the crisis of State intervention, and the crisis of the bureaucratic form of administration. Within that scheme, this thesis examines the present state of the art of this process in public administration science to show that, in delegative democracy, instruments of this type disregard the constitutive elements of democracy and serve, especially in critical areas of the administration, to allow Power to dismiss Law. This research seeks to contribute to an area seldom analysed in public administration doctrine under the light of the theory of law: the connection between the prior conditions, or principal inputs, of a reform process in a democracy and, on the other hand, the effects of introducing a reform within changing models of democracy and new concepts of the rule of law. A review of the literature on State reform shows that no previous work has approached the prior conditions that the political system must satisfy for a reform to operate while respecting fundamental rights as an object of the procedure. Furthermore, no analysis has been found of the structural change of strategic areas of State services in terms of its effect on democratic exercise and its outcome in an open society...

Relevance:

40.00%

Publisher:

Abstract:

We consider a Bertrand duopoly model with unknown costs. Each firm aims to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unitary production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production cost being dominated by the effect of the firm's own expected production cost.
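
A minimal numerical sketch of such an equilibrium, assuming linear differentiated-products demand q_i = a - b*p_i + d*p_j and two equally likely cost types per firm (a functional form and numbers chosen purely for illustration, not taken from the paper): each cost type's best reply depends on the rival only through its expected price, so the expected prices solve a 2x2 linear system.

    import numpy as np

    # Bayesian Nash equilibrium sketch for a Bertrand duopoly with privately
    # known costs. Demand q_i = a - b*p_i + d*p_j is an assumed illustrative
    # form. Each firm's unit cost is low or high with probability 1/2.

    a, b, d = 10.0, 1.0, 0.5
    costs = {1: np.array([1.0, 3.0]),   # firm 1: low/high unit cost
             2: np.array([2.0, 4.0])}   # firm 2: low/high unit cost
    ec = {i: c.mean() for i, c in costs.items()}

    # Best reply of a type with cost c against the rival's expected price Ep_j:
    #   p_i(c) = (a + d*Ep_j + b*c) / (2b)
    # Averaging over own types gives two linear equations in (Ep1, Ep2).
    A = np.array([[2 * b, -d], [-d, 2 * b]])
    rhs = np.array([a + b * ec[1], a + b * ec[2]])
    ep1, ep2 = np.linalg.solve(A, rhs)

    p1 = (a + d * ep2 + b * costs[1]) / (2 * b)   # firm 1's price, by type
    p2 = (a + d * ep1 + b * costs[2]) / (2 * b)
    print("expected prices:", ep1, ep2)
    print("type-contingent prices:", p1, p2)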

Relevance:

40.00%

Publisher:

Abstract:

We consider inference in randomized studies in which repeatedly measured outcomes may be informatively missing due to drop-out. In this setting, it is well known that full-data estimands are not identified unless unverifiable assumptions are imposed. We assume a non-future-dependence model for the drop-out mechanism and posit an exponential tilt model that links the non-identifiable and identifiable distributions. This model is indexed by non-identified parameters, which are assumed to have an informative prior distribution elicited from subject-matter experts. Under this model, full-data estimands are shown to be expressible as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by and applied to data from the Breast Cancer Prevention Trial.
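
A minimal sketch of the exponential tilt idea on a made-up discrete outcome distribution (support, probabilities, and the sensitivity parameter alpha are illustrative, not the paper's specification): the non-identified outcome distribution among dropouts is the observed distribution reweighted by exp(alpha*y), and a full-data mean follows by mixing over the dropout probability.

    import numpy as np

    # Exponential tilt sketch: the unobserved outcome distribution among
    # dropouts is taken proportional to exp(alpha * y) times the observed
    # distribution. All numbers below are illustrative only.

    y = np.array([0.0, 1.0, 2.0, 3.0])          # outcome support
    p_obs = np.array([0.4, 0.3, 0.2, 0.1])      # distribution among completers

    def tilt(p, y, alpha):
        w = p * np.exp(alpha * y)
        return w / w.sum()

    alpha = 0.8                                 # non-identified sensitivity parameter
    p_miss = tilt(p_obs, y, alpha)

    # Full-data mean mixes completers and dropouts; here 30% drop out.
    pi_drop = 0.3
    full_mean = (1 - pi_drop) * (p_obs @ y) + pi_drop * (p_miss @ y)
    print("tilted dropout distribution:", p_miss)
    print("full-data mean under alpha=0.8:", full_mean)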

Relevance:

30.00%

Publisher:

Abstract:

When the data consist of certain attributes measured on the same set of items in different situations, they can be described as a three-mode three-way array. A mixture likelihood approach can be implemented to cluster the items (i.e., one of the modes) on the basis of both of the other modes simultaneously (i.e., the attributes measured in the different situations). In this paper, it is shown that this approach can be extended to handle three-mode three-way arrays where some of the data values are missing at random in the sense of Little and Rubin (1987). The methodology is illustrated by clustering the genotypes in a three-way soybean data set in which various attributes were measured on genotypes grown in several environments.
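
A rough sketch of the general idea on synthetic data (not the paper's algorithm): the three-way array is flattened over the two non-clustered modes and a normal mixture is fitted to the items; missing entries are crudely mean-imputed here purely to keep the sketch short, whereas the paper handles them inside the mixture likelihood under the missing-at-random assumption.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Cluster the items (first mode) of a synthetic three-way array on the
    # basis of the other two modes jointly, e.g. genotypes x attributes x
    # environments. Illustrative only.

    rng = np.random.default_rng(0)
    n_items, n_attr, n_env = 60, 4, 6
    X = rng.normal(size=(n_items, n_attr, n_env))
    X[: n_items // 2] += 1.5                     # two latent groups
    X[rng.random(X.shape) < 0.05] = np.nan       # 5% missing at random

    flat = X.reshape(n_items, n_attr * n_env)    # items x (attributes*situations)
    col_means = np.nanmean(flat, axis=0)
    flat = np.where(np.isnan(flat), col_means, flat)   # crude imputation

    gm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
    labels = gm.fit_predict(flat)
    print("cluster sizes:", np.bincount(labels))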

Relevance:

30.00%

Publisher:

Abstract:

The IEEE 802.15.4 protocol is able to support time-sensitive Wireless Sensor Network (WSN) applications thanks to its Guaranteed Time Slot (GTS) Medium Access Control mechanism. Recently, several analytical and simulation models of the IEEE 802.15.4 protocol have been proposed. Nevertheless, currently available simulation models for this protocol are both inaccurate and incomplete; in particular, they do not support the GTS mechanism. In this paper, we propose an accurate OPNET simulation model, with a focus on the implementation of the GTS mechanism. The work is motivated by the need to validate the previously proposed Network Calculus based analytical model of the GTS mechanism and to compare the performance evaluation of the protocol as given by the two alternative approaches. Therefore, in this paper we contribute an accurate OPNET model of the IEEE 802.15.4 protocol. Additionally, and probably more importantly, based on the simulation model we propose a novel methodology for tuning the protocol parameters so that better protocol performance can be guaranteed, both in maximizing the throughput of the allocated GTS and in minimizing frame delay.
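
As a back-of-the-envelope illustration of the parameter trade-offs involved (a simplified calculation that ignores inter-frame spacing, acknowledgements and header overheads, and is not the tuning methodology proposed in the paper), the slot and superframe durations in beacon-enabled IEEE 802.15.4 at 250 kbit/s follow from the superframe order SO and beacon order BO, and from them a crude upper bound on GTS throughput can be computed:

    # Crude upper bound on the throughput of an n-slot GTS in beacon-enabled
    # IEEE 802.15.4 (2.4 GHz PHY, 250 kbit/s). Ignores IFS, acknowledgements
    # and headers, so it only illustrates the SO/BO/slot trade-off.

    SYMBOL_RATE = 62_500            # symbols/s at 2.4 GHz
    BIT_RATE = 250_000              # bit/s
    BASE_SUPERFRAME = 960           # symbols (aBaseSuperframeDuration)
    NUM_SLOTS = 16                  # aNumSuperframeSlots

    def gts_throughput_bound(n_slots, SO, BO):
        sd = BASE_SUPERFRAME * 2 ** SO / SYMBOL_RATE   # superframe duration (s)
        bi = BASE_SUPERFRAME * 2 ** BO / SYMBOL_RATE   # beacon interval (s)
        gts_time = n_slots * sd / NUM_SLOTS            # GTS airtime per beacon interval
        return gts_time * BIT_RATE / bi                # bit/s averaged over the interval

    for SO in range(0, 4):
        print(f"SO={SO}, BO=3, 2 slots -> {gts_throughput_bound(2, SO, 3):,.0f} bit/s")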

Relevance:

30.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Double Degree in Economics and International Business from the NOVA – School of Business and Economics and Insper Instituto de Ensino e Pesquisa

Relevance:

30.00%

Publisher:

Abstract:

We analyze a continuous-time bilateral double auction in the presence of two-sided incomplete information and a smallest money unit. A distinguishing feature of our model is that intermediate concessions are not observable by the adversary: they are only communicated to a passive auctioneer. An alternative interpretation is that of mediated bargaining. We show that an equilibrium using only the extreme agreements always exists and derive the necessary and sufficient condition for the existence of (perfect Bayesian) equilibria that yield intermediate agreements. For the symmetric case with a uniform type distribution we numerically compute the equilibria. We find that the equilibrium which does not use compromise agreements is the least efficient; among the remaining equilibria, however, social welfare is lower the larger the number of compromise agreements used.
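
For context, a minimal numerical sketch of the closely related static benchmark, the Chatterjee and Samuelson (1983) sealed-bid double auction with k = 1/2 and uniform types on [0,1] (this is not the continuous-time mediated mechanism of the paper): in its linear equilibrium trade occurs only when the buyer's value exceeds the seller's cost by at least 1/4, so some efficient trades are lost.

    import numpy as np

    # Linear equilibrium of the Chatterjee-Samuelson (1983) k = 1/2 double
    # auction with buyer value v ~ U[0,1] and seller cost c ~ U[0,1].
    # Serves only to illustrate the efficiency loss from two-sided
    # incomplete information in a benchmark static mechanism.

    rng = np.random.default_rng(0)
    v = rng.uniform(size=1_000_000)
    c = rng.uniform(size=1_000_000)

    bid = (2 / 3) * v + 1 / 12          # buyer's equilibrium bid
    ask = (2 / 3) * c + 1 / 4           # seller's equilibrium ask
    trade = bid >= ask                  # equivalent to v >= c + 1/4

    realized = np.where(trade, v - c, 0.0).mean()
    first_best = np.maximum(v - c, 0.0).mean()
    print("probability of trade:", trade.mean())          # ~ 9/32
    print("share of first-best gains realized:", realized / first_best)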

Relevance:

30.00%

Publisher:

Abstract:

We show that a flex-price two-sector open economy DSGE model can explain the poor degree of international risk sharing and the exchange rate disconnect. We use a suite of model evaluation measures and examine the role of (i) traded and non-traded sectors; (ii) financial market incompleteness; (iii) preference shocks; (iv) deviations from the UIP condition for exchange rates; and (v) creditor status in net foreign assets. We find that there is a good case for both traded and non-traded productivity shocks, as well as UIP deviations, in explaining the puzzles.
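
For reference, the UIP condition referred to in (iv), written with a deviation term of the kind the model allows (generic notation, not the paper's):

    i_t - i_t^* = \mathbb{E}_t[\, s_{t+1} - s_t \,] + \phi_t

where i_t and i_t^* are the home and foreign nominal interest rates, s_t is the log nominal exchange rate, and \phi_t is the deviation from UIP (a risk-premium or UIP shock).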

Relevance:

30.00%

Publisher:

Abstract:

This paper provides a new benchmark for the analysis of the international diversification puzzle in a tractable new open economy macroeconomic model. Building on Cole and Obstfeld (1991) and Heathcote and Perri (2009), this model specifies an equilibrium model of perfect risk sharing in incomplete markets, with endogenous portfolios and number of varieties. Equity home bias may not be a puzzle but a perfectly optimal allocation for hedging risk. In contrast to previous work, the model shows that: (i) optimal international portfolio diversification is driven by home bias in capital goods, independently of home bias in consumption, and by the share of income accruing to labour. The model explains reasonably well the recent patterns of portfolio allocations in developed economies; and (ii) optimal portfolio shares are independent of market dynamics.

Relevance:

30.00%

Publisher:

Abstract:

We study the quantitative properties of a dynamic general equilibrium model in which agents face both idiosyncratic and aggregate income risk, state-dependent borrowing constraints that bind in some but not all periods, and incomplete markets. Optimal individual consumption-savings plans and equilibrium asset prices are computed under various assumptions about income uncertainty. We then investigate whether our general equilibrium model with incomplete markets replicates two empirical observations: the high correlation between individual consumption and individual income, and the equity premium puzzle. We find that, when the driving processes are calibrated according to data on wage income in different sectors of the US economy, the results move in the direction of explaining these observations, but the model falls short of explaining the observed correlations quantitatively. If the incomes of agents are assumed to be independent of each other, the observations can be explained quantitatively.
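
A generic statement of the household problem in this class of incomplete-markets models (illustrative notation, not the paper's calibration): each agent chooses consumption c_t and asset holdings a_{t+1} to solve

    \max_{\{c_t,\, a_{t+1}\}} \ \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t u(c_t)
    \quad \text{s.t.} \quad c_t + a_{t+1} = y_t + (1 + r_t)\, a_t, \qquad a_{t+1} \ge -\underline{a}(s_t),

where y_t carries both idiosyncratic and aggregate income risk and \underline{a}(s_t) is the state-dependent borrowing limit that binds in some but not all periods.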

Relevance:

30.00%

Publisher:

Abstract:

Despite the advancement of phylogenetic methods to estimate speciation and extinction rates, their power can be limited under variable rates, in particular for clades with high extinction rates and a small number of extant species. Fossil data provide a powerful alternative source of information to investigate diversification processes. Here, we present PyRate, a computer program to estimate speciation and extinction rates and their temporal dynamics from fossil occurrence data. The rates are inferred in a Bayesian framework and are comparable to those estimated from phylogenetic trees. We describe how PyRate can be used to explore different models of diversification. In addition to the diversification rates, it provides estimates of the parameters of the preservation process (fossilization and sampling) and of the times of speciation and extinction of each species in the data set. Moreover, we develop a new birth-death model to correlate the variation of speciation/extinction rates with changes in a continuous trait. Finally, we demonstrate the use of Bayes factors for model selection and show how the posterior estimates of a PyRate analysis can be used to generate calibration densities for Bayesian molecular clock analysis. PyRate is an open-source command-line Python program available at http://sourceforge.net/projects/pyrate/.
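
A minimal constant-rate sketch of the two likelihood components such an analysis combines, a birth-death likelihood for the speciation/extinction times and a Poisson preservation likelihood for the fossil counts (illustrative only; PyRate additionally models rate variation, conditions on sampling, and samples the speciation and extinction times themselves within the MCMC):

    import numpy as np
    from scipy.special import gammaln

    def bd_loglik(ts, te, lam, mu):
        """ts, te: speciation and extinction ages (ts > te; te = 0 for extant)."""
        lifespans = ts - te
        n_spec = len(ts)              # each sampled species implies a speciation event
        n_ext = np.sum(te > 0)        # extinct species imply an extinction event
        return n_spec * np.log(lam) + n_ext * np.log(mu) - (lam + mu) * lifespans.sum()

    def preservation_loglik(ts, te, counts, q):
        """counts: fossil occurrences per species; q: preservation rate."""
        dt = ts - te
        return np.sum(counts * np.log(q * dt) - q * dt - gammaln(counts + 1))

    # Toy data: 4 species with origination/extinction ages (Ma) and fossil counts.
    ts = np.array([20.0, 15.0, 10.0, 8.0])
    te = np.array([12.0, 0.0, 2.0, 0.0])
    counts = np.array([3, 5, 2, 4])

    print("BD log-likelihood:", bd_loglik(ts, te, lam=0.1, mu=0.05))
    print("preservation log-likelihood:", preservation_loglik(ts, te, counts, q=0.4))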

Relevance:

30.00%

Publisher:

Abstract:

We construct and calibrate a general equilibrium business cycle model with unemployment and precautionary saving. We compute the cost of business cycles and locate the optimum in a set of simple cyclical fiscal policies. Our economy exhibits productivity shocks, giving firms an incentive to hire more when productivity is high. However, business cycles make workers' income riskier, both by increasing the unconditional probability of unusually long unemployment spells, and by making wages more variable, and therefore they decrease social welfare by around one-fourth or one-third of 1% of consumption. Optimal fiscal policy offsets the cycle, holding unemployment benefits constant but varying the tax rate procyclically to smooth hiring. By running a deficit of 4% to 5% of output in recessions, the government eliminates half the variation in the unemployment rate, most of the variation in workers' aggregate consumption, and most of the welfare cost of business cycles.