51 results for Rational didactic reconstruction
Abstract:
By identifying types whose low-order beliefs about the state of nature coincide up to level l_i, we obtain quotient type spaces that are typically smaller than the original ones, preserve basic topological properties, and allow standard equilibrium analysis even under bounded reasoning. Our Bayesian Nash (l_i, l_{-i})-equilibria capture players' inability to distinguish types belonging to the same equivalence class. The case with uncertainty about the vector of levels (l_i, l_{-i}) is also analyzed. Two examples illustrate the constructions.
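A minimal formal sketch of the identification step, with notation (the k-th order belief map h_i^k and type space T_i) assumed for illustration rather than taken from the paper:

\[
t_i \sim_{l_i} t_i' \iff h_i^k(t_i) = h_i^k(t_i') \quad \text{for all } k \le l_i,
\]

so that the quotient type space is \(T_i/\sim_{l_i}\), and a player reasoning up to level \(l_i\) treats all types in the same equivalence class identically.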
Abstract:
Through an experiment, we investigate how the level of rationality relates to concerns for equality and efficiency. Subjects perform dictator games and a guessing game. More rational subjects are not more frequently of the self-regarding type. When comparing subjects within the same degree of rationality, self-regarding subjects show more strategic sophistication than other subjects.
Abstract:
We propose an evolutionary model of a credit market. We show that the economy exhibits credit cycles. The model predicts dynamics that are consistent with some evidence about the Great Depression. Real shocks trigger episodes of credit crunch, which are observed in the process of adjustment towards the post-shock equilibrium.
Abstract:
The defaults of Philip II have attained mythical status as the origin of sovereign debt crises. We reassess the fiscal position of Habsburg Castile, deriving comprehensive estimates of revenue, debt, and expenditure from new archival data. The king's debts were sustainable. Primary surpluses were large and rising. Debt-to-revenue ratios remained broadly unchanged during Philip's reign. Castilian finances in the sixteenth century compare favorably with those of other early modern fiscal states at the height of their imperial ambitions, including Britain. The defaults of Philip II therefore reflected short-term liquidity crises and were not a sign of unsustainable debts.
Abstract:
Many experiments have shown that human subjects do not necessarily behave in line with game-theoretic assumptions and solution concepts. The reasons for this non-conformity are multiple. In this paper we study whether deviations from game theory occur because subjects are rational but doubt that others are rational as well, or because subjects are, in general, boundedly rational themselves. To distinguish these two hypotheses, we study behavior in repeated 2-person and many-person Beauty Contest games, which are strategically different from one another. We analyze four different treatments and observe that convergence toward equilibrium is driven by learning through information about the other players' choices and by adaptation, rather than by self-initiated rational reasoning.
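As a rough illustration of adaptation-driven convergence in this class of games, the following sketch simulates a repeated many-person p-beauty contest in which players best-respond to the previous round's observed mean; all parameter values are illustrative, and nothing here reproduces the paper's treatments.

import numpy as np

def simulate_beauty_contest(n_players=10, p=2/3, rounds=10, seed=0):
    """Repeated p-beauty contest: the target is p times the mean guess.

    Players adapt by best-responding to the mean observed in the previous
    round; with p < 1 this drives guesses toward the equilibrium of 0.
    (Illustrative dynamics only, not the paper's experimental design.)
    """
    rng = np.random.default_rng(seed)
    guesses = rng.uniform(0, 100, n_players)   # arbitrary round-1 guesses
    means = [guesses.mean()]
    for _ in range(rounds - 1):
        target = p * guesses.mean()            # last round's winning number
        guesses = np.full(n_players, target)   # adaptive best response
        means.append(guesses.mean())
    return means

print(simulate_beauty_contest())   # the mean shrinks by a factor p each round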
Abstract:
In this article we show that in the presence of trading constraints, such as short-sale constraints, the standard definition of a Rational Expectations Equilibrium allows for equilibrium prices that reveal information unknown to any active trader in the market. We propose a new definition of the Rational Expectations Equilibrium that incorporates a measurability condition stronger than measurability with respect to the join of the agents' information sets, and we give an example of non-existence of equilibrium. The example is robust to perturbations of the data of the economy and to the introduction of new assets.
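Schematically, and with notation assumed rather than taken from the paper: writing \(\mathcal{F}_i\) for agent \(i\)'s information, the standard definition only requires the price to satisfy

\[
\sigma(p) \;\subseteq\; \bigvee_{i \in I} \mathcal{F}_i ,
\]

whereas a stronger condition in the spirit of the abstract would require measurability with respect to the pooled information of the agents who remain active under the trading constraints, e.g. \(\sigma(p) \subseteq \bigvee_{i \in A} \mathcal{F}_i\) with \(A \subseteq I\) the set of active traders. This rendering is a reading of the abstract, not the paper's exact definition.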
Abstract:
I examine the impact of alternative monetary policy rules on a rational asset price bubble, through the lens of an overlapping generations model with nominal rigidities. A systematic increase in interest rates in response to a growing bubble is shown to enhance the fluctuations in the latter, through its positive effect on bubble growth. The optimal monetary policy seeks to strike a balance between stabilization of the bubble and stabilization of aggregate demand. The paper's main findings call into question the theoretical foundations of the case for "leaning against the wind" monetary policies.
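The mechanism can be sketched with the standard no-arbitrage condition for a rational bubble (notation assumed): the bubble component \(Q_t\) must earn the interest rate,

\[
\mathbb{E}_t\left[Q_{t+1}\right] = (1 + r_t)\, Q_t ,
\]

so a rule that raises \(r_t\) when \(Q_t\) grows mechanically raises the bubble's expected growth rate rather than deflating it.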
Abstract:
We propose an algorithm that extracts image features that are consistent with the 3D structure of the scene. The features can be robustly tracked over multiple views and serve as vertices of planar patches that suitably represent scene surfaces, while reducing the redundancy in the description of 3D shapes. In other words, the extracted features offer good tracking properties while providing the basis for 3D reconstruction with minimum model complexity.
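The abstract does not spell out the algorithm, so as a generic illustration of the kind of detect-and-track pipeline involved, here is a sketch built from standard OpenCV primitives (Shi-Tomasi corners plus pyramidal Lucas-Kanade flow); it is a stand-in, not the authors' structure-aware feature selection.

import cv2

def detect_and_track(frame_prev, frame_next):
    """Detect corner-like features and track them into the next view.

    Generic pipeline for illustration; the paper's method additionally
    selects features consistent with the scene's 3D structure.
    """
    gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    gray_next = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    # Corners with good tracking properties (Shi-Tomasi criterion)
    pts = cv2.goodFeaturesToTrack(gray_prev, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    # Track them into the next frame; 'status' flags successful tracks
    pts_next, status, _err = cv2.calcOpticalFlowPyrLK(gray_prev, gray_next,
                                                      pts, None)
    ok = status.ravel() == 1
    return pts[ok], pts_next[ok]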
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e., an image that, if truly the source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure, in the absence of a priori knowledge about the image configuration, is a uniform field.
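For concreteness, here is a compact sketch of the Poisson MLE-EM iteration that FMAPE accelerates and regularizes (array names are illustrative; the entropy prior and the exponent-based acceleration are not reproduced here):

import numpy as np

def mlem(A, counts, n_iter=50):
    """Maximum-likelihood EM for Poisson emission data.

    A      : (n_bins, n_pixels) system matrix; A[d, j] is the probability
             that an emission in pixel j is detected in bin d.
    counts : measured counts per detector bin.
    The iteration starts from a uniform field, which the abstract argues
    is the only correct initialization absent prior knowledge.
    (Illustrative sketch of plain MLE-EM, not FMAPE itself.)
    """
    img = np.full(A.shape[1], counts.sum() / A.shape[1])  # uniform start
    sens = A.sum(axis=0)                                  # per-pixel sensitivity
    for _ in range(n_iter):
        proj = A @ img                                    # forward projection
        ratio = counts / np.maximum(proj, 1e-12)
        img *= (A.T @ ratio) / np.maximum(sens, 1e-12)    # multiplicative update
    return img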
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals that result from algorithms using a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a maximum likelihood estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, which has decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope and can also be applied to ground-based images.
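Schematically, the space-variant hyperparameter replaces the single entropy weight by one weight per segmented region; with notation assumed (pixel values \(f_j\), default model \(m_j\), region map \(r(j)\), per-region weights \(\alpha_r\)), the log-prior takes roughly the form

\[
\log P(f) \;\propto\; -\sum_{j} \alpha_{r(j)}\, f_j \log \frac{f_j}{m_j},
\]

so regions with different statistical characteristics can be regularized to different degrees. This is a reading of the abstract, not the paper's exact functional form.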
Abstract:
This paper describes the development and applications of a super-resolution method known as Super-Resolution Variable-Pixel Linear Reconstruction. The algorithm works by combining different lower-resolution images in order to obtain a higher-resolution image. We show that it can make significant spatial resolution improvements to satellite images of the Earth's surface, allowing recognition of objects with sizes approaching the limiting spatial resolution of the lower-resolution images. The algorithm is based on the Variable-Pixel Linear Reconstruction algorithm developed by Fruchter and Hook, a well-known method in astronomy but one never before used for Earth remote sensing purposes. The algorithm preserves photometry, can weight input images according to the statistical significance of each pixel, and removes the effect of geometric distortion on both image shape and photometry. In this paper, we describe its development for remote sensing purposes, show that the algorithm remains useful for images as different from astronomical images as remote sensing ones, and show applications to: 1) a set of simulated multispectral images obtained from a real QuickBird image; and 2) a set of real multispectral Landsat Enhanced Thematic Mapper Plus (ETM+) images. These examples show that the algorithm provides a substantial improvement in limiting spatial resolution for both simulated and real data sets without significantly altering the multispectral content of the input low-resolution images, without amplifying the noise, and with very few artifacts.
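As a toy rendering of the shift-and-add idea behind Variable-Pixel Linear Reconstruction, the sketch below deposits each low-resolution pixel's flux onto a finer output grid according to a known sub-pixel offset and averages by accumulated weight; the real drizzle algorithm additionally shrinks the input pixels and corrects geometric distortion, which this toy omits.

import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Combine low-res frames with known sub-pixel offsets on a finer grid.

    frames  : list of (H, W) low-resolution images.
    offsets : list of (dy, dx) shifts in low-res pixel units.
    scale   : upsampling factor of the output grid.
    Toy stand-in for drizzle: no pixel shrinking, no distortion correction.
    """
    H, W = frames[0].shape
    out = np.zeros((H * scale, W * scale))
    weight = np.zeros_like(out)
    for img, (dy, dx) in zip(frames, offsets):
        for y in range(H):
            for x in range(W):
                oy = int(round((y + dy) * scale))   # position on fine grid
                ox = int(round((x + dx) * scale))
                if 0 <= oy < out.shape[0] and 0 <= ox < out.shape[1]:
                    out[oy, ox] += img[y, x]
                    weight[oy, ox] += 1.0
    return out / np.maximum(weight, 1.0)            # weighted average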
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions about the statistics of neuronal firing or neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning on the global mean activity, we improve the performance of our method. This allows us to restrict the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies, such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimate of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated (though not extreme) level of clustering compared to a random graph, and can be markedly non-local.
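For concreteness, a minimal plug-in estimator of first-order transfer entropy between two binarized activity traces is sketched below; the paper's improved method additionally conditions on the network's global mean activity, which this sketch omits.

import numpy as np

def transfer_entropy(x, y):
    """Transfer entropy TE(Y -> X) in bits for binary series, Markov order 1:
    how much y_t improves the prediction of x_{t+1} beyond x_t alone.
    (Plain plug-in estimator; not the paper's generalized version.)
    """
    x_next, x_now, y_now = x[1:], x[:-1], y[:-1]
    te = 0.0
    for a in (0, 1):            # x_{t+1}
        for b in (0, 1):        # x_t
            for c in (0, 1):    # y_t
                p_abc = ((x_next == a) & (x_now == b) & (y_now == c)).mean()
                if p_abc == 0.0:
                    continue
                p_bc = ((x_now == b) & (y_now == c)).mean()
                p_ab = ((x_next == a) & (x_now == b)).mean()
                p_b = (x_now == b).mean()
                # Compare p(a | b, c) against p(a | b)
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te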