49 results for Inference mechanisms
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The paper presents a competence-based instructional design system and a way to personalize navigation through the course content. The navigation aid tool builds on the competence graph and the student model, which includes elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is best prepared to study. We use fuzzy set theory for dealing with uncertainty. The marks of the assessment tests are transformed into linguistic terms and used for assigning values to linguistic variables. For each competence, the level of difficulty and the level of knowing its prerequisites are calculated from the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence regarding its level of recommendation.
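A minimal sketch of the kind of fuzzy IF-THEN recommendation step described above. The triangular membership functions, the linguistic terms and the rule base are illustrative assumptions, not the ones used in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(mark):
    """Map a normalized assessment mark in [0, 1] to linguistic terms."""
    return {
        "low":    tri(mark, -0.01, 0.0, 0.5),
        "medium": tri(mark, 0.0, 0.5, 1.0),
        "high":   tri(mark, 0.5, 1.0, 1.01),
    }

def recommend(difficulty_mark, prerequisite_mark):
    """Apply illustrative IF-THEN rules and return a crisp recommendation category."""
    difficulty = fuzzify(difficulty_mark)
    prereq = fuzzify(prerequisite_mark)
    # Rule strength = min of antecedent memberships (Mamdani-style AND).
    rules = {
        "recommended":     min(difficulty["low"], prereq["high"]),
        "neutral":         min(difficulty["medium"], prereq["medium"]),
        "not_recommended": min(difficulty["high"], prereq["low"]),
    }
    # Defuzzify by picking the category with the strongest firing rule.
    return max(rules, key=rules.get)

print(recommend(difficulty_mark=0.2, prerequisite_mark=0.9))  # -> "recommended"
```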
Abstract:
This paper examines the governance of Spanish Banks around two main issues. First, does a poor economic performance activate those governance interventions that favor the removal of executive directors and the merger of non-performing banks? And second, does the relationship between governance intervention and economic performance vary with the ownership form of the bank? Our results show that a bad performance does activate governance mechanisms in banks, although for the case of Savings Banks intervention is confined to a merger or acquisition. Nevertheless, the distinct ownership structure of Savings Banks does not fully protect non-performing banks from disappearing. Product-market competition compensates for those weak internal governance mechanisms that result from an ownership form which gives voice to several stakeholder groups.
Abstract:
Ma (1996) studied the random order mechanism, a matching mechanism suggested by Roth and Vande Vate (1990) for marriage markets. By means of an example he showed that the random order mechanism does not always reach all stable matchings. Although Ma's (1996) result is true, we show that the probability distribution he presented - and therefore the proof of his Claim 2 - is not correct. The mistake in the calculations by Ma (1996) is due to the fact that even though the example looks very symmetric, some of the calculations are not as "symmetric."
Abstract:
For the many-to-one matching model in which firms have substitutable and quota q-separable preferences over subsets of workers we show that the workers-optimal stable mechanism is group strategy-proof for the workers. In order to prove this result, we also show that under this domain of preferences (which contains the domain of responsive preferences of the college admissions problem) the workers-optimal stable matching is weakly Pareto optimal for the workers and the Blocking Lemma holds as well. We exhibit an example showing that none of these three results remain true if the preferences of firms are substitutable but not quota q-separable.
Abstract:
This paper studies behavior in experiments with a linear voluntary contributions mechanism for public goods conducted in Japan, the Netherlands, Spain and the USA. The same experimental design was used in the four countries. Our 'contribution function' design allows us to obtain a view of subjects' behavior from two complementary points of view. It yields information about situations where, in purely pecuniary terms, it is a dominant strategy to contribute all the endowment and about situations where it is a dominant strategy to contribute nothing. Our results show, first, that differences in behavior across countries are minor. We find that when people play "the same game" they behave similarly. Second, for all four countries our data are inconsistent with the explanation that subjects contribute only out of confusion. A common cooperative motivation is needed to explain the data.
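A minimal sketch of the payoff structure in a linear voluntary contributions mechanism, showing how the marginal per-capita return (MPCR) determines which contribution level is dominant in purely pecuniary terms. The group size, endowment and MPCR values are illustrative assumptions, not the parameters used in the experiments.

```python
def vcm_payoff(own_contribution, others_contributions, endowment, mpcr):
    """Payoff = private money kept + MPCR times the total provision of the public good."""
    total = own_contribution + sum(others_contributions)
    return (endowment - own_contribution) + mpcr * total

endowment = 10
others = [5, 5, 5]  # contributions of the other three group members

# If the MPCR is below 1, each token kept returns more than each token
# contributed, so contributing nothing is the dominant strategy in pecuniary terms.
print(vcm_payoff(0, others, endowment, mpcr=0.5))   # 17.5
print(vcm_payoff(10, others, endowment, mpcr=0.5))  # 12.5

# If the MPCR exceeds 1, every token contributed returns more than it costs,
# so contributing the whole endowment is the dominant strategy.
print(vcm_payoff(0, others, endowment, mpcr=1.5))   # 32.5
print(vcm_payoff(10, others, endowment, mpcr=1.5))  # 37.5
```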
Abstract:
The account of European Union (EU) decision making proposed in this paper views it as a bargaining process during which actors shift their policy positions with a view to reaching agreements on controversial issues.
Abstract:
We present experimental and theoretical analyses of data requirements for haplotype inference algorithms. Our experiments include a broad range of problem sizes under two standard models of tree distribution and were designed to yield statistically robust results despite the size of the sample space. Our results validate Gusfield's conjecture that a population size of n log n is required to give (with high probability) sufficient information to deduce the n haplotypes and their complete evolutionary history. The experimental results motivated us to complement this finding with theoretical bounds on the population size. We also analyze the population size required to deduce some fixed fraction of the evolutionary history of a set of n haplotypes and establish linear bounds on the required sample size. These linear bounds are also shown theoretically.
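A simplified illustration of why sample sizes proportional to n log n arise naturally: counting how many random draws from n equally likely haplotypes are needed before every haplotype has been observed at least once (a coupon-collector analogue). This is not the coalescent/tree-based setting studied in the paper; it only illustrates the scaling.

```python
import math
import random

def draws_to_see_all(n, rng):
    """Number of uniform draws from {0, ..., n-1} until all n values have appeared."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)
for n in (10, 50, 100, 500):
    avg = sum(draws_to_see_all(n, rng) for _ in range(200)) / 200
    print(f"n={n:4d}  average draws={avg:8.1f}  n*ln(n)={n * math.log(n):8.1f}")
```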
Abstract:
Consider a model with parameter phi, and an auxiliary model with parameter theta. Let phi be randomly sampled from a given density over the known parameter space. Monte Carlo methods can be used to draw simulated data and compute the corresponding estimate of theta, say theta_tilde. A large set of tuples (phi, theta_tilde) can be generated in this manner. Nonparametric methods may be used to fit the function E(phi|theta_tilde=a) using these tuples. It is proposed to estimate phi using the fitted E(phi|theta_tilde=theta_hat), where theta_hat is the auxiliary estimate computed from the real sample data. Under certain assumptions, this estimator is consistent and asymptotically normally distributed. Monte Carlo results for dynamic panel data and vector autoregressions show that this estimator can have very attractive small sample properties. Confidence intervals can be constructed using the quantiles of the phi for which theta_tilde is close to theta_hat. Such confidence intervals are found to have very accurate coverage.
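A minimal sketch of the simulation-based procedure described above, for a toy AR(1) model y_t = phi*y_{t-1} + e_t. The auxiliary statistic theta_tilde is the OLS autoregression coefficient, and the conditional expectation E(phi|theta_tilde) is fitted with a simple kernel-weighted (Nadaraya-Watson) smoother; the model, bandwidth and sample sizes are illustrative assumptions, not the applications in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100  # length of each simulated series

def simulate_theta(phi, rng):
    """Simulate one dataset under phi and return the auxiliary estimate theta_tilde."""
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + e[t]
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)  # OLS AR(1) coefficient

# Step 1: draw phi from a density over the parameter space and simulate theta_tilde.
phis = rng.uniform(-0.9, 0.9, size=5000)
thetas = np.array([simulate_theta(p, rng) for p in phis])

# Step 2: nonparametric (kernel-weighted) fit of E(phi | theta_tilde = a).
def fitted_phi(a, bandwidth=0.02):
    w = np.exp(-0.5 * ((thetas - a) / bandwidth) ** 2)
    return np.sum(w * phis) / np.sum(w)

# Step 3: compute the auxiliary estimate on the "real" data and plug it in.
true_phi = 0.5
theta_hat = simulate_theta(true_phi, rng)  # stands in for the real-sample estimate
print("theta_hat =", round(theta_hat, 3), " estimated phi =", round(fitted_phi(theta_hat), 3))
```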
Abstract:
The aim of this research is to provide evidence on the sources of agglomeration economies for the Spanish case. Of all the approaches taken in the literature to measure agglomeration economies, we analyze them through the location decisions of manufacturing firms. The recent literature has highlighted that an analysis based on the localization / urbanization dichotomy (relationships within the same sector) is not sufficient to understand agglomeration economies. Relationships between different sectors, however, do prove significant when examining why firms belonging to different sectors locate next to one another. With this in mind, we try to identify which relationships between different sectors can explain coagglomeration. To do so, we focus on inter-sector relationships defined on the basis of Marshall's agglomeration mechanisms, namely labor market pooling, input sharing and knowledge spillovers. We measure labor market pooling as the extent to which two sectors employ the same workers (occupational classification). With the second Marshallian mechanism, input sharing, we capture the extent to which two sectors have a buyer / seller relationship. Finally, for knowledge spillovers we consider two sectors that use the same technologies. In order to capture all the effects of the agglomeration mechanisms in Spain, this research works with two geographic scales, municipalities and local labor markets. The existing literature has never agreed on the geographic scale at which the Marshallian mechanisms work best, so we cover all the potential geographic units.
Abstract:
We propose a smooth multibidding mechanism for environments where a group of agents have to choose one out of several projects (possibly with the help of a social planner). Our proposal is related to the multibidding mechanism (Pérez-Castrillo and Wettstein, 2002) but it is "smoother" in the sense that small variations in an agent's bids do not lead to dramatic changes in the probability of selecting a project. This mechanism is shown to possess several interesting properties. First, unlike in Pérez-Castrillo and Wettstein (2002), the equilibrium outcome is unique. Second, it ensures an equal sharing of the surplus that it induces. Finally, it enables reaching an outcome as close to efficiency as is desired.
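A toy illustration of the "smoothness" idea described above: under a winner-take-all rule a tiny change in bids can flip the selected project, whereas under a smooth rule selection probabilities change only slightly. The softmax rule below is a stand-in chosen for illustration only; it is not the multibidding mechanism defined in the paper.

```python
import math

def winner_take_all(aggregate_bids):
    """Select the highest-bid project with probability 1."""
    probs = [0.0] * len(aggregate_bids)
    probs[aggregate_bids.index(max(aggregate_bids))] = 1.0
    return probs

def smooth_rule(aggregate_bids, temperature=1.0):
    """Illustrative smooth selection rule: probabilities vary continuously with bids."""
    weights = [math.exp(b / temperature) for b in aggregate_bids]
    total = sum(weights)
    return [w / total for w in weights]

bids_a = [10.0, 10.1, 5.0]   # aggregate bids for three projects
bids_b = [10.1, 10.0, 5.0]   # a tiny perturbation of the first two bids

print(winner_take_all(bids_a), winner_take_all(bids_b))   # selection flips completely
print([round(p, 3) for p in smooth_rule(bids_a)])
print([round(p, 3) for p in smooth_rule(bids_b)])         # probabilities barely move
```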
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
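A minimal sketch of the simulated maximum indirect likelihood (MIL) idea: approximate the unknown-form density of the statistic Zn by simulation under each candidate parameter and maximize it at the observed Zn. The model (normal data with unknown mean), the statistic (the sample mean), the kernel bandwidth and the grid search are illustrative assumptions, not the applications in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

def statistic(sample):
    return sample.mean()  # Zn: a simple one-dimensional statistic

def simulated_log_likelihood(phi, zn_obs, n_sim=2000, bandwidth=0.05):
    """Kernel density estimate of the density of Zn at zn_obs under parameter phi."""
    sims = np.array([statistic(rng.normal(phi, 1.0, size=n)) for _ in range(n_sim)])
    kernel = np.exp(-0.5 * ((sims - zn_obs) / bandwidth) ** 2)
    return np.log(kernel.mean() / (bandwidth * np.sqrt(2 * np.pi)) + 1e-300)

# "Real" data generated under an unknown mean, summarized by the statistic Zn.
data = rng.normal(0.7, 1.0, size=n)
zn_obs = statistic(data)

# Simulated MIL estimate: maximize the simulated likelihood over a grid of phi.
grid = np.linspace(0.0, 1.5, 61)
loglik = [simulated_log_likelihood(p, zn_obs) for p in grid]
print("Zn =", round(zn_obs, 3), " MIL estimate of phi =", round(grid[int(np.argmax(loglik))], 3))
```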
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above detection limit and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
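A minimal sketch of the workflow described above: impute values below the detection limit from a distribution fitted to the readings above it, map a two-part composition to real space with the isometric logratio (ilr) transform, and bootstrap a distributional parameter. The two-part composition, the lognormal imputation model and the bootstrapped statistic (the mean of the ilr coordinate) are illustrative assumptions, not the case study in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
detection_limit = 0.005

# Concentrations (fraction of the total composition); nondetects recorded as NaN.
x = np.exp(rng.normal(np.log(0.01), 0.8, size=200))
observed = np.where(x >= detection_limit, x, np.nan)

def impute_nondetects(values, dl, rng):
    """Impute censored values from a lognormal fitted to readings above the detection limit."""
    above = values[~np.isnan(values)]
    mu, sigma = np.log(above).mean(), np.log(above).std()
    out = values.copy()
    for i in np.flatnonzero(np.isnan(values)):
        draw = np.inf
        while draw >= dl:  # draw from the fitted distribution, restricted below the detection limit
            draw = np.exp(rng.normal(mu, sigma))
        out[i] = draw
    return out

def ilr_two_part(p):
    """Isometric logratio coordinate of the two-part composition (p, 1 - p)."""
    return np.log(p / (1.0 - p)) / np.sqrt(2.0)

filled = impute_nondetects(observed, detection_limit, rng)
z = ilr_two_part(filled)

# Bootstrap the mean of the ilr coordinate.
boot_means = [rng.choice(z, size=z.size, replace=True).mean() for _ in range(1000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"ilr mean = {z.mean():.3f}, 95% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```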
Abstract:
First: A continuous-time version of Kyle's model (Kyle 1985), known as Back's model (Back 1992), of asset pricing with asymmetric information is studied. A larger class of price processes and of noise traders' processes is considered. The price process, as in Kyle's model, is allowed to depend on the path of the market order. The noise traders' process is an inhomogeneous Lévy process. Solutions are found via the Hamilton-Jacobi-Bellman equations. When the insider is risk-neutral, the price pressure is constant and there is no equilibrium in the presence of jumps. If the insider is risk-averse, there is no equilibrium in the presence of either jumps or drifts. The case in which the release time of information is unknown is also analysed. A general relation is established between the problem of finding an equilibrium and the problem of enlargement of filtrations. A random announcement time is also considered. In that case the market is not fully efficient, and an equilibrium exists if the sensitivity of prices with respect to the global demand decreases in time according to the distribution of the random time. Second: Power variations. The asymptotic behavior of the power variation of processes of the form integral_0^t u(s-) dS(s) is considered, where S is an alpha-stable process with index of stability 0 < alpha < 2 and the integral is an Itô integral. Stable convergence of the corresponding fluctuations is established. These results provide statistical tools to infer the process u from discrete observations. Third: A bond market is studied where short rates r(t) evolve as an integral of g(t-s)sigma(s) with respect to W(ds), where g and sigma are deterministic and W is the stochastic Wiener measure. Processes of this type are particular cases of ambit processes and are in general not semimartingales.
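A minimal sketch of the realized power variation used above as an inference tool: summing |X_{t_i} - X_{t_{i-1}}|^p over the increments of a discretely observed process X_t = integral_0^t u(s-) dS(s). Here the driver S is taken to be Brownian motion (the alpha = 2 boundary case) purely for illustration; the integrand u, the power p and the sample size are assumptions, not the setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
dt = 1.0 / n
t = np.arange(n) * dt

u = 1.0 + 0.5 * np.sin(2 * np.pi * t)          # integrand u(s), evaluated on the grid
dS = rng.standard_normal(n) * np.sqrt(dt)      # Brownian increments as the driver
dX = u * dS                                    # increments of X_t = int_0^t u(s-) dS(s)

def power_variation(increments, p):
    """Realized p-th power variation of the discretely observed process."""
    return np.sum(np.abs(increments) ** p)

# For p = 2 and a Brownian driver, the realized power variation approximates the
# integrated squared integrand int_0^1 u(s)^2 ds, which is the kind of functional
# of u that such statistics allow one to infer from discrete observations.
print("realized quadratic variation:", round(power_variation(dX, 2.0), 4))
print("integral of u(s)^2 ds       :", round(np.sum(u ** 2) * dt, 4))
```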
Abstract:
In this technical report, we approach one of the practical aspects of representing users' interests from their tagging activity, namely the categorization of tags into high-level categories of interest. The reason is that representing user profiles on the basis of the myriad of tags available on the Web is unfeasible from various practical perspectives, mainly concerning the unavailability of data to reliably and accurately measure interests across such a fine-grained categorization and, should the data be available, its overwhelming computational intractability. Motivated by this, our study presents the results of a categorization process whereby a collection of tags posted at BibSonomy (http://www.bibsonomy.org) are classified into 5 categories of interest. The methodology used to conduct this categorization is in line with other works in the field.
Abstract:
Pseudomonas fluorescens EPS62e was selected during a screening procedure for its high efficacy in controlling infections by Erwinia amylovora, the causal agent of fire blight disease, on different plant materials. In field trials carried out in pear trees during bloom, EPS62e colonized flowers up to the carrying capacity, providing a moderate efficacy of fire-blight control. The putative mechanisms of EPS62e antagonism against E. amylovora were studied. EPS62e did not produce antimicrobial compounds described in P. fluorescens species and only developed antagonism in King's B medium, where it produced siderophores. Interaction experiments in culture plate wells including a membrane filter, which physically separated the cultures, confirmed that inhibition of E. amylovora requires cell-to-cell contact. The spectrum of nutrient assimilation indicated that EPS62e used significantly more or different carbon sources than the pathogen. The maximum growth rate and affinity for nutrients in immature fruit extract were higher in EPS62e than in E. amylovora, but the cell yield was similar. The fitness of EPS62e and E. amylovora was studied upon inoculation in immature pear fruit wounds and hypanthia of intact flowers under controlled-environment conditions. When inoculated separately, EPS62e grew faster in flowers, whereas E. amylovora grew faster in fruit wounds because of its rapid spread to adjacent tissues. However, in preventive inoculations of EPS62e, subsequent growth of EPS101 was significantly inhibited. It is concluded that cell-to-cell interference as well as differences in growth potential and the spectrum and efficiency of nutrient use are mechanisms of antagonism of EPS62e against E. amylovora.