914 results for "Analyses errors"
Abstract:
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, by keeping a memory of past errors, the agent asymptotically reaches an acceptable solution under mild assumptions. Moreover, one can take advantage of big errors to learn faster.
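A minimal sketch of the neighborhood-rejection idea described above, assuming a one-dimensional grid of actions, a loss threshold for what counts as acceptable, and a rejection radius that grows with the size of the error; every name and parameter here is illustrative and not taken from the paper.

```python
import random

def learn_by_errors(loss, actions, tol=0.05, radius=1, max_steps=1000, seed=0):
    """Illustrative neighborhood-rejection learner (not the paper's exact model).

    Actions are tried at random; when an action's loss exceeds `tol` it counts
    as an error, and that action plus its neighbors (within a radius scaled by
    the size of the error) are removed from future consideration.
    """
    rng = random.Random(seed)
    candidates = list(actions)
    for _ in range(max_steps):
        if not candidates:
            return None  # everything was rejected; tolerance too strict
        a = rng.choice(candidates)
        err = loss(a)
        if err <= tol:
            return a  # acceptable solution reached
        # Bigger errors reject a wider neighborhood -> faster learning.
        reject_radius = radius * max(1, int(err / tol))
        candidates = [b for b in candidates if abs(b - a) > reject_radius]
    return None

# Toy usage: locate an action close to the unknown optimum 7 on the grid 0..100.
print(learn_by_errors(lambda a: abs(a - 7) / 100, range(101), tol=0.02))
```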
Abstract:
This paper studies behavior in experiments with a linear voluntary contributions mechanism for public goods conducted in Japan, the Netherlands, Spain, and the USA. The same experimental design was used in all four countries. Our 'contribution function' design allows us to examine subjects' behavior from two complementary points of view. It yields information about situations where, in purely pecuniary terms, it is a dominant strategy to contribute the entire endowment, and about situations where it is a dominant strategy to contribute nothing. Our results show, first, that differences in behavior across countries are minor. We find that when people play "the same game" they behave similarly. Second, for all four countries our data are inconsistent with the explanation that subjects contribute only out of confusion. A common cooperative motivation is needed to explain the data.
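As a reminder of the two situations referred to above (written in standard linear-VCM notation of our own; the experiment's actual parameters are not stated in the abstract), subject \(i\)'s pecuniary payoff is

\[
\pi_i = e - c_i + m \sum_j c_j,
\]

where \(e\) is the endowment, \(c_i\) the own contribution, and \(m\) the marginal per-capita return. Since \(\partial \pi_i / \partial c_i = m - 1\), contributing the whole endowment is a dominant strategy whenever \(m > 1\), and contributing nothing is dominant whenever \(m < 1\).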
Abstract:
We present experimental and theoretical analyses of the data requirements of haplotype inference algorithms. Our experiments cover a broad range of problem sizes under two standard models of tree distribution and were designed to yield statistically robust results despite the size of the sample space. Our results validate Gusfield's conjecture that a population size of n log n is required to give (with high probability) sufficient information to deduce the n haplotypes and their complete evolutionary history. We complement these experimental findings with theoretical bounds on the population size. We also analyze the population size required to deduce some fixed fraction of the evolutionary history of a set of n haplotypes and establish linear bounds on the required sample size; these linear bounds are also shown theoretically.
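For a rough sense of the stated n log n scaling (the abstract gives neither the constants nor the base of the logarithm, so the natural-log figures below are purely illustrative):

\[
n = 50:\ n \ln n \approx 196, \qquad n = 100:\ n \ln n \approx 461, \qquad n = 500:\ n \ln n \approx 3107.
\]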
Abstract:
MicroRNAs (miRNAs) have been shown to play important roles in both brain development and the regulation of adult neural cell functions. However, a systematic analysis of brain miRNA functions has been hindered by a lack of comprehensive information regarding the distribution of miRNAs in neuronal versus glial cells. To address this issue, we performed microarray analyses of miRNA expression in the four principal cell types of the CNS (neurons, astrocytes, oligodendrocytes, and microglia) using primary cultures from postnatal day 1 rat cortex. These analyses revealed that neural miRNA expression is highly cell-type specific, with 116 of the 351 miRNAs examined being differentially expressed fivefold or more across the four cell types. We also demonstrate that individual neuron-enriched or neuron-diminished miRNAs had a significant impact on the specification of neuronal phenotype: overexpression of the neuron-enriched miRNAs miR-376a and miR-434 increased the differentiation of neural stem cells into neurons, whereas the opposite effect was observed for the glia-enriched miRNAs miR-223, miR-146a, miR-19, and miR-32. In addition, glia-enriched miRNAs were shown to inhibit aberrant glial expression of neuronal proteins and phenotypes, as exemplified by miR-146a, which inhibited neuroligin 1-dependent synaptogenesis. This study identifies new nervous system functions of specific miRNAs, reveals the global extent to which the brain may use differential miRNA expression to regulate neural cell-type-specific phenotypes, and provides an important data resource that defines the compartmentalization of brain miRNAs across different cell types.
Abstract:
Eukaryotic cells generate energy in the form of ATP through a network of mitochondrial complexes and electron carriers known as the oxidative phosphorylation system. In mammals, mitochondrial complex I (CI) is the largest component of this system, comprising 45 different subunits encoded by mitochondrial and nuclear DNA. Humans diagnosed with mutations in the gene NDUFS4, encoding a nuclear DNA-encoded subunit of CI (NADH dehydrogenase ubiquinone Fe-S protein 4), typically suffer from Leigh syndrome, a neurodegenerative disease with onset in infancy or early childhood. Mitochondria from NDUFS4 patients usually lack detectable NDUFS4 protein and show a CI stability/assembly defect. Here, we describe a recessive mouse phenotype caused by the insertion of a transposable element into Ndufs4, identified by a novel combined linkage and expression analysis. Designated Ndufs4(fky), the mutation leads to aberrant transcript splicing and absence of NDUFS4 protein in all tested tissues of homozygous mice. Physical and behavioral symptoms displayed by Ndufs4(fky/fky) mice include temporary fur loss, growth retardation, unsteady gait, and abnormal body posture when suspended by the tail. Analysis of CI in Ndufs4(fky/fky) mice using blue native PAGE revealed the presence of a faster-migrating crippled complex. This crippled CI was shown to lack subunits of the "N assembly module", which contains the NADH binding site, but contained two assembly factors not present in intact CI. Metabolomic analysis of the blood by tandem mass spectrometry showed increased hydroxyacylcarnitine species, implying that the CI defect leads to an imbalanced NADH/NAD(+) ratio that inhibits mitochondrial fatty acid β-oxidation.
Abstract:
Among the largest resources for biological sequence data are the expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting such regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error-correction approach to more complex HMMs. We show that our method maintains performance in detecting coding sequences.
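As a concrete reminder of the machinery involved, here is a generic Viterbi decoder for a small HMM; it is not the paper's mRNA model (which additionally encodes codon usage bias, sequencing-error states, and start/stop-site submodels), and all states and probabilities below are made up for illustration.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely state path for an HMM, working in log space."""
    # Initialization with the first observation.
    dp = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = [{}]
    # Recursion over the remaining observations.
    for t in range(1, len(obs)):
        dp.append({})
        back.append({})
        for s in states:
            best_prev, best_score = max(
                ((p, dp[t - 1][p] + log_trans[p][s]) for p in states),
                key=lambda x: x[1],
            )
            dp[t][s] = best_score + log_emit[s][obs[t]]
            back[t][s] = best_prev
    # Backtrack from the best final state.
    last = max(states, key=lambda s: dp[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy two-state model: 'coding' prefers G/C, 'noncoding' prefers A/T.
lg = math.log
states = ["coding", "noncoding"]
log_start = {"coding": lg(0.5), "noncoding": lg(0.5)}
log_trans = {s: {t: lg(0.9) if s == t else lg(0.1) for t in states} for s in states}
log_emit = {
    "coding": {b: lg(0.35) if b in "GC" else lg(0.15) for b in "ACGT"},
    "noncoding": {b: lg(0.15) if b in "GC" else lg(0.35) for b in "ACGT"},
}
print(viterbi("ATATGCGCGCATAT", states, log_start, log_trans, log_emit))
```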
Abstract:
Generalized multiresolution analyses (GMRAs) are increasing sequences of subspaces of a Hilbert space H that fail to be multiresolution analyses (MRAs) in the sense of wavelet theory because the core subspace does not have an orthonormal basis generated by a fixed scaling function. Previous authors have studied a multiplicity function m which, loosely speaking, measures the failure of the GMRA to be an MRA. When the Hilbert space H is L^2(R^n), the possible multiplicity functions have been characterized by Baggett and Merrill. Here we start with a function m satisfying a consistency condition which is known to be necessary, and build a GMRA in an abstract Hilbert space with multiplicity function m.
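For readers outside wavelet theory, the standard dyadic MRA axioms on \(L^2(\mathbb{R})\) are recalled below as background (our notation; the paper works in an abstract Hilbert space with a general dilation):

\[
V_j \subset V_{j+1}, \qquad \overline{\bigcup_j V_j} = L^2(\mathbb{R}), \qquad \bigcap_j V_j = \{0\}, \qquad f \in V_j \iff f(2\,\cdot\,) \in V_{j+1},
\]

together with the requirement that the core space \(V_0\) be invariant under integer translations and admit a scaling function \(\varphi\) whose translates \(\{\varphi(\cdot - k)\}_{k \in \mathbb{Z}}\) form an orthonormal basis of \(V_0\). A GMRA retains all of these conditions except the last: \(V_0\) is merely translation invariant, and the multiplicity function \(m\) records the multiplicity of the resulting translation representation on \(V_0\).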
Abstract:
This paper analyzes the persistence of shocks that affect the real exchange rates for a panel of seventeen OECD developed countries during the post-Bretton Woods era. The adoption of a panel data framework allows us to distinguish two different sources of shocks, i.e. idiosyncratic and common shocks, each of which may have different persistence patterns on the real exchange rates. We first investigate the stochastic properties of the panel data set using panel stationarity tests that simultaneously consider both the presence of cross-section dependence and multiple structural breaks, features that have not received much attention in previous persistence analyses. Empirical results indicate that real exchange rates are non-stationary when the analysis does not account for structural breaks, although this conclusion is reversed when they are modeled. Consequently, misspecification errors due to the non-consideration of structural breaks lead to upward-biased measures of shock persistence. The persistence measures for the idiosyncratic and common shocks estimated in this paper always turn out to be less than one year.
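One common way to report such persistence (not necessarily the exact estimator used in the paper) is the half-life of a shock in an AR(1) representation \(q_t = \rho\, q_{t-1} + \varepsilon_t\) of the real exchange rate:

\[
\mathrm{HL} = \frac{\ln 0.5}{\ln \rho},
\]

so that, for instance, monthly data with \(\rho = 0.94\) imply a half-life of \(\ln 0.5 / \ln 0.94 \approx 11\) months, i.e. just under one year.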
Abstract:
OBJECTIVE: To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with corresponding full journal publications. DESIGN: Cohort of protocols of randomised controlled trials and subsequent full journal publications. SETTING: Six research ethics committees in Switzerland, Germany, and Canada. DATA SOURCES: 894 protocols of randomised controlled trials involving patients approved by participating research ethics committees between 2000 and 2003 and 515 subsequent full journal publications. RESULTS: Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry sponsored trials more often planned subgroup analyses compared with investigator sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, authors stated that subgroup analyses were prespecified, but this claim was not supported by the corresponding protocol in 28 (34.6%) cases. In 86 publications, authors claimed a subgroup effect, but only 36 (41.9%) of the corresponding protocols reported a planned subgroup analysis. CONCLUSIONS: Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of statements about subgroup prespecification in publications of randomised controlled trials had no documentation in the corresponding protocols. Definitive judgments regarding the credibility of claimed subgroup effects are not possible without access to the protocols and analysis plans of randomised controlled trials.
Abstract:
In previous work we have applied the environmental multi-region input-output (MRIO) method proposed by Turner et al (2007) to examine the ‘CO2 trade balance’ between Scotland and the Rest of the UK. In McGregor et al (2008) we construct an interregional economy-environment input-output (IO) and social accounting matrix (SAM) framework that allows us to investigate methods of attributing responsibility for pollution generation in the UK at the regional level. This facilitates analysis of the nature and significance of environmental spillovers and the existence of an environmental ‘trade balance’ between regions. While the existence of significant data problems means that the quantitative results of this study should be regarded as provisional, we argue that the use of such a framework allows us to begin to consider questions such as the extent to which a devolved authority like the Scottish Parliament can and should be responsible for contributing to national targets for reductions in emissions levels (e.g. the UK commitment to the Kyoto Protocol) when it is limited in the way it can control emissions, particularly with respect to changes in demand elsewhere in the UK. However, while such analysis is useful in terms of accounting for pollution flows in the single time period that the accounts relate to, it is limited when the focus is on modelling the impacts of any marginal change in activity. This is because a conventional demand-driven IO model assumes an entirely passive supply side in the economy (i.e. all supply is infinitely elastic) and is further restricted by the assumption of universal Leontief (fixed proportions) technology implied by the use of the A and multiplier matrices. In this paper we argue that where analysis of marginal changes in activity is required, a more flexible interregional computable general equilibrium (CGE) approach that models behavioural relationships in a more realistic and theory-consistent manner is more appropriate and informative. To illustrate our analysis, we compare the results of introducing a positive demand stimulus in the UK economy using both IO and CGE interregional models of Scotland and the rest of the UK. In the case of the latter, we demonstrate how more theory-consistent modelling of both demand- and supply-side behaviour at the regional and national levels affects model results, including the impact on the interregional CO2 ‘trade balance’.
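For reference, the fixed-proportions structure referred to above is the standard demand-driven Leontief system, written here in generic single-region notation (the paper's accounts are interregional):

\[
x = A x + f \;\Rightarrow\; x = (I - A)^{-1} f, \qquad c = e'(I - A)^{-1} f,
\]

where \(x\) is gross output, \(A\) the technical-coefficients matrix, \(f\) final demand, \(e\) a vector of CO2 emissions per unit of output, and \((I - A)^{-1}\) the Leontief inverse (the multiplier matrix). Any change in \(f\) is met one-for-one by supply, which is precisely the passive, infinitely elastic supply side that the CGE approach relaxes.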
Abstract:
We study a psychologically based foundation for choice errors. Before choosing an alternative, the decision maker forms a 'consideration set' and then applies a preference ranking to it. Membership of the consideration set is determined both by alternative-specific salience and by the rationality of the agent (his general propensity to consider all alternatives). The model turns out to include a logit formulation as a special case. In general, it has a rich set of implications both for exogenous parameters and for situations in which alternatives can affect their own salience (salience games). Such implications are relevant for assessing the link between 'revealed' preferences and 'true' preferences: for example, less rational agents may paradoxically express their preferences through choice more truthfully than more rational agents.
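One standard way to formalize such a model, offered here only as a sketch in the spirit of random-consideration-set models rather than the paper's exact specification, lets each alternative \(a\) enter the consideration set independently with probability \(\pi_a\), increasing in its salience and in the agent's rationality, after which the agent chooses the preference-best considered alternative:

\[
P(a \mid A) = \pi_a \prod_{b \in A:\, b \succ a} (1 - \pi_b),
\]

the probability that \(a\) is considered while every preferred alternative is overlooked. As all consideration probabilities approach one, choice converges to the true preference maximizer; lower probabilities generate choice 'errors'.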
Abstract:
The aim of the paper is to identify the added value from using general equilibrium techniques to consider the economy-wide impacts of increased efficiency in household energy use. We take as an illustrative case study the effect of a 5% improvement in household energy efficiency on the UK economy. This impact is measured through simulations that use models that have increasing degrees of endogeneity but are calibrated on a common data set. That is to say, we calculate rebound effects for models that progress from the most basic partial equilibrium approach to a fully specified general equilibrium treatment. The size of the rebound effect on total energy use depends upon: the elasticity of substitution of energy in household consumption; the energy intensity of the different elements of household consumption demand; and the impact of changes in income, economic activity and relative prices. A general equilibrium model is required to capture these final three impacts.
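As a reminder of the quantity being calculated (the standard definition, not something specific to this paper), the rebound effect compares realized energy savings with the pure engineering savings implied by the efficiency gain:

\[
R = 1 - \frac{\Delta E_{\text{actual}}}{\Delta E_{\text{engineering}}},
\]

so if the 5% efficiency improvement would save 5 units of energy in engineering terms but behavioural and price responses mean only 4 units are actually saved, the rebound is \(R = 1 - 4/5 = 20\%\); values above 100% ('backfire') mean total energy use rises.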
Abstract:
Using survey expectations data and Markov-switching models, this paper evaluates the characteristics and evolution of investors' forecast errors about the yen/dollar exchange rate. Since our model is derived from the uncovered interest rate parity (UIRP) condition and our data cover a period of low interest rates, this study is also related to the forward premium puzzle and the currency carry trade strategy. We obtain the following results. First, for the same forecast horizon, exchange rate forecasts are homogeneous across industry types, but within the same industry, forecasts differ when the forecast horizon differs. In particular, investors tend to undervalue the future exchange rate at long forecast horizons, whereas in the short run they tend to overvalue it. Second, while forecast errors are found to be partly driven by interest rate spreads, evidence against UIRP is found regardless of the forecast horizon; the forward premium puzzle is more pronounced in shorter-term forecast errors. Consistent with this finding, our coefficients on interest rate spreads provide indirect evidence of the yen carry trade over only a short forecast horizon. Furthermore, the carry trade seems to be active when there is a clear indication that the interest rate will be low in the future.
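For background, the UIRP condition and the forward-premium (Fama) regression used to test it are standard and can be written as follows (generic notation, not the paper's Markov-switching specification):

\[
E_t[s_{t+k}] - s_t = i_t - i_t^{*}, \qquad s_{t+k} - s_t = \alpha + \beta\,(i_t - i_t^{*}) + \varepsilon_{t+k},
\]

where \(s_t\) is the log exchange rate and \(i_t, i_t^{*}\) are the k-period interest rates of the two currencies (signs depend on the quoting convention). UIRP implies \(\beta = 1\); the forward premium puzzle is the recurrent finding of \(\beta\) well below one and often negative, which is also what makes carry trades, funded in the low-interest currency, profitable on average.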
Abstract:
This paper provides a general treatment of the welfare implications of legal uncertainty. We distinguish legal uncertainty from decision errors: though the former can be influenced by the latter, the latter are neither necessary nor sufficient for the existence of legal uncertainty. We show that an increase in decision errors will always reduce welfare. However, for any given level of decision errors, information structures involving more legal uncertainty can improve welfare. This always holds, even under complete legal uncertainty, when sanctions on socially harmful actions are set at their optimal level. This radically transforms one's perception of the “costs” of legal uncertainty. We also provide general proofs for two results previously established under restrictive assumptions. The first is that Effects-Based enforcement procedures may welfare-dominate Per Se (or object-based) procedures and will always do so when sanctions are optimally set. The second is that optimal sanctions may well be higher under enforcement procedures involving more legal uncertainty.