878 results for Uncertainty in generation
Abstract:
Sign language animations can improve the accessibility of information and services for people who are deaf and have low literacy skills in spoken/written languages. Because sign languages differ from spoken/written languages in word order, syntax, and lexicon, many deaf people find it difficult to comprehend text on a computer screen or captions on television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations, and their coordination with the manual signs is crucial for the interpretation of the signed message. Software that advances the support of facial expressions in the generation of sign language animation could make this technology more acceptable to deaf people. In this survey, we discuss the challenges in facial expression synthesis and we compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. We begin with an overview of the linguistics of facial expressions, sign language animation technologies, and background on animating facial expressions, followed by a discussion of the search strategy and criteria used to select the five projects that are the primary focus of this survey. The survey then introduces the work of the five projects under consideration. Their contributions are compared in terms of the specific sign language supported, the categories of facial expressions investigated, the scope of the animation generation, the use of annotated corpora, the input data or hypotheses underlying their approach, and other factors. Strengths and drawbacks of the individual projects are identified along these dimensions. The survey concludes with our current research focus in this area and future prospects.
Abstract:
Much has been written about Samuel Beckett's Waiting for Godot, but as far as I am aware no one has compared the two characters of Vladimir and Estragon in order to analyse what makes Vladimir more willing to wait than Estragon. This essay claims that Vladimir is more willing to wait because he cannot deal with the fact that they might be waiting in vain and because he involves himself more in his surroundings than Estragon does. It is Vladimir who waits for Godot, not Estragon, and Vladimir believes that Godot will have all the answers. This is explored through four topics, all of which are treated from a psychoanalytical point of view and in relation to waiting: consciousness in relation to the decision to wait; uncertainty in relation to the unknown outcome of waiting; coping mechanisms in relation to ways of dealing with waiting; and ways of waiting in relation to waiting-time and two kinds of waiting-characters.
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association studies (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed; it was found to correctly adjust the bias in genetic variance component estimation and to gain power in QTL mapping in terms of precision. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit whole-genome data. These whole-genome models were shown to perform well in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance rather than the mean, which validated the idea of variance-controlling genes. The work in this thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
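The whole-genome regression idea summarized above (treating all markers as jointly shrunk random effects, as in generalized ridge regression) can be illustrated with a small, hedged sketch. The snippet below is not the thesis's R implementation; it is a minimal Python analogue, assuming a simulated marker matrix and phenotype, that fits a ridge regression over all markers and uses it for genomic prediction.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulated toy data (assumption, not from the thesis): 200 individuals, 1000 SNP markers.
n, p = 200, 1000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # genotypes coded 0/1/2
true_effects = rng.normal(0.0, 0.05, size=p)          # small random marker effects
y = X @ true_effects + rng.normal(0.0, 1.0, size=n)   # phenotype = genetic value + noise

# Ridge regression shrinks all marker effects jointly, the core idea behind
# whole-genome random-effects models such as generalized ridge regression.
model = Ridge(alpha=50.0)          # alpha plays the role of the variance ratio
model.fit(X[:150], y[:150])        # train on a "reference" subset

# Predict genomic values for the remaining "candidate" individuals.
gebv = model.predict(X[150:])
true_gv = X[150:] @ true_effects
print("prediction accuracy (correlation):", np.corrcoef(gebv, true_gv)[0, 1])
```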
Abstract:
This paper proposes a spatial-temporal downscaling approach to the construction of intensity-duration-frequency (IDF) relations at a local site in the context of climate change and variability. More specifically, the proposed approach combines a spatial downscaling method, which links large-scale climate variables given by General Circulation Model (GCM) simulations with daily extreme precipitation at a site, and a temporal downscaling procedure, which describes the relationship between daily and sub-daily extreme precipitation based on the scaling General Extreme Value (GEV) distribution. The feasibility and accuracy of the suggested method were assessed using rainfall data available at eight stations in Quebec (Canada) for the 1961-2000 period and climate simulations under four different climate change scenarios provided by the Canadian (CGCM3) and UK (HadCM3) GCMs. Results of this application indicate that it is feasible to link sub-daily extreme rainfalls at a local site with large-scale GCM-based daily climate predictors for the construction of IDF relations for the present (1961-1990) and future (2020s, 2050s, and 2080s) periods under different climate change scenarios. In addition, it was found that annual maximum rainfalls downscaled from the HadCM3 displayed a smaller change in the future, while values estimated from the CGCM3 indicated a large increasing trend for future periods. This result demonstrates the high uncertainty in climate simulations provided by different GCMs. In summary, the proposed spatial-temporal downscaling method provides an essential tool for the estimation of extreme rainfalls required for various climate-related impact assessment studies in a given region.
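The temporal downscaling step described above rests on the scaling property of extreme rainfall: GEV parameters fitted to daily annual maxima are rescaled to shorter durations through a power law in duration. The sketch below is a hedged, simplified illustration of that idea in Python; the synthetic data and the scaling exponent beta are assumptions, not values from the paper.

```python
from scipy.stats import genextreme

# Synthetic 40 years of daily (24 h) annual-maximum rainfall depths in mm -- assumed data.
daily_ams = genextreme.rvs(-0.1, loc=40.0, scale=12.0, size=40, random_state=42)

# Fit a GEV distribution to the daily annual maxima.
shape, loc, scale = genextreme.fit(daily_ams)

# Simple-scaling assumption: location and scale for duration d (hours) are the 24 h values
# multiplied by (d / 24) ** beta, with the shape parameter left unchanged.
beta = 0.6   # assumed scaling exponent for rainfall depths; estimated from sub-daily data in practice

def idf_depth(duration_h, return_period_yr):
    """Rainfall depth (mm) for a given duration and return period under simple scaling."""
    factor = (duration_h / 24.0) ** beta
    p = 1.0 - 1.0 / return_period_yr
    return genextreme.ppf(p, shape, loc=loc * factor, scale=scale * factor)

for d in (1, 3, 6, 12, 24):
    print(f"{d:>2} h, 10-year depth: {idf_depth(d, 10):.1f} mm")
```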
Abstract:
This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine learning based uncertainty prediction approach is useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of a deterministic output from the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty that is specific to new input data. We used three machine learning models, namely artificial neural networks, model trees, and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
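A hedged sketch of the committee idea described above: three regressors are trained to predict a given quantile of the model-output distribution from antecedent inputs, and their predictions are then averaged. The models and synthetic data below are stand-ins (sklearn's MLP, decision tree, and k-nearest-neighbours regressors), not the exact ANN, model-tree, and locally weighted regression implementations used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

# Synthetic training data (assumed): inputs are antecedent precipitation and streamflow,
# target is one quantile (e.g. the 90th percentile) of the Monte Carlo streamflow ensemble.
n = 500
X = rng.uniform(0.0, 50.0, size=(n, 2))                    # [antecedent precip, antecedent flow]
q90 = 0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(0, 2, n)  # stand-in quantile target

models = [
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    DecisionTreeRegressor(max_depth=6, random_state=0),
    KNeighborsRegressor(n_neighbors=10, weights="distance"),  # proxy for locally weighted regression
]
for m in models:
    m.fit(X[:400], q90[:400])

# Committee output: simple average of the three members' quantile predictions.
preds = np.column_stack([m.predict(X[400:]) for m in models])
committee = preds.mean(axis=1)
rmse = np.sqrt(np.mean((committee - q90[400:]) ** 2))
print("committee RMSE on held-out data:", round(rmse, 2))
```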
Abstract:
A procedure for characterizing the global uncertainty of a rainfall-runoff simulation model using grey numbers is presented. With the grey numbers technique, uncertainty is characterized by an interval: once the parameters of the rainfall-runoff model have been defined as grey numbers, grey mathematics and grey functions yield simulated discharges in the form of grey numbers, whose envelope defines a band representing the vagueness/uncertainty associated with the simulated variable. The grey numbers representing the model parameters are estimated so that the band obtained from the envelope of the simulated grey discharges includes an assigned percentage of the observed discharge values while remaining as narrow as possible. The approach is applied to a real case study, highlighting that a rigorous application of the procedure for direct simulation through the rainfall-runoff model with grey parameters involves long computational times. However, these times can be significantly reduced using a simplified computing procedure with minimal approximations in the quantification of the grey numbers representing the simulated discharges. Relying on this simplified procedure, the conceptual grey rainfall-runoff model is calibrated, and the uncertainty bands obtained after calibration and after validation are compared with those obtained using a well-established approach for characterizing uncertainty, such as GLUE. The results of the comparison show that the proposed approach may represent a valid tool for characterizing the global uncertainty associated with the output of a rainfall-runoff simulation model.
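A hedged sketch of the core mechanism: a grey number can be represented as an interval, and interval arithmetic propagates parameter uncertainty through a model so that the simulated discharge is itself an interval (band). The toy linear-reservoir model and parameter bounds below are illustrative assumptions, not the conceptual model or calibrated values used in the study.

```python
import numpy as np

# Grey numbers are represented here as intervals (lower, upper).
# Toy linear-reservoir rainfall-runoff model: Q_t = k * S_t, S_{t+1} = S_t + P_t - Q_t,
# with the recession parameter k treated as a grey number (assumed bounds).
k_lo, k_hi = 0.25, 0.35                                   # assumed grey parameter bounds
rain = np.array([5.0, 12.0, 0.0, 3.0, 8.0, 0.0, 0.0])     # assumed rainfall series (mm)

S_lo = S_hi = 10.0                                        # initial storage (crisp value)
band = []
for P in rain:
    # Grey discharge: interval product of k and storage (all quantities non-negative here).
    Q_lo, Q_hi = k_lo * S_lo, k_hi * S_hi
    band.append((Q_lo, Q_hi))
    # Storage bounds: the lowest storage follows from the highest outflow and vice versa.
    S_lo, S_hi = S_lo + P - Q_hi, S_hi + P - Q_lo

for t, (lo, hi) in enumerate(band):
    print(f"t={t}: simulated discharge band [{lo:.2f}, {hi:.2f}] mm")
```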
Abstract:
We define Nash equilibrium for two-person normal form games in the presence of uncertainty, in the sense of Knight (1921). We use the formalization of uncertainty due to Schmeidler and Gilboa. We show that there exist Nash equilibria for any degree of uncertainty, as measured by the uncertainty aversion (Dow and Werlang (1992a)). We show by example that prudent behaviour (maxmin) can be obtained as an outcome even when it is not rationalizable in the usual sense. Next, we break down backward induction in the twice repeated prisoner's dilemma. We link these results with those on cooperation in the finitely repeated prisoner's dilemma obtained by Kreps-Milgrom-Roberts-Wilson (1982), and with the literature on epistemological conditions underlying Nash equilibrium. The knowledge notion implicit in this model of equilibrium does not display logical omniscience.
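A hedged illustration of the prudent (maxmin) behaviour mentioned above: under Knightian uncertainty about the opponent's play, a maxmin player evaluates each of her actions by its worst-case payoff over the opponent's strategies. The 2x2 payoff matrix below is an arbitrary example, not one taken from the paper, and the snippet implements only the pure maxmin criterion, not the full Gilboa-Schmeidler equilibrium notion.

```python
import numpy as np

# Row player's payoff matrix: rows are her actions, columns are the opponent's actions.
# Arbitrary illustrative payoffs (not from the paper).
payoffs = np.array([
    [3.0, 0.0],   # action A
    [2.0, 2.0],   # action B
])

# Maxmin (prudent) evaluation: each action is judged by its worst-case payoff.
worst_case = payoffs.min(axis=1)            # [0.0, 2.0]
prudent_action = int(np.argmax(worst_case))

print("worst-case payoffs:", worst_case)
print("prudent (maxmin) action index:", prudent_action)   # chooses B here
```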
Abstract:
We present two alternative definitions of Nash equilibrium for two-person games in the presence of uncertainty, in the sense of Knight. We use the formalization of uncertainty due to Schmeidler and Gilboa. We show that, with one of the definitions, prudent behaviour (maxmin) can be obtained as an outcome even when it is not rationalizable in the usual sense. Most striking is that with the same definition we break down backward induction in the twice repeated prisoner's dilemma. We also link these results with the Kreps-Milgrom-Roberts-Wilson explanation of cooperation in the finitely repeated prisoner's dilemma.
Abstract:
We define a subgame perfect Nash equilibrium under Knightian uncertainty for two players by means of a recursive backward induction procedure. We prove an extension of the Zermelo-von Neumann-Kuhn Theorem for games of perfect information, i.e., that the recursive procedure generates a Nash equilibrium under uncertainty (Dow and Werlang (1994)) of the whole game. We apply the notion to two well-known games: the chain store and the centipede. On the one hand, we show that subgame perfection under Knightian uncertainty explains the chain store paradox in a one-shot version. On the other hand, we show that subgame perfection under uncertainty does not account for the leaving behavior observed in the centipede game. This is in contrast to Dow, Orioli and Werlang (1996), where we explain the experiments of McKelvey and Palfrey (1992) by means of Nash equilibria under uncertainty (but not subgame perfect ones). Finally, we show that there may be nontrivial subgame perfect equilibria under uncertainty in more complex extensive form games, as in the case of the finitely repeated prisoner's dilemma, which account for cooperation in early stages of the game.
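As a hedged illustration of how Knightian uncertainty can overturn standard backward induction, the sketch below works through a one-shot chain store game. The entrant evaluates entry with a simple epsilon-contamination rule: with weight (1 - c) she trusts the backward-induction prediction that the incumbent accommodates, and with weight c she fears the worst case (fight). The payoff numbers and the contamination rule are illustrative assumptions, not the paper's capacity-based equilibrium notion.

```python
# One-shot chain store game (illustrative payoffs, not from the paper):
#   Entrant: Out  -> (entrant, incumbent) = (1, 5)
#   Entrant: In   -> Incumbent: Accommodate -> (2, 2)
#                    Incumbent: Fight       -> (0, 0)
OUT = (1, 5)
IN_ACCOMMODATE = (2, 2)
IN_FIGHT = (0, 0)

def entrant_decision(c):
    """c in [0, 1] is the entrant's uncertainty aversion (epsilon-contamination weight)."""
    # Backward induction step: a rational incumbent prefers accommodating (2 > 0).
    predicted = IN_ACCOMMODATE
    worst = min(IN_ACCOMMODATE[0], IN_FIGHT[0])          # entrant's worst case after entry
    value_of_entry = (1.0 - c) * predicted[0] + c * worst
    return ("In" if value_of_entry > OUT[0] else "Out"), value_of_entry

for c in (0.0, 0.6):
    choice, value = entrant_decision(c)
    print(f"uncertainty aversion {c}: value of entry = {value:.2f} -> entrant plays {choice}")
```

With no uncertainty aversion the entrant enters (the standard prediction); with enough aversion the worst case dominates and she stays out, which is the qualitative sense in which uncertainty can rationalize the chain store paradox.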
Abstract:
This paper contributes to the literature on aid and economic growth. We posit that it is not the level of aid flows per se but the stability of such flows that determines the impact of aid on economic growth. Three measures of aid instability are employed. One is a simple deviation from trend and measures overall instability. The other measures are based on auto-regressive estimates to capture deviations from an expected trend; these are intended to proxy for uncertainty in aid receipts. We posit that such uncertainty will influence the relationship between aid and investment and how recipient governments respond to aid, and will therefore affect how aid impacts on growth. We estimate a standard cross-country growth regression including the level of aid, and find aid to be insignificant (in line with other results in the literature). We then introduce the measures of instability. Aid remains insignificant when we account for overall instability. However, when we account for uncertainty (which is negative and significant), we find that aid has a significant positive effect on growth. We conduct stability tests that show that the significance of aid is largely due to its effect on the volume of investment. The finding that uncertainty of aid receipts reduces the effectiveness of aid is robust. When we control for this, aid appears to have a significant positive influence on growth. When the regression is estimated for the sub-sample of African countries these findings hold, although the effectiveness of aid appears weaker than for the full sample.
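A hedged sketch of the kind of uncertainty proxy described above: fit a simple AR(1) model to a country's aid series and measure instability as the dispersion of the deviations between actual aid and the value expected from the fitted relation. The synthetic series and the AR(1) specification are illustrative assumptions; the paper's exact auto-regressive specification may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic annual aid receipts (% of GDP) for one country -- assumed data.
T = 30
aid = np.empty(T)
aid[0] = 5.0
for t in range(1, T):
    aid[t] = 1.0 + 0.8 * aid[t - 1] + rng.normal(0.0, 0.6)

# Fit AR(1): aid_t = a + b * aid_{t-1} + e_t, via least squares.
X = np.column_stack([np.ones(T - 1), aid[:-1]])
coef, *_ = np.linalg.lstsq(X, aid[1:], rcond=None)
expected = X @ coef                       # "expected" aid given last year's receipts
deviations = aid[1:] - expected           # unanticipated component of aid

# Uncertainty proxy: dispersion of deviations from the expected trend.
uncertainty = deviations.std(ddof=1)
print(f"AR(1) coefficients (intercept, slope): {coef.round(3)}")
print(f"aid uncertainty proxy (std of deviations): {uncertainty:.3f}")
```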
Abstract:
Genome-wide marker information can improve the reliability of breeding value predictions for young selection candidates in genomic selection. However, the cost of genotyping limits its use to elite animals, and how such selective genotyping affects the predictive ability of genomic selection models is an open question. We performed a simulation study to evaluate the quality of breeding value predictions for selection candidates based on different selective genotyping strategies in a population undergoing selection. The genome consisted of 10 chromosomes of 100 cM each. After 5,000 generations of random mating with a population size of 100 (50 males and 50 females), generation G(0) (reference population) was produced via a full factorial mating between the 50 males and 50 females from generation 5,000. Different levels of selection intensity (animals with the largest yield deviation values) in G(0), or random sampling (no selection), were used to produce the offspring of G(0) (generation G(1)). Five genotyping strategies were used to choose 500 animals in G(0) to be genotyped: 1) Random: randomly selected animals; 2) Top: animals with the largest yield deviation values; 3) Bottom: animals with the lowest yield deviation values; 4) Extreme: animals with the 250 largest and the 250 lowest yield deviation values; and 5) Less Related: the least genetically related animals. The number of individuals in G(0) and G(1) was fixed at 2,500 each, and different levels of heritability were considered (0.10, 0.25, and 0.50). Additionally, all 5 selective genotyping strategies (Random, Top, Bottom, Extreme, and Less Related) were applied to an indicator trait in generation G(0), and the results were evaluated for the target trait in generation G(1), with the genetic correlation between the 2 traits set to 0.50. The 5 genotyping strategies applied to individuals in G(0) (reference population) were compared in terms of their ability to predict the genetic values of the animals in G(1) (selection candidates). Lower correlations between genomic estimated breeding values (GEBV) and true breeding values (TBV) were obtained when using the Bottom strategy. For the Random, Extreme, and Less Related strategies, the correlation between GEBV and TBV became slightly larger as selection intensity decreased and was largest when no selection occurred. These 3 strategies were better than the Top approach. In addition, the Extreme, Random, and Less Related strategies had smaller predictive mean squared errors (PMSE), followed by the Top and Bottom methods. Overall, the Extreme genotyping strategy led to the best predictive ability of breeding values, indicating that animals with extreme yield deviation values in a reference population are the most informative when training genomic selection models.
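A hedged, much-reduced sketch of the kind of comparison described above: simulate marker genotypes and true breeding values, pick a genotyped reference set by different strategies (Random, Top, Bottom, Extreme), train a ridge-regression predictor on each set, and compare the correlation between predicted and true breeding values in the candidates. The simulation scale, the ridge model, and the parameter choices are assumptions for illustration, not the study's design.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

# Simulated population (assumed scale, far smaller than the study's 2,500 + 2,500 animals).
n_ref, n_cand, p = 1000, 500, 800
X = rng.binomial(2, 0.4, size=(n_ref + n_cand, p)).astype(float)
effects = rng.normal(0.0, 0.05, size=p)
tbv = X @ effects                                             # true breeding values
phen = tbv + rng.normal(0.0, tbv.std() * 1.7, size=len(tbv))  # heritability around 0.25

ref_phen = phen[:n_ref]
strategies = {
    "Random":  rng.choice(n_ref, 500, replace=False),
    "Top":     np.argsort(ref_phen)[-500:],
    "Bottom":  np.argsort(ref_phen)[:500],
    "Extreme": np.r_[np.argsort(ref_phen)[:250], np.argsort(ref_phen)[-500:][-250:]],
}

for name, idx in strategies.items():
    model = Ridge(alpha=100.0).fit(X[idx], phen[idx])    # train on genotyped reference animals
    gebv = model.predict(X[n_ref:])                      # predict selection candidates
    acc = np.corrcoef(gebv, tbv[n_ref:])[0, 1]
    print(f"{name:8s} accuracy (corr GEBV, TBV): {acc:.2f}")
```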
Abstract:
Chicken is one of the most important sources of animal protein for human consumption, and breeding programmes have been responsible for constant improvements in production efficiency and product quality. Furthermore, the chicken has contributed greatly to fundamental discoveries in biology over the last 100 years. In this article we review recent developments in poultry genomics and their contribution to adding functional information to the already existing structural genomics, including the availability of the complete genome sequence, a comprehensive collection of mRNA sequences (ESTs), microarray platforms, and their use to complement QTL mapping strategies in the identification of genes that underlie complex traits. Efforts of the Brazilian Poultry Genomics Programme in this area resulted in the generation of a resource population, which was used for the identification of Quantitative Trait Loci (QTL) regions, the generation of ESTs, and candidate gene studies that contributed to furthering our understanding of the complex biological processes involved in growth and muscular development in chicken.
Abstract:
Soil CO2 emission exhibits high spatial variability because of the strong spatial dependence of the soil properties that influence it. The objectives of this study were: to characterize and relate the spatial variability of soil respiration and related properties; to evaluate the accuracy of the results provided by ordinary kriging and by sequential Gaussian simulation; and to evaluate the uncertainty in predicting the spatial variability of soil CO2 emission and the other properties using sequential Gaussian simulation. The study was carried out on an irregular sampling grid of 141 points installed in a sugarcane field. At these points, soil CO2 emission, soil temperature, air-filled porosity, organic matter content, and soil bulk density were measured. All variables showed a spatial dependence structure. Soil CO2 emission was positively correlated with organic matter (r = 0.25, p < 0.05) and air-filled porosity (r = 0.27, p < 0.01) and negatively correlated with bulk density (r = -0.41, p < 0.01). However, when the spatially estimated values (N = 8,833) are considered, air-filled porosity becomes the main variable responsible for the spatial pattern of soil respiration, with a correlation of 0.26 (p < 0.01). For all variables analysed, the individual simulations reproduced the cumulative distribution functions and variograms better than kriging and the E-type estimate. The largest uncertainties in predicting CO2 emission were associated with the regions of the study area that had the highest observed and estimated values, yielding estimates over the studied period of 0.18 to 1.85 t CO2 ha-1, depending on the simulated scenario. The knowledge of uncertainty generated by the different estimation scenarios can be incorporated into greenhouse gas inventories, resulting in more conservative estimates of the emission potential of these gases.
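A hedged sketch of the geostatistical workflow summarized above, using the third-party pykrige library for ordinary kriging of a small synthetic point set (an assumption; the study's actual data, fitted variogram models, and its sequential Gaussian simulation step are not reproduced here). The kriging variance returned alongside the estimates is one simple way to map prediction uncertainty.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(5)

# Synthetic "soil CO2 emission" observations at irregular sample points (assumed data).
n = 141
x = rng.uniform(0.0, 100.0, n)
y = rng.uniform(0.0, 100.0, n)
co2 = 2.0 + 0.02 * x + 0.01 * y + rng.normal(0.0, 0.3, n)   # illustrative emission values

# Ordinary kriging with a spherical variogram model.
ok = OrdinaryKriging(x, y, co2, variogram_model="spherical")

# Estimate on a regular grid; 'ss' is the kriging variance, a simple map of estimation uncertainty.
gridx = np.linspace(0.0, 100.0, 50)
gridy = np.linspace(0.0, 100.0, 50)
z_hat, ss = ok.execute("grid", gridx, gridy)

print("mean kriged CO2 emission:", float(z_hat.mean()))
print("max kriging variance (uncertainty hotspot):", float(ss.max()))
```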