966 results for Bayesian point estimate
Abstract:
This paper addresses the estimation of the parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, being affected both by overfitting and by model uncertainty. Two ideas are discussed to overcome these issues: a maximum entropy approach and a Bayesian model averaging approach. Both ideas can be easily applied on top of EM, while the entropy idea can also be implemented in a more sophisticated way, through a dedicated non-linear solver. A vast set of experiments shows that these ideas produce significantly better estimates and inferences than the traditional and widely used maximum (penalized) log-likelihood and maximum a posteriori estimates. In particular, if EM is adopted as the optimization engine, the model averaging approach is the best-performing one; its performance is matched by the entropy approach when implemented using the non-linear solver. The results suggest that these ideas are immediately applicable (they are easy to implement and to integrate into currently available inference engines) and that they constitute a better way to learn Bayesian network parameters.
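To make the model-averaging idea concrete, here is a minimal sketch of how it can sit on top of EM. The `run_em` routine is a hypothetical stand-in for any EM implementation that returns a parameter vector and its (penalized) log-likelihood, and the softmax weighting is an illustrative stand-in for posterior model weights, not the paper's exact recipe.

```python
# Sketch of model averaging over EM restarts (not the authors' exact
# procedure): instead of keeping only the single best run, average the
# parameter estimates weighted by their (penalized) log-likelihoods.
import numpy as np

def average_em_runs(run_em, data, n_restarts=20, seed=0):
    rng = np.random.default_rng(seed)
    params, logliks = [], []
    for _ in range(n_restarts):
        theta, ll = run_em(data, rng)   # one EM run from a random start
        params.append(np.asarray(theta))
        logliks.append(ll)
    logliks = np.asarray(logliks)
    # Softmax weights: exp(ll_i - max) is proportional to the likelihood of
    # each local optimum, a crude stand-in for posterior model weights.
    w = np.exp(logliks - logliks.max())
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, params))
```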
Abstract:
Diagnostic test sensitivity and specificity are probabilistic estimates with far-reaching implications for disease control, management and genetic studies. In the absence of 'gold standard' tests, traditional Bayesian latent class models may be used to assess diagnostic test accuracies through the comparison of two or more tests performed on the same groups of individuals. The aim of this study was to extend such models to estimate diagnostic test parameters and true cohort-specific prevalence using disease surveillance data. The traditional Hui-Walter latent class methodology was extended to allow for features seen in such data, including (i) unrecorded data (i.e. data for a second test available only on a subset of the sampled population) and (ii) cohort-specific sensitivities and specificities. The model was applied with and without the modelling of conditional dependence between tests. The utility of the extended model was demonstrated through application to bovine tuberculosis surveillance data from Northern Ireland and the Republic of Ireland. Simulation coupled with re-sampling techniques demonstrated that the extended model has good predictive power to estimate the diagnostic parameters and the true herd-level prevalence from surveillance data. Our methodology can aid in the interpretation of disease surveillance data, and the results can potentially refine disease control strategies.
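For readers unfamiliar with the underlying machinery, the following sketch shows the classical two-test Hui-Walter cell probabilities and multinomial log-likelihood under conditional independence. The counts, prevalence and accuracies are illustrative, and the paper's extensions (unrecorded data, cohort-specific parameters, conditional dependence) are not reproduced here.

```python
# Minimal sketch of the classical Hui-Walter setup: two conditionally
# independent tests on one population, with cell probabilities for the four
# cross-classified outcomes given prevalence and test accuracies.
import numpy as np
from scipy.stats import multinomial

def cell_probs(prev, se1, sp1, se2, sp2):
    # Order: (T1+,T2+), (T1+,T2-), (T1-,T2+), (T1-,T2-)
    pp = prev * se1 * se2 + (1 - prev) * (1 - sp1) * (1 - sp2)
    pm = prev * se1 * (1 - se2) + (1 - prev) * (1 - sp1) * sp2
    mp = prev * (1 - se1) * se2 + (1 - prev) * sp1 * (1 - sp2)
    mm = prev * (1 - se1) * (1 - se2) + (1 - prev) * sp1 * sp2
    return np.array([pp, pm, mp, mm])

def log_lik(counts, prev, se1, sp1, se2, sp2):
    counts = np.asarray(counts)
    return multinomial.logpmf(counts, n=int(counts.sum()),
                              p=cell_probs(prev, se1, sp1, se2, sp2))

# Example: 200 animals cross-classified by two tests (synthetic counts).
print(log_lik([40, 10, 15, 135], prev=0.25, se1=0.9, sp1=0.95, se2=0.8, sp2=0.9))
```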
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
The Portuguese demographic structure is marked by low birth and mortality rates, with the elderly being an increasingly representative sector of the population, mainly due to greater longevity. The incidence of cancer, in general, is greatest precisely in that age group. Alongside other equally damaging diseases (e.g. cardiovascular, degenerative) whose incidence increases with age, cancer is of special note. Epidemiological studies identify cancer as the global leader in mortality. In developed countries it accounts for 25% of the total number of deaths, a percentage that more than doubles in other countries. Obesity, low consumption of fruit and vegetables, physical inactivity, smoking and alcohol consumption are five of the risk factors present in 30% of diagnosed cancer deaths. Globally, and in particular in the South of Portugal, stomach, rectum and colon cancers have high incidence and mortality rates. From a strictly economic perspective, cancer is the disease that consumes the most resources, while from a physical and psychological point of view it is a disease whose reach is not limited to the patient. Cancer is therefore an ever-present disease of increasing importance, since it reflects the habits and the environment of a society, notwithstanding the intrinsic characteristics of each individual. The adoption of statistical methodology applied to cancer data modelling is especially valuable and relevant when the information comes from population-based cancer registries (PBCR), since such registries make it possible to assess, in a specific population, the risk of suffering from a given neoplasm. The weight that stomach, colon and rectum cancers assume in Portugal was one of the motivations for the present study, which aims to analyze trends, projections, relative survival and the spatial distribution of these neoplasms. The study considers all cases diagnosed between 1998 and 2006 by the PBCR of the southern region of Portugal (ROR-Sul).
The year of diagnosis, also called period, was the only time variable considered in the initial descriptive analysis of the incidence rates and trends for each of the three neoplasms. However, a methodology that considers a single time variable is limiting. In cancer, apart from period, the age at diagnosis and the birth cohort are also temporal variables that may provide an additional contribution to the characterization of the incidence rates. The relevance of these temporal variables justified their inclusion in a class of models called Age-Period-Cohort (APC) models, which was used to model the incidence rates of the three cancers under study. APC models make it possible to handle non-linear relationships and/or sudden changes in the linear trend of the rates. Two approaches to APC models were considered: the classical one and one based on smoothing functions. The models were stratified by gender and, when justified, sub-models with only one or two temporal variables were also studied.
Once the behavior of the incidence rates is known, a subsequent question concerns their projection into future periods. Although the effect of structural changes in the population, to which Portugal is not immune, may substantially change the expected number of future cancer cases, projections help in planning health policies and allocating resources, and allow the evaluation of scenarios and interventions aimed at reducing the impact of cancer in a population. Worldwide estimates of cancer incidence obtained from demographic projections point to a 25% increase in cancer cases over the next two decades. The lack of projections of the incidence rates of these neoplasms in the area covered by ROR-Sul led us to use forecasting models that differ in structure, in the linearity (or not) of their coefficients, and in the behavior of the rates in the historical data series (e.g. increasing, decreasing or stable). The models followed two approaches: (i) models that are linear with respect to time and (ii) extrapolation to future periods of the temporal effects identified by the APC models. Projected incidence rates and numbers of newly diagnosed cases are provided for the years 2007 to 2010, by gender, age and type of cancer. In addition, an estimate of the economic impact of these neoplasms over the projection period is presented.
This study also addresses a relevant and common clinical question: the contribution of the cancer itself to patient survival. Cause-specific mortality is commonly used to estimate the mortality attributable to the cancer under study. However, there are many situations in which the cause of death is unknown and, even when this information is available through death certificates, it is not easy to distinguish the cases in which the primary cause of death is the cancer. Relative survival is an objective measure that does not require knowledge of the specific cause of death; it estimates the survival probability in the hypothetical scenario in which the cancer under analysis is the only cause of death. Since the primary cause of death is unknown for the cases diagnosed in the ROR-Sul registry, relative survival was computed for each of the cancers under study over a 5-year follow-up period, by gender, age and registry region. A period analysis was adopted, using both the conventional and the model-based approaches.
In the final part of this study, we analyze the influence of space-time variability on the incidence rates. The long latency period of oncologic diseases, the difficulty of identifying sudden changes in the behavior of the rates, and populations of small size and low risk are some of the elements that complicate the analysis of temporal variation in the rates; in some cases, these variations may simply reflect random fluctuations. The effect of the temporal component measured by the APC models gives an incomplete picture of cancer incidence. The etiology of this disease, when known, is frequently associated with risk factors such as socioeconomic conditions, eating habits and lifestyle, occupation, geographic location and genetic components. The "contribution" of such risk factors is sometimes decisive and should not be ignored, hence the need to complement the temporal study of the rates with a spatial approach, assessing whether the variations in incidence rates observed among the municipalities in the ROR-Sul area could be explained by temporal and geographical variability, by socioeconomic factors, or by differing lifestyles. Bayesian hierarchical space-time models were used to identify space-time trends in the incidence rates and to quantify some risk factors adjusted for the simultaneous influence of region and time. The results obtained from the implementation of all these methodologies are considered an added value to the knowledge of these neoplasms in Portugal.
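As a pointer to what the classical APC machinery looks like in practice, here is a minimal sketch of a Poisson age-drift fit on synthetic incidence data. A full APC model would need an additional identifiability constraint, since cohort = period - age makes the three linear effects collinear; nothing below uses the ROR-Sul data.

```python
# Minimal sketch of an age-drift submodel of the APC family: Poisson GLM on
# synthetic incidence counts with a log person-years offset. Adding C(period)
# plus a cohort term would require an identifiability constraint.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
ages = np.arange(40, 85, 5)          # 5-year age groups
periods = np.arange(1998, 2007)      # years of diagnosis
grid = pd.MultiIndex.from_product([ages, periods], names=["age", "period"])
df = grid.to_frame(index=False)
df["cohort"] = df["period"] - df["age"]
df["pyears"] = rng.uniform(2e4, 5e4, len(df))
rate = 1e-4 * np.exp(0.08 * (df["age"] - 40) / 5 + 0.02 * (df["period"] - 1998))
df["cases"] = rng.poisson(rate * df["pyears"])

fit = smf.glm("cases ~ C(age) + period", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["pyears"])).fit()
print("estimated annual drift in log-rate:", round(fit.params["period"], 4))
```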
Abstract:
Purpose: While imatinib has revolutionized the treatment of chronic myeloid leukaemia (CML) and gastrointestinal stromal tumors (GIST), its pharmacokinetic-pharmacodynamic relationships have been poorly studied. This study aimed to explore the issue in oncologic patients, and to evaluate the specific influence of the target genotype in a GIST subpopulation. Patients and methods: Data from 59 patients (321 plasma samples) were collected during a previous pharmacokinetic study. Based on a population model developed for this purpose, individual post-hoc Bayesian estimates of pharmacokinetic parameters were derived and used to estimate drug exposure (AUC; area under the curve). Free-fraction parameters were deduced from a model incorporating plasma alpha1-acid glycoprotein levels. Associations between AUC (or clearance) and therapeutic response (coded on a 3-point scale), or tolerability (4-point scale), were explored by ordered logistic regression. The influence of KIT genotype on response was also assessed in GIST patients. Results: Total and free drug exposure correlated with the number of side effects (p < 0.005). A relationship with response was not evident in the whole patient set (with good responders tending to receive lower doses and bad responders higher doses). In GIST patients, however, higher free drug exposure predicted better responses. A strong association was notably observed in patients harboring an exon 9 mutation or a wild-type KIT, both known to decrease tumor sensitivity towards imatinib (p < 0.005). Conclusions: Our results argue for further evaluation of the potential benefit of a therapeutic monitoring program for imatinib. Our data also suggest that stratification by genotype will be important in future trials.
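The post-hoc Bayesian step can be sketched as a MAP estimate of an individual's clearance under a population prior. The toy mono-exponential model, prior values, dose and concentrations below are all illustrative assumptions, not the study's population model; the individual AUC then follows as Dose/CL.

```python
# Toy sketch of a post-hoc (empirical Bayes / MAP) clearance estimate:
# mono-exponential concentration profile with fixed volume V, a log-normal
# population prior on clearance, and log-normal residual error.
import numpy as np
from scipy.optimize import minimize_scalar

dose, V = 400.0, 250.0                     # mg, L (illustrative values only)
t = np.array([1.0, 4.0, 8.0, 24.0])        # sampling times (h)
conc = np.array([1.45, 1.30, 1.05, 0.55])  # observed concentrations (mg/L)
mu_logCL, sd_logCL = np.log(10.0), 0.3     # hypothetical population prior
sd_res = 0.2                               # residual SD on the log scale

def neg_log_post(logCL):
    CL = np.exp(logCL)
    pred = (dose / V) * np.exp(-(CL / V) * t)   # mono-exponential decline
    resid = (np.log(conc) - np.log(pred)) / sd_res
    prior = ((logCL - mu_logCL) / sd_logCL) ** 2
    return 0.5 * (np.sum(resid ** 2) + prior)

res = minimize_scalar(neg_log_post, bounds=(np.log(1), np.log(100)),
                      method="bounded")
CL_map = np.exp(res.x)
print(f"MAP clearance: {CL_map:.1f} L/h, AUC = Dose/CL = {dose / CL_map:.0f} mg*h/L")
```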
Abstract:
This article extends existing discussion in literature on probabilistic inference and decision making with respect to continuous hypotheses that are prevalent in forensic toxicology. As a main aim, this research investigates the properties of a widely followed approach for quantifying the level of toxic substances in blood samples, and to compare this procedure with a Bayesian probabilistic approach. As an example, attention is confined to the presence of toxic substances, such as THC, in blood from car drivers. In this context, the interpretation of results from laboratory analyses needs to take into account legal requirements for establishing the 'presence' of target substances in blood. In a first part, the performance of the proposed Bayesian model for the estimation of an unknown parameter (here, the amount of a toxic substance) is illustrated and compared with the currently used method. The model is then used in a second part to approach-in a rational way-the decision component of the problem, that is judicial questions of the kind 'Is the quantity of THC measured in the blood over the legal threshold of 1.5 μg/l?'. This is pointed out through a practical example.
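The estimation component has a clean conjugate illustration: with replicate measurements, a known analytical SD and a normal prior, the posterior probability that the true concentration exceeds the 1.5 μg/l threshold is immediate. The numbers below are invented for illustration, not taken from the article.

```python
# Sketch of a conjugate normal-normal posterior for a true concentration and
# the probability that it exceeds the legal threshold (illustrative numbers).
import numpy as np
from scipy.stats import norm

y = np.array([1.7, 1.6, 1.8])      # replicate measurements (ug/L)
sigma = 0.15                       # known analytical SD (assumed)
mu0, tau0 = 1.0, 1.0               # weakly informative normal prior

n = len(y)
tau_post = 1.0 / np.sqrt(n / sigma**2 + 1 / tau0**2)
mu_post = tau_post**2 * (y.sum() / sigma**2 + mu0 / tau0**2)
p_over = 1 - norm.cdf(1.5, loc=mu_post, scale=tau_post)
print(f"posterior mean {mu_post:.2f} ug/L, P(theta > 1.5) = {p_over:.3f}")
```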
Abstract:
The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of cut-off point and sampling procedures were found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower misclassification costs in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model when the ratio of the cost of Type I errors to the cost of Type II errors is high is relatively consistent across all sampling methods. Such an advantage may make the Bayesian model more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also addresses the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
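The empirical cut-off idea can be sketched directly: scan candidate cut-offs on a training sample and keep the one that minimizes expected misclassification cost under an assumed Type I/Type II cost ratio. The scores, labels and 30:1 ratio below are synthetic; any of the three models could supply the scores.

```python
# Sketch of choosing an empirical cut-off that minimizes misclassification
# cost, with Type I errors (missed bankruptcies) costed more than Type II.
import numpy as np

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.7, 0.15, 50),    # bankrupt firms
                         rng.normal(0.4, 0.15, 450)])  # healthy firms
y = np.concatenate([np.ones(50), np.zeros(450)])
c1, c2 = 30.0, 1.0   # assumed Type I : Type II cost ratio of 30:1

cutoffs = np.linspace(0, 1, 201)
costs = [c1 * np.sum((scores < c) & (y == 1)) +   # bankrupt predicted healthy
         c2 * np.sum((scores >= c) & (y == 0))    # healthy predicted bankrupt
         for c in cutoffs]
print(f"empirical cut-off: {cutoffs[int(np.argmin(costs))]:.3f}")
```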
Abstract:
This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures for estimating Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors; 2) Generalized Method of Moments (GMM); 3) Simulated Method of Moments (SMM); and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
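As a toy illustration of one of the four procedures, the sketch below applies Simulated Method of Moments to an AR(1) process standing in for a DSGE model: parameters are chosen so that simulated moments match observed ones under an identity weighting matrix. Everything here is synthetic and far simpler than the paper's RBC testing ground.

```python
# Toy sketch of Simulated Method of Moments: match the variance and first
# autocovariance of simulated data to those of the observed series.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_ar1(rho, sigma, T, rng):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * rng.standard_normal()
    return y

def moments(y):
    return np.array([np.var(y), np.cov(y[:-1], y[1:])[0, 1]])

y_obs = simulate_ar1(0.9, 1.0, 200, rng)   # "data" with known truth
m_obs = moments(y_obs)

def smm_objective(theta):
    rho, sigma = theta
    if not (0 <= rho < 1 and sigma > 0):
        return 1e10
    sims = [moments(simulate_ar1(rho, sigma, 200, np.random.default_rng(s)))
            for s in range(20)]             # average over 20 simulated paths
    g = np.mean(sims, axis=0) - m_obs
    return g @ g                            # identity weighting matrix

res = minimize(smm_objective, x0=[0.5, 0.5], method="Nelder-Mead")
print("SMM estimates (rho, sigma):", res.x)
```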
Abstract:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first chapter, we develop a computationally efficient procedure for state smoothing in linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, of the Cholesky factor algorithm, and of our new method, using operation counts and computational experiments. We show that our method is more efficient in many important cases. The gains are particularly large when the dimension of the observed variables is large, or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, used to analyze transaction count data in financial markets. In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on drawing the volatility efficiently from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with asset-specific degrees of freedom to capture the heterogeneity of the returns. Volatility is drawn as a block in the time dimension and one series at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and for two multivariate models. In the third chapter, we evaluate the information contributed by variations in realized volatility to the estimation and forecasting of volatility when prices are measured with and without error, using stochastic volatility models. We take the point of view of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We use Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the formulation not only of posterior densities of the volatility, but also of predictive densities of future volatility. We compare the volatility forecasts, and the hit rates of the forecasts, obtained with and without the information contained in realized volatility. This approach differs from existing ones in the empirical literature, which are mostly limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns on indices and exchange rates. The competing models are applied to the second half of 2008, a landmark period in the recent financial crisis.
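The precision-based state smoothing of the first chapter can be illustrated on the simplest case: in a local level model, the posterior precision of the states given the data is tridiagonal, so the smoothed mean and exact posterior draws cost O(T) via banded Cholesky routines. The sketch below is a generic illustration under those assumptions, not McCausland's algorithm itself.

```python
# Minimal sketch of precision-based smoothing in a local level model:
# y_t = x_t + eps_t, x_t = x_{t-1} + eta_t, with a diffuse initial state.
import numpy as np
from scipy.linalg import cholesky_banded, solveh_banded, solve_banded

rng = np.random.default_rng(0)
T, sig_eps, sig_eta = 500, 1.0, 0.3
x_true = np.cumsum(sig_eta * rng.standard_normal(T))
y = x_true + sig_eps * rng.standard_normal(T)

# Tridiagonal posterior precision in lower banded form:
# row 0 = main diagonal, row 1 = subdiagonal.
ab = np.zeros((2, T))
ab[0] = 1 / sig_eps**2 + 2 / sig_eta**2
ab[0, [0, -1]] = 1 / sig_eps**2 + 1 / sig_eta**2
ab[1, :-1] = -1 / sig_eta**2

mean = solveh_banded(ab, y / sig_eps**2, lower=True)   # smoothed state mean

# One exact posterior draw: x = mean + L'^{-1} z, with precision = L L'.
L = cholesky_banded(ab, lower=True)
z = rng.standard_normal(T)
ab_LT = np.zeros((2, T))            # banded form of the upper bidiagonal L'
ab_LT[1] = L[0]                     # diagonal of L'
ab_LT[0, 1:] = L[1, :-1]            # superdiagonal of L'
draw = mean + solve_banded((0, 1), ab_LT, z)
print("corr(smoothed mean, truth):", round(np.corrcoef(mean, x_true)[0, 1], 3))
print("one posterior draw, first 3 states:", np.round(draw[:3], 2))
```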
Abstract:
This research examines the use value of green roofs in the Plateau Mont-Royal borough of the City of Montreal. Specifically, it addresses the problem of urban sprawl by attempting to estimate the use value of the green roof, the backyard and the terrace-balcony, as supported by the real-estate sales arguments specific to each type of amenity. Urban sprawl is the source of serious problems, and mitigating its harmful effects has become a priority in land-use planning. One of the main reasons underlying the urban exodus is the use value attached to the outdoor plot offered by single-family suburban housing. The question, then, is whether the insertion of private green spaces in urban settings can help curb the urban exodus. In the city, however, land is scarce. The private green roof appears to be a clever, if limited, alternative to the yard surrounding a house. It remains to be seen whether people value it as they do a private green space at ground level, in immediate proximity. In light of the analysis, it turns out that the green roof does not present a use value comparable to that of the backyard in the context observed, precisely because their target audiences differ from the outset. On the other hand, based on the data gathered, the terrace-balcony and the green roof appear to be amenities of comparable use value.
Abstract:
In order to estimate the motion of an object, the visual system needs to combine multiple local measurements, each of which carries some degree of ambiguity. We present a model of motion perception whereby measurements from different image regions are combined according to a Bayesian estimator: the estimated motion maximizes the posterior probability under a prior favoring slow and smooth velocities. In reviewing a large number of previously published phenomena, we find that the Bayesian estimator predicts a wide range of psychophysical results. This suggests that the seemingly complex set of illusions arises from a single computational strategy that is optimal under reasonable assumptions.
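The slow-velocity half of such an estimator reduces to ridge regression on local gradient constraints, as the sketch below illustrates on synthetic measurements; the full slow-and-smooth model also penalizes spatial variation of the velocity field, which is omitted here.

```python
# Sketch of a MAP velocity estimate under a Gaussian prior favoring slow
# speeds: each local measurement is a noisy constraint Ix*u + Iy*v + It ~ 0.
import numpy as np

rng = np.random.default_rng(0)
v_true = np.array([1.0, 0.5])               # true image velocity (u, v)
n, sigma_m, sigma_p = 40, 0.5, 1.0          # measurements, noise SD, prior SD

G = rng.standard_normal((n, 2))             # local gradients (Ix, Iy)
b = G @ v_true + sigma_m * rng.standard_normal(n)   # -It, with noise

# MAP: argmin ||G v - b||^2 / sigma_m^2 + ||v||^2 / sigma_p^2
A = G.T @ G / sigma_m**2 + np.eye(2) / sigma_p**2
v_map = np.linalg.solve(A, G.T @ b / sigma_m**2)
print("MAP velocity:", v_map, "(biased toward zero relative to", v_true, ")")
```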
Abstract:
The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author's style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998). Tirant lo Blanc, a chivalry book, is the main work in Catalan literature and was hailed as "the best book of its kind in the world" by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Damaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of this book was written between 1460 and 1465, but it was not printed until 1490. There is an intense and long-lasting debate around its authorship, sprouting from its first edition, where the introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last one fourth of the book is by Galba (?-1490), written after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting the double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383. Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of estimating a sudden change-point in those sequences, and the following sections propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models to the sequence of chi-square distances between each row profile and the average profile; and the one in Section 6 fits models to the sequence of values taken by the first component of the correspondence analysis, as well as to sequences of other summary measures like the average word length. In Section 7 we fit models to the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
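A minimal version of the change-point estimation problem can be sketched as follows: for a sequence of multinomial counts, choose the boundary that maximizes the total log-likelihood of separate before/after category probabilities. The data below are synthetic, not the Tirant lo Blanc tables.

```python
# Sketch of a maximum-likelihood change-point estimate for a sequence of
# multinomial observations: scan candidate boundaries and keep the best split.
import numpy as np
from scipy.stats import multinomial

rng = np.random.default_rng(0)
p1, p2 = np.array([0.5, 0.3, 0.2]), np.array([0.3, 0.3, 0.4])
counts = np.vstack([rng.multinomial(200, p1, size=60),   # "chapters" 1-60
                    rng.multinomial(200, p2, size=40)])  # "chapters" 61-100

def split_loglik(counts, k):
    ll = 0.0
    for block in (counts[:k], counts[k:]):
        p = block.sum(0) / block.sum()                   # MLE within block
        ll += sum(multinomial.logpmf(row, n=row.sum(), p=p) for row in block)
    return ll

ks = range(5, len(counts) - 5)
k_hat = max(ks, key=lambda k: split_loglik(counts, k))
print("estimated change-point after chapter", k_hat)
```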
Abstract:
The log-ratio methodology provides powerful tools for analyzing compositional data. Nevertheless, this methodology can only be applied to data sets without null values, so data sets containing zeros require a previous treatment. Recent advances in the treatment of compositional zeros have focused especially on zeros of a structural nature and on rounded zeros. These tools do not cover the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and introduce a treatment based on a mixed Bayesian-multiplicative estimation: we use the Dirichlet probability distribution as a prior and estimate the posterior probabilities, then apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied. Key words: count data, multiplicative replacement, composition, log-ratio analysis
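A sketch of the general recipe, with an illustrative Dirichlet hyperparameter: zero parts are replaced by their posterior expected proportions, and the non-zero parts are multiplicatively rescaled so the composition still sums to one.

```python
# Sketch of a Bayesian-multiplicative treatment of count zeros (illustrative
# hyperparameter; the paper's exact prior choice may differ).
import numpy as np

def bayes_mult_replace(counts, alpha=0.5):
    counts = np.asarray(counts, dtype=float)
    n, D = counts.sum(), len(counts)
    post_mean = (counts + alpha) / (n + alpha * D)   # Dirichlet posterior mean
    comp = counts / n                                # observed composition
    out = comp.copy()
    zero = counts == 0
    out[zero] = post_mean[zero]                      # replace the zeros
    # rescale non-zeros multiplicatively so the parts sum to one again
    out[~zero] *= (1 - out[zero].sum()) / comp[~zero].sum()
    return out

print(bayes_mult_replace([12, 0, 5, 0, 83]))         # closed, zero-free output
```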
Abstract:
Asset correlations are of critical importance in quantifying portfolio credit risk and economic capital in financial institutions. Estimation of asset correlation with rating transition data has focused on the point estimation of the correlation without giving any consideration to the uncertainty around these point estimates. In this article we use Bayesian methods to estimate a dynamic factor model for default risk using rating data (McNeil et al., 2005; McNeil and Wendin, 2007). Bayesian methods allow us to formally incorporate human judgement in the estimation of asset correlation, through the prior distribution, and to fully characterize a confidence set for the correlations. Results indicate that: i) a two-factor model, rather than the one-factor model proposed by the Basel II framework, better represents the historical default data; ii) the importance of unobserved factors in this type of model is reinforced, and the levels of the implied asset correlations critically depend on the latent state variable used to capture the dynamics of default, as well as on other assumptions of the statistical model; and iii) the posterior distributions of the asset correlations show that the bounds recommended by Basel for this parameter understate the level of systemic risk.
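For context, the Basel II baseline that the paper argues against can be written down compactly: the one-factor (Vasicek) model, here with a grid posterior for the asset correlation given yearly default counts, integrating out the systematic factor numerically. Data and prior below are illustrative, not the paper's rating data or model.

```python
# Sketch of Bayesian inference for the asset correlation rho in the
# one-factor Vasicek default model, via a grid posterior.
import numpy as np
from scipy.stats import norm, binom

defaults = np.array([3, 7, 2, 12, 5])     # defaults per year (synthetic)
n_firms = 200
pd_uncond = 0.03                          # unconditional default probability
z, w = np.polynomial.hermite_e.hermegauss(40)   # probabilists' Gauss-Hermite
w = w / w.sum()                           # weights for E[.] over Z ~ N(0,1)

def loglik(rho):
    # conditional PD given factor Z: Phi((Phi^-1(pd) - sqrt(rho)Z)/sqrt(1-rho))
    p_z = norm.cdf((norm.ppf(pd_uncond) - np.sqrt(rho) * z) / np.sqrt(1 - rho))
    ll = 0.0
    for d in defaults:                    # integrate each year over Z
        ll += np.log(np.sum(w * binom.pmf(d, n_firms, p_z)))
    return ll

rhos = np.linspace(0.01, 0.5, 100)        # flat prior on this grid
ll_grid = np.array([loglik(r) for r in rhos])
post = np.exp(ll_grid - ll_grid.max())    # stabilize before normalizing
post /= post.sum()
print("posterior mean of rho:", round(float(np.sum(rhos * post)), 3))
```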
Abstract:
This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of the incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (a so-called "mosaic") of the area where the mission takes place while, at the same time, localizing itself on this map, following the Concurrent Mapping and Localization strategy. The proposed methodology is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle. As the vehicle moves, a sequence of images of the sea floor is acquired by the camera. For every image of the sequence, a set of characteristic features is detected by means of a corner detector. Then, their correspondences are found in the next image of the sequence. Solving the correspondence problem in an accurate and reliable way is a difficult task in computer vision. We consider different alternatives to solve this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and then selecting those that best characterize the point/matching pair and using them together to obtain a more robust characterization. Various alternatives are also studied to merge the information provided by the individual texture operators. Finally, the best approach in terms of robustness and efficiency is proposed. After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matchings in the next frame. Our aim is then to recover the apparent motion of the camera from these features. Although an accurate texture analysis is devoted to the matching procedure, some false matches (known as outliers) may still appear among the right correspondences. For this reason, a robust estimation technique is used to estimate the planar transformation (homography) that explains the dominant motion of the image. Next, this homography is used to warp the processed image to the common mosaic frame, constructing a composite image formed from every frame of the sequence. With the aim of estimating the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements obtained by a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic increases in size, local image alignment errors increase the inaccuracies associated with the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself. In this situation new information is available, and the system can readjust the position estimates. Our proposal consists not only of localizing the vehicle, but also of readjusting the trajectory it describes when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF). Kalman filtering provides an adequate framework to deal with position estimates and their associated covariances. Finally, some experimental results are shown. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system, enabling a quantitative measurement of the accumulated errors of the mosaics created in the lab. Then, the results obtained from real sea trials using the URIS underwater vehicle are shown.
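The robust-homography step has a standard off-the-shelf counterpart in OpenCV's RANSAC estimator, sketched below on synthetic correspondences with injected outliers; the point arrays stand in for the texture-matched features of two consecutive sea-floor frames.

```python
# Sketch of robust homography estimation with RANSAC, which rejects the
# outliers and recovers the dominant planar motion between two frames.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts_prev = rng.uniform(0, 640, (100, 2)).astype(np.float32)
H_true = np.array([[1.0, 0.02, 5.0],      # small inter-frame motion
                   [-0.02, 1.0, 3.0],
                   [0.0, 0.0, 1.0]])
homog = np.hstack([pts_prev, np.ones((100, 1), np.float32)]) @ H_true.T
pts_next = (homog[:, :2] / homog[:, 2:]).astype(np.float32)
pts_next[:10] += rng.uniform(-40, 40, (10, 2)).astype(np.float32)  # outliers

H, inliers = cv2.findHomography(pts_prev, pts_next, cv2.RANSAC, 3.0)
print("estimated homography:\n", np.round(H, 3))
print("inliers:", int(inliers.sum()), "of", len(pts_prev))
```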