947 results for Hierarchical Bayesian Methods
Abstract:
Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models. Elementary demonstrations are then given of the use of maximum likelihood and maximum entropy methods for tuning the model parameters and assisting their interpretation. One of the models can be used to illustrate the significance of overlapping n-tuple samples with respect to correlations in the patterns.
Abstract:
The problem of evaluating different learning rules and other statistical estimators is analysed. A new general theory of statistical inference is developed by combining Bayesian decision theory with information geometry. It is coherent and invariant. For each sample a unique ideal estimate exists and is given by an average over the posterior. An optimal estimate within a model is given by a projection of the ideal estimate. The ideal estimate is a sufficient statistic of the posterior, so practical learning rules are functions of the ideal estimator. If the sole purpose of learning is to extract information from the data, the learning rule must also approximate the ideal estimator. This framework is applicable to both Bayesian and non-Bayesian methods, with arbitrary statistical models, and to supervised, unsupervised and reinforcement learning schemes.
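The "ideal estimate given by an average over the posterior" has a closed form in conjugate settings. A minimal sketch in a Beta–Bernoulli model (the model choice and names are illustrative, not taken from the paper):

```python
# Beta(a, b) prior on a Bernoulli success probability theta.
# After observing k successes in n trials, the posterior is
# Beta(a + k, b + n - k), so the posterior average (the "ideal
# estimate" in the abstract's sense, under squared-error loss)
# is simply the posterior mean.
def posterior_mean(a: float, b: float, k: int, n: int) -> float:
    return (a + k) / (a + b + n)

# Uniform Beta(1, 1) prior, 7 successes in 10 trials.
ideal = posterior_mean(1.0, 1.0, 7, 10)
```

Under squared-error loss this posterior mean is the Bayes-optimal point estimate; other loss functions lead to other functionals of the posterior, which is where the projection view described in the abstract becomes relevant.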
Abstract:
We present results comparing the performance of neural networks trained with two Bayesian methods, (i) the Evidence Framework of MacKay (1992) and (ii) a Markov chain Monte Carlo method due to Neal (1996), on a task of classifying segmented outdoor images. We also investigate the use of the Automatic Relevance Determination method for input feature selection.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
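The reduced-rank tensor factorization induced by a latent structure model can be made concrete for two categorical variables. A minimal sketch of a rank-2 PARAFAC-style latent class representation (all numbers invented for illustration):

```python
# Rank-H PARAFAC / latent class representation of a joint pmf:
#   P(x1, x2) = sum_h lam[h] * psi1[h][x1] * psi2[h][x2]
# where lam is a distribution over H latent classes and each psi_j[h]
# is a conditional distribution for variable j given class h.
lam = [0.6, 0.4]                  # latent class weights (sum to 1)
psi1 = [[0.7, 0.3], [0.2, 0.8]]   # P(x1 | h), one row per class
psi2 = [[0.5, 0.5], [0.9, 0.1]]   # P(x2 | h), one row per class

def joint(x1: int, x2: int) -> float:
    # Each latent class contributes one rank-one term to the tensor.
    return sum(l * p1[x1] * p2[x2] for l, p1, p2 in zip(lam, psi1, psi2))

# The factorization yields a valid joint distribution.
total = sum(joint(i, j) for i in range(2) for j in range(2))
```

Each latent class contributes a rank-one term, so the nonnegative rank of the probability tensor is at most the number of classes; the collapsed Tucker decompositions proposed in the thesis sit between this PARAFAC form and the more heavily parameterized Tucker form.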
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
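The idea of a Gaussian approximation to a posterior can be illustrated in one dimension with a Laplace-style construction: centre a Gaussian at the posterior mode, with variance given by the curvature of the log-posterior there. This is only an illustration of the general idea; the chapter derives the *optimal* Gaussian approximation, which need not coincide with the Laplace one:

```python
from math import sqrt

# Beta(a, b) posterior with log-density kernel
#   (a-1)*log(t) + (b-1)*log(1-t).
# Laplace-style Gaussian approximation: mean at the mode, variance equal
# to the negative inverse second derivative of the log-kernel at the mode.
a, b = 15.0, 5.0

mode = (a - 1) / (a + b - 2)  # argmax of the Beta log-kernel
# Second derivative of the log-kernel: -(a-1)/t^2 - (b-1)/(1-t)^2
curv = -(a - 1) / mode**2 - (b - 1) / (1 - mode) ** 2
var = -1.0 / curv
approx_sd = sqrt(var)
```

For this Beta(15, 5) example the Gaussian N(mode, var) tracks the exact posterior closely, which is the same qualitative phenomenon the chapter quantifies with Kullback-Leibler bounds for log-linear models.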
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
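One family of approximations mentioned above, transition kernels based on random subsets of data, can be sketched by replacing the full-data log-likelihood in a Metropolis step with a rescaled minibatch estimate. The sketch below is illustrative only; it is a noisy chain whose kernel error is exactly the kind of quantity the chapter's framework is designed to control, and all names and data are invented:

```python
import random
from math import exp

random.seed(1)
# Synthetic data: normal observations with unknown mean, known unit variance.
data = [random.gauss(2.0, 1.0) for _ in range(1000)]

def loglik_subset(mu, batch_size=100):
    # Rescaled minibatch estimate of the full-data log-likelihood
    # (up to an additive constant); unbiased but noisy.
    batch = random.sample(data, batch_size)
    scale = len(data) / batch_size
    return scale * sum(-0.5 * (x - mu) ** 2 for x in batch)

def approx_metropolis(n_iter=2000, step=0.05):
    mu = 0.0
    ll = loglik_subset(mu)
    chain = []
    for _ in range(n_iter):
        prop = mu + random.gauss(0.0, step)
        ll_prop = loglik_subset(prop)
        # Accept with probability min(1, exp(ll_prop - ll));
        # the exponent is clamped at 0 to avoid overflow.
        if random.random() < exp(min(0.0, ll_prop - ll)):
            mu, ll = prop, ll_prop
        chain.append(mu)
    return chain

chain = approx_metropolis()
post_mean = sum(chain[500:]) / len(chain[500:])  # crude burn-in discard
```

Because the retained log-likelihood value is noisy, this chain does not target the exact posterior; trading that bias against the per-iteration savings, for a given loss function and budget, is the decision problem the chapter formalizes.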
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
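The truncated-normal data augmentation sampler for the probit link (due to Albert and Chib) can be sketched for an intercept-only model: each latent z_i is drawn from a normal centred at beta and truncated to match the sign implied by y_i, and beta is then updated conjugately. A minimal illustrative sketch under a flat prior, not the paper's implementation:

```python
import random

random.seed(0)

def sample_truncated_normal(mean, positive):
    # Naive rejection sampler: draw N(mean, 1) until the sign constraint
    # holds. Adequate when |mean| is small; real implementations use
    # more efficient truncated-normal samplers.
    while True:
        z = random.gauss(mean, 1.0)
        if (z > 0) == positive:
            return z

def probit_gibbs(y, n_iter=500):
    n = len(y)
    beta = 0.0
    draws = []
    for _ in range(n_iter):
        # Data augmentation step: impute latent z_i given beta and y_i.
        z = [sample_truncated_normal(beta, yi == 1) for yi in y]
        # Conjugate update under a flat prior: beta | z ~ N(mean(z), 1/n).
        zbar = sum(z) / n
        beta = random.gauss(zbar, (1.0 / n) ** 0.5)
        draws.append(beta)
    return draws

# Balanced data (half successes), so the posterior concentrates near 0.
y = [1] * 50 + [0] * 50
draws = probit_gibbs(y)
```

With balanced data this chain mixes well; the chapter's result concerns the rare-event regime (large n, few successes), where the same construction develops severe autocorrelation.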
Abstract:
Introduction: Some cancers are preventable if exposure to carcinogenic substances in the environment is avoided. In Colombia, Cundinamarca is one of the departments with the largest increases in the cancer mortality rate, and in the municipality of Sibaté residents have expressed concern about the rise of the disease. In global environmental health, georeferencing applied to the study of health phenomena has been used successfully, with valid results. This study proposed using geographic information tools to generate temporal and spatial analyses that would make the behaviour of cancer in Sibaté visible and support hypotheses of environmental influences on clusters of cases. Objective: To obtain the incidence and prevalence of cancer cases among the inhabitants of Sibaté and to georeference the cases over a 5-year period, based on a review of records. Methods: A descriptive, exploratory, cross-sectional study of all cancer diagnoses between 2010 and 2014 found in the archives of the municipal Health Secretariat. Only permanent residents of the municipality diagnosed with cancer between 2010 and 2014 were included. For each case, gender, age, socioeconomic stratum, educational level, occupation and marital status were recorded. The date of diagnosis was used for the temporal analysis; the residential address, cancer type and geographic coordinates were used for the spatial analysis. Geographic coordinates were generated with a Garmin GPS unit, and maps were created with the locations of the patients' homes. The information was processed with Epi Info 7. Results: 107 cancer cases were found registered at the Sibaté Health Secretariat: 66 women and 41 men. Across both genders, 30.93% of the cases involved the reproductive system, 18.56% the digestive system and 17.53% the integumentary system.
Two large spatial clusters were found in the study area: one in the Pablo Neruda neighbourhood with 12 cases (21.05%) and one in the urban core of Sibaté with 38 cases (66.67%). Conclusion: The study corroborated that geographic analysis with spatio-temporal and exposure variables can be a tool for generating hypotheses about associations between cancer cases and environmental factors.
Abstract:
The flock-level sensitivities of pooled faecal culture and of serological testing using AGID for the detection of ovine Johne's disease-infected flocks were estimated using non-gold-standard methods. The two tests were compared in an extensive field trial in 296 flocks in New South Wales during 1998. In each flock, a sample of sheep was selected and tested for ovine Johne's disease using both the AGID and pooled faecal culture. The flock-specificity of pooled faecal culture was also estimated from results of surveillance and market-assurance testing in New South Wales. The overall flock-sensitivity of pooled faecal culture was 92% (95% CI: 82.4 and 97.4%) compared to 61% (50.5 and 70.9%) for serology (assuming that both tests were 100% specific). In low-prevalence flocks (estimated prevalence
Abstract:
8th International Conference of Education, Research and Innovation. 18-20 November, 2015, Seville, Spain.
Abstract:
The Portuguese demographic structure is marked by low birth and mortality rates, with the elderly representing an increasingly large share of the population, mainly due to greater longevity. The incidence of cancer, in general, is greatest precisely in that age group. Alongside other equally damaging diseases (e.g.
cardiovascular, degenerative), whose incidence increases with age, cancer deserves special note. Epidemiological studies identify cancer as the global leader in mortality. In developed countries it accounts for 25% of the total number of deaths, a percentage that more than doubles in other countries. Obesity, low consumption of fruit and vegetables, physical inactivity, smoking and alcohol consumption are five of the risk factors present in 30% of diagnosed cancer deaths. Globally, and in particular in the south of Portugal, stomach, rectum and colon cancers have high incidence and mortality rates. From a strictly economic perspective, cancer is the disease that consumes the most resources, while from a physical and psychological point of view it is a disease whose reach is not limited to the patient. Cancer is therefore an ever-current and increasingly present disease, since it reflects the habits and environment of a society, notwithstanding the characteristics intrinsic to each individual. The adoption of statistical methodology applied to cancer data modelling is especially valuable and relevant when the information comes from population-based cancer registries (PBCR), since these registries allow the risk of suffering from a given neoplasm to be assessed in a specific population. The burden of stomach, colon and rectum cancers in Portugal was one of the motivations for the present study, which focuses on analysing trends, projections, relative survival and the spatial distribution of these neoplasms. The data considered in this study are all cases diagnosed between 1998 and 2006 by the PBCR of southern Portugal (ROR-Sul). The year of diagnosis, also called period, was the only time variable considered in the initial descriptive analysis of the incidence rates and trends for each of the three neoplasms.
However, a methodology that considers only a single time variable will probably fall short of the conclusions that could be drawn from the data under study. In cancer, apart from period, the age at diagnosis and the birth cohort are also temporal variables and may provide an additional contribution to the characterization of incidence. The relevance of these temporal variables justified their inclusion in a class of models called Age-Period-Cohort (APC) models, which was used to analyse the incidence rates of the three cancers under study. APC models make it possible to handle nonlinearity and/or sudden changes in the linear trend of the rates. Two approaches to APC models were considered: the classical approach and one using smoothing functions. The models were stratified by gender and, when justified, sub-models with only one or two temporal variables were also explored. After the analysis of the incidence rates, a subsequent goal concerns their projection into future periods. Although the effect of structural changes in the population, to which Portugal is not immune, may substantially change the expected number of future cancer cases, such projections can help in planning health policies and allocating resources, allowing the evaluation of scenarios and interventions aimed at reducing the impact of cancer in a population. Notably, worldwide cancer incidence estimates obtained from demographic projections point to a 25% increase in cancer cases over the next two decades. The lack of projections of incidence rates for the three cancers under study in the area covered by ROR-Sul led us to use a variety of forecasting models that differ in nature and structure, for example in the linearity or nonlinearity of their coefficients and in the trend of the incidence rates in the historical data series (e.g.
increasing, decreasing or stable). The models followed two approaches: (i) models linear in time and (ii) extrapolation of the temporal effects identified by the APC models to future periods. The study provides projections of incidence rates and of the numbers of newly diagnosed cases for the years 2007 to 2010, taking into account gender, age and type of cancer. In addition, an estimate of the economic impact of these neoplasms is presented for the projection period. This research also addresses a relevant and common clinical question in studies of this type: the contribution of the cancer itself to patient survival. In such studies, the primary cause of death is commonly used to estimate the mortality specifically attributable to the cancer. However, there are many situations in which the cause of death is unknown or, even when this information is available through death certificates, it is not easy to determine whether the primary cause of death was the cancer. With this in mind, relative survival is an alternative measure that does not require knowledge of the specific cause of death; it estimates the survival probability in the hypothetical scenario in which the given cancer is the only cause of death. Since the primary cause of death was unknown for the cases diagnosed with cancer in ROR-Sul, relative survival was calculated for each of the cancers under study over a follow-up period of 5 years, considering gender, age and each of the regions that form the registry. A period analysis was undertaken, using both the conventional and the model-based approaches. In the final part of this study, we analysed the influence of space-time variability on the incidence rates.
The long latency period of oncological diseases, the difficulty of identifying subtle changes in the behaviour of the rates, and populations of reduced size and low risk are some of the elements that complicate the analysis of temporal variation in rates, which in some cases may reflect simple random fluctuations. The temporal effects measured by the APC models give an incomplete picture of cancer incidence. The etiology of this disease, when known, is frequently associated with risk factors such as socioeconomic conditions, eating habits and lifestyle, occupation, geographic location and genetic components. The "contribution" of such risk factors is sometimes decisive in the evolution of the disease and should not be ignored. An additional, spatial approach was therefore considered, addressing whether the changes in incidence rates observed across the municipalities in the ROR-Sul area could be explained by temporal and geographical variability or by unequal socioeconomic or lifestyle factors. Bayesian hierarchical space-time models were used to identify space-time trends in incidence rates and to quantify the effects of the risk factors considered in the study, adjusted for the simultaneous influence of region and time. The results obtained and the implementation of all these methodologies are considered an added value to the knowledge of these neoplasms in Portugal.
Abstract:
INTRODUCTION: Malaria is a serious problem in the Brazilian Amazon region, and the detection of possible risk factors could be of great interest for public health authorities. The objective of this article was to investigate the association between environmental variables and the yearly registers of malaria in the Amazon region using Bayesian spatiotemporal methods. METHODS: We used Poisson spatiotemporal regression models to analyze the Brazilian Amazon forest malaria count for the period from 1999 to 2008. In this study, we included some covariates that could be important in the yearly prediction of malaria, such as deforestation rate. We obtained the inferences using a Bayesian approach and Markov Chain Monte Carlo (MCMC) methods to simulate samples for the joint posterior distribution of interest. The discrimination of different models was also discussed. RESULTS: The model proposed here suggests that deforestation rate, the number of inhabitants per km², and the human development index (HDI) are important in the prediction of malaria cases. CONCLUSIONS: It is possible to conclude that human development, population growth, deforestation, and their associated ecological alterations are conducive to increasing malaria risk. We conclude that the use of Poisson regression models that capture the spatial and temporal effects under the Bayesian paradigm is a good strategy for modeling malaria counts.
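A Bayesian Poisson regression of the kind described can be sketched with a random-walk Metropolis sampler for a single covariate standing in for a predictor such as deforestation rate. Data, flat priors, and all names are invented for illustration; the article's models also include spatial and temporal effects not shown here:

```python
import random
from math import exp, log

random.seed(42)

def poisson_draw(lam):
    # Knuth's algorithm; adequate for small lambda.
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Synthetic counts with log-rate b0 + b1*x, true (b0, b1) = (1.0, 0.5).
x = [random.uniform(0.0, 2.0) for _ in range(200)]
y = [poisson_draw(exp(1.0 + 0.5 * xi)) for xi in x]

def log_post(b0, b1):
    # Poisson log-likelihood plus flat priors, up to a constant.
    return sum(yi * (b0 + b1 * xi) - exp(b0 + b1 * xi)
               for xi, yi in zip(x, y))

def metropolis(n_iter=3000, step=0.05):
    b0, b1 = 0.0, 0.0
    lp = log_post(b0, b1)
    chain = []
    for _ in range(n_iter):
        p0 = b0 + random.gauss(0.0, step)
        p1 = b1 + random.gauss(0.0, step)
        lp_new = log_post(p0, p1)
        if log(random.random()) < lp_new - lp:
            b0, b1, lp = p0, p1, lp_new
        chain.append((b0, b1))
    return chain

chain = metropolis()
post = chain[1000:]  # discard burn-in
b0_hat = sum(b for b, _ in post) / len(post)
b1_hat = sum(b for _, b in post) / len(post)
```

The posterior means recover the generating coefficients approximately; the article's analysis adds structured spatiotemporal random effects and compares competing models, which this sketch omits.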
Abstract:
The assessment of existing timber structures is often limited to information obtained from non- or semi-destructive testing, as mechanical testing is in many cases not possible due to its destructive nature. The available data therefore provide only an indirect measurement of the reference mechanical properties of timber elements, often obtained through empirically based correlations. Moreover, the data must result from the combination of different tests in order to provide a reliable source of information for a structural analysis. Even though general guidelines are available for each type of test, there is still a need for a global methodology that combines information from different sources and supports inference on that information in a decision process. In this scope, the present work presents the implementation of a probability-based framework for the safety assessment of existing timber elements. The methodology combines information gathered at different scales within a probabilistic framework, allowing the structural assessment of existing timber elements with the possibility of inference on, and updating of, their mechanical properties through Bayesian methods. The framework comprises four main steps: (i) scale of information; (ii) measurement data; (iii) probability assignment; and (iv) structural analysis. The proposed methodology is implemented in a case study. Data were obtained through a multi-scale experimental campaign on old chestnut timber beams, accounting for correlations of non- and semi-destructive tests with mechanical properties. Finally, different inference scenarios are discussed, aiming at the characterization of the safety level of the elements.
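The Bayesian updating step in such a framework can be illustrated with a conjugate normal model: a prior on an element's mean reference property (say, bending strength implied by grading rules) is updated with test-derived estimates of assumed known precision. All numbers and the measurement-error assumption are invented for illustration:

```python
# Conjugate normal-normal update: prior N(mu0, sd0^2) on the mean bending
# strength (MPa); measurements modelled as N(mu, sd_m^2) with known sd_m.
mu0, sd0 = 30.0, 8.0   # prior belief from grading rules (illustrative)
sd_m = 5.0             # assumed std dev of the NDT-derived strength estimates
measurements = [24.0, 27.5, 25.0, 26.5]

n = len(measurements)
xbar = sum(measurements) / n

prec0 = 1.0 / sd0**2        # prior precision
prec_m = n / sd_m**2        # combined data precision
post_prec = prec0 + prec_m
# Posterior mean is a precision-weighted average of prior and data.
post_mean = (prec0 * mu0 + prec_m * xbar) / post_prec
post_sd = post_prec ** -0.5
```

The posterior mean lands between the prior mean and the sample mean, weighted by precision, and the posterior standard deviation shrinks as tests accumulate, which is the updating behaviour the framework exploits across information scales.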
Resumo:
The brown crab (Cancer pagurus) fishery in Ireland is one of the most important financially and socio-economically, with the species worth approximately €15m per year in the first half of the decade. Only mackerel (Scomber scombrus) and Dublin Bay prawn (Nephrops norvegicus) are of greater value. Despite this, very little research has been conducted to describe the stock structure of brown crab on a national scale. In this study, a country-wide assessment of genetic population structure was carried out. Sampling was conducted from commercial fishing boats from 11/06 to 04/08 at seven sample sites representing the central Irish brown crab fisheries, with one sample site from the UK also included in the study. Six microsatellite markers, specifically developed for brown crab, were used to assess genetic diversity and estimate population differentiation parameters. Significant genetic structuring was found using F-statistics (Fst = 0.007) and exact tests, but not with Bayesian methods. Samples from the UK and Wexford were found to be genetically distinct from all other populations. Three northern populations, from Malin Head and Stanton Bank, were genetically similar, with Fst estimates suggesting connectivity between them. Also on the basis of Fst estimates, Stanton Bank appeared to be connected to populations down the west coast of Ireland, as far south as Kerry. Two Galway samples, one inside and one outside of Galway Bay, were genetically differentiated despite their close geographic proximity. It is hypothesised that a persistent northerly summer current could transport pelagic larvae from populations along the southwest and west coasts of Ireland towards Stanton Bank in the north, resulting in the apparent connectivity observed in this study.
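The F-statistic reported above can be illustrated with Wright's classic estimator, Fst = (Ht − Hs)/Ht, comparing within-population and pooled heterozygosity. The allele frequencies below are invented for two hypothetical sample sites at one locus, not taken from the study:

```python
import numpy as np

# Illustrative allele frequencies at one microsatellite locus for two
# hypothetical sample sites (not the study's data).
p1 = np.array([0.50, 0.30, 0.20])   # site A
p2 = np.array([0.40, 0.35, 0.25])   # site B

def expected_het(p):
    """Expected heterozygosity: 1 minus the sum of squared frequencies."""
    return 1.0 - np.sum(p**2)

# Wright's Fst = (Ht - Hs) / Ht, with Hs the mean within-population
# heterozygosity and Ht the heterozygosity of the pooled frequencies.
hs = 0.5 * (expected_het(p1) + expected_het(p2))
ht = expected_het(0.5 * (p1 + p2))
fst = (ht - hs) / ht
```

Values near zero, as in the study's Fst = 0.007, indicate weak but potentially significant differentiation; multi-locus estimators such as Weir and Cockerham's theta additionally correct for sample size.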
Resumo:
Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods- and human-capital-producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model's posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
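The error-correction idea underlying a VECM can be sketched in one equation: for two cointegrated series, the change in one variable responds to the lagged deviation from the long-run equilibrium. Below is a minimal single-equation OLS version on simulated data, not the paper's model, which is multivariate and estimated by Bayesian methods:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two cointegrated series: x is a random walk and y tracks x plus
# stationary noise, so (y - x) is the equilibrium error (illustrative).
T = 500
x = np.cumsum(rng.normal(size=T))
y = x + rng.normal(size=T)

# Single-equation error-correction regression, estimated by OLS:
#   dy_t = alpha * (y_{t-1} - x_{t-1}) + eps_t
dy = np.diff(y)
ecm = (y - x)[:-1]               # lagged deviation from equilibrium
alpha = (ecm @ dy) / (ecm @ ecm)
```

A negative alpha means y is pulled back toward the long-run relation, which is the adjustment mechanism a full VECM estimates jointly for all variables.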
Resumo:
An expanding literature articulates the view that Taylor rules are helpful in predicting exchange rates. In a changing world, however, Taylor rule parameters may be subject to structural instabilities, for example during the Global Financial Crisis. This paper forecasts exchange rates using such Taylor rules with Time-Varying Parameters (TVP) estimated by Bayesian methods. In core out-of-sample results, we improve upon a random walk benchmark for at least half, and for as many as eight out of ten, of the currencies considered. This contrasts with a constant-parameter Taylor rule model that yields a more limited improvement upon the benchmark. In further results, Purchasing Power Parity and Uncovered Interest Rate Parity TVP models also beat the random walk benchmark, implying that our methods have some generality in exchange rate prediction.
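A minimal version of a TVP forecasting regression is the random-walk-coefficient model tracked by a Kalman filter. The sketch below uses one simulated fundamental and fixed variances rather than the paper's Bayesian estimation, purely to show how the drifting coefficient is tracked and the forecast compared with the random walk benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: exchange-rate change regressed on a hypothetical
# Taylor-rule fundamental x_t, with a coefficient that drifts over time.
T = 300
x = rng.normal(size=T)
beta_path = np.cumsum(0.05 * rng.normal(size=T)) + 0.5
y = beta_path * x + 0.5 * rng.normal(size=T)

# Kalman filter for y_t = beta_t * x_t + eps_t, beta_t = beta_{t-1} + eta_t
# (random-walk time-varying parameter). Variances are fixed here;
# in a Bayesian treatment they would be estimated as well.
sig_eps2, sig_eta2 = 0.25, 0.0025
b, P = 0.0, 1.0
forecasts = np.empty(T)
for t in range(T):
    P = P + sig_eta2                       # predict coefficient variance
    forecasts[t] = b * x[t]                # one-step-ahead forecast
    S = x[t] ** 2 * P + sig_eps2           # forecast error variance
    K = P * x[t] / S                       # Kalman gain
    b = b + K * (y[t] - b * x[t])          # update drifting coefficient
    P = (1.0 - K * x[t]) * P

# Out-of-sample comparison against the random walk (zero-change) benchmark.
mse_tvp = np.mean((y - forecasts) ** 2)
mse_rw = np.mean(y ** 2)
```

When the true coefficient drifts, the filtered forecast adapts while a constant-parameter fit cannot, which is the intuition behind the TVP model's gains over both benchmarks.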
Resumo:
This paper extends the Nelson-Siegel linear factor model by developing a flexible macro-finance framework for modeling and forecasting the term structure of US interest rates. Our approach is robust to parameter uncertainty and structural change, as we consider instabilities in parameters and volatilities, and our model averaging method allows for investors' model uncertainty over time. Our time-varying parameter Nelson-Siegel Dynamic Model Averaging (NS-DMA) predicts yields better than standard benchmarks and successfully captures plausible time-varying term premia in real time. The proposed model has significant in-sample and out-of-sample predictability for excess bond returns, and the predictability is of economic value.
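The Nelson-Siegel factor structure that the model extends can be sketched directly: yields are a linear combination of level, slope, and curvature loadings. The cross-section of yields below is illustrative, and the decay parameter is the Diebold-Li value rather than anything estimated from the paper's data:

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel loadings for level, slope and curvature factors.
    lam = 0.0609 is the Diebold-Li value for maturities in months."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    curve = slope - np.exp(-x)
    return np.column_stack([np.ones_like(tau), slope, curve])

# Fit the three factors to one cross-section of yields by least squares
# (illustrative yields; the dynamic model evolves the factors over time).
maturities = np.array([3.0, 12.0, 36.0, 60.0, 120.0])   # months
yields = np.array([4.0, 4.3, 4.8, 5.0, 5.3])            # per cent
loadings = ns_loadings(maturities)
factors, *_ = np.linalg.lstsq(loadings, yields, rcond=None)
fitted = loadings @ factors
```

The paper's NS-DMA approach lets these factors (and the model weights) vary over time; the static cross-sectional fit above is the building block being averaged over.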
Resumo:
We present a real data set of claim amounts in which costs related to damage are recorded separately from those related to medical expenses. Only claims with positive costs are considered here. Two approaches to density estimation are presented: a classical parametric method and a semi-parametric method based on transformation kernel density estimation. We explore the data set with standard univariate methods, and we propose Bayesian methods for selecting the bandwidth and transformation parameters in the univariate case. Finally, we indicate how to compare the results of the alternative methods, both by examining the shape of the estimated density over its whole domain and by exploring the density estimates in the right tail.
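The transformation kernel idea can be sketched as a KDE computed on the log scale and mapped back with the change-of-variables Jacobian. The data and rule-of-thumb bandwidth below are illustrative; the paper selects the bandwidth and transformation parameters by Bayesian methods:

```python
import numpy as np

rng = np.random.default_rng(1)

# Heavy-tailed positive "claim costs" (lognormal stand-in data).
claims = rng.lognormal(mean=8.0, sigma=1.2, size=500)

def transformation_kde(x, grid, h):
    """Gaussian KDE on the log scale, mapped back to the original scale:
    f_X(x) = f_Z(log x) / x with Z = log X (change of variables)."""
    z = np.log(x)
    zg = np.log(grid)
    u = (zg[:, None] - z[None, :]) / h
    fz = np.exp(-0.5 * u**2).sum(axis=1) / (len(z) * h * np.sqrt(2 * np.pi))
    return fz / grid            # Jacobian of the log transform

grid = np.linspace(claims.min(), claims.max(), 400)
h = 1.06 * np.log(claims).std() * len(claims) ** (-1 / 5)   # rule of thumb
dens = transformation_kde(claims, grid, h)
```

Because the kernel operates on the transformed scale, the back-transformed estimate remains positive-valued and handles the right tail far better than a plain KDE on the raw amounts.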