953 results for Statistical Inference


Relevance: 60.00%

Abstract:

Instantaneous natural mortality rates and a nonparametric hunting mortality function are estimated from a multiple-year tagging experiment with arbitrary, time-dependent fishing or hunting mortality. Our theory allows animals to be tagged over a range of times in each year, and to take time to mix into the population. Animals are recovered by hunting or fishing, and death events from natural causes occur but are not observed. We combine a long-standing approach based on yearly totals, described by Brownie et al. (1985, Statistical Inference from Band Recovery Data: A Handbook, Second edition, United States Fish and Wildlife Service, Washington, Resource Publication, 156), with an exact-time-of-recovery approach originated by Hearn, Sandland and Hampton (1987, Journal du Conseil International pour l'Exploration de la Mer, 43, 107-117), who modeled times at liberty without regard to time of tagging. Our model allows for exact times of release and recovery, incomplete reporting of recoveries, and potential tag shedding. We apply our methods to data on the heavily exploited southern bluefin tuna (Thunnus maccoyii).
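
The flavour of such an exact-time likelihood can be conveyed with a deliberately simplified sketch (not the paper's estimator): one release cohort, a constant fishing mortality F, constant natural mortality M, full reporting, and a study horizon T, all names chosen here for illustration. A tag recovered at time t after release contributes density F*exp(-(F+M)t); a tag never recovered contributes the complementary probability.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, rec_times, n_unrecovered, T):
        """Simplified exact-time tag-recovery likelihood.
        params: log(F), log(M); rec_times: recovery times since release."""
        F, M = np.exp(params)
        Z = F + M                                  # total instantaneous mortality
        # density of an observed recovery at time t: F * exp(-Z t)
        ll = np.sum(np.log(F) - Z * rec_times)
        # probability a released tag is ever recovered within [0, T]
        p_rec = F / Z * (1.0 - np.exp(-Z * T))
        ll += n_unrecovered * np.log(1.0 - p_rec)
        return -ll

    rng = np.random.default_rng(1)
    F_true, M_true, T = 0.4, 0.2, 5.0
    death = rng.exponential(1.0 / (F_true + M_true), 2000)
    caught = (rng.random(2000) < F_true / (F_true + M_true)) & (death < T)
    fit = minimize(neg_log_lik, np.log([0.3, 0.3]),
                   args=(death[caught], (~caught).sum(), T))
    print(np.exp(fit.x))   # should be near (0.4, 0.2)

The paper's full model replaces the constant F with a time-dependent hunting mortality function and adds reporting-rate and tag-shedding parameters.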

Relevance: 60.00%

Abstract:

In this thesis the use of the Bayesian approach to statistical inference in fisheries stock assessment is studied. The work was conducted in collaboration with the Finnish Game and Fisheries Research Institute, using the problem of monitoring and predicting the juvenile salmon population in the River Tornionjoki as an example application. The River Tornionjoki is the largest salmon river flowing into the Baltic Sea. This thesis tackles the issues of model formulation and model checking as well as computational problems related to Bayesian modelling in the context of fisheries stock assessment. Each article of the thesis provides a novel method either for extracting information from data obtained via a particular type of sampling system or for integrating the information about the fish stock from multiple sources in terms of a population dynamics model. Mark-recapture and removal sampling schemes and a random catch sampling method are covered for the estimation of the population size. In addition, a method for estimating the stock composition of a salmon catch based on DNA samples is also presented. For most of the articles, Markov chain Monte Carlo (MCMC) simulation has been used as a tool to approximate the posterior distribution. Problems arising from the sampling method are also briefly discussed and potential solutions for these problems are proposed. Special emphasis in the discussion is given to the philosophical foundation of the Bayesian approach in the context of fisheries stock assessment. It is argued that the role of subjective prior knowledge needed in practically all parts of a Bayesian model should be recognized and consequently fully utilised in the process of model formulation.
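
As a minimal illustration of the Bayesian machinery involved (a generic two-sample mark-recapture toy, not one of the thesis models): K marked fish, a second sample of n fish containing m marked ones, and a random-walk Metropolis sampler exploring the posterior of the population size N under a vague 1/N prior.

    import numpy as np
    from scipy import stats

    K, n, m = 500, 400, 40          # marked, resampled, recaptured (toy data)

    def log_post(N):
        if N < K + n - m:            # population must contain all observed fish
            return -np.inf
        # binomial likelihood for recaptures plus a vague 1/N prior
        return stats.binom.logpmf(m, n, K / N) - np.log(N)

    rng = np.random.default_rng(0)
    N, chain = 2000.0, []
    for _ in range(20000):
        prop = N + rng.normal(0, 200)           # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(N):
            N = prop
        chain.append(N)
    post = np.array(chain[5000:])
    print(post.mean(), np.percentile(post, [2.5, 97.5]))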

Relevance: 60.00%

Abstract:

Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
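
For the multinomial model class, the NML normalizing sum (parametric complexity) is widely computed with the linear-time recurrence of Kontkanen and Myllymäki; the sketch below follows that idea in floating point. The exact form of the recurrence is my assumption, not a transcription of the thesis algorithms.

    import math

    def multinomial_complexity(K, n):
        """Parametric complexity C(K, n) of a K-valued multinomial on n
        observations, via the linear-time recurrence (assumed form)."""
        C1 = 1.0                                     # K = 1: a single term
        C2 = sum(math.comb(n, r)                     # K = 2: exact sum over counts
                 * (r / n) ** r * ((n - r) / n) ** (n - r)
                 for r in range(n + 1))
        if K == 1:
            return C1
        prev, cur = C1, C2
        for k in range(3, K + 1):                    # C(k) = C(k-1) + n/(k-2) C(k-2)
            prev, cur = cur, cur + n * prev / (k - 2)
        return cur

    # NML code length = -log P(data | ML params) + log C(K, n)
    print(math.log(multinomial_complexity(4, 100)))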

Relevance: 60.00%

Abstract:

Markov random fields (MRF) are popular in image processing applications for describing spatial dependencies between image units. Here, we take a look at the theory and the models of MRFs, with an application to improving forest inventory estimates. Typically, autocorrelation between study units is a nuisance in statistical inference, but we take advantage of the dependencies to smooth noisy measurements by borrowing information from the neighbouring units. We build a stochastic spatial model, which we estimate with a Markov chain Monte Carlo simulation method. The smoothed values are validated against another data set, increasing our confidence that the estimates are more accurate than the originals.
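
A generic sketch of the idea (not the thesis's forest-inventory model): noisy observations on a grid, a pairwise Gaussian MRF prior with illustrative precision parameters kappa and sigma2, and single-site Gibbs sampling to approximate the posterior-mean smoothed field.

    import numpy as np

    def gibbs_smooth(y, kappa=4.0, sigma2=1.0, n_iter=200, seed=0):
        """Posterior-mean smoothing of a noisy field under a pairwise
        Gaussian MRF prior, by single-site Gibbs sampling (toy sketch)."""
        rng = np.random.default_rng(seed)
        x = y.copy()
        acc = np.zeros_like(y)
        H, W = y.shape
        for it in range(n_iter):
            for i in range(H):
                for j in range(W):
                    nb = [x[a, b] for a, b in ((i-1,j),(i+1,j),(i,j-1),(i,j+1))
                          if 0 <= a < H and 0 <= b < W]
                    # full conditional: data term + smoothing toward neighbours
                    prec = 1.0 / sigma2 + kappa * len(nb)
                    mean = (y[i, j] / sigma2 + kappa * sum(nb)) / prec
                    x[i, j] = rng.normal(mean, prec ** -0.5)
            if it >= n_iter // 2:          # discard burn-in
                acc += x
        return acc / (n_iter - n_iter // 2)

    truth = np.add.outer(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
    noisy = truth + np.random.default_rng(1).normal(0, 0.3, truth.shape)
    print(np.abs(gibbs_smooth(noisy) - truth).mean(),
          np.abs(noisy - truth).mean())   # smoothed error should be smaller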

Relevance: 60.00%

Abstract:

Minimization problems with respect to a one-parameter family of generalized relative entropies are studied. These relative entropies, which we term relative alpha-entropies (denoted I-alpha), arise as redundancies under mismatched compression when cumulants of compressed lengths are considered instead of expected compressed lengths. These parametric relative entropies are a generalization of the usual relative entropy (Kullback-Leibler divergence). Just like relative entropy, these relative alpha-entropies behave like squared Euclidean distance and satisfy the Pythagorean property. Minimizers of these relative alpha-entropies on closed and convex sets are shown to exist. Such minimizations generalize the maximum Rényi or Tsallis entropy principle. The minimizing probability distribution (termed forward I-alpha-projection) for a linear family is shown to obey a power-law. Other results in connection with statistical inference, namely subspace transitivity and iterated projections, are also established. In a companion paper, a related minimization problem of interest in robust statistics that leads to a reverse I-alpha-projection is studied.
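
For concreteness, a numerical sketch using one common parameterization of the relative alpha-entropy (the exact form is my assumption; the snippet checks it against two defining sanity properties: I_alpha(p, p) = 0 and convergence to the Kullback-Leibler divergence as alpha tends to 1):

    import numpy as np

    def I_alpha(p, q, alpha):
        """Relative alpha-entropy of discrete distributions p, q
        (assumed form; reduces to KL divergence as alpha -> 1)."""
        a = alpha
        return (a / (1 - a) * np.log(np.sum(p * q ** (a - 1)))
                + np.log(np.sum(q ** a))
                - 1 / (1 - a) * np.log(np.sum(p ** a)))

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.2, 0.5, 0.3])
    kl = np.sum(p * np.log(p / q))
    print(I_alpha(p, q, 0.999), kl)    # nearly equal
    print(I_alpha(p, p, 0.7))          # zero, as for a divergence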

Relevance: 60.00%

Abstract:

The application of Bayes' Theorem to signal processing provides a consistent framework for proceeding from prior knowledge to a posterior inference conditioned on both the prior knowledge and the observed signal data. The first part of the lecture will illustrate how the Bayesian methodology can be applied to a variety of signal processing problems. The second part of the lecture will introduce Markov chain Monte Carlo (MCMC) methods, an effective approach to overcoming many of the analytical and computational problems inherent in statistical inference. Such techniques are at the centre of the rapidly developing area of Bayesian signal processing which, with the continual increase in available computational power, is likely to provide the underlying framework for most signal processing applications.
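
A minimal, hypothetical example in the spirit of the lecture: posterior inference for the amplitude A of a known-frequency sinusoid in Gaussian noise, explored with a random-walk Metropolis-Hastings sampler (all constants illustrative).

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 200)
    signal = 1.5 * np.sin(2 * np.pi * 5 * t)       # true amplitude A = 1.5
    y = signal + rng.normal(0, 0.5, t.size)

    def log_post(A, sigma=0.5):
        # Gaussian likelihood times a flat prior on A
        r = y - A * np.sin(2 * np.pi * 5 * t)
        return -0.5 * np.sum(r ** 2) / sigma ** 2

    A, chain = 0.0, []
    for _ in range(10000):
        prop = A + rng.normal(0, 0.1)              # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(A):
            A = prop
        chain.append(A)
    print(np.mean(chain[2000:]))     # posterior mean, close to 1.5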

Relevance: 60.00%

Abstract:

Postgraduate project/dissertation submitted to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Dental Medicine.

Relevance: 60.00%

Abstract:

A probabilistic, nonlinear supervised learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA employs a set of several forward mapping functions that are estimated automatically from training data. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). The SMA can model ambiguous, one-to-many mappings that may yield multiple valid output hypotheses. Once learned, the mapping functions generate a set of output hypotheses for a given input via a statistical inference procedure. The SMA inference procedure incorporates an inverse mapping or feedback function in evaluating the likelihood of each of the hypotheses. Possible feedback functions include computer graphics rendering routines that can generate images for given hypotheses. The SMA employs a variant of the Expectation-Maximization algorithm for simultaneously learning the specialized domains and the mapping functions, together with approximate strategies for inference. The framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human's body or hands, given silhouettes from a single image. The accuracy and stability of the SMA are also tested using synthetic images of human bodies and hands, where ground truth is known.
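
A toy sketch of the inference step only (the EM-based learning of specialized domains is skipped, and the two specialists are fitted on hand-picked subsets): recovering x from y = x^2 is a one-to-many problem, each specialist map proposes a hypothesis, and a feedback (rendering) function scores the hypotheses against the observation.

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(-2, 2, 400)                 # "output" space (e.g. pose)
    y = x ** 2 + rng.normal(0, 0.05, x.size)    # "input" features (e.g. silhouette)

    # Two specialized mappings y -> x, one per domain (here: sign of x).
    specialists = []
    for mask in (x < 0, x >= 0):
        coef = np.polyfit(y[mask], x[mask], deg=3)  # cheap stand-in for a learned map
        specialists.append(coef)

    def infer(y_obs, feedback=lambda x_hat: x_hat ** 2):
        """Each specialist proposes a hypothesis; the feedback function
        re-renders it and scores consistency with the observation."""
        hyps = [np.polyval(c, y_obs) for c in specialists]
        errs = [abs(feedback(h) - y_obs) for h in hyps]
        return sorted(zip(errs, hyps))              # best-scoring first

    print(infer(1.0))   # two valid hypotheses, near -1 and +1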

Relevance: 60.00%

Abstract:

This brief examines the application of nonlinear statistical process control to the detection and diagnosis of faults in automotive engines. In this statistical framework, the computed score variables may have a complicated nonparametric distribution function, which hampers statistical inference, notably for fault detection and diagnosis. This brief shows that introducing the statistical local approach into nonlinear statistical process control produces statistics that follow a normal distribution, thereby enabling a simple statistical inference for fault detection. Further, for fault diagnosis, this brief introduces a compensation scheme that approximates the fault condition signature. Experimental results from a Volkswagen 1.9-L turbo-charged diesel engine are included.
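
The gist of the local approach can be sketched generically (illustrative code, not the brief's engine model): primary residuals with zero mean under the no-fault hypothesis are averaged into an improved residual that is asymptotically normal by the central limit theorem, so a chi-squared threshold yields a simple detection test.

    import numpy as np
    from scipy import stats

    def improved_residual(primary):            # primary: (N, d), zero-mean under H0
        N = primary.shape[0]
        return primary.sum(axis=0) / np.sqrt(N)   # asymptotically N(0, Sigma)

    def fault_statistic(primary):
        z = improved_residual(primary)
        S = np.cov(primary, rowvar=False)      # estimated residual covariance
        return z @ np.linalg.solve(S, z)       # Hotelling/chi-squared statistic

    rng = np.random.default_rng(4)
    d, N = 3, 500
    healthy = rng.normal(0, 1, (N, d))
    faulty = healthy + 0.2                     # small persistent offset = fault
    thr = stats.chi2.ppf(0.99, df=d)
    print(fault_statistic(healthy) > thr, fault_statistic(faulty) > thr)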

Relevance: 60.00%

Abstract:

Nonlinear principal component analysis (PCA) based on neural networks has drawn significant attention as a monitoring tool for complex nonlinear processes, but there remains a difficulty with determining the optimal network topology. This paper exploits the advantages of the Fast Recursive Algorithm, where the number of nodes, the location of centres, and the weights between the hidden layer and the output layer can be identified simultaneously for radial basis function (RBF) networks. The topology problem for the nonlinear PCA based on neural networks can thus be solved. Another problem with nonlinear PCA is that the derived nonlinear scores may not be statistically independent or follow a simple parametric distribution. This hinders its applications in process monitoring since the simplicity of applying predetermined probability distribution functions is lost. This paper proposes the use of a support vector data description and shows that transforming the nonlinear principal components into a feature space allows a simple statistical inference. Results from both simulated and industrial data confirm the efficacy of the proposed method for solving nonlinear principal component problems, compared with linear PCA and kernel PCA.
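
A rough sketch of the monitoring step (kernel PCA stands in for the paper's RBF-network PCA, and scikit-learn's OneClassSVM with an RBF kernel stands in for the support vector data description, to which it is closely related):

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(5)
    theta = rng.uniform(0, 2 * np.pi, 300)                 # nonlinear process
    X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (300, 2))

    # nonlinear scores for the normal operating data
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0).fit(X)
    scores = kpca.transform(X)

    # one-class boundary on the scores: simple inference despite their
    # non-parametric distribution
    monitor = OneClassSVM(nu=0.01, gamma="scale").fit(scores)

    X_fault = X + np.array([0.8, 0.0])                     # shifted operating point
    print((monitor.predict(kpca.transform(X_fault)) == -1).mean())  # flagged share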

Relevance: 60.00%

Abstract:

The quick, easy way to master all the statistics you'll ever need. The bad news first: if you want a psychology degree you'll need to know statistics. Now for the good news: Psychology Statistics For Dummies. Featuring jargon-free explanations, step-by-step instructions and dozens of real-life examples, Psychology Statistics For Dummies makes the knotty world of statistics a lot less baffling. Rather than padding the text with concepts and procedures irrelevant to the task, the authors focus only on the statistics psychology students need to know. As an alternative to typical, lead-heavy statistics texts or supplements to assigned course reading, this is one book psychology students won't want to be without.

- Ease into statistics – start out with an introduction to how statistics are used by psychologists, including the types of variables they use and how they measure them
- Get your feet wet – quickly learn the basics of descriptive statistics, such as central tendency and measures of dispersion, along with common ways of graphically depicting information
- Meet your new best friend – learn the ins and outs of SPSS, the most popular statistics software package among psychology students, including how to input, manipulate and analyse data
- Analyse this – get up to speed on statistical analysis core concepts, such as probability and inference, hypothesis testing, distributions, Z-scores and effect sizes
- Correlate that – get the lowdown on common procedures for defining relationships between variables, including linear regressions, associations between categorical data and more
- Analyse by inference – master key methods in inferential statistics, including techniques for analysing independent groups designs and repeated-measures research designs

Open the book and find:
- Ways to describe statistical data
- How to use SPSS statistical software
- Probability theory and statistical inference
- Descriptive statistics basics
- How to test hypotheses
- Correlations and other relationships between variables
- Core concepts in statistical analysis for psychology
- Analysing research designs

Learn to:
- Use SPSS to analyse data
- Master statistical methods and procedures using psychology-based explanations and examples
- Create better reports
- Identify key concepts and pass your course

Relevance: 60.00%

Abstract:

This paper presents an in-depth study on the effect that composition and properties of recycled coarse aggregates from previous concrete structures, together with the water/cement ratio (w/c) and the replacement ratio of coarse aggregate, have on compressive strength, its evolution through time, and its variability. A rigorous approach through statistical inference based on multiple linear regression has identified the key factors. A predictive equation is given for compressive strength when recycled coarse aggregates are used. The w/c and replacement ratio are the principal factors affecting concrete compressive strength. Their effect is significantly modified by the properties and composition of the recycled aggregates used. An equation that accurately predicts concrete compressive strength in terms of these parameters is presented. Particular attention has been paid to the complex effect that old concrete and adhered mortar have on concrete compressive strength and its mid-term evolution. It has been confirmed that the presence of contaminants tends to increase variability of compressive strength values.
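
The inferential machinery here is ordinary multiple linear regression; a sketch on synthetic data (the generative coefficients below are placeholders, not the paper's fitted values):

    import numpy as np

    rng = np.random.default_rng(6)
    n = 120
    wc = rng.uniform(0.40, 0.65, n)            # water/cement ratio
    rr = rng.uniform(0.0, 1.0, n)              # replacement ratio of coarse aggregate
    # placeholder generative model: strength drops with both w/c and replacement
    fc = 80 - 70 * wc - 8 * rr + rng.normal(0, 2, n)

    X = np.c_[np.ones(n), wc, rr]              # design matrix with intercept
    beta, res, *_ = np.linalg.lstsq(X, fc, rcond=None)
    sigma2 = res[0] / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    print(beta)                                # fitted coefficients
    print(beta / se)                           # t-statistics for significance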

Relevance: 60.00%

Abstract:

ABSTRACT - INTRODUCTION: To guarantee the quality and universality of health care, it is essential that the available resources be used well, avoiding waste. A hospital's results are directly linked to the results of its operating theatre. The utilisation rate and the cancellation rate are indicators of operating theatre activity. A pilot study was carried out at the Hospital Dr. José de Almeida in Cascais. OBJECTIVE: To determine the rates of same-day surgery cancellations. METHODOLOGY: A descriptive, longitudinal, retrospective, quantitative study of the surgeries scheduled between 1 January and 31 March 2012. Descriptive statistics and statistical inference, through tests of independence between variables, were used. RESULTS: Of 1524 scheduled surgeries, 205 were cancelled, 100 of them on the day itself. Overall, the results show cancellation rates (13.45%) lower than those reported by the Ministério da Saúde for 2008, 2009 and 2010 (28.4%, 26.0% and 26.6%, respectively). However, the same-day cancellation rates are similar: 48.78% in this study versus between 44.1% and 50.5% in the Ministério da Saúde data. The international studies consulted report overall rates of 0.34% in China (Sung, 2010), in a multidisciplinary hospital, and 30.3% (same-day cancellation rate) in India (Garg, 2009). CONCLUSION: The reason for cancellation is attributed to the "service/hospital" or to "others"; the patient accounts for a low share of attributed cancellation reasons. A dependence relationship was found between the variables, except between the date of cancellation and the surgical specialty. A more thorough and comprehensive study of this phenomenon in Portuguese health institutions is therefore considered pertinent.
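
The independence tests mentioned in the methodology are typically chi-squared tests on contingency tables; a generic sketch with invented counts:

    from scipy.stats import chi2_contingency

    # rows: cancellation reason (service/hospital, patient, other)
    # columns: surgical specialty A / B -- counts are illustrative only
    table = [[30, 25],
             [ 5,  8],
             [18, 14]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")   # reject independence if p < 0.05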

Relevance: 60.00%

Abstract:

Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obfuscating distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. So far this has typically been based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at the average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Auditory meaningful sounds of common objects were used for a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising the single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: The results show that the occurrence of each map is structured in time and consistent across trials, both at the single-subject and at the group level. Conducting separate analyses of ERPs at the single-subject and group levels, we could quantify the consistency of identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analyses at the average ERP level. Conclusions: This novel approach to single-trial analysis promises to have impact on several domains. In clinical research, it gives the possibility to statistically evaluate single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding behaviour and brain activity interdependencies at both the single-subject and the group level. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and inter-individual variability.
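
The two analysis steps can be sketched with off-the-shelf tools (a stand-in for the mixture model of [3], on toy data): fit a mixture of Gaussians to single-trial topography vectors pooled over trials and time, then read the per-sample responsibilities as conditional probabilities of each map across trials and time.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)
    n_trials, n_times, n_el = 40, 100, 64          # trials x samples x electrodes

    # toy data: two prototype topographies, active early vs. late in the epoch
    maps = rng.normal(0, 1, (2, n_el))
    eeg = np.empty((n_trials, n_times, n_el))
    for tr in range(n_trials):
        active = (np.arange(n_times) >= 50).astype(int)   # map 0 then map 1
        eeg[tr] = maps[active] + rng.normal(0, 0.8, (n_times, n_el))

    # step 1: representative topographies from all single-trial samples
    gmm = GaussianMixture(n_components=2, covariance_type="spherical",
                          random_state=0).fit(eeg.reshape(-1, n_el))

    # step 2: conditional probability of each map across trials and time
    resp = gmm.predict_proba(eeg.reshape(-1, n_el)).reshape(n_trials, n_times, 2)
    print(resp.mean(axis=0)[[10, 90]])   # map occupancy early vs. late in the epoch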

Relevance: 60.00%

Abstract:

This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have lately been a subject of intensive research, as they open a way for essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension to DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued while working on several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency; and a study of the effects of aerosol model selection on the GOMOS algorithm.
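
The adaptation idea behind DRAM can be illustrated through its Adaptive Metropolis ingredient (the delayed-rejection stage is omitted in this sketch): the proposal covariance is periodically re-estimated from the chain's own history, with the usual 2.4^2/d scaling of Haario et al.

    import numpy as np

    def adaptive_metropolis(log_post, x0, n_iter=20000, adapt_every=100):
        """Random-walk Metropolis whose proposal covariance adapts to the
        chain history (the AM building block of DRAM; DR stage omitted)."""
        rng = np.random.default_rng(0)
        d = len(x0)
        sd = 2.4 ** 2 / d                       # Haario et al. scaling
        cov = np.eye(d) * 0.1
        x, lp = np.array(x0, float), log_post(x0)
        chain = []
        for i in range(n_iter):
            prop = rng.multivariate_normal(x, cov)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
            chain.append(x.copy())
            if i and i % adapt_every == 0:      # learn from the history so far
                cov = sd * np.cov(np.array(chain).T) + 1e-8 * np.eye(d)
        return np.array(chain)

    # banana-shaped target: a classic test problem for adaptive samplers
    log_post = lambda z: -0.5 * (z[0] ** 2 + (z[1] - z[0] ** 2) ** 2 / 0.5)
    chain = adaptive_metropolis(log_post, [0.0, 0.0])
    print(chain[10000:].mean(axis=0), chain[10000:].std(axis=0))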