984 results for Mathematical statistics.
Abstract:
Mode of access: Internet.
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
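The naive mean field idea summarized above can be made concrete with a small numeric sketch. The following is an illustrative fixed-point iteration for an Ising-type model, not an example from the book itself; the couplings `J`, fields `h`, and the small random instance are assumptions chosen for demonstration. Each spin's marginal is approximated by an independent mean `m_i`, updated to self-consistency via `m_i <- tanh(h_i + sum_j J_ij m_j)`.

```python
import numpy as np

# Naive mean field approximation for a small Ising-type model
# (illustrative instance; J, h, and sizes are assumptions).
rng = np.random.default_rng(0)
n = 8
J = rng.normal(scale=0.2, size=(n, n))
J = (J + J.T) / 2          # symmetric couplings
np.fill_diagonal(J, 0.0)   # no self-coupling
h = rng.normal(scale=0.1, size=n)

m = np.zeros(n)
for _ in range(200):                       # fixed-point iteration
    m_new = np.tanh(h + J @ m)
    if np.max(np.abs(m_new - m)) < 1e-10:  # converged
        m = m_new
        break
    m = m_new

print(m)  # approximate magnetizations, each in (-1, 1)
```

The TAP approach mentioned above would add an Onsager reaction term to the argument of `tanh`; the variational view interprets the same fixed point as minimizing a Kullback-Leibler divergence over factorized distributions.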
Abstract:
Firstly, we numerically model a practical 20 Gb/s undersea configuration employing the Return-to-Zero Differential Phase Shift Keying data format. The modelling is performed using the Split-Step Fourier Method to solve the Generalised Nonlinear Schrödinger Equation. We optimise the dispersion map and per-channel launch power of these channels and investigate how the choice of pre/post compensation can influence the performance. After obtaining these optimal configurations, we investigate the Bit Error Rate estimation of these systems and find that estimation based on Gaussian statistics of the electrical current is appropriate for systems of this type, indicating quasi-linear behaviour. The introduction of narrower pulses due to the deployment of quasi-linear transmission decreases the tolerance to chromatic dispersion and intra-channel nonlinearity. We use tools from Mathematical Statistics to study the behaviour of these channels in order to develop new methods to estimate the Bit Error Rate. In the final section, we consider the estimation of Eye Closure Penalty, a popular measure of signal distortion. Using a numerical example and assuming the symmetry of eye closure, we show that the Eye Closure Penalty can be simply estimated using Gaussian statistics. We also find that the statistics of the logical ones dominate the statistics of signal distortion in the case of Return-to-Zero On-Off Keying configurations.
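The Split-Step Fourier Method named in this abstract alternates a dispersive step in the Fourier domain with a nonlinear phase step in the time domain. The sketch below is a minimal, generic implementation for the lossless scalar nonlinear Schrödinger equation with an ideal sech soliton; all parameters (`beta2`, `gamma`, grid, step size) are illustrative assumptions and do not reproduce the thesis's 20 Gb/s system model.

```python
import numpy as np

# Symmetric split-step Fourier method for the lossless NLSE
#   i u_z = (beta2/2) u_tt - gamma |u|^2 u
# (illustrative soliton test case, not the thesis's system).
nt, T = 256, 40.0
t = np.linspace(-T / 2, T / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=T / nt)   # angular frequencies

beta2, gamma = -1.0, 1.0                       # anomalous dispersion
u = 1.0 / np.cosh(t)                           # fundamental soliton
dz, steps = 0.01, 500

# linear (dispersive) half-step operator, applied in the Fourier domain
half_disp = np.exp(1j * beta2 / 2 * w**2 * dz / 2)
for _ in range(steps):
    u = np.fft.ifft(half_disp * np.fft.fft(u))      # dispersion, dz/2
    u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)  # nonlinearity, dz
    u = np.fft.ifft(half_disp * np.fft.fft(u))      # dispersion, dz/2

# both sub-steps are phase-only, so pulse energy is conserved
energy = np.sum(np.abs(u)**2) * (T / nt)
print(energy)  # approximately 2, the energy of the sech pulse
```

A production model would add loss, amplifier noise, and a dispersion map along the span; the alternating structure of the loop stays the same.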
Abstract:
The paper is dedicated to a theory that describes physical phenomena under non-constant statistical conditions. The theory is a new direction in probability theory and mathematical statistics that opens new possibilities for representing the physical world through hyper-random models. These models take into account changes in an object's properties as well as the uncertainty of the statistical conditions.
Abstract:
* The research is supported in part by the INTAS project 04-77-7173, http://www.intas.be
Abstract:
2000 Mathematics Subject Classification: 62D05.
Abstract:
Causal inference with a continuous treatment is a relatively under-explored problem. In this dissertation, we adopt the potential outcomes framework. Potential outcomes are responses that would be seen for a unit under all possible treatments. In an observational study where the treatment is continuous, the potential outcomes are an uncountably infinite set indexed by treatment dose. We parameterize this unobservable set as a linear combination of a finite number of basis functions whose coefficients vary across units. This leads to new techniques for estimating the population average dose-response function (ADRF). Some techniques require a model for the treatment assignment given covariates, some require a model for predicting the potential outcomes from covariates, and some require both. We develop these techniques using a framework of estimating functions, compare them to existing methods for continuous treatments, and simulate their performance in a population where the ADRF is linear and the models for the treatment and/or outcomes may be misspecified. We also extend the comparisons to a data set of lottery winners in Massachusetts. Next, we describe the methods and functions in the R package causaldrf using data from the National Medical Expenditure Survey (NMES) and Infant Health and Development Program (IHDP) as examples. Additionally, we analyze the National Growth and Health Study (NGHS) data set and deal with the issue of missing data. Lastly, we discuss future research goals and possible extensions.
Abstract:
The agricultural sector has been one of the main sources of income for the Colombian economy; however, it lacks a solid financial derivatives market that would protect producers and exporters against the risk of price volatility. This proposal seeks to estimate the convenience yields and theoretical prices for coffee futures in Colombia. To this end, the Colombian coffee market is first described, and then the price of coffee and its volatility are modeled on the basis of variables such as weather and inventory levels. Finally, the bands within which the price would oscillate under certain no-arbitrage conditions are estimated, following the methodology designed by Díaz and Vanegas (2001) and complemented by Cárcamo and Franco (2012). As an illustration, the convenience yields are incorporated and a hypothetical case in a Colombian coffee market is presented.
Abstract:
This dissertation applies statistical methods to the evaluation of automatic summarization using data from the Text Analysis Conferences in 2008-2011. Several aspects of the evaluation framework itself are studied, including the statistical testing used to determine significant differences, the assessors, and the design of the experiment. In addition, a family of evaluation metrics is developed to predict the score an automatically generated summary would receive from a human judge, and its results are demonstrated at the Text Analysis Conference. Finally, variations on the evaluation framework are studied and their relative merits considered. An overarching theme of this dissertation is the application of standard statistical methods to data that do not conform to the usual testing assumptions.
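One standard way to test for significant differences between two summarization systems over a shared topic set is a paired approximate-randomization (permutation) test. The sketch below is generic; the scores, topic count, and effect size are invented for illustration, and the conference's actual testing protocol may differ.

```python
import numpy as np

# Paired approximate-randomization test for the difference in mean
# scores between two systems over the same topics (illustrative data).
rng = np.random.default_rng(3)
scores_a = rng.normal(0.45, 0.05, size=40)            # per-topic scores, system A
scores_b = scores_a + rng.normal(0.03, 0.02, size=40) # system B, slightly better

observed = abs(np.mean(scores_b) - np.mean(scores_a))
count = 0
n_perm = 5000
for _ in range(n_perm):
    flip = rng.random(40) < 0.5                       # randomly swap each pair's labels
    diff = np.where(flip, scores_a - scores_b, scores_b - scores_a)
    if abs(np.mean(diff)) >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)                  # add-one smoothing
print(p_value)
```

Because only the pairing is permuted, the test respects the per-topic dependence between the two systems' scores, which a two-sample t-test would ignore.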
Abstract:
In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, natural disaster insurance, quality control schemes, and related fields. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and extreme quantiles specified by interested authorities; such information about extremes serves as essential guidance in decision-making processes. In this context, however, data are usually skewed in nature, and the rarity of exceedances of large thresholds implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is a branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing EVT methods for the estimation of small threshold exceedance probabilities and extreme quantiles often deliver poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
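The classical EVT baseline this dissertation improves upon is the peaks-over-threshold estimate: fit a generalized Pareto distribution to exceedances over a high threshold and extrapolate the tail. The sketch below illustrates that baseline only, not the proposed semiparametric fusion method; the Pareto sample, threshold quantile, and evaluation point are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

# Peaks-over-threshold tail estimate (classical EVT baseline):
# P(X > x) is estimated as p_u * GPD-survival(x - u) for x above a
# high threshold u (illustrative heavy-tailed sample).
rng = np.random.default_rng(2)
sample = rng.pareto(3.0, 50_000) + 1.0     # Pareto tail: P(X > x) = x**-3

u = np.quantile(sample, 0.95)              # high threshold
exceed = sample[sample > u] - u            # exceedances over u
p_u = np.mean(sample > u)                  # empirical P(X > u)

shape, _, scale = genpareto.fit(exceed, floc=0)  # GPD fit to exceedances

x = 5.0                                    # extreme point beyond the data bulk
tail_prob = p_u * genpareto.sf(x - u, shape, loc=0, scale=scale)
print(tail_prob)                           # close to the true 5**-3 = 0.008
```

With few or no observations near `x`, this extrapolation becomes unstable, which is exactly the regime where fusing the sample with external computer-generated data is intended to help.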
Abstract:
In the context of insurance companies, capital represents a company's strength and its capacity to meet the obligations acquired with its customers under scenarios of unexpected losses. In the wake of past crises, capital requirements have been rising, and to estimate this capital the European regulatory framework proposes a risk-based methodology known as Solvency II. In Colombia, however, the currently required methodology does not cover all of the risks to which a company in this sector is exposed. The purpose of this work is to establish the basis for the risk-based capital calculation of an insurance company in Colombia, adapting the requirements proposed by Solvency II to the conditions of the Colombian market. This is done by quantifying the main risk variables related to the financial and business environment of companies in Colombia.
Abstract:
This study sought to extend earlier work by Mulhern and Wylie (2004) to investigate a UK-wide sample of psychology undergraduates. A total of 890 participants from eight universities across the UK were tested on six broadly defined components of mathematical thinking relevant to the teaching of statistics in psychology: calculation, algebraic reasoning, graphical interpretation, proportionality and ratio, probability and sampling, and estimation. Results were consistent with Mulhern and Wylie's (2004) previously reported findings. Overall, participants across institutions exhibited marked deficiencies in many aspects of mathematical thinking. Results also revealed significant gender differences on calculation, proportionality and ratio, and estimation. Level of qualification in mathematics was found to predict overall performance. Analysis of the nature and content of errors revealed consistent patterns of misconceptions in core mathematical knowledge, likely to hamper the learning of statistics.