601 results for Estimators


Relevance:

10.00%

Abstract:

Group testing has long been considered a safe and sensible alternative to one-at-a-time testing in applications where the prevalence rate p is small. In this thesis, we applied a Bayesian approach to estimate p using Beta-type prior distributions. First, we derived two Bayes estimators of p from a prior on p under two different loss functions. Second, we presented two more Bayes estimators of p from a prior on π under two loss functions. We also constructed credible and HPD intervals for p. In addition, we carried out intensive numerical studies. All results showed that the Bayes estimator is preferred over the usual maximum likelihood estimator (MLE) for small p. We also presented the optimal β for different p, m, and k.
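
As a concrete illustration of this setting, the sketch below simulates pooled testing of m groups of size k and compares the MLE of p with a Bayes-type plug-in estimate. The Beta(1, 1) prior on π, the squared-error loss (posterior mean), and all numerical values are illustrative assumptions, not the thesis's actual choices.

```python
# Minimal group-testing sketch: m pools of size k, X pools test positive.
# pi = P(a pool is positive) = 1 - (1 - p)^k.
import numpy as np

rng = np.random.default_rng(0)
p, k, m = 0.02, 10, 50            # true prevalence, pool size, number of pools
pi = 1 - (1 - p) ** k
X = rng.binomial(m, pi)           # number of positive pools

# MLE of p: invert pi_hat = X / m through pi = 1 - (1 - p)^k.
p_mle = 1 - (1 - X / m) ** (1 / k)

# Illustrative Bayes step: Beta(a, b) prior on pi; squared-error loss gives
# the posterior mean, which we plug into the same inverse map (a plug-in
# transform, not the exact posterior mean of p).
a, b = 1.0, 1.0
pi_post = (a + X) / (a + b + m)
p_bayes = 1 - (1 - pi_post) ** (1 / k)

print(f"MLE: {p_mle:.4f}  Bayes plug-in: {p_bayes:.4f}  (true p = {p})")
```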

Relevance:

10.00%

Abstract:

Let (X, Y) be a bivariate normal random vector representing the responses to Treatment 1 and Treatment 2. Statistical inference about the bivariate normal parameters when both treatment samples contain missing data is considered. Assuming the correlation coefficient ρ of the bivariate population is known, the MLEs of the population means and variance (ξ, η, and σ²) are obtained, and inferences about these parameters are presented. Procedures for constructing a confidence interval for the difference of population means ξ − η and for testing hypotheses about ξ − η are established. The performance of the new estimators and testing procedure is compared numerically with the method proposed in Looney and Jones (2003) on the basis of extensive Monte Carlo simulation. The simulation studies indicate that the testing power of the method proposed in this thesis is higher.
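
A small simulation harness for this partially paired design is sketched below; it only contrasts two naive estimators of ξ − η (complete cases versus all available cases), which is where the thesis's MLE-based interval and test would slot in. All parameter values are illustrative.

```python
# Partially paired bivariate-normal data: some subjects lack X, some lack Y.
import numpy as np

rng = np.random.default_rng(1)
xi, eta, sigma, rho, n = 1.0, 0.5, 1.0, 0.6, 200

cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
data = rng.multivariate_normal([xi, eta], cov, size=n)
x, y = data[:, 0].copy(), data[:, 1].copy()

# Make 20% of each arm missing (missing completely at random).
x[rng.random(n) < 0.2] = np.nan
y[rng.random(n) < 0.2] = np.nan

paired = ~np.isnan(x) & ~np.isnan(y)
cc_diff = x[paired].mean() - y[paired].mean()   # complete cases only
aac_diff = np.nanmean(x) - np.nanmean(y)        # all available cases

print(f"complete-case: {cc_diff:.3f}  all-available: {aac_diff:.3f}  "
      f"true: {xi - eta}")
```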

Relevance:

10.00%

Abstract:

The multivariate normal distribution is commonly encountered in many fields, and missing values are a frequent issue in practice. The purpose of this research was to estimate the parameters of the three-dimensional normal distribution with permutation-symmetric covariance, both with complete data and under all possible patterns of incomplete data. The MLEs under missing data were derived, and the properties of the MLEs as well as their sampling distributions were obtained. A Monte Carlo simulation study was used to evaluate the performance of the considered estimators both when ρ was known and when it was unknown. All results indicated that, compared to estimators obtained by omitting observations with missing data, the estimators derived in this article performed better. Furthermore, when ρ was unknown, using an estimate of ρ led to the same conclusion.
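
The covariance structure in question is the exchangeable one, Σ = σ²[(1 − ρ)I₃ + ρJ₃]. The complete-data sketch below pools the three variances and the three covariances, which are the natural estimates under this symmetry; the missing-data analogues derived in the work are beyond this snippet, and all values are illustrative.

```python
# Three-dimensional permutation-symmetric (exchangeable) covariance model.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, rho, n = np.zeros(3), 2.0, 0.4, 500

Sigma = sigma2 * ((1 - rho) * np.eye(3) + rho * np.ones((3, 3)))
X = rng.multivariate_normal(mu, Sigma, size=n)

S = np.cov(X, rowvar=False, bias=True)   # MLE-type covariance estimate
sigma2_hat = np.trace(S) / 3             # pool the three variances
off = S[np.triu_indices(3, k=1)]
rho_hat = off.mean() / sigma2_hat        # pool the three covariances

print(f"sigma2_hat = {sigma2_hat:.3f}  rho_hat = {rho_hat:.3f}")
```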

Relevance:

10.00%

Abstract:

Suppose two or more variables are jointly normally distributed. If there is a common relationship between these variables, it is important to quantify it by the correlation coefficient, a parameter that measures its strength; the coefficient can then be used to develop a prediction equation and, ultimately, to draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of the MLE of ρ. Having established asymptotic normality, we constructed confidence intervals for ρ and tested hypotheses about it. Through a series of simulations, the performance of our new estimators was studied and compared with estimators already in the literature. The results indicated that the MLE performs better than, or similarly to, the others.
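
In the bivariate equal-variance case the MLE of ρ pools the two sample variances, giving ρ̂ = 2S_xy/(S_xx + S_yy); the trivariate model generalizes this pooling. A quick simulation check (values illustrative):

```python
# Pooled MLE of rho under Var(X) = Var(Y) in a bivariate normal model.
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.7, 1000
Sigma = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)

xc = X - X.mean(axis=0)                      # centered data
S_xx, S_yy = (xc[:, 0] ** 2).sum(), (xc[:, 1] ** 2).sum()
S_xy = (xc[:, 0] * xc[:, 1]).sum()

rho_hat = 2 * S_xy / (S_xx + S_yy)           # pooled-variance MLE
print(f"rho_hat = {rho_hat:.3f}  (true rho = {rho})")
```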

Relevance:

10.00%

Abstract:

Given the rapid changes that govern the Swedish financial sector, such as financial deregulation and technological innovation, it is imperative to examine how Swedish financial institutions have performed amid these changes. To this end, this work investigates the determinants of performance for Swedish monetary financial institutions. Hypotheses were derived from the theoretical and empirical literature and examined using seven explanatory variables. Two models were specified using Return on Assets (ROA) and Return on Equity (ROE) as the main performance indicators, and, for the sake of reliability and validity, three estimators, Ordinary Least Squares (OLS), Generalized Least Squares (GLS), and Feasible Generalized Least Squares (FGLS), were employed. The Akaike Information Criterion (AIC) was used to determine which specification explains performance better, while robustness of the parameter estimates was checked by correcting the standard errors. Based on the findings, the ROA specification has the lowest AIC and standard errors compared to the ROE specification. Under ROA, two variables, the profit margin and the interest coverage ratio (ICR), prove statistically significant, while under ROE only the ICR is significant across all estimators. The results also show that FGLS is the most efficient estimator, followed by GLS and then OLS. When robust standard errors are used, the gearing ratio, which measures capital structure, becomes significant under ROA, and its estimate becomes positive under ROE. The conclusions are that, within the period of study, three variables (ICR, profit margin, and gearing) were significant and four were not. The overall findings show that the institutions strive to maximize returns, but these returns were just sufficient to cover their operating costs. Much should be done, as per the ASC theory, to avoid liquidity and credit risk problems. Moreover, the estimated coefficients of ICR and profit margin show that considerable effort and sound financial policies are required to raise performance by one percentage point. Further research could examine how individual stochastic factors such as the DuPont components, repo rates, inflation, and GDP influence performance.
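
A compressed sketch of this estimation pipeline using statsmodels is given below: OLS with AIC, heteroskedasticity-robust standard errors, and a simple two-step FGLS via weighted least squares. The variable names (roa, icr, margin, gearing) and the synthetic data are stand-ins for the study's actual panel.

```python
# OLS / robust-SE / two-step FGLS pipeline on synthetic performance data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
icr = rng.normal(5, 2, n)
margin = rng.normal(0.1, 0.05, n)
gearing = rng.normal(1.5, 0.5, n)
# Heteroskedastic errors so FGLS has something to gain.
eps = rng.normal(0, 0.01 + 0.02 * np.abs(gearing), n)
roa = 0.02 + 0.004 * icr + 0.3 * margin + eps

X = sm.add_constant(np.column_stack([icr, margin, gearing]))

ols = sm.OLS(roa, X).fit()
ols_robust = sm.OLS(roa, X).fit(cov_type="HC1")   # robust standard errors
print("OLS AIC:", round(ols.aic, 1))

# Two-step FGLS: model the error variance from log squared OLS residuals,
# then re-estimate by weighted least squares.
aux = sm.OLS(np.log(ols.resid ** 2), X).fit()
weights = 1.0 / np.exp(aux.fittedvalues)
fgls = sm.WLS(roa, X, weights=weights).fit()
print("FGLS coefficients:", fgls.params.round(4))
```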

Relevance:

10.00%

Abstract:

This research analyzes the impact of the School Feeding Program (Programa de Alimentación Escolar) on child labor in Colombia through several impact evaluation techniques, including simple matching, genetic matching, and bias-reduced matching. In particular, the program is found to reduce the probability that schoolchildren work by around 4%. Moreover, we find that child labor falls because the program increases food security, which in turn changes household decisions and removes the work burden placed on children. The State has made numerous advances in early childhood policy; nevertheless, these results provide a basis for building a conceptual framework in which public food policies should be preserved and promoted throughout the school-age years.
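
Below is a minimal propensity-score matching sketch in the spirit of the simplest of the three techniques used here (simple matching; genetic and bias-reduced matching need dedicated packages). The data, covariates, and the −4-point effect are synthetic stand-ins, not the study's data.

```python
# Nearest-neighbour propensity-score matching on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=(n, 3))                           # household covariates
p_treat = 1 / (1 + np.exp(-(x @ [0.5, -0.3, 0.2])))
treated = rng.random(n) < p_treat                     # programme participation
# True effect: the programme lowers the probability of working by 4 points.
p_work = np.clip(0.20 + 0.05 * x[:, 0] - 0.04 * treated, 0, 1)
works = rng.random(n) < p_work                        # child-labour indicator

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Match each treated unit to its nearest control on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
att = works[treated].mean() - works[~treated][idx.ravel()].mean()
print(f"matched ATT estimate: {att:.3f}  (true effect -0.04)")
```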

Relevance:

10.00%

Abstract:

Given the persistence of regional differences in labor income in Colombia, this article quantifies the share of this differential attributable to differences in labor market structure, understood as differences in the returns to labor force characteristics. To this end, an Oaxaca-Blinder decomposition is used to compare Bogotá, the city with the highest labor income, with other major cities. The decomposition results show that the structural differences favor Bogotá and explain more than half of the total difference, indicating that reducing labor income disparities across cities requires more than upskilling the labor force: it is also necessary to investigate why the returns to characteristics differ across cities.
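
The two-fold Oaxaca-Blinder decomposition splits the mean gap into an endowments term (differences in characteristics) and a structure term (differences in returns). The sketch below shows the arithmetic on synthetic data, with city A playing the role of Bogotá; variable names and coefficients are illustrative.

```python
# Two-fold Oaxaca-Blinder decomposition of a log-income gap between cities.
import numpy as np

rng = np.random.default_rng(6)
n = 1000

def city(beta):
    # Columns: intercept, years of schooling, years of experience.
    X = np.column_stack([np.ones(n), rng.normal(11, 2, n), rng.normal(8, 3, n)])
    y = X @ beta + rng.normal(0, 0.3, n)   # log labour income
    return X, y

Xa, ya = city(np.array([1.0, 0.10, 0.04]))   # city A: higher returns
Xb, yb = city(np.array([0.9, 0.07, 0.03]))   # city B

ba = np.linalg.lstsq(Xa, ya, rcond=None)[0]  # OLS coefficients per city
bb = np.linalg.lstsq(Xb, yb, rcond=None)[0]

gap = ya.mean() - yb.mean()
endowments = (Xa.mean(0) - Xb.mean(0)) @ bb  # characteristics, at B's returns
structure = Xa.mean(0) @ (ba - bb)           # returns, at A's characteristics
print(f"gap {gap:.3f} = endowments {endowments:.3f} + structure {structure:.3f}")
```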

Relevance:

10.00%

Abstract:

In aircraft component maintenance shops, components are distributed amongst repair groups and their respective technicians based on the type of repair, on the technicians' skills and workload, and on the customer required dates. This distribution planning is typically done empirically, based on the group leader's past experience. Such a procedure provides no performance guarantees, frequently leading to undesirable delays in the delivery of the aircraft components. Among others, a fundamental challenge faced by group leaders is deciding how to distribute the components that arrive without customer required dates. This paper addresses the problems of prioritizing randomly arriving aircraft components (with or without pre-assigned customer required dates) and of optimally distributing them amongst the technicians of the repair groups. We propose a formula for prioritizing the list of repairs, pointing out the importance of selecting good estimators for the interarrival times between repair requests, the turn-around times, and the man-hours for repair. In addition, a model for the assignment and scheduling problem is designed, and a preliminary algorithm, along with a numerical illustration, is presented.
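
The abstract does not state the prioritization formula, so the snippet below is a purely hypothetical slack-based priority index: components are ranked by due date minus estimated turn-around time, and undated arrivals receive a synthetic planning horizon. Every name and number here is an assumption for illustration only.

```python
# Hypothetical slack-based repair prioritization (illustrative only).
from dataclasses import dataclass

@dataclass
class Repair:
    name: str
    tat_est: float                 # estimated turn-around time (days)
    due_in: float | None = None    # days until customer required date, if any

def priority(r: Repair, default_horizon: float = 30.0) -> float:
    # Undated components get a synthetic due date (the planning horizon).
    due = r.due_in if r.due_in is not None else default_horizon
    return due - r.tat_est         # slack: smaller means more urgent

queue = [Repair("valve", 12, 15), Repair("pump", 5, None), Repair("gyro", 20, 22)]
for r in sorted(queue, key=priority):
    print(r.name, round(priority(r), 1))
```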

Relevance:

10.00%

Abstract:

Evaluation of variance estimation methods in complex samples. The need to know a population drives a process of collecting and analyzing information. It is usually very difficult or impossible to study the entire population, hence the importance of studies based on samples. Designing a sampling study is a complex process, from before the data are collected through to their analysis. Most studies use combinations of several probabilistic sampling methods to select a sample intended to be representative of the population, which is called a complex sampling design. Knowledge of sampling errors is necessary for the correct interpretation of survey results and for the evaluation of sampling plans. In complex samples, variance estimation relies on approximations adjusted to the complex nature of the sample design, the most widely used being the Taylor linearization method and resampling and replication techniques. The main objective of this work is to evaluate the performance of the usual variance estimators in complex samples. Inspired by a real data set, three populations with distinct characteristics were generated, from which samples were drawn under different sampling designs, in the expectation of obtaining some indication of the situations in which each variance estimator should be preferred. Based on the results obtained, we conclude that the performance of the Taylor, jackknife, and bootstrap estimators of the variance of the sample mean varies with the design and the population. In general, the bootstrap estimator is the least precise, and in stratified designs the Taylor and jackknife estimators give the same results.
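
The sketch below compares the three variance estimators discussed above for a stratified sample mean: the analytic (Taylor-type) formula, the stratified delete-one jackknife, and a within-stratum bootstrap. Population values, stratum weights, and sample sizes are synthetic.

```python
# Three variance estimators for a stratified sample mean.
import numpy as np

rng = np.random.default_rng(7)
W = np.array([0.5, 0.3, 0.2])          # stratum population weights
samples = [rng.normal(mu, sd, n) for mu, sd, n in
           [(10, 2, 50), (20, 4, 30), (30, 6, 20)]]

est = sum(w * s.mean() for w, s in zip(W, samples))   # stratified mean

# Analytic / Taylor-type variance: sum_h W_h^2 s_h^2 / n_h (no fpc).
v_taylor = sum(w**2 * s.var(ddof=1) / len(s) for w, s in zip(W, samples))

# Stratified delete-one jackknife.
v_jack = 0.0
for h, s in enumerate(samples):
    n_h = len(s)
    reps = np.array([
        sum(w * (np.delete(s, i).mean() if g == h else samples[g].mean())
            for g, w in enumerate(W))
        for i in range(n_h)])
    v_jack += (n_h - 1) / n_h * ((reps - reps.mean()) ** 2).sum()

# Within-stratum bootstrap (500 replicates).
boots = [sum(w * rng.choice(s, len(s)).mean() for w, s in zip(W, samples))
         for _ in range(500)]
v_boot = np.var(boots, ddof=1)

print(f"mean {est:.2f}: Taylor {v_taylor:.4f}, jackknife {v_jack:.4f}, "
      f"bootstrap {v_boot:.4f}")
```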

Relevance:

10.00%

Abstract:

We use a probing strategy to estimate the time-dependent traffic intensity in an Mt/Gt/1 queue, where the arrival rate and the general service-time distribution change from one time interval to another, and we derive statistical properties of the proposed estimator. We present a method to detect a switch from one stationary interval to another using a sequence of probes, in order to improve the estimation. Finally, we compare our results with two estimators proposed in the literature for the M/G/1 queue.
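
A minimal passive-probing illustration: by PASTA, Poisson probes see time averages, so the fraction of probes finding the server busy estimates the traffic intensity ρ = λE[S]. This static M/G/1 sketch is only a baseline; the paper's estimator targets the time-varying Mt/Gt/1 case and switch detection, and all values below are illustrative.

```python
# Fraction of Poisson probes finding a single server busy estimates rho.
import numpy as np

rng = np.random.default_rng(8)
lam, mean_service, horizon = 0.6, 1.0, 20000.0   # rho = lam * E[S] = 0.6

arrivals = np.cumsum(rng.exponential(1 / lam, int(lam * horizon * 1.2)))
arrivals = arrivals[arrivals < horizon]
services = rng.exponential(mean_service, len(arrivals))  # any G would do

# FIFO single server: service starts when the previous job departs.
starts = np.empty(len(arrivals))
deps = np.empty(len(arrivals))
prev_dep = 0.0
for i, (a, s) in enumerate(zip(arrivals, services)):
    starts[i] = max(prev_dep, a)
    prev_dep = deps[i] = starts[i] + s

probes = np.cumsum(rng.exponential(5.0, 5000))           # Poisson probe times
probes = probes[probes < horizon]
# A probe finds the server busy if it lands inside some [start, departure).
j = np.searchsorted(deps, probes)                        # first dep after t
busy = (j < len(deps)) & (starts[np.minimum(j, len(deps) - 1)] <= probes)
print(f"rho_hat = {busy.mean():.3f}  (true rho = 0.6)")
```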

Relevance:

10.00%

Abstract:

In this thesis we focus on non-standard signatures in CMB polarisation, which might hint at the existence of new phenomena beyond the standard models of Cosmology and Particle Physics. With the Planck ESA mission, CMB temperature anisotropies have been observed at the cosmic-variance limit, but polarisation remains to be further investigated. CMB polarisation data are important not only because they help provide tighter constraints on cosmological parameters, but also because they allow the investigation of physical processes that would be precluded if only the CMB temperature maps were considered. We take polarisation data into account to assess the statistical significance of the anomalies currently observed only in the CMB temperature map, and to constrain the Cosmic Birefringence (CB) effect, which is expected in parity-violating extensions of standard electromagnetism. In particular, we propose a new one-dimensional estimator for the lack-of-power anomaly capable of taking temperature and polarisation into account jointly. With the aim of studying anisotropic CB, we develop and apply two different and complementary methods to estimate the power spectrum of the CB. Finally, by employing these estimators and methodologies on Planck data, we provide new constraints beyond what is already known in the literature. Measuring CMB polarisation is a technological challenge, and accurate estimates require exquisite control of systematic effects. To investigate the impact of spurious signals in forthcoming CMB polarisation experiments, we study the interplay between half-wave plate (HWP) non-idealities and the beams. Our analysis suggests that certain HWP configurations, depending on the complexity of Galactic foregrounds and the beam models, significantly impact the B-mode reconstruction fidelity and could limit the capabilities of next-generation CMB experiments. We also provide a first study of the impact of non-ideal HWPs on CB.

Relevance:

10.00%

Abstract:

The thesis deals with the problem of Model Selection (MS), motivated by information and prediction theory and focusing on parametric time series (TS) models. The main contribution of the thesis is the extension to the multivariate case of the Misspecification-Resistant Information Criterion (MRIC), a recently introduced criterion that solves Akaike's original research problem, posed 50 years ago, which led to the definition of the AIC. The importance of MS is witnessed by the huge amount of literature devoted to it and published in scientific journals of many different disciplines. Despite such widespread treatment, the contributions that adopt a mathematically rigorous approach are not so numerous, and one of the aims of this project is to review and assess them. Chapter 2 discusses methodological aspects of MS from information theory: information criteria (IC) for the i.i.d. setting are surveyed along with their asymptotic properties, the cases of small samples and misspecification, and further estimators. Chapter 3 surveys criteria for TS: IC and prediction criteria are considered for univariate models (AR, ARMA) in the time and frequency domains; parametric multivariate models (VARMA, VAR); nonparametric nonlinear models (NAR); and high-dimensional models. The MRIC answers Akaike's original question on efficient criteria for possibly misspecified (PM) univariate TS models in multi-step prediction with high-dimensional data and nonlinear models. Chapter 4 extends the MRIC to PM multivariate TS models for multi-step prediction, introducing the Vectorial MRIC (VMRIC). We show that the VMRIC is asymptotically efficient by proving the decomposition of the MSPE matrix and the consistency of its Method-of-Moments Estimator (MoME) for Least Squares multi-step prediction with a univariate regressor. Chapter 5 extends the VMRIC to the general multiple-regressor case by showing that the MSPE matrix decomposition holds, obtaining consistency for its MoME, and proving its efficiency. The chapter concludes with a digression on the conditions for PM VARX models.
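
For orientation, the snippet below shows plain AIC-based order selection for a univariate AR model with statsmodels; the MRIC/VMRIC developed in the thesis replace this criterion with one that remains efficient under misspecification and multi-step prediction. The data-generating AR(2) is an illustrative assumption.

```python
# AIC-based order selection for a univariate AR model.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(9)
n = 500
y = np.zeros(n)
for t in range(2, n):                       # true model: AR(2)
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# Fit AR(p) for a range of p and pick the AIC minimizer.
aics = {p: AutoReg(y, lags=p).fit().aic for p in range(1, 9)}
best = min(aics, key=aics.get)
print("selected order:", best, {p: round(a, 1) for p, a in aics.items()})
```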

Relevance:

10.00%

Abstract:

Brain functioning relies on the interaction of several neural populations connected through complex connectivity networks, enabling the transmission and integration of information. Recent advances in neuroimaging techniques, such as electroencephalography (EEG), have deepened our understanding of the reciprocal roles played by brain regions during cognitive processes. The underlying idea of this PhD research is that EEG-related functional connectivity (FC) changes in the brain may incorporate important neuromarkers of behavior and cognition, as well as of brain disorders, even at subclinical levels. However, a complete understanding of the reliability of the wide range of existing connectivity estimation techniques is still lacking. The first part of this work addresses this limitation by employing Neural Mass Models (NMMs), which simulate EEG activity and offer a unique tool to study interconnected networks of brain regions under controlled conditions. NMMs were employed to test FC estimators such as Transfer Entropy and Granger Causality in linear and nonlinear conditions. Results revealed that connectivity estimates reflect information transmission between brain regions, a quantity that can be significantly different from the connectivity strength, and that Granger causality outperforms the other estimators. A second objective of this thesis was to assess brain connectivity and network changes in EEG data reconstructed at the cortical level. Functional brain connectivity was estimated through Granger Causality, in both the temporal and spectral domains, with the following goals: a) to detect task-dependent functional connectivity network changes, focusing on internal-external attention competition and fear conditioning and reversal; b) to identify resting-state network alterations in a subclinical population with high autistic traits. Connectivity-based neuromarkers, compared to canonical EEG analysis, can provide deeper insights into brain mechanisms and may drive future diagnostic methods and therapeutic interventions. However, further methodological studies are required to fully understand the accuracy of FC estimates and the information they capture, especially concerning nonlinear phenomena.
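
A minimal pairwise Granger-causality check on synthetic signals, where x drives y at lag one, is sketched below with statsmodels; EEG pipelines like the one described apply such tests to source-reconstructed cortical time series, also in the spectral domain. The coupling coefficients are illustrative.

```python
# Pairwise Granger causality on two synthetic coupled AR(1) signals.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(10)
n = 1000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()  # x drives y

# Column order: test whether the 2nd column Granger-causes the 1st.
res_xy = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
res_yx = grangercausalitytests(np.column_stack([x, y]), maxlag=2, verbose=False)
print("p(x -> y):", res_xy[1][0]["ssr_ftest"][1])   # should be tiny
print("p(y -> x):", res_yx[1][0]["ssr_ftest"][1])   # should be large
```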

Relevance:

10.00%

Abstract:

The main topic of this thesis is confounding in linear regression models. It arises when a relationship between an observed process, the covariate, and an outcome process, the response, is influenced by an unmeasured process, the confounder, associated with both. Consequently, the estimators for the regression coefficients of the measured covariates might be severely biased, less efficient, and prone to misleading interpretations. Confounding is an issue when the primary target of the work is the estimation of the regression parameters. The central point of the dissertation is the evaluation of the sampling properties of parameter estimators. This work aims to extend the spatial confounding framework to general structured settings and to understand the behaviour of confounding as a function of the structure parameters of the data-generating process in several scenarios, focusing on the joint covariate-confounder structure. In line with the spatial statistics literature, our purpose is to quantify the sampling properties of the regression coefficient estimators and, in turn, to identify the most prominent quantities depending on the generative mechanism that impact confounding. Once the sampling properties of the estimator conditional on the covariate process are derived as ratios of dependent quadratic forms in Gaussian random variables, we provide an analytic expression for the marginal sampling properties of the estimator using Carlson's R function. Additionally, we propose a representative quantity for the magnitude of confounding as a proxy for the bias: its first-order Laplace approximation. To conclude, we work under several frameworks considering spatial and temporal data, with specific assumptions regarding the covariance and cross-covariance functions used to generate the processes involved. This study allows us to claim that the variability of the confounder-covariate interaction and of the covariate plays the most relevant role in determining the principal marker of the magnitude of confounding.
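
The underlying mechanism is the classical omitted-variable one: regressing y on x alone, when an unmeasured z affects y and is correlated with x, biases the slope by δ·Cov(x, z)/Var(x). A quick simulation of this baseline fact (all values illustrative):

```python
# Confounding bias in a simple linear regression with an omitted variable.
import numpy as np

rng = np.random.default_rng(11)
n, beta, delta = 5000, 1.0, 2.0
z = rng.normal(size=n)                       # unmeasured confounder
x = 0.7 * z + rng.normal(size=n)             # covariate associated with z
y = beta * x + delta * z + rng.normal(size=n)

slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # OLS slope, z omitted
bias = delta * np.cov(x, z)[0, 1] / np.var(x, ddof=1)
print(f"slope = {slope:.3f}, beta + bias = {beta + bias:.3f}")
```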

Relevance:

10.00%

Abstract:

In this PhD thesis a new firm-level conditional risk measure is developed. It is named Joint Value at Risk (JVaR) and is defined as a quantile of a conditional distribution of interest, where the conditioning event is a latent upper-tail event. It addresses the problem of how risk changes under extreme volatility scenarios. The properties of JVaR are studied based on a stochastic volatility representation of the underlying process. We prove that JVaR is leverage-consistent, i.e. it is an increasing function of the dependence parameter in the stochastic representation. A feasible class of nonparametric M-estimators is introduced by exploiting the elicitability of quantiles and stochastic ordering theory. Consistency and asymptotic normality of the two-stage M-estimator are derived, and a simulation study is reported to illustrate its finite-sample properties. Parametric estimation methods are also discussed. The relation with VaR is exploited to introduce a volatility contribution measure, and a tail risk measure is also proposed. The analysis of the dynamic JVaR is presented based on asymmetric stochastic volatility models. Empirical results with S&P 500 data show that accounting for extreme volatility levels is relevant for better characterizing the evolution of risk. The work is complemented by a review of the literature, where we provide an overview of quantile risk measures, elicitable functionals, and several stochastic orderings.
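
A deliberately naive empirical sketch of the JVaR idea is given below: the latent upper-tail event is proxied by a rolling volatility estimate exceeding its own 90% quantile, and the JVaR-like quantity is the return quantile conditional on that event. The thesis instead builds two-stage M-estimators with proper asymptotic theory; everything here (the toy SV model, the window, the thresholds) is an illustrative assumption.

```python
# Naive conditional-quantile sketch of the JVaR idea on toy SV returns.
import numpy as np

rng = np.random.default_rng(12)
n = 5000
h = np.zeros(n)                            # log-volatility follows an AR(1)
for t in range(1, n):
    h[t] = 0.95 * h[t - 1] + 0.3 * rng.normal()
r = np.exp(h / 2) * rng.normal(size=n)     # returns

w = 20                                     # rolling window for the vol proxy
roll = np.array([r[t - w:t].std() for t in range(w, n)])
rt = r[w:]
tail = roll > np.quantile(roll, 0.9)       # proxy for the latent tail event

alpha = 0.05
print(f"VaR_5%: {np.quantile(rt, alpha):.2f}  "
      f"naive JVaR_5%: {np.quantile(rt[tail], alpha):.2f}")
```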