903 results for "Tests for Continuous Lifetime Data"
Abstract:
The purpose of this study was to correct some mistakes in the literature and to derive a necessary and sufficient condition for the mean residual life (MRL) function to follow the roller-coaster pattern of the corresponding failure rate function. A further goal was to find conditions under which the discrete failure rate function has an upside-down bathtub shape when the corresponding MRL function has a bathtub shape. The study showed that if the discrete MRL has a bathtub shape, then under some conditions the corresponding failure rate function has an upside-down bathtub shape. The study also corrected some mistakes in the proofs of Tang, Lu and Chew (1999) and established a necessary and sufficient condition for the MRL to follow the roller-coaster pattern of the corresponding failure rate function. Similarly, some mistakes in Gupta and Gupta (2000) are corrected, and the ensuing results are expanded and proved rigorously to establish the relationship between the crossing points of the failure rate and the associated MRL functions. The new results derived in this study will be useful for modeling lifetime data arising in environmental studies, medical research, electronics engineering, and many other areas of science and technology.
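For reference, in the continuous case the failure rate and MRL functions are tied together by a standard identity, and it is this kind of link that the shape results above exploit. The sketch below states only the standard continuous relations (the discrete analogue replaces the integral by a sum); it is not the corrected results themselves.

```latex
% Survival function S(t), failure rate h(t), and mean residual life m(t)
% of a continuous lifetime T (standard definitions, continuous case only):
m(t) = E[T - t \mid T > t] = \frac{1}{S(t)} \int_t^{\infty} S(u)\,du,
\qquad
h(t) = -\frac{d}{dt} \log S(t).
% Differentiating m(t) gives the identity that drives shape results:
m'(t) = h(t)\, m(t) - 1
\quad\Longrightarrow\quad
h(t) = \frac{m'(t) + 1}{m(t)}.
% A change of monotonicity (bathtub point) of m therefore constrains where
% h can change monotonicity, which is the type of relationship the study
% establishes for the discrete case.
```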
Abstract:
The Birnbaum-Saunders distribution has been used quite effectively to model times to failure for materials subject to fatigue and, more generally, to model lifetime data. In this paper we obtain asymptotic expansions, up to order n^(-1/2) and under a sequence of Pitman alternatives, for the non-null distribution functions of the likelihood ratio, Wald, score, and gradient test statistics in the Birnbaum-Saunders regression model. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters and for testing the shape parameter. A Monte Carlo simulation study is presented to compare the finite-sample performance of these tests. We also present two empirical applications.
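As a rough illustration of the kind of finite-sample size study described above, the sketch below runs a small Monte Carlo experiment for a likelihood ratio test on the Birnbaum-Saunders shape parameter, using SciPy's fatiguelife parametrization. It is a minimal stand-in, not the authors' regression model or their expansions; the sample size, shape value, and replication count are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
n, n_rep = 30, 2000          # small sample, Monte Carlo replications
alpha0 = 0.5                 # true (null) shape parameter
nominal = 0.05
chi2_crit = stats.chi2.ppf(1 - nominal, df=1)

rejections = 0
for _ in range(n_rep):
    # Birnbaum-Saunders (fatigue-life) sample with scale beta = 1.
    x = stats.fatiguelife.rvs(alpha0, loc=0, scale=1.0, size=n, random_state=rng)

    # Unrestricted fit: shape and scale free, location fixed at 0.
    c_hat, _, s_hat = stats.fatiguelife.fit(x, floc=0)
    ll_full = stats.fatiguelife.logpdf(x, c_hat, loc=0, scale=s_hat).sum()

    # Restricted fit: shape fixed at its null value.
    _, _, s0_hat = stats.fatiguelife.fit(x, fc=alpha0, floc=0)
    ll_null = stats.fatiguelife.logpdf(x, alpha0, loc=0, scale=s0_hat).sum()

    lr = 2.0 * (ll_full - ll_null)   # likelihood ratio statistic
    rejections += lr > chi2_crit

print(f"empirical size of LR test at nominal {nominal}: {rejections / n_rep:.3f}")
```

Comparing the printed empirical size with the nominal level is the basic measure such simulation studies report for each statistic.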
Abstract:
The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides a basis for inference and for the construction of confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented.
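For context, the unadjusted signed likelihood ratio statistic and the generic form of the higher-order adjustments used in this literature can be written as follows; the paper's three adjustments involve model-specific quantities that are not reproduced here.

```latex
% Signed likelihood ratio statistic for a scalar interest parameter \psi:
% \hat\theta is the unrestricted MLE, \tilde\theta_\psi the MLE with \psi fixed.
r(\psi) = \operatorname{sign}(\hat{\psi} - \psi)
          \sqrt{2\left[\ell(\hat{\theta}) - \ell(\tilde{\theta}_{\psi})\right]}
% The standard normal approximation to r has error of order n^{-1/2}, which is
% what breaks down in small samples. Adjusted versions take the generic
% Barndorff-Nielsen form
r^{*}(\psi) = r(\psi) + \frac{1}{r(\psi)}\log\!\left(\frac{u(\psi)}{r(\psi)}\right),
% where u(\psi) is a model-dependent correction term; r^{*} is standard normal
% to a higher order of accuracy.
```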
Abstract:
In this article, we deal with the issue of performing accurate small-sample inference in the Birnbaum-Saunders regression model, which can be useful for modeling lifetime or reliability data. We derive a Bartlett-type correction for the score test and numerically compare the corrected test with the usual score test and some other competitors.
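The Bartlett-type correction referred to here follows the general form in which the score statistic is rescaled by a polynomial in itself; the coefficients are model-specific (they are what such a paper derives for the Birnbaum-Saunders regression model) and are not reproduced here.

```latex
% S_R denotes the score (Rao) statistic for the hypothesis under test.
S_R^{*} = S_R\left[1 - \left(c + b\,S_R + a\,S_R^{2}\right)\right]
% Here a, b and c are O(n^{-1}) quantities built from cumulants of
% log-likelihood derivatives; the corrected statistic S_R^{*} is better
% approximated by the reference chi-squared distribution than S_R itself.
```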
Abstract:
One way to assess the reliability of a product is to examine its failure behavior in continuous-use tests. However, this information does not reveal the calendar date on which a failure will occur. To address this gap, this dissertation models failure times along the calendar. Modeling these times allows better management of the warranty and technical-assistance system and enables the company to estimate and monitor product reliability. To carry out the modeling, it is first necessary to know the distribution of three variables: the product's lifetime, in hours of continuous use; the product's usage time, in hours per day; and the interval between manufacture and sale, in days. Given the behavior of these variables, two solution approaches are presented: (a) modeling via Monte Carlo simulation and (b) modeling through a closed-form mathematical solution. The cases of one or several groups of customers using the product are discussed.
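A minimal sketch of alternative (a), the Monte Carlo approach, is given below with illustrative distributions assumed for the three variables (Weibull lifetime in hours of continuous use, lognormal daily usage, exponential manufacture-to-sale lag); none of these choices are taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000  # simulated units

# Assumed illustrative distributions (placeholders, not the dissertation's choices):
lifetime_h  = rng.weibull(1.5, size=n) * 8_000   # lifetime in hours of continuous use
usage_h_day = np.clip(rng.lognormal(mean=1.0, sigma=0.5, size=n), 0.5, 24)  # hours/day
lag_days    = rng.exponential(scale=60, size=n)  # days between manufacture and sale

# Calendar time from manufacture to failure: the lifetime in hours is spread
# over the calendar at the unit's usage rate.
failure_day = lag_days + lifetime_h / usage_h_day

# Example quantities a warranty department might monitor:
warranty_days = 365
print("P(failure within 1 year of manufacture):",
      np.mean(failure_day <= warranty_days).round(4))
print("median calendar time to failure (days):",
      np.median(failure_day).round(1))
```

Several customer groups with different usage behavior could be handled by drawing each unit's usage rate from a mixture of such distributions.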
Abstract:
The purpose of this study is to investigate the effects of predictor variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average means, variances, and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values for samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, in part because of the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully observed variable included with the variables subject to missingness in the multiple imputation process and the subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
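The sketch below illustrates one simulation cell of this kind, using scikit-learn's IterativeImputer with posterior sampling as a stand-in for proper multivariate normal imputation and pooling logistic regression estimates with Rubin's rules. The sample size, missingness rate, number of imputations, and null outcome model are illustrative assumptions, not the study's design.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(11)
n, n_imp = 100, 5

# Two correlated continuous predictors and a binary outcome unrelated to them
# (a null model, so rejections estimate the Type I error rate).
X = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=n)
y = rng.binomial(1, 0.5, size=n)
X_mis = X.copy()
X_mis[rng.random(n) < 0.3, 1] = np.nan      # ~30% missing on the second predictor

betas, variances = [], []
for m in range(n_imp):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    X_imp = imp.fit_transform(X_mis)
    fit = sm.Logit(y, sm.add_constant(X_imp)).fit(disp=0)
    betas.append(fit.params[2])             # coefficient of the partly missing predictor
    variances.append(fit.bse[2] ** 2)

# Rubin's rules: pooled estimate, within- and between-imputation variance.
q_bar = np.mean(betas)
u_bar = np.mean(variances)
b = np.var(betas, ddof=1)
t_var = u_bar + (1 + 1 / n_imp) * b
print(f"pooled beta = {q_bar:.3f}, pooled SE = {np.sqrt(t_var):.3f}")
```

Repeating this over many simulated datasets and testing the pooled coefficient against zero gives the empirical Type I error rate for one condition.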
Abstract:
Continuous variables are one of the major data types collected by survey organizations. Such data can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values within cells defined by combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model to the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence. The new method separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., subject to substantial missingness) groups. The sub-model structure for the focused variables is more complex than that for the non-focused ones, their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on the non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. The method is also applied to data from the American Community Survey.
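The first of these methods rests on a conditioning property that is easy to demonstrate in code: independent Poisson counts, conditioned on their sum, are multinomial, so synthetic values can be drawn that reproduce a published cell total exactly. The toy sketch below uses assumed plug-in Poisson means; it is not the thesis's mixture model or its disclosure-risk measures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Confidential magnitude values of the establishments in one table cell.
original = np.array([120, 35, 410, 18, 77])
fixed_total = int(original.sum())     # the published marginal total to preserve

# Assumed model: each establishment's value ~ Poisson(lambda_i). Conditioned on
# the fixed total, the vector is Multinomial(total, p) with p proportional to
# the lambdas, so the synthetic values always sum to the published total.
lam = original.astype(float) + 1.0    # crude plug-in means (illustrative only)
p = lam / lam.sum()
synthetic = rng.multinomial(fixed_total, p)

print("original :", original, "sum =", original.sum())
print("synthetic:", synthetic, "sum =", synthetic.sum())
```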
Abstract:
Introduction: There is ongoing debate regarding the ideal sequence, volume, and concentration of irrigants, the length of time for irrigation, and the irrigation technique needed to achieve debridement of the root canal system. The aim of this study was to verify the impact of the final rinse technique on the smear layer removal ability of 17% ethylenediaminetetraacetic acid (EDTA). Methods: Sixteen single-rooted human teeth were instrumented and divided into 2 groups according to the final rinse technique used: the continuous rinse group received a continuous rinse with EDTA for 3 minutes, and the rinse-and-soak group received a rinse with 1 mL of EDTA, soaking of the canal for 2 minutes and 30 seconds, and completion of the rinse with the remaining 4 mL for 30 seconds. The specimens were split lengthwise and observed under a scanning electron microscope. Results: Data were analyzed with Kruskal-Wallis and Dunn tests. The continuous rinse group presented more debris-free surfaces than the rinse-and-soak group (P < .01). When the root canal areas were compared within the groups, no statistical differences were found (P > .05). Conclusions: It can be concluded that a continuous rinse with 5 mL of EDTA for 3 minutes more efficiently removes the smear layer from root canal walls. (J Endod 2010;36:512-514)
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering.
Abstract:
Interferon-beta (IFN-beta) therapy for multiple sclerosis (MS) is associated with a potential for induction of neutralizing antibodies (NAbs). Because immune reactivity depends on changes in lipoprotein metabolism, we investigated whether plasma lipoprotein profiles could be associated with the development of NAbs. Thirty-one female MS patients treated with subcutaneously administered IFN-beta were included. Demographic and clinical characteristics were compared between NAb response groups using t tests for continuous data and Fisher's exact tests for categorical data, respectively, together with logistic regression analysis. Multivariate logistic regression was used to evaluate the effect of potential confounders. Patients who developed NAbs had lower apoE levels before treatment, 67 (47-74) mg/L, median (interquartile range), and at the time of NAb analysis, 53 (50-84) mg/L, than those who remained NAb-negative, 83 (68-107) mg/L, P = 0.03, and 76 (66-87) mg/L, P = 0.04, respectively. When adjusting for age and smoking, a one-standard-deviation decrease in apoE levels was associated with a 5.6-fold increase in the odds of becoming NAb-positive: odds ratio (OR) 0.18 (95% CI 0.04-0.77), P = 0.04. When adjusting for apoE, smoking habit became associated with NAb induction: OR 5.6 (95% CI 1.3-87), P = 0.03. These results suggest that apoE-containing lipoprotein metabolism and, possibly, tobacco smoking may be associated with the risk of NAb production in female MS patients treated with IFN-beta.
Abstract:
Background: As the long-term efficacy of stereotactic body radiation therapy (SBRT) becomes established and other prostate cancer treatment approaches are refined and improved, examination of quality of life (QOL) following prostate cancer treatment is critical in driving both patient and clinical treatment decisions. We present the first study to compare QOL after SBRT and radical prostatectomy, with QOL assessed at approximately the same times pre- and post-treatment and using the same validated QOL instrument. Methods: Patients with clinically localized prostate cancer were treated with either radical prostatectomy (n = 123 Spanish patients) or SBRT (n = 216 American patients). QOL was assessed using the Expanded Prostate Cancer Index Composite (EPIC) grouped into urinary, sexual, and bowel domains. For comparison purposes, SBRT EPIC data at baseline, 3 weeks, and 5, 11, 24, and 36 months were compared to surgery data at baseline and 1, 6, 12, 24, and 36 months. Differences in patient characteristics between the two groups were assessed using Chi-squared tests for categorical variables and t-tests for continuous variables. Generalized estimating equation (GEE) models were constructed for each EPIC scale to account for correlation among repeated measures and used to assess the effect of treatment on QOL. Results: The largest differences in QOL occurred in the first 16 months after treatment, with larger declines following surgery in urinary and sexual QOL as compared to SBRT, and a larger decline in bowel QOL following SBRT as compared to surgery. Long-term urinary and sexual QOL declines remained clinically significantly lower for surgery patients but not for SBRT patients. Conclusions: Overall, these results may have implications for patient and physician clinical decision making, which is often influenced by QOL. These differences in sexual, urinary, and bowel QOL should be closely considered in selecting the right treatment, especially in evaluating the value of non-invasive treatments such as SBRT.
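A minimal sketch of the GEE setup described above is given below, using statsmodels with an exchangeable working correlation on a small fabricated long-format dataset; the column names, Gaussian family, visit schedule, and treatment-by-time specification are illustrative assumptions rather than the study's exact models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_pat, n_visits = 40, 4

# Long-format fabricated data: repeated EPIC-like scores per patient.
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), n_visits),
    "months":  np.tile([0, 6, 12, 24], n_pat),
    "treat":   np.repeat(rng.integers(0, 2, n_pat), n_visits),  # 0 = surgery, 1 = SBRT
})
df["epic_urinary"] = 80 + 5 * df["treat"] - 0.1 * df["months"] + rng.normal(0, 8, len(df))

# GEE with an exchangeable working correlation to account for repeated measures.
model = smf.gee(
    "epic_urinary ~ treat * months",
    groups="patient",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())
```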