839 results for Size and Power Tests


Relevance:

100.00%

Publisher:

Abstract:

The Birnbaum-Saunders distribution has been used quite effectively to model times to failure for materials subject to fatigue and for modeling lifetime data. In this paper we obtain asymptotic expansions, up to order n^(-1/2) and under a sequence of Pitman alternatives, for the non-null distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the Birnbaum-Saunders regression model. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters and for testing the shape parameter. Monte Carlo simulation is presented in order to compare the finite-sample performance of these tests. We also present two empirical applications. (C) 2010 Elsevier B.V. All rights reserved.
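For reference, a standard parameterization of the Birnbaum-Saunders density (shape α > 0, scale β > 0) is the following; it is not stated in the abstract itself:

```latex
f(t;\alpha,\beta) \;=\; \frac{1}{2\sqrt{2\pi}\,\alpha\beta}
\left[\left(\frac{\beta}{t}\right)^{1/2} + \left(\frac{\beta}{t}\right)^{3/2}\right]
\exp\!\left[-\frac{1}{2\alpha^{2}}\left(\frac{t}{\beta} + \frac{\beta}{t} - 2\right)\right],
\qquad t > 0 .
```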

Relevance:

100.00%

Publisher:

Abstract:

Sizes and powers of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, Score, Likelihood, Asymptotic), logrank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the Asymptotic, Peto-Peto, and Wilcoxon tests perform at nominal 5% size expectations, but the F, Score and Mantel tests exceed the 5% size confidence limits for one third of the censoring combinations. For n = 32, all tests show proper size, with the Peto-Peto test most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in power characteristics of the tests within the classes of tests considered. The Mantel test shows 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of logrank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution. Wilcoxon-type tests appear more powerful than logrank tests in the case of late-crossing survival curves and less powerful for early-crossing ones. Guidelines for the appropriate selection of two-sample tests are given.
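As a rough illustration of this kind of study, here is a minimal sketch that estimates the empirical size of the log-rank (Mantel) test for two randomly censored exponential samples. The sample size, rates, and replication count are placeholders, and the logrank_test function from lifelines is assumed to be available.

```python
import numpy as np
from lifelines.statistics import logrank_test  # assumed available

rng = np.random.default_rng(42)
n, reps, alpha = 16, 1000, 0.05
event_rate = 1.0   # exponential hazard in both arms (null: hazard ratio = 1)
cens_rate = 0.25   # exponential censoring; P(censored) = cens_rate/(event_rate+cens_rate) = 20%

rejections = 0
for _ in range(reps):
    t1, t2 = rng.exponential(1 / event_rate, n), rng.exponential(1 / event_rate, n)
    c1, c2 = rng.exponential(1 / cens_rate, n), rng.exponential(1 / cens_rate, n)
    obs1, d1 = np.minimum(t1, c1), (t1 <= c1).astype(int)   # observed time, event indicator
    obs2, d2 = np.minimum(t2, c2), (t2 <= c2).astype(int)
    res = logrank_test(obs1, obs2, event_observed_A=d1, event_observed_B=d2)
    rejections += res.p_value < alpha

print(f"empirical size of the log-rank test: {rejections / reps:.3f} (nominal {alpha})")
```

Unequal censoring, as studied in the abstract, would simply use different cens_rate values in the two arms.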

Relevance:

100.00%

Publisher:

Abstract:

Experiments have been carried out to investigate the polar distribution of atomic material ablated during the pulsed laser deposition of Cu in vacuum. Data were obtained as functions of focused laser spot size and power density. Thin films were deposited onto flat glass substrates and thickness profiles were transformed into polar atomic flux distributions of the form f(θ) = cos^n θ. At constant focused laser power density on target, I = (4.7±0.3) × 10^8 W/cm^2, polar distributions were found to broaden with a reduction in the focused laser spot size. The polar distribution exponent n varied from 15±2 to 7±1 for focused laser spot diameter variation from 2.5 to 1.4 mm, respectively, with the laser beam exhibiting a circular aspect on target. With the focused laser spot size held constant at φ = 1.8 mm, polar distributions were observed to broaden with a reduction in the focused laser power density on target, with the associated polar distribution exponent n varying from 13±1.5 to 8±1 for focused laser power density variation from (8.3±0.3) × 10^8 to (2.2±0.1) × 10^8 W/cm^2, respectively. Data were compared with an analytical model available within the literature, which correctly predicts broadening of the polar distribution with a reduction in focused laser spot size and with a reduction in focused laser power density, although the experimentally observed magnitude was greater than that predicted in both cases. (C) 1996 American Institute of Physics.

Relevance:

100.00%

Publisher:

Abstract:

Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which can heavily affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. QR can therefore be widely used to solve problems in econometrics, environmental sciences and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as that of Jennen-Steinmetz and Wellek (2005). A method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under a hypothesis test of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method can be implemented in R, the user is able to choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
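A minimal sketch of the power-analysis idea, not the paper's own procedure: assume a normal error distribution and a single covariate, estimate empirical power at each candidate sample size by simulation using statsmodels' QuantReg, and stop at the smallest n that reaches the target. The effect size, quantile, and grid of sample sizes are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
tau, beta1, alpha, target = 0.5, 0.5, 0.05, 0.80   # quantile, effect size, level, target power

def empirical_power(n, reps=500):
    """Simulate data, fit median regression, test H0: beta1 = 0."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = 1.0 + beta1 * x + rng.normal(size=n)      # assumed error distribution
        fit = QuantReg(y, sm.add_constant(x)).fit(q=tau)
        hits += fit.pvalues[1] < alpha                # Wald-type test of the covariate effect
    return hits / reps

for n in (40, 60, 80, 100, 120):
    power = empirical_power(n)
    print(n, round(power, 3))
    if power >= target:
        break
```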

Relevance:

100.00%

Publisher:

Abstract:

The financial crisis of 2007-2008 led to extraordinary government intervention in firms and markets. The scope and depth of government action rivaled that of the Great Depression. Many traded markets experienced dramatic declines in liquidity, leading to conditions normally assumed to be promptly removed via the actions of profit-seeking arbitrageurs. These extreme events motivate the three essays in this work. The first essay seeks, and fails to find, evidence of investor behavior consistent with the broad 'Too Big To Fail' policies enacted during the crisis by government agents. Only in limited circumstances, where government guarantees such as deposit insurance or U.S. Treasury lending lines already existed, did investors impart a premium to the debt security prices of firms under stress. The second essay introduces the Inflation Indexed Swap Basis (IIS Basis) in examining the large differences between cash and derivative markets based upon future U.S. inflation as measured by the Consumer Price Index (CPI). It reports the consistently positive value of this measure as well as the very large positive values it reached in the fourth quarter of 2008 after Lehman Brothers went bankrupt. It concludes that the IIS Basis continues to exist due to limitations in market liquidity and hedging alternatives. The third essay explores the methodology of performing debt-based event studies utilizing credit default swaps (CDS). It provides practical implementation advice to researchers to address limited source data and/or small target firm sample sizes.

Relevance:

100.00%

Publisher:

Abstract:

We derive asymptotic expansions for the nonnull distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the class of dispersion models, under a sequence of Pitman alternatives. The asymptotic distributions of these statistics are obtained for testing a subset of regression parameters and for testing the precision parameter. Based on these nonnull asymptotic expansions, the powers of all four tests, which are equivalent to first order, are compared. Furthermore, in order to compare the finite-sample performance of these tests in this class of models, Monte Carlo simulations are presented. An empirical application to a real data set is considered for illustrative purposes. (C) 2012 Elsevier B.V. All rights reserved.
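The abstract compares four asymptotically equivalent tests. The sketch below illustrates only the likelihood-ratio and Wald variants, in one member of the dispersion family (a log-link gamma GLM) chosen purely for illustration rather than taken from the paper; the likelihood-ratio statistic here is approximate because the dispersion is re-estimated in each fit.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, reps, alpha = 30, 1000, 0.05
crit = chi2.ppf(1 - alpha, df=1)

rej_lr = rej_wald = 0
for _ in range(reps):
    x = rng.normal(size=n)
    mu = np.exp(0.5 + 0.0 * x)                      # null is true: coefficient of x is zero
    y = rng.gamma(shape=2.0, scale=mu / 2.0)        # gamma responses with mean mu
    X_full, X_red = sm.add_constant(x), np.ones((n, 1))
    fam = sm.families.Gamma(link=sm.families.links.Log())  # link class name may differ by version
    full = sm.GLM(y, X_full, family=fam).fit()
    red = sm.GLM(y, X_red, family=fam).fit()
    lr = 2.0 * (full.llf - red.llf)                 # (approximate) likelihood-ratio statistic
    wald = (full.params[1] / full.bse[1]) ** 2      # Wald statistic for the covariate effect
    rej_lr += lr > crit
    rej_wald += wald > crit

print("empirical sizes (LR, Wald):", rej_lr / reps, rej_wald / reps)
```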

Relevance:

100.00%

Publisher:

Abstract:

Obese children move less, and with greater difficulty, than normal-weight counterparts but expend comparable energy. Increased metabolic costs have been attributed to poor biomechanics, but few studies have investigated the influence of obesity on the mechanical demands of gait. This study sought to assess three-dimensional lower extremity joint powers at two walking cadences in 28 obese and normal-weight children. 3D motion analysis was conducted for five trials of barefoot walking at self-selected and 30% greater than self-selected cadences. Mechanical power was calculated at the hip, knee, and ankle in the sagittal, frontal and transverse planes. Significant group differences were seen for all power phases in the sagittal plane, for hip and knee power at weight acceptance and hip power at propulsion in the frontal plane, and for knee power during mid-stance in the transverse plane. After adjusting for body weight, group differences existed in hip and knee power phases at weight acceptance in the sagittal and frontal planes, respectively. Differences between cadences existed for all hip joint powers in the sagittal plane and for frontal plane hip power at propulsion. Frontal plane knee power at weight acceptance and sagittal plane knee power at propulsion were also significantly different between cadences. Larger joint powers in obese children contribute to difficulty performing locomotor tasks, potentially decreasing motivation to exercise.
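For reference, the joint mechanical power reported in such gait analyses is conventionally the dot product of the net joint moment and the joint angular velocity; this definition is standard in the gait literature and is not spelled out in the abstract:

```latex
P_{j}(t) \;=\; \mathbf{M}_{j}(t)\cdot\boldsymbol{\omega}_{j}(t)
```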

Relevance:

100.00%

Publisher:

Abstract:

Despite their small size, some insects, such as crickets, can produce high amplitude mating songs by rubbing their wings together. By exploiting structural resonance for sound radiation, crickets broadcast species-specific songs at a sharply tuned frequency. Such songs enhance the range of signal transmission, contain information about the signaler's quality, and allow mate choice. The production of pure tones requires elaborate structural mechanisms that control and sustain resonance at the species-specific frequency. Tree crickets differ sharply from this scheme. Although they use a resonant system to produce sound, tree crickets can produce high amplitude songs at different frequencies, varying by as much as an octave. Based on an investigation of the driving mechanism and the resonant system, using laser Doppler vibrometry and finite element modeling, we show that it is the distinctive geometry of the crickets' forewings (the resonant system) that is responsible for their capacity to vary frequency. The long, enlarged wings enable the production of high amplitude songs; however, as a mechanical consequence of the high aspect ratio, the resonant structures have multiple resonant modes that are similar in frequency. The drive produced by the singing apparatus cannot, therefore, be locked to a single frequency, and different resonant modes can easily be engaged, allowing individual males to vary the carrier frequency of their songs. Such flexibility in sound production, decoupling body size and song frequency, has important implications for conventional views of mate choice, and offers inspiration for the design of miniature, multifrequency, resonant acoustic radiators.

Relevance:

100.00%

Publisher:

Abstract:

We extend the class of M-tests for a unit root analyzed by Perron and Ng (1996) and Ng and Perron (1997) to the case where a change in the trend function is allowed to occur at an unknown time. These tests, denoted M^GLS, adopt the GLS detrending approach of Dufour and King (1991) and Elliott, Rothenberg and Stock (1996) (ERS). Following Perron (1989), we consider two models: one allowing for a change in slope and the other for both a change in intercept and a change in slope. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal tests P_T^GLS suggested by ERS. The asymptotic critical values of the tests are tabulated. We also compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. We show that the M^GLS and P_T^GLS tests have an asymptotic power function close to the power envelope. An extensive simulation study analyzes the size and power in finite samples under various methods of selecting the truncation lag for the autoregressive spectral density estimator. An empirical application is also provided.
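A compact sketch of the GLS-detrending step behind these M^GLS statistics, shown here for a constant plus linear trend with no break and the standard ERS non-centrality c̄ = -13.5; the break-dependent non-centrality values tabulated in the paper would replace that constant, and the zero-frequency spectral density estimate s2_ar is left to the user, since its truncation-lag choice is exactly what the simulation study examines.

```python
import numpy as np

def gls_detrend(y, cbar=-13.5):
    """Local-to-unity GLS detrending of y on a constant and linear trend."""
    T = len(y)
    abar = 1.0 + cbar / T
    z = np.column_stack([np.ones(T), np.arange(1, T + 1)])   # deterministic regressors
    yq = np.r_[y[0], y[1:] - abar * y[:-1]]                  # quasi-differenced y
    zq = np.r_[[z[0]], z[1:] - abar * z[:-1]]                # quasi-differenced z
    delta = np.linalg.lstsq(zq, yq, rcond=None)[0]           # GLS trend coefficients
    return y - z @ delta                                     # GLS-detrended series

def mz_alpha(ytilde, s2_ar):
    """Ng-Perron MZ_alpha statistic from a GLS-detrended series and an
    autoregressive estimate of the spectral density at frequency zero."""
    T = len(ytilde)
    return (ytilde[-1] ** 2 / T - s2_ar) / (2.0 * np.sum(ytilde[:-1] ** 2) / T ** 2)
```

The detrended series then feeds MZ_alpha together with the autoregressive spectral density estimate, which is where the lag-selection methods compared in the abstract enter.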

Relevance:

100.00%

Publisher:

Abstract:

This paper employs an extensive Monte Carlo study to assess the size and power of the BDS and close return methods of testing for departures from independent and identical distribution. It is found that the finite-sample properties of the BDS test are far superior and that the close return method cannot be recommended as a model diagnostic. Neither test can be reliably used for very small samples, while the close return test has low power even at large sample sizes.
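A small sketch of how such a size experiment can be run for the BDS test, assuming the bds function exposed by statsmodels (statsmodels.tsa.stattools.bds); the sample size, embedding dimension, and replication count are arbitrary choices, not those of the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import bds   # assumed location of the BDS test

rng = np.random.default_rng(7)
n, reps, alpha = 250, 500, 0.05

rejections = 0
for _ in range(reps):
    x = rng.standard_normal(n)               # iid data, so the null hypothesis is true
    stat, pval = bds(x, max_dim=2)           # BDS test at embedding dimension 2
    rejections += np.atleast_1d(pval)[0] < alpha

print(f"empirical size of the BDS test: {rejections / reps:.3f} (nominal {alpha})")
```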

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

This paper examines the local power of the likelihood ratio, Wald, score and gradient tests in the presence of a scalar parameter, phi say, that is orthogonal to the remaining parameters. We show that some of the coefficients that define the local powers remain unchanged regardless of whether phi is known or needs to be estimated, whereas the others can be written as the sum of two terms, the first being the corresponding term obtained as if phi were known, and the second an additional term arising from the fact that phi is unknown. The contribution of each set of parameters to the local powers of the tests can then be examined. Various implications of our main result are stated and discussed. Several examples are presented for illustrative purposes.
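For context, the orthogonality assumed here is the usual Fisher-information condition (in the Cox-Reid sense): the scalar parameter phi and the remaining parameters, beta say, have block-diagonal expected information,

```latex
i_{\phi\beta}(\theta) \;=\; \mathrm{E}\!\left[-\,\frac{\partial^{2}\ell(\theta)}{\partial\phi\,\partial\beta}\right] \;=\; 0 ,
```

so that the maximum likelihood estimators of phi and beta are asymptotically independent.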

Relevance:

100.00%

Publisher:

Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; it was found that the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests maximized at less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability, equal to one minus the heterozygosity of the marker locus, that both trait alleles were in disequilibrium with the same marker allele, resulting in the marker being uninformative for disequilibrium.

The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test) based tests were not liable to any increase in error rates.

For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
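To make the TDT concrete: with b and c the counts of heterozygous parents transmitting and not transmitting the candidate allele, the statistic (b - c)^2 / (b + c) is referred to a chi-square distribution with one degree of freedom. The sketch below estimates its power by simulation for an assumed transmission probability; the counts and probability are illustrative only, not values from the thesis.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n_het, p_transmit, reps, alpha = 200, 0.58, 2000, 0.05   # assumed design values
crit = chi2.ppf(1 - alpha, df=1)

rejections = 0
for _ in range(reps):
    b = rng.binomial(n_het, p_transmit)    # transmissions of the candidate allele
    c = n_het - b                          # non-transmissions
    tdt = (b - c) ** 2 / (b + c)           # TDT (McNemar-type) statistic
    rejections += tdt > crit

print(f"estimated TDT power: {rejections / reps:.3f}")
```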

Relevance:

100.00%

Publisher:

Abstract:

The determination of the size as well as the power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study utilizes simulation methods to evaluate the effect of different recruitment patterns on size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms of the study, resulting in a hazard ratio of one. Powers of the log-rank test at specific values of the hazard ratio (≠1) are estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the hazard function of the Weibull distribution are also considered to assess the effect of the hazard structure on the size and power of the log-rank test.
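A condensed sketch of this simulation scheme: staggered entry from a non-homogeneous Poisson accrual process (generated by thinning), Weibull event times under proportional hazards, administrative censoring at the analysis date, and the log-rank test from lifelines. All rates, the accrual intensity, and the follow-up length are placeholder assumptions rather than values from the study.

```python
import numpy as np
from lifelines.statistics import logrank_test   # assumed available

rng = np.random.default_rng(5)
shape, scale = 1.5, 12.0                        # Weibull baseline (increasing hazard for shape > 1)
hr, accrual_end, study_end = 1.0, 24.0, 36.0    # hr = 1 estimates the size of the test

def nhpp_entries(lam_max, t_end, rate_fn):
    """Entry times from a non-homogeneous Poisson process via thinning."""
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            return np.array(out)
        if rng.uniform() < rate_fn(t) / lam_max:
            out.append(t)

def simulate_arm(hazard_ratio):
    entry = nhpp_entries(10.0, accrual_end, lambda t: 10.0 * t / accrual_end)  # ramp-up accrual
    u = rng.uniform(size=len(entry))
    event_time = scale * (-np.log(u) / hazard_ratio) ** (1.0 / shape)          # Weibull PH times
    follow_up = study_end - entry                                              # administrative censoring
    obs = np.minimum(event_time, follow_up)
    return obs, (event_time <= follow_up).astype(int)

rejections, reps, alpha = 0, 500, 0.05
for _ in range(reps):
    t0, d0 = simulate_arm(1.0)
    t1, d1 = simulate_arm(hr)
    res = logrank_test(t0, t1, event_observed_A=d0, event_observed_B=d1)
    rejections += res.p_value < alpha
print(f"empirical rejection rate: {rejections / reps:.3f}")
```

Setting hr to a value other than 1 turns the same loop into a power estimate, and changing shape switches between constant, decreasing, and increasing Weibull hazards.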