120 results for Resampling


Relevance: 10.00%

Abstract:

The construction of a reliable, practically useful prediction rule for future responses is heavily dependent on the "adequacy" of the fitted regression model. In this article, we consider the absolute prediction error, the expected value of the absolute difference between the future and predicted responses, as the model evaluation criterion. This prediction error is easier to interpret than the average squared error and is equivalent to the misclassification error for binary outcomes. We show that the distributions of the apparent error and its cross-validation counterparts are approximately normal even under a misspecified fitted model. When the prediction rule is "unsmooth", the variance of the above normal distribution can be estimated well via a perturbation-resampling method. We also show how to approximate the distribution of the difference of the estimated prediction errors from two competing models. With two real examples, we demonstrate that the resulting interval estimates for prediction errors provide much more information about model adequacy than the point estimates alone.
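As a rough illustration of the resampling step, the sketch below computes the apparent absolute prediction error for a simple linear prediction rule and estimates its sampling variance by refitting under independent exponential perturbation weights. The data, the linear model, and the choice of exponential weights are assumptions for the example; the authors' exact perturbation scheme may differ.

```python
# Minimal sketch: apparent absolute prediction error for a linear model,
# with a perturbation-resampling estimate of its sampling variance.
# Exponential(1) weights are one common choice; the paper's exact scheme may differ.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)

def weighted_fit(X, y, w):
    """Weighted least-squares coefficients (intercept included)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

def abs_prediction_error(X, y, beta, w=None):
    """(Weighted) mean absolute difference between observed and predicted responses."""
    Xd = np.column_stack([np.ones(len(y)), X])
    return np.average(np.abs(y - Xd @ beta), weights=w)

beta_hat = weighted_fit(X, y, np.ones(n))
apparent = abs_prediction_error(X, y, beta_hat)

# Perturbation resampling: refit and re-evaluate under i.i.d. positive weights.
B = 500
perturbed = np.empty(B)
for b in range(B):
    w = rng.exponential(scale=1.0, size=n)
    beta_b = weighted_fit(X, y, w)
    perturbed[b] = abs_prediction_error(X, y, beta_b, w=w)

se = perturbed.std(ddof=1)
print(f"apparent error {apparent:.3f}, perturbation SE {se:.3f}, "
      f"95% CI ({apparent - 1.96*se:.3f}, {apparent + 1.96*se:.3f})")
```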

Relevance: 10.00%

Abstract:

The Receiver Operating Characteristic (ROC) curve is a prominent tool for characterizing the accuracy of a continuous diagnostic test. To account for factors that might influence the test accuracy, various ROC regression methods have been proposed. However, as in any regression analysis, when the assumed models do not fit the data well, these methods may yield invalid and misleading results. To date, practical model-checking techniques suitable for validating existing ROC regression models are not available. In this paper, we develop cumulative-residual-based procedures to graphically and numerically assess the goodness of fit of some commonly used ROC regression models, and show how specific components of these models can be examined within this framework. We derive asymptotic null distributions for the residual process and discuss resampling procedures to approximate these distributions in practice. We illustrate our methods with a dataset from the Cystic Fibrosis registry.
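The following sketch illustrates the general resampling idea behind such procedures: the null distribution of a supremum-type cumulative-residual statistic is approximated by perturbing the residuals with standard normal multipliers. It uses a plain linear model with simulated data and omits the correction for estimated regression coefficients, so it should be read as a simplified stand-in for the authors' ROC-specific test rather than a faithful implementation.

```python
# Simplified sketch of a cumulative-residual goodness-of-fit check with a
# multiplier (resampling) approximation of the null distribution.  For brevity
# the perturbation ignores the correction for estimating the regression
# coefficients, so this is illustrative rather than the authors' exact test.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 2, size=n)
y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=n)   # data generated from the fitted form

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

order = np.argsort(x)
r_sorted = resid[order]

def sup_cumulative(r):
    """sup over the observed x grid of |n^{-1/2} * sum_{x_i <= x} r_i|."""
    return np.max(np.abs(np.cumsum(r))) / np.sqrt(len(r))

observed = sup_cumulative(r_sorted)

# Multiplier resampling: perturb residuals with standard normal multipliers.
B = 1000
null_stats = np.array([
    sup_cumulative(r_sorted * rng.normal(size=n)) for _ in range(B)
])
p_value = np.mean(null_stats >= observed)
print(f"sup|W| = {observed:.3f}, resampling p-value = {p_value:.3f}")
```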

Relevance: 10.00%

Abstract:

This paper introduces a novel approach to making inference about the regression parameters in the accelerated failure time (AFT) model for current status and interval-censored data. The estimator is constructed by inverting a Wald-type test for testing a null proportional hazards model. A numerically efficient Markov chain Monte Carlo (MCMC) based resampling method is proposed to simultaneously obtain the point estimator and a consistent estimator of its variance-covariance matrix. We illustrate our approach with interval-censored data sets from two clinical studies. Extensive numerical studies are conducted to evaluate the finite-sample performance of the new estimators.
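A toy sketch of the underlying idea, not the paper's estimator: a Wald-type quadratic objective is turned into a pseudo-posterior proportional to exp(-Q/2) and explored with a random-walk Metropolis sampler, so that the mean and sample variance of the draws recover the point estimate and its variance. The objective, the single parameter, and the numbers are all hypothetical.

```python
# Toy illustration (not the paper's estimator): treat exp(-0.5 * Q(theta)) as a
# pseudo-posterior, where Q is a Wald-type quadratic objective, and sample it
# with random-walk Metropolis.  The mean of the draws recovers the point
# estimate and their sample variance approximates its variance.
import numpy as np

rng = np.random.default_rng(2)
theta_hat, v = 1.5, 0.04          # hypothetical point estimate and its variance

def Q(theta):
    """Wald-type objective: large when theta is far from theta_hat."""
    return (theta - theta_hat) ** 2 / v

def metropolis(n_draws=20000, step=0.3, start=0.0):
    draws = np.empty(n_draws)
    cur = start
    for i in range(n_draws):
        prop = cur + step * rng.normal()
        if np.log(rng.uniform()) < 0.5 * (Q(cur) - Q(prop)):
            cur = prop
        draws[i] = cur
    return draws[n_draws // 2:]   # drop burn-in

draws = metropolis()
print(f"point estimate ~ {draws.mean():.3f}, variance ~ {draws.var(ddof=1):.4f}")
```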

Relevance: 10.00%

Abstract:

The herbaceous layer is a dynamic component of forest ecosystems and often contains the highest species richness in northern temperate forests. Few long-term studies with consistent management practices exist in northern hardwood forests for observing herbaceous species dynamics. The Ford Forest (Michigan Technological University) reached its 50th year of management during the winter of 2008-2009. Herbaceous species were sampled during the summers before and after harvest. Distinct herbaceous communities developed in the 13-cm diameter-limit treatment and the uncut control. After the harvest, the diameter-limit treatments had herbaceous communities more similar to the 13-cm diameter-limit treatment than to the uncut control; the herbaceous layer contained more exotic and early successional species. Fifty years of continuous management changed the herbaceous community, especially in the diameter-limit treatments. Sites used in the development of habitat classification systems based on the presence and absence of certain herbaceous species can also be used to monitor vegetation change over time. The Guide to Forest Communities and Habitat Types of Michigan was developed to help forest managers understand the potential productivity of a stand and to aid in the development of ecologically based forest management practices. Subsets of plots used to create the Western Upper Peninsula Guide were resampled after 10 years. During the resampling, both spring and summer vegetation were sampled, and earthworm populations were estimated through liquid extraction. Spring sampling captured important spring ephemerals that were missed during summer sampling. More exotic species were present during the summer 2010 sampling than during the summer 2000 sampling. Invasive European earthworms were observed at all sample locations in all habitat types, and earthworm densities increased with increasing habitat richness. To ensure the accuracy of the guide book, plots should be monitored to track how herbaceous communities are changing. These plots also offer unique opportunities to monitor invasive species and the effects of a changing climate.

Relevance: 10.00%

Abstract:

The block bootstrap was introduced in the literature for resampling dependent data such as stationary processes. One of the main assumptions in block bootstrapping is that the blocks of observations are exchangeable, i.e., their joint distribution is invariant under permutations. In this paper we propose a new Bayesian approach to block bootstrapping, starting from the construction of exchangeable blocks. Our sampling mechanism is based on a particular class of reinforced urn processes.
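A minimal sketch of the exchangeable-blocks idea using Rubin-style Dirichlet weights over non-overlapping blocks of an AR(1) series; the reinforced urn construction proposed in the paper is more elaborate, so this is only a generic Bayesian block bootstrap for illustration.

```python
# Minimal sketch of a Bayesian block bootstrap: split a stationary series into
# non-overlapping blocks treated as exchangeable, draw Dirichlet(1,...,1)
# weights over the blocks, and resample blocks with those probabilities.
# This illustrates the exchangeable-blocks idea only, not the reinforced urn scheme.
import numpy as np

rng = np.random.default_rng(3)

# A stationary AR(1) series as example data.
n, phi = 600, 0.6
eps = rng.normal(size=n)
series = np.empty(n)
series[0] = eps[0]
for t in range(1, n):
    series[t] = phi * series[t - 1] + eps[t]

block_len = 20
blocks = series[: (n // block_len) * block_len].reshape(-1, block_len)

def bayesian_block_bootstrap(blocks, rng):
    """One draw of the series: Dirichlet weights over exchangeable blocks."""
    k = len(blocks)
    w = rng.dirichlet(np.ones(k))                 # random block probabilities
    idx = rng.choice(k, size=k, p=w)              # resample blocks with those weights
    return blocks[idx].ravel()

# Distribution of the lag-1 autocorrelation under the resampling scheme.
B = 500
acf1 = np.array([
    np.corrcoef(s[:-1], s[1:])[0, 1]
    for s in (bayesian_block_bootstrap(blocks, rng) for _ in range(B))
])
print(f"lag-1 autocorrelation: mean {acf1.mean():.3f}, sd {acf1.std(ddof=1):.3f}")
```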

Relevance: 10.00%

Abstract:

Voluntary control of information processing is crucial for allocating resources and prioritizing the processes that are most important in a given situation; the algorithms underlying such control, however, are often unclear. We investigated possible algorithms of control for the performance of the majority function, in which participants searched for and identified one of two alternative categories (left- or right-pointing arrows) as composing the majority in each stimulus set. We manipulated the amount (set sizes of 1, 3, and 5) and content (ratio of left- to right-pointing arrows within a set) of the inputs to test competing hypotheses regarding mental operations for information processing. Using a novel measure based on computational load, we found that reaction time was best predicted by a grouping search algorithm as compared to alternative algorithms (i.e., exhaustive or self-terminating search). The grouping search algorithm involves sampling and resampling of the inputs before a decision is reached. These findings highlight the importance of investigating the implications of voluntary control via algorithms of mental operations.
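The sketch below is a hypothetical formalization of the three candidate algorithms, measuring cost as the number of arrows inspected before a decision; the paper's operational definitions (and its computational-load measure) may differ.

```python
# Hypothetical formalization of the three candidate algorithms for the majority
# task (the paper's operational definitions may differ): count how many arrows
# are inspected before a left/right decision is reached.
import numpy as np

rng = np.random.default_rng(4)

def exhaustive(arrows, rng):
    """Inspect every arrow, then report the majority."""
    return len(arrows)

def self_terminating(arrows, rng):
    """Inspect arrows in random order, stop once one category holds a majority."""
    need = len(arrows) // 2 + 1
    counts = {0: 0, 1: 0}
    for i, a in enumerate(rng.permutation(arrows), start=1):
        counts[a] += 1
        if counts[a] >= need:
            return i
    return len(arrows)

def grouping(arrows, rng, group_size=2):
    """Sample (and resample) small groups; decide when a sampled group is unanimous."""
    inspected = 0
    while True:
        group = rng.choice(arrows, size=group_size, replace=False)
        inspected += group_size
        if group.min() == group.max():
            return inspected

set_size, ratio = 5, (3, 2)                     # e.g. 3 left arrows vs 2 right arrows
arrows = np.array([0] * ratio[0] + [1] * ratio[1])
for algo in (exhaustive, self_terminating, grouping):
    cost = np.mean([algo(arrows, rng) for _ in range(2000)])
    print(f"{algo.__name__:>16}: mean items inspected = {cost:.2f}")
```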

Relevance: 10.00%

Abstract:

This thesis covers a broad part of the field of computational photography, including video stabilization and image warping techniques, an introduction to light field photography, and the conversion of monocular images and videos into stereoscopic 3D content. We present a user-assisted technique for stereoscopic 3D conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow a user to indicate lines, planes, and vanishing points in the input image, and directly employ these as guides for an image warp that produces a stereo image pair. Our method is most suitable for scenes with large-scale structures such as buildings and is able to skip the step of constructing a depth map. Further, we propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input we take an image sequence from a camera translating along an approximately linear path with limited camera rotation. Users can acquire such data easily in a few seconds by moving a hand-held camera. We convert the input into a regularly sampled 3D light field by resampling and aligning the images in the spatio-temporal domain. We also present a novel technique for high-quality disparity estimation from light fields. Finally, we show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
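As a generic illustration of the refocusing idea, the sketch below performs shift-and-add synthetic-aperture refocusing on a synthetic light field captured along a linear path: each view is shifted in proportion to its camera offset and a chosen disparity, then the views are averaged. The synthetic texture, camera offsets, and sharpness measure are assumptions for the example, not the thesis' resampling and alignment pipeline.

```python
# Minimal sketch of shift-and-add synthetic-aperture refocusing for a 3D light
# field captured along a (roughly) linear camera path: images are shifted in
# proportion to their camera offset and a chosen disparity, then averaged.
import numpy as np

rng = np.random.default_rng(5)
height, width, n_views = 64, 96, 9
positions = np.linspace(-4, 4, n_views)          # camera offsets along the path (pixels)

# Synthetic light field: a textured plane at a disparity of 1 px per unit offset.
texture = rng.uniform(size=(height, width + 16))
true_disparity = 1.0
views = np.stack([
    texture[:, 8 + int(round(true_disparity * x)): 8 + int(round(true_disparity * x)) + width]
    for x in positions
])

def refocus(views, positions, disparity):
    """Average the views after shifting each one by disparity * camera offset."""
    out = np.zeros_like(views[0])
    for img, x in zip(views, positions):
        out += np.roll(img, int(round(disparity * x)), axis=1)
    return out / len(views)

# Refocusing at the true disparity keeps the texture sharp; other settings blur it.
for d in (0.0, true_disparity):
    sharpness = np.var(np.diff(refocus(views, positions, d), axis=1))
    print(f"disparity {d:.1f}: gradient variance {sharpness:.4f}")
```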

Relevance: 10.00%

Abstract:

Following up genetic linkage studies to identify the underlying susceptibility gene(s) for complex disease traits is an arduous yet biologically and clinically important task. Complex traits, such as hypertension, are considered polygenic, with many genes influencing risk, each with small effects. Chromosome 2 has been consistently identified as a genomic region with genetic linkage evidence suggesting that one or more loci contribute to blood pressure levels and hypertension status. Using combined positional candidate gene methods, the Family Blood Pressure Program has concentrated efforts on investigating this region of chromosome 2 in an effort to identify underlying candidate hypertension susceptibility gene(s). Initial informatics efforts identified the boundaries of the region and the known genes within it. A total of 82 polymorphic sites in eight positional candidate genes were genotyped in a large hypothesis-generating sample consisting of 1640 African Americans, 1339 whites, and 1616 Mexican Americans. To adjust for multiple comparisons, a resampling-based false discovery adjustment was applied, extending traditional resampling methods to sibship samples. Following this adjustment, SLC4A5, a sodium bicarbonate transporter, was identified as a primary candidate gene for hypertension. Polymorphisms in SLC4A5 were subsequently genotyped and analyzed for validation in two populations of African Americans (N = 461; N = 778) and two of whites (N = 550; N = 967). Again, SNPs within SLC4A5 were significantly associated with blood pressure levels and hypertension status. Although no single causal DNA sequence variant was significantly associated with blood pressure levels and hypertension status across all samples, the results further implicate SLC4A5 as a candidate hypertension susceptibility gene, validating previous evidence for one or more genes on chromosome 2 that influence hypertension-related phenotypes in the population at large. The methodology and results reported provide a case study of one approach for following up the results of genetic linkage analyses to identify genes influencing complex traits.
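The sketch below illustrates a generic permutation-based false discovery rate estimate in the spirit of such resampling adjustments: trait labels are permuted to build a null distribution of per-SNP statistics, and the FDR at a threshold is estimated as expected null exceedances over observed discoveries. The simulated genotypes and trait are hypothetical, and the sibship-aware extension described above is not implemented.

```python
# Generic sketch of a resampling-based false-discovery adjustment: permute the
# trait to build the null distribution of per-SNP test statistics and estimate
# the FDR over a grid of thresholds.  The paper's sibship-aware extension is
# more involved; all data here are simulated.
import numpy as np

rng = np.random.default_rng(6)
n, m = 400, 82                                        # subjects and polymorphic sites
genotypes = rng.binomial(2, 0.3, size=(n, m)).astype(float)
trait = 0.4 * genotypes[:, 0] + rng.normal(size=n)    # only SNP 0 is truly associated

def abs_corr(y, G):
    """|correlation| between the trait and each SNP."""
    yc = y - y.mean()
    Gc = G - G.mean(axis=0)
    return np.abs(yc @ Gc) / (np.linalg.norm(yc) * np.linalg.norm(Gc, axis=0))

observed = abs_corr(trait, genotypes)

B = 200
null_stats = np.stack([abs_corr(rng.permutation(trait), genotypes) for _ in range(B)])

def estimated_fdr(threshold):
    """Expected false discoveries (from permutations) over observed discoveries."""
    false = np.mean((null_stats >= threshold).sum(axis=1))
    discovered = max((observed >= threshold).sum(), 1)
    return false / discovered

for t in (0.10, 0.15, 0.20):
    print(f"threshold {t:.2f}: {np.sum(observed >= t)} SNPs called, "
          f"estimated FDR {estimated_fdr(t):.3f}")
```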

Relevance: 10.00%

Abstract:

The Data Envelopment Analysis (DEA) efficiency score obtained for an individual firm is a point estimate without any confidence interval around it. In recent years, researchers have resorted to bootstrapping in order to generate empirical distributions of efficiency scores. This procedure assumes that all firms have the same probability of getting an efficiency score from any specified interval within the [0,1] range. We propose a bootstrap procedure that empirically generates the conditional distribution of efficiency for each individual firm given systematic factors that influence its efficiency. Instead of resampling directly from the pooled DEA scores, we first regress these scores on a set of explanatory variables not included at the DEA stage and bootstrap the residuals from this regression. These pseudo-efficiency scores incorporate the systematic effects of unit-specific factors along with the contribution of the randomly drawn residual. Data from the U.S. airline industry are utilized in an empirical application.
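A minimal sketch of this second-stage residual bootstrap, assuming the first-stage DEA efficiency scores have already been computed (they are simulated here): the scores are regressed on explanatory variables, the residuals are resampled, and pseudo-efficiency scores combine each firm's fitted value with a drawn residual to give a firm-specific conditional distribution.

```python
# Sketch of the second-stage residual bootstrap, assuming the first-stage DEA
# efficiency scores are already available (simulated below).  Pseudo-efficiency
# scores combine each firm's fitted value with a randomly drawn residual,
# giving a firm-specific conditional distribution of efficiency.
import numpy as np

rng = np.random.default_rng(7)
n_firms = 100
Z = np.column_stack([np.ones(n_firms), rng.normal(size=(n_firms, 2))])   # explanatory variables
scores = np.clip(0.8 + 0.05 * Z[:, 1] - 0.04 * Z[:, 2] + rng.normal(0, 0.05, n_firms), 0.3, 1.0)

beta = np.linalg.lstsq(Z, scores, rcond=None)[0]
fitted = Z @ beta
residuals = scores - fitted

B = 1000
pseudo = np.empty((B, n_firms))
for b in range(B):
    drawn = rng.choice(residuals, size=n_firms, replace=True)   # resample residuals, not scores
    pseudo[b] = np.clip(fitted + drawn, 0.0, 1.0)               # firm-specific pseudo-efficiency

lo, hi = np.percentile(pseudo, [2.5, 97.5], axis=0)
print(f"firm 0: point estimate {scores[0]:.3f}, "
      f"conditional 95% interval ({lo[0]:.3f}, {hi[0]:.3f})")
```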

Relevance: 10.00%

Abstract:

Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population introduces selection bias due to the healthy worker effect. Small samples in specific groups also make it difficult to estimate relative risk and to assess statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to conventional standardization techniques. Maximum likelihood estimates of the model parameters are used to construct an index similar to the SMR for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability in relative risk on generated samples. The model provides an alternative to both direct and indirect standardization methods.
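The sketch below shows a minimal parametric bootstrap for a single-parameter multiplicative Poisson model of the SMR, with hypothetical stratum counts: observed deaths are treated as Poisson with mean proportional to the expected counts, and bootstrap samples are generated from the fitted model. The dissertation's model and diagnostics are richer than this.

```python
# Minimal sketch of a parametric bootstrap for a multiplicative (Poisson) model
# of the SMR: observed deaths in each age stratum are Poisson with mean
# theta * E_j, where E_j are expected deaths from reference rates.  The counts
# below are hypothetical; this only illustrates the resampling step.
import numpy as np

rng = np.random.default_rng(8)
expected = np.array([2.1, 5.4, 9.8, 14.3, 7.6])      # expected deaths by age stratum
observed = np.array([3, 8, 12, 19, 9])               # observed deaths in the exposed group

theta_hat = observed.sum() / expected.sum()          # MLE of the SMR under the model

B = 5000
boot = np.array([
    rng.poisson(theta_hat * expected).sum() / expected.sum() for _ in range(B)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"SMR = {theta_hat:.2f}, parametric bootstrap 95% CI ({lo:.2f}, {hi:.2f})")
```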

Relevance: 10.00%

Abstract:

In geographical epidemiology, maps of disease rates and disease risk provide a spatial perspective for researching disease etiology. For rare diseases, or when the population base is small, the rate and risk estimates may be unstable. Empirical Bayesian (EB) methods have been used to spatially smooth the estimates by permitting an area estimate to "borrow strength" from its neighbors. Such EB methods include the use of a Gamma model, a James-Stein estimator, and a conditional autoregressive (CAR) process. A fully Bayesian analysis of the CAR process is proposed. One advantage of this fully Bayesian analysis is that it can be implemented simply by repeated sampling from the posterior densities; a Markov chain Monte Carlo technique such as the Gibbs sampler is not necessary. Direct resampling from the posterior densities provides exact small-sample inferences instead of the approximate asymptotic analyses of maximum likelihood methods (Clayton & Kaldor, 1987). Further, the proposed CAR model allows covariates to be included in the model. A simulation demonstrates the effect of sample size on the fully Bayesian analysis of the CAR process. The methods are applied to lip cancer data from Scotland, and the results are compared.
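To illustrate direct (non-MCMC) posterior sampling and the "borrow strength" effect, the sketch below uses a conjugate Poisson-Gamma disease-mapping model in place of the CAR prior analyzed in the paper: each area's posterior is available in closed form, so exact draws are obtained by simple random sampling. The counts and prior strength are hypothetical.

```python
# Illustration of direct posterior sampling for spatially smoothed disease
# risks, using a conjugate Poisson-Gamma model rather than the CAR prior
# analysed in the paper: each area's posterior is available in closed form,
# so exact posterior draws are obtained by simple random sampling.
import numpy as np

rng = np.random.default_rng(9)
observed = np.array([5, 0, 2, 14, 7, 1])              # observed cases per area
expected = np.array([4.2, 1.1, 3.5, 9.0, 6.3, 2.4])   # expected cases per area

overall = observed.sum() / expected.sum()
m = 5.0                                               # prior strength (pseudo expected counts)
a, b = m * overall, m                                 # Gamma prior centred on the overall risk

# Exact posterior for each area's relative risk: Gamma(a + O_i, rate = b + E_i).
draws = rng.gamma(shape=a + observed, scale=1.0 / (b + expected), size=(10000, len(observed)))

crude = observed / expected
smoothed = draws.mean(axis=0)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
for i in range(len(observed)):
    print(f"area {i}: crude {crude[i]:.2f}, posterior mean {smoothed[i]:.2f}, "
          f"95% interval ({lo[i]:.2f}, {hi[i]:.2f})")
```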

Relevance: 10.00%

Abstract:

Logistic regression is one of the most important tools in the analysis of epidemiological and clinical data. Such data often contain missing values for one or more variables, and common practice is to eliminate all individuals for whom any information is missing. This deletion approach does not make efficient use of the available information and often introduces bias. Two methods were developed to estimate logistic regression coefficients for mixed dichotomous and continuous covariates, including partially observed binary covariates; the data were assumed missing at random (MAR). One method (PD) uses the predictive distribution as a weight to average the logistic regressions fitted over all possible values of the missing observations, and the second method (RS) uses a variant of a resampling technique. Seven additional methods were compared with these two approaches in a simulation study: (1) analysis based on only the complete cases; (2) substituting the mean of the observed values for the missing value; (3) an imputation technique based on the proportions of observed data; (4) regressing the partially observed covariates on the remaining continuous covariates; (5) regressing the partially observed covariates on the remaining continuous covariates, conditional on the response variable; (6) regressing the partially observed covariates on the remaining continuous covariates and the response variable; and (7) the EM algorithm. Both proposed methods showed smaller standard errors (s.e.) for the coefficient involving the partially observed covariate, and for the other coefficients as well. However, both methods, especially PD, are computationally demanding; thus, for the analysis of large data sets with partially observed covariates, further refinement of these approaches is needed.
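A hedged sketch of the PD idea for a single partially observed binary covariate: every completion of the missing values is enumerated, a logistic regression is fitted to each completed data set, and the coefficient vectors are averaged with weights from a predictive distribution (here estimated from the complete cases given the continuous covariate and the response, which is one plausible choice rather than the paper's exact construction). The exponential number of completions also shows why the approach is computationally demanding.

```python
# Sketch of the PD idea for one partially observed binary covariate: enumerate
# every completion of the missing values, fit a logistic regression on each
# completed data set, and average the coefficients with weights from a
# predictive distribution for the missing covariate.  The predictive model and
# simulated data are assumptions for the example.
from itertools import product
import numpy as np

rng = np.random.default_rng(10)
n = 120
z = rng.normal(size=n)                      # fully observed continuous covariate
x = rng.binomial(1, 1 / (1 + np.exp(-0.8 * z)), size=n).astype(float)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * x + 0.7 * z)))).astype(float)
missing = rng.choice(n, size=4, replace=False)     # a handful of MAR missing x's

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson for logistic regression (intercept included in X)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta

# Predictive distribution for the missing x's, from complete cases: P(x=1 | z, y).
complete = np.setdiff1d(np.arange(n), missing)
gamma = fit_logistic(np.column_stack([np.ones(len(complete)), z[complete], y[complete]]),
                     x[complete])
p_miss = 1 / (1 + np.exp(-(np.column_stack([np.ones(len(missing)),
                                            z[missing], y[missing]]) @ gamma)))

# Average the coefficient vectors over all completions, weighted by their probability.
beta_pd, total_w = 0.0, 0.0
for completion in product([0.0, 1.0], repeat=len(missing)):
    x_full = x.copy()
    x_full[missing] = completion
    w = np.prod(np.where(np.array(completion) == 1.0, p_miss, 1 - p_miss))
    X = np.column_stack([np.ones(n), x_full, z])
    beta_pd = beta_pd + w * fit_logistic(X, y)
    total_w += w
beta_pd = beta_pd / total_w
print("PD-averaged coefficients (intercept, x, z):", np.round(beta_pd, 3))
```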