5 results for non-parametric background modeling

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: In a time-course microarray experiment, the expression level for each gene is observed across a number of time-points in order to characterize the temporal trajectories of the gene-expression profiles. For many of these experiments, the scientific aim is the identification of genes for which the trajectories depend on an experimental or phenotypic factor. There is an extensive recent body of literature on statistical methodology for addressing this analytical problem. Most of the existing methods are based on estimating the time-course trajectories using parametric or non-parametric mean regression methods. The sensitivity of these regression methods to outliers, an issue that is well documented in the statistical literature, should be of concern when analyzing microarray data. RESULTS: In this paper, we propose a robust testing method for identifying genes whose expression time profiles depend on a factor. Furthermore, we propose a multiple testing procedure to adjust for multiplicity. CONCLUSIONS: Through an extensive simulation study, we illustrate the performance of our method. Finally, we report the results of applying our method to a case study and discuss potential extensions.
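The abstract gives no implementation details; as a rough illustration of the general workflow it describes (a robust, rank-based per-gene test followed by an adjustment for multiplicity), the sketch below applies a Kruskal-Wallis test per gene and a Benjamini-Hochberg correction to simulated data. These choices, the data layout, and all variable names are assumptions for illustration, not the authors' actual procedure.

```python
import numpy as np
from scipy.stats import kruskal

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (assumed multiplicity correction)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adj, 0, 1)
    return out

rng = np.random.default_rng(0)
n_genes, n_time, n_reps = 200, 8, 3
# expression[g, f, t, r]: gene g, factor level f (0/1), time t, replicate r
expression = rng.normal(size=(n_genes, 2, n_time, n_reps))
expression[:20, 1] += 1.5  # first 20 genes depend on the factor

pvals = []
for g in range(n_genes):
    # rank-based (outlier-robust) comparison of the two factor levels,
    # pooling time points and replicates in this toy example
    _, p = kruskal(expression[g, 0].ravel(), expression[g, 1].ravel())
    pvals.append(p)

significant = bh_adjust(pvals) < 0.05
print(f"{significant.sum()} genes flagged after BH adjustment")
```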

Relevance:

100.00%

Publisher:

Abstract:

Background: Evidence-based medication and lifestyle modification are important for secondary prevention of cardiovascular disease but are underutilized. Mobile health strategies could address this gap but existing evidence is mixed. Therefore, we piloted a pre-post study to assess the impact of patient-directed text messages as a means of improving medication adherence and modifying major health risk behaviors among coronary heart disease (CHD) patients in Hainan, China.

Methods: A total of 92 CHD patients were surveyed between June and August 2015 (before the intervention) and again between October and December 2015 (after the 12-week intervention) about (a) medication use, (b) smoking status, (c) fruit and vegetable consumption, and (d) physical activity uptake. Acceptability of the text-messaging intervention was assessed at follow-up. Descriptive statistics were computed, and paired comparisons between pre- and post-intervention outcomes were conducted using both parametric (t-test) and non-parametric (Wilcoxon signed-rank test) methods.

Results: The number of respondents at follow-up was 82 (an 89% retention rate). Significant improvements were observed for medication adherence (P<0.001) and for the number of cigarettes smoked per day (P=0.022). However, there was no change in the number of smokers who had quit smoking at follow-up. Changes in physical activity (P=0.91) and fruit and vegetable consumption were not significant.
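The paired pre-post comparison described in the Methods maps directly onto standard library routines; the sketch below runs the parametric and non-parametric paired tests side by side on placeholder adherence scores. The data, effect size, and variable names are illustrative assumptions, not the study data.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(1)
n = 82  # respondents retained at follow-up

# Placeholder adherence scores; the real study data are not reproduced here.
adherence_pre = rng.normal(5.5, 1.5, size=n)
adherence_post = adherence_pre + rng.normal(0.8, 1.0, size=n)  # assumed improvement

# Parametric paired comparison
t_stat, p_t = ttest_rel(adherence_post, adherence_pre)
# Non-parametric paired comparison
w_stat, p_w = wilcoxon(adherence_post, adherence_pre)

print(f"paired t-test: t={t_stat:.2f}, P={p_t:.3g}")
print(f"Wilcoxon signed-rank: W={w_stat:.1f}, P={p_w:.3g}")
```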

Relevance:

100.00%

Publisher:

Abstract:

Recent evidence that echinoids of the genus Echinometra have moderate visual acuity that appears to be mediated by their spines screening off-axis light suggests that the urchin Strongylocentrotus purpuratus, with its higher spine density, may have even more acute spatial vision. We analyzed the movements of 39 specimens of S. purpuratus after they were placed in the center of a featureless tank containing a round, black target with an angular diameter of 6.5 deg. or 10 deg. (solid angles of 0.01 sr and 0.024 sr, respectively). An average orientation vector for each urchin was determined by testing the animal four times, with the target placed successively at bearings of 0 deg., 90 deg., 180 deg. and 270 deg. (relative to magnetic east). The urchins showed no significant unimodal or axial orientation relative to any non-target feature of the environment or relative to the changing position of the 6.5 deg. target. However, the urchins were strongly axially oriented relative to the changing position of the 10 deg. target (mean axis from -1 to 179 deg.; 95% confidence interval +/- 12 deg.; P<0.001, Moore's non-parametric version of Hotelling's test), with 10 of the 20 urchins tested against that target choosing an average bearing within 10 deg. of either the target center or its opposite direction (two would be expected by chance). In addition, the average length of the 20 target-normalized bearing vectors for the 10 deg. target (each the vector sum of the bearings for the four trials) was far higher than would be expected by chance (P<10^-10; Monte Carlo simulation), showing that each urchin, whether it moved towards or away from the target, did so with high consistency. These results strongly suggest that S. purpuratus detected the 10 deg. target, responding either by approaching it or fleeing from it. Given that the urchins did not appear to respond to the 6.5 deg. target, it is likely that the 10 deg. target was close to the minimum detectable size for this species. Interestingly, measurements of the spine density of the regions of the test that faced horizontally predicted a similar visual resolution (8.3+/-0.5 deg. for the interambulacrum and 11+/-0.54 deg. for the ambulacrum). The function of this relatively low, but functional, acuity - on par with that of the chambered Nautilus and the horseshoe crab - is unclear but, given the bimodal response, is likely to be related to both shelter seeking and predator avoidance.
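The Monte Carlo argument summarized above (that the observed consistency of bearings across four trials would be very unlikely if the urchins moved at random) can be illustrated with a short simulation. The observed vector length used below is a made-up value for illustration, not a measurement from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_vector_length(bearings_deg):
    """Length of the mean resultant vector for a set of bearings (degrees)."""
    theta = np.deg2rad(bearings_deg)
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

# Null model: each urchin's four trial bearings are independent uniform directions.
n_sims, n_trials = 100_000, 4
null_lengths = np.array([
    mean_vector_length(rng.uniform(0, 360, size=n_trials))
    for _ in range(n_sims)
])

observed_length = 0.9  # hypothetical per-urchin consistency, for illustration only
p_value = (null_lengths >= observed_length).mean()
print(f"P(mean vector length >= {observed_length}) under uniform null: {p_value:.4f}")
```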

Relevance:

100.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n typically seen in many fields increases enormously. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
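For context on the tensor view used here, a PARAFAC-style latent class model writes the joint probability mass function of p categorical variables as a mixture over latent classes, which yields a nonnegative low-rank factorization of the probability tensor. The toy example below builds such a tensor explicitly; the dimensions and parameter values are arbitrary, and this is the standard PARAFAC construction rather than the collapsed Tucker decomposition proposed in Chapter 2.

```python
import numpy as np

rng = np.random.default_rng(3)
p, d, k = 3, 4, 2  # p categorical variables with d levels each, k latent classes

nu = rng.dirichlet(np.ones(k))                 # latent class weights
psi = rng.dirichlet(np.ones(d), size=(k, p))   # psi[h, j, c] = P(x_j = c | class h)

# Joint probability tensor: pi[c1, c2, c3] = sum_h nu_h * prod_j psi[h, j, c_j]
pi = np.zeros((d,) * p)
for h in range(k):
    outer = psi[h, 0]
    for j in range(1, p):
        outer = np.multiply.outer(outer, psi[h, j])
    pi += nu[h] * outer

print(pi.sum())  # should be 1.0: a valid pmf with nonnegative rank at most k
```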

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4, we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
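The optimal Gaussian approximation derived in Chapter 4 is specific to Diaconis-Ylvisaker priors and is not reproduced here; as a generic illustration of approximating a log-linear posterior by a Gaussian, the sketch below computes a Laplace approximation (posterior mode plus inverse negative Hessian) for a Poisson log-linear model with an ordinary Gaussian prior. The table, design matrix, and prior variance are all assumptions of the example.

```python
import numpy as np

# Toy 2x2 contingency table flattened to counts y, with design matrix X
# (intercept, row effect, column effect, interaction).
y = np.array([18.0, 7.0, 5.0, 12.0])
X = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
], dtype=float)
tau2 = 10.0  # assumed N(0, tau2 I) prior, standing in for Diaconis-Ylvisaker

beta = np.zeros(X.shape[1])
for _ in range(50):  # Newton-Raphson ascent to the posterior mode
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu) - beta / tau2
    hess = -X.T @ (mu[:, None] * X) - np.eye(len(beta)) / tau2
    beta -= np.linalg.solve(hess, grad)

# Gaussian (Laplace) approximation: N(beta_hat, -H^{-1}) evaluated at the mode
mu = np.exp(X @ beta)
hess = -X.T @ (mu[:, None] * X) - np.eye(len(beta)) / tau2
cov = np.linalg.inv(-hess)
print("posterior mode:", np.round(beta, 3))
print("approx. posterior sd:", np.round(np.sqrt(np.diag(cov)), 3))
```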

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
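The central object in this paradigm, the waiting times between exceedances of a high threshold, is straightforward to compute from a time series; the sketch below extracts them for a placeholder heavy-tailed series and an assumed 98th-percentile threshold.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_t(df=3, size=5000)      # heavy-tailed placeholder series
u = np.quantile(x, 0.98)                 # high threshold (98th percentile)

exceed_times = np.flatnonzero(x > u)     # indices of threshold exceedances
waiting_times = np.diff(exceed_times)    # gaps between successive exceedances

# Clustering of extremes would show up as an excess of short waiting times
# relative to the roughly geometric gaps expected under independence.
print("number of exceedances:", exceed_times.size)
print("mean waiting time:", waiting_times.mean())
print("fraction of gaps <= 5:", (waiting_times <= 5).mean())
```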

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
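One of the approximation families mentioned, random subsets of data, can be sketched as a Metropolis sampler whose log-likelihood is evaluated on a rescaled minibatch. This is a naive illustration of an approximating transition kernel, not the error analysis developed in Chapter 6; the model, subsample size, and proposal scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(2.0, 1.0, size=10_000)   # placeholder observations
N, m = data.size, 500                       # full data size and subsample size

def minibatch_logpost(theta, batch):
    """Rescaled minibatch log-likelihood for a N(theta, 1) model, flat prior."""
    return (N / m) * np.sum(-0.5 * (batch - theta) ** 2)

theta = 0.0
samples = []
for _ in range(5000):
    batch = rng.choice(data, size=m, replace=False)
    prop = theta + rng.normal(scale=0.05)   # random-walk proposal
    # Approximate accept/reject: both states scored on the same minibatch
    log_ratio = minibatch_logpost(prop, batch) - minibatch_logpost(theta, batch)
    if np.log(rng.uniform()) < log_ratio:
        theta = prop
    samples.append(theta)

print("approximate posterior mean:", round(np.mean(samples[2000:]), 3))
```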

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. By contrast, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
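The truncated-normal data augmentation sampler for a probit model, whose mixing under rare events is the subject of Chapter 7, has a short standard form. The sketch below runs it on an intercept-only model with few successes and reports the lag-1 autocorrelation of the draws; the sample size, success count, iteration count, and prior variance are illustrative choices, and this is not the chapter's advertising dataset.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(7)
n, n_success = 2000, 10                  # large n, few successes (rare events)
y = np.zeros(n)
y[:n_success] = 1.0
X = np.ones((n, 1))                      # intercept-only probit model
tau2 = 100.0                             # assumed N(0, tau2) prior on the intercept

V = np.linalg.inv(X.T @ X + np.eye(1) / tau2)
beta = np.zeros(1)
draws = []
for _ in range(2000):
    # 1) z_i | beta, y_i: truncated normal latent utilities
    mu = X @ beta
    lo = np.where(y == 1, 0.0 - mu, -np.inf)   # bounds standardized by scale=1
    hi = np.where(y == 1, np.inf, 0.0 - mu)
    z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
    # 2) beta | z: conjugate Gaussian update
    mean = V @ (X.T @ z)
    beta = rng.multivariate_normal(mean.ravel(), V)
    draws.append(beta[0])

draws = np.array(draws[500:])
lag1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
print(f"sample mean of intercept draws: {draws.mean():.3f}, lag-1 autocorrelation: {lag1:.3f}")
```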

Relevance:

40.00%

Publisher:

Abstract:

Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials; these expressions are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally derived quantities are valuable for benchmarking first-principles theoretical predictions of harmonic and non-harmonic thermal behavior using perturbation theory, ab initio molecular dynamics, or Monte Carlo simulations. We illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Results agree well with previous data in the literature and provide an efficient approach for estimating anharmonic effects in materials.
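The harmonic piece of such decompositions follows from the standard phonon formula that integrates (n+1)ln(n+1) - n ln(n) against the phonon density of states, with n the Bose-Einstein occupation; the sketch below evaluates it for a made-up Debye-like density of states with a 40 meV cutoff, not the measured aluminum or FeSi data used in the paper.

```python
import numpy as np
from scipy.integrate import trapezoid

kB = 8.617333e-5      # Boltzmann constant, eV/K
hbar = 6.582120e-16   # reduced Planck constant, eV*s

def harmonic_entropy(omega, g, T):
    """Harmonic vibrational entropy per atom, in units of kB.

    omega: angular frequencies (rad/s); g: phonon DOS normalized so that its
    integral over omega is 1 (the factor of 3 supplies the 3 modes per atom).
    """
    x = hbar * omega / (kB * T)
    n = 1.0 / np.expm1(x)                       # Bose-Einstein occupation
    integrand = g * ((n + 1) * np.log(n + 1) - n * np.log(n))
    return 3.0 * trapezoid(integrand, omega)

# Made-up Debye-like DOS with a 40 meV cutoff (illustrative, not measured data)
omega_max = 0.040 / hbar                        # rad/s corresponding to 40 meV
omega = np.linspace(1e10, omega_max, 2000)
g = 3.0 * omega**2 / omega_max**3               # normalized: integral over omega = 1

for T in (100, 300, 900):
    print(f"T = {T:4d} K: S_harm = {harmonic_entropy(omega, g, T):.2f} kB/atom")
```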