11 results for Statistical count
at Duke University
Abstract:
BACKGROUND: To our knowledge, the antiviral activity of pegylated interferon alfa-2a has not been studied in participants with untreated human immunodeficiency virus type 1 (HIV-1) infection but without chronic hepatitis C virus (HCV) infection. METHODS: Untreated HIV-1-infected volunteers without HCV infection received 180 μg of pegylated interferon alfa-2a weekly for 12 weeks. Changes in plasma HIV-1 RNA load, CD4+ T cell counts, pharmacokinetics, pharmacodynamic measurements of 2',5'-oligoadenylate synthetase (OAS) activity, and induction levels of interferon-inducible genes (IFIGs) were measured. Nonparametric statistical analysis was performed. RESULTS: Eleven participants completed 12 weeks of therapy. The median plasma viral load decrease and change in CD4+ T cell counts at week 12 were 0.61 log10 copies/mL (90% confidence interval [CI], 0.20-1.18 log10 copies/mL) and -44 cells/μL (90% CI, -95 to 85 cells/μL), respectively. There was no correlation between plasma viral load decreases and concurrent pegylated interferon plasma concentrations. However, participants with larger increases in OAS level exhibited greater decreases in plasma viral load at weeks 1 and 2 (r = -0.75 [90% CI, -0.93 to -0.28] and r = -0.61 [90% CI, -0.87 to -0.09], respectively; estimated Spearman rank correlation). Participants with higher baseline IFIG levels had smaller week 12 decreases in plasma viral load (0.66 log10 copies/mL [90% CI, 0.06-0.91 log10 copies/mL]), whereas those with larger IFIG induction levels exhibited larger decreases in plasma viral load (-0.74 log10 copies/mL [90% CI, -0.93 to -0.21 log10 copies/mL]). CONCLUSION: Pegylated interferon alfa-2a was well tolerated and exhibited statistically significant anti-HIV-1 activity in HIV-1-monoinfected patients. The anti-HIV-1 effect correlated with OAS protein levels (weeks 1 and 2) and IFIG induction levels (week 12) but not with pegylated interferon concentrations.
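The week 1 and week 2 associations above are reported as Spearman rank correlations. As an illustration only (the numbers below are made up, not the study's measurements), the tie-free Spearman statistic is rho = 1 - 6*sum(d_i^2) / (n*(n^2 - 1)), where d_i is the difference in ranks for participant i:

```python
def spearman(x, y):
    """Spearman rank correlation for tie-free samples:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical example: larger OAS increases paired with larger
# viral-load decreases (more negative change) give a negative rho.
oas_increase = [1.0, 2.5, 3.1, 4.7, 6.2]
viral_load_change = [-0.1, -0.3, -0.5, -0.8, -1.2]
print(spearman(oas_increase, viral_load_change))  # -1.0
```

A perfectly monotone inverse relationship gives rho = -1, matching the sign (though of course not the magnitude) of the study's reported r = -0.75 and r = -0.61.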
Abstract:
BACKGROUND: The rate of emergence of human pathogens is steadily increasing; most of these novel agents originate in wildlife. Bats, remarkably, are the natural reservoirs of many of the most pathogenic viruses in humans. There are two bat genome projects currently underway, a circumstance that promises to speed the discovery of host factors important in the coevolution of bats with their viruses. These genomes, however, are not yet assembled, and one of them will provide only low coverage, making the inference of most genes of immunological interest error-prone. Many more wildlife genome projects are underway and intend to provide only shallow coverage. RESULTS: We have developed a statistical method for the assembly of gene families from partial genomes. The method takes full advantage of the quality scores generated by base-calling software, incorporating them into a complete probabilistic error model, to overcome the limitations inherent in inferring gene family members from partial sequence information. We validated the method by inferring the human IFNA genes from the genome trace archives, and used it to infer 61 type-I interferon genes, and single type-II interferon genes, in the bats Pteropus vampyrus and Myotis lucifugus. We confirmed our inferences by direct cloning and sequencing of IFNA, IFNB, IFND, and IFNK in P. vampyrus, and by demonstrating transcription of some of the inferred genes by known interferon-inducing stimuli. CONCLUSION: The statistical trace assembler described here provides a reliable method for extracting information from the many available and forthcoming partial or shallow genome sequencing projects, thereby facilitating the study of a wider variety of organisms with ecological and biomedical significance to humans than would otherwise be possible.
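The quality scores mentioned above are, in standard base-calling pipelines, Phred scores. The abstract does not specify the exact form of its probabilistic error model, so the following is illustrative only; it shows the standard Phred convention that such models typically build on:

```python
def phred_error_prob(q):
    """Standard Phred convention: quality score Q corresponds to a
    base-calling error probability of 10 ** (-Q / 10)."""
    return 10 ** (-q / 10)

# Q=20 -> 1% chance the base call is wrong; Q=30 -> 0.1%.
# A probabilistic assembler can weight each trace base by this
# per-base error probability rather than trusting calls uniformly.
low_conf = phred_error_prob(20)   # 0.01
high_conf = phred_error_prob(30)  # 0.001
```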
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that presently are not performed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
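The abstract does not give implementation details, but the dominant cost in fitting such mixtures is evaluating a density for every (observation, component) pair, and those evaluations are independent, which is exactly the structure GPU implementations parallelize. A toy pure-Python sketch of that kernel (illustrative only, not the authors' code):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def responsibilities(data, weights, mus, sigmas):
    """Posterior component probabilities for each data point.
    Each (observation, component) density below is independent of the
    others -- the embarrassingly parallel work a GPU maps over threads."""
    out = []
    for x in data:
        dens = [w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
        total = sum(dens)
        out.append([d / total for d in dens])
    return out

# Two well-separated components: each point is assigned almost
# entirely to its nearest component.
r = responsibilities([0.0, 5.0], weights=[0.5, 0.5], mus=[0.0, 5.0], sigmas=[1.0, 1.0])
```

On a GPU, the double loop over points and components becomes a single kernel launch over the (point, component) grid, which is where the reported speed-ups come from.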
Abstract:
The Census of Marine Life aids practical work of the Convention on Biological Diversity, discovers and tracks ocean biodiversity, and supports marine environmental planning.
Abstract:
BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
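The abstract's survival curve e^(-λt), with λ = 0.0088 per week and t in weeks, can be checked directly. A minimal sketch reproducing the reported 1-year dropout of roughly 37%:

```python
import math

# Survival (non-dropout) fraction modeled as exp(-lambda * t),
# with lambda = 0.0088 per week as estimated in the meta-analysis.
LAMBDA = 0.0088

def dropout_rate(weeks):
    """Predicted cumulative dropout fraction after `weeks` weeks."""
    return 1 - math.exp(-LAMBDA * weeks)

# At 1 year (52 weeks) the predicted dropout is ~37%, as reported.
print(round(dropout_rate(52), 2))  # 0.37
```

The same equation can be used at the design stage, e.g. to inflate target enrollment so that the expected number of completers at a planned follow-up time is preserved.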
Abstract:
A framework for adaptive and non-adaptive statistical compressive sensing is developed, in which a statistical model replaces the standard sparsity model of classical compressive sensing. Within this framework we propose optimal task-specific sensing protocols jointly designed for classification and reconstruction. A two-step adaptive sensing paradigm is developed: online sensing is applied to detect the signal class in the first step, followed by a reconstruction step adapted to the detected class and the observed samples. The approach is based on information theory, here tailored for Gaussian mixture models (GMMs), where an information-theoretic objective relating the sensed signals to a representation of the task of interest is maximized. Experimental results using synthetic signals, Landsat satellite attributes, and natural images of different sizes and with different noise levels show the improvements achieved using the proposed framework compared with more standard sensing protocols. The underlying formulation can be applied beyond GMMs, at the price of higher mathematical and computational complexity. © 1991-2012 IEEE.
Abstract:
BACKGROUND: Interleukin (IL)-15 is a chemotactic factor to T cells. It induces proliferation and promotes survival of activated T cells. IL-15 receptor blockade in mouse cardiac and islet allotransplant models has led to long-term engraftment and a regulatory T-cell environment. This study investigated the efficacy of IL-15 receptor blockade using Mut-IL-15/Fc in an outbred non-human primate model of renal allotransplantation. METHODS: Male cynomolgus macaque donor-recipient pairs were selected based on ABO typing, major histocompatibility complex class I typing, and carboxy-fluorescein diacetate succinimidyl ester-based mixed lymphocyte responses. Once animals were assigned to one of six treatment groups, they underwent renal transplantation and bilateral native nephrectomy. Serum creatinine level was monitored twice weekly and as indicated, and protocol biopsies were performed. Rejection was defined as an increase in serum creatinine to 1.5 mg/dL or higher and was confirmed histologically. Complete blood counts and flow cytometric analyses were performed periodically posttransplant; pharmacokinetic parameters of Mut-IL-15/Fc were assessed. RESULTS: Compared with control animals, Mut-IL-15/Fc-treated animals did not demonstrate increased graft survival despite adequate serum levels of Mut-IL-15/Fc. Flow cytometric analysis of white blood cell subgroups demonstrated a decrease in CD8 T-cell and natural killer cell numbers, although this did not reach statistical significance. Interestingly, two animals receiving Mut-IL-15/Fc developed infectious complications, but no infection was seen in control animals. Renal pathology varied widely. CONCLUSIONS: Peritransplant IL-15 receptor blockade does not prolong allograft survival in non-human primate renal transplantation; however, it reduces the number of CD8 T cells and natural killer cells in the peripheral blood.
Abstract:
X-ray crystallography is the predominant method for obtaining atomic-scale information about biological macromolecules. Despite the success of the technique, obtaining well-diffracting crystals still critically limits going from protein to structure. In practice, the crystallization process proceeds through knowledge-informed empiricism. Better physico-chemical understanding remains elusive because of the large number of variables involved; hence, little guidance is available to systematically identify solution conditions that promote crystallization. To help determine relationships between macromolecular properties and their crystallization propensity, we have trained statistical models on samples for 182 proteins supplied by the Northeast Structural Genomics consortium. Gaussian processes, which capture trends beyond the reach of linear statistical models, distinguish between two main physico-chemical mechanisms driving crystallization. One is characterized by low levels of side-chain entropy and has been extensively reported in the literature. The other identifies specific electrostatic interactions not previously described in the crystallization context. Because evidence for two distinct mechanisms can be gleaned both from crystal contacts and from solution conditions leading to successful crystallization, the model offers future avenues for optimizing crystallization screens based on partial structural information. The availability of crystallization data coupled with structural outcomes analyzed through state-of-the-art statistical models may thus guide macromolecular crystallization toward a more rational basis.
Abstract:
For optimal solutions in health care, decision makers inevitably must evaluate trade-offs, which calls for multi-attribute valuation methods. Researchers have proposed best-worst scaling (BWS) methods, which seek to extract information from respondents by asking them to identify the best and worst items in each choice set. While a companion paper describes the different types of BWS, their applications, and their advantages and downsides, this contribution expounds their relationship with microeconomic theory, which also has implications for statistical inference. The article is devoted to the microeconomic foundations of preference measurement, also addressing issues such as scale invariance and scale heterogeneity. Furthermore, the paper discusses the basics of preference measurement using rating, ranking, and stated choice data in light of the findings of the preceding section. Moreover, the paper gives an introduction to the use of stated choice data and juxtaposes BWS with the microeconomic foundations.