970 results for Zero-inflated Count Data
Abstract:
The purpose of this prospective study was to verify the changes in the preoperative and postoperative complete blood counts of patients with surgically treated facial fractures. Fifty consecutive patients (mean age, 34 years) who presented with facial fractures and underwent surgical treatment were included. A complete blood count was performed, comprising the red and white blood cell counts (cells/µL) and the hemoglobin (g/dL) and hematocrit (%) levels. These data were obtained preoperatively and postoperatively over a 6-week period. Statistical analyses were performed using the Kruskal-Wallis and Mann-Whitney tests to identify possible differences among the groups, and the Friedman and Wilcoxon matched-pairs signed-ranks tests to identify differences among the periods of observation. The most common location of the fractures was the mandible (42.3%), followed by the zygomatic-orbital region (36.5%) and associated locations (21.2%). Leukocytosis associated with neutrophilia was observed in the immediate postoperative period in all of the groups. No values fell below the reference limits for hemoglobin, hematocrit, and erythrocytes, and none rose above the reference limits for the remaining white blood cells, although significant differences among periods were observed for most cell types, depending on the type of fracture. The primary findings were leukocytosis associated with neutrophilia in the immediate postoperative period in all of the groups, and the influence of the type of fracture on the significant alterations observed among the studied periods in the values of hemoglobin, hematocrit, erythrocytes, leukocytes, neutrophils, and lymphocytes.
Abstract:
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves, which consist of the differences between the qSampEn of the original and surrogate series, were then calculated. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series from stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiff(max)) at q not equal to 1. The values of q at which the maximum occurs and at which qSDiff is zero were also evaluated. Only the qSDiff(max) values were capable of distinguishing the HRV groups (p-values 5.10 x 10(-3), 1.11 x 10(-7), and 5.50 x 10(-7) for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity and suggesting a potential use for chaotic system analysis. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4758815]
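The abstract does not give the qSampEn formula, but the standard sample entropy it generalizes is compact enough to sketch. The embedding dimension m, tolerance r, and the test series below are illustrative choices, not the paper's settings; a minimal NumPy version:

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Classical sample entropy: -ln(A/B), where B counts pairs of
    length-m templates matching within r * std (Chebyshev distance)
    and A counts length-(m+1) matches. Self-matches are excluded.
    This is standard SampEn, not the paper's Tsallis-based qSampEn."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        # all overlapping templates of length mm
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(d < tol)
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
noise = rng.normal(size=500)                      # uncorrelated series
sine = np.sin(np.linspace(0, 20 * np.pi, 500))    # highly regular series
print(sampen(noise), sampen(sine))                # noise scores higher
```

This also illustrates the caveat in the abstract: the uncorrelated noise receives the higher entropy value even though it is not a complex physiological signal.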
Abstract:
Clofazimine and clarithromycin are used to treat leprosy and infections caused by the Mycobacterium avium complex. Few data are available on the toxicity of co-administering these two drugs. Here we evaluated the potential adverse effects of polytherapy with these two drugs in male Wistar rats by determining white blood cell (WBC) and other blood cell counts, neutrophilic phagocytosis, and oxidative burst by flow cytometry. We observed an increase in WBC counts in the multiple-dose regimens, and in polymorphonuclear cells in both the single-dose (clarithromycin only) and multiple-dose regimens. We also observed a reduction in mononuclear cell counts in both single- and multiple-dose regimens. The drugs appear to reverse the ratio of mononuclear to polymorphonuclear cells. An increase in oxidative burst was observed in animals treated with the drugs administered either individually or in combination. In conclusion, clofazimine and clarithromycin change WBC counts. Our results may contribute to a better understanding of the mechanisms underlying the effects of co-administering the two drugs.
Abstract:
The study of proportions is a common topic in many fields of study. The standard beta distribution or the inflated beta distribution may be a reasonable choice for fitting a proportion in most situations. However, they do not fit well variables that assume no values in the open interval (0, c), 0 < c < 1. For such variables, the authors introduce the truncated inflated beta distribution (TBEINF). This proposed distribution is a mixture of the beta distribution bounded on the open interval (c, 1) and the trinomial distribution. The authors present the moments of the distribution, its score vector, and its Fisher information matrix, and discuss estimation of its parameters. The properties of the suggested estimators are studied using Monte Carlo simulation. In addition, the authors present an application of the TBEINF distribution to unemployment insurance data.
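As a rough illustration of the mixture structure described here, the following sampler combines discrete point masses with a beta distribution rescaled onto (c, 1). The discrete support {0, c, 1} and all parameter names are assumptions for illustration only, not the paper's exact TBEINF parameterization:

```python
import numpy as np

rng = np.random.default_rng(42)

def rtbeinf(n, c, probs, a, b):
    """Hypothetical sampler for a truncated-inflated-beta-style mixture:
    with probabilities probs = (p0, pc, p1), return the discrete points
    0, c, or 1 (the trinomial part); otherwise draw from a beta(a, b)
    rescaled from (0, 1) onto the open interval (c, 1). By construction
    no draw ever lands in the excluded interval (0, c)."""
    p0, pc, p1 = probs
    pcont = 1.0 - (p0 + pc + p1)                 # continuous component
    comp = rng.choice(4, size=n, p=[p0, pc, p1, pcont])
    out = np.where(comp == 0, 0.0, np.where(comp == 1, c, 1.0))
    cont = comp == 3
    out[cont] = c + (1 - c) * rng.beta(a, b, size=cont.sum())
    return out

x = rtbeinf(10_000, c=0.25, probs=(0.3, 0.05, 0.1), a=2.0, b=3.0)
print(((x == 0) | (x >= 0.25)).all())            # gap in (0, c) respected
```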
Abstract:
When estimating the effect of treatment on HIV using data from observational studies, standard methods may produce biased estimates due to the presence of time-dependent confounders. Such confounding can be present when a covariate, affected by past exposure, is a predictor of both the future exposure and the outcome. One example is the CD4 cell count, which is a marker of disease progression in HIV patients but also a marker for treatment initiation, and is itself influenced by treatment. Fitting a marginal structural model (MSM) using inverse probability weights is one way to adjust appropriately for this type of confounding. In this paper we study a simple and intuitive approach to estimating similar treatment effects, using observational data to mimic several randomized controlled trials. Each 'trial' is constructed from the individuals starting treatment in a certain time interval. An overall effect estimate for all such trials is found using composite likelihood inference. The method offers an alternative to the use of inverse probability of treatment weights, which is unstable in certain situations. The estimated parameter is not identical to that of an MSM, as it is conditioned on covariate values at the start of each mimicked trial. This allows the study of questions that are not as easily addressed by fitting an MSM. The analysis can be performed as a stratified weighted Cox analysis on the joint data set of all the constructed trials, where each trial forms one stratum. The model is applied to data from the Swiss HIV Cohort Study.
Abstract:
Current guidelines suggest that primary prophylaxis for Pneumocystis jiroveci pneumonia (PcP) can be safely stopped in human immunodeficiency virus (HIV)-infected patients who are receiving combined antiretroviral therapy (cART) and who have a CD4 cell count >200 cells/µL. There are few data regarding the incidence of PcP or the safety of stopping prophylaxis in virologically suppressed patients with CD4 cell counts of 101-200 cells/µL.
Abstract:
Background Most adults infected with HIV achieve viral suppression within a year of starting combination antiretroviral therapy (cART). It is important to understand the risk of AIDS events or death for patients with a suppressed viral load. Methods and Findings Using data from the Collaboration of Observational HIV Epidemiological Research Europe (2010 merger), we assessed the risk of a new AIDS-defining event or death in successfully treated patients. We accumulated episodes of viral suppression for each patient while on cART, each episode beginning with the second of two consecutive plasma viral load measurements <50 copies/mL and ending with either a measurement >500 copies/mL, the first of two consecutive measurements between 50 and 500 copies/mL, cART interruption, or administrative censoring. We used stratified multivariate Cox models to estimate the association between time-updated CD4 cell count and a new AIDS event or death, or death alone. 75,336 patients contributed 104,265 suppression episodes and were suppressed while on cART for a median of 2.7 years. The mortality rate was 4.8 per 1,000 years of viral suppression. A higher CD4 cell count was always associated with a reduced risk of a new AIDS event or death, with a hazard ratio per 100 cells/µL (95% CI) of 0.35 (0.30–0.40) for counts <200 cells/µL, 0.81 (0.71–0.92) for counts 200 to <350 cells/µL, 0.74 (0.66–0.83) for counts 350 to <500 cells/µL, and 0.96 (0.92–0.99) for counts ≥500 cells/µL. A higher CD4 cell count became even more beneficial over time for patients with CD4 cell counts <200 cells/µL. Conclusions Despite the low mortality rate, the risk of a new AIDS event or death follows a CD4 cell count gradient in patients with viral suppression. A higher CD4 cell count was associated with the greatest benefit for patients with a CD4 cell count <200 cells/µL, but there was still some slight benefit for those with a CD4 cell count ≥500 cells/µL.
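The episode-accumulation rule in the Methods is essentially a small state machine over each patient's sequence of viral load measurements. A sketch under simplifying assumptions (indices stand in for measurement dates, and cART interruption is not modeled):

```python
def suppression_episodes(vl, low=50, high=500):
    """Sketch of the episode rule described above: an episode begins at
    the second of two consecutive viral loads < low, and ends at a
    measurement > high or at the first of two consecutive measurements
    in [low, high]; otherwise it runs to the end of follow-up
    (administrative censoring). Returns (start, end) index pairs."""
    episodes = []
    i, n = 0, len(vl)
    while i < n - 1:
        if vl[i] < low and vl[i + 1] < low:
            start = i + 1          # second of two consecutive < low
            end = n                # default: censored at end of follow-up
            j = start + 1
            while j < n:
                if vl[j] > high:   # rebound ends the episode
                    end = j
                    break
                if low <= vl[j] <= high and j + 1 < n and low <= vl[j + 1] <= high:
                    end = j        # first of two consecutive in [low, high]
                    break
                j += 1
            episodes.append((start, end))
            i = end + 1            # resume search after this episode
        else:
            i += 1
    return episodes

# One rebound episode, then a second episode censored at the end:
print(suppression_episodes([40, 30, 45, 600, 20, 10, 30]))  # [(1, 3), (5, 7)]
```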
Abstract:
OBJECTIVE: To describe the electronic medical databases used in antiretroviral therapy (ART) programmes in lower-income countries and assess the measures such programmes employ to maintain and improve data quality and reduce the loss of patients to follow-up. METHODS: In 15 countries of Africa, South America and Asia, a survey was conducted from December 2006 to February 2007 on the use of electronic medical record systems in ART programmes. Patients enrolled in the sites at the time of the survey but not seen during the previous 12 months were considered lost to follow-up. The quality of the data was assessed by computing the percentage of missing key variables (age, sex, clinical stage of HIV infection, CD4+ lymphocyte count and year of ART initiation). Associations between site characteristics (such as number of staff members dedicated to data management), measures to reduce loss to follow-up (such as the presence of staff dedicated to tracing patients) and data quality and loss to follow-up were analysed using multivariate logit models. FINDINGS: Twenty-one sites that together provided ART to 50,060 patients were included (median number of patients per site: 1,000; interquartile range, IQR: 72–19,320). Eighteen sites (86%) used an electronic database for medical record-keeping; 15 (83%) of these sites relied on software intended for personal or small business use. The median percentage of missing data for key variables per site was 10.9% (IQR: 2.0–18.9%) and declined with training in data management (odds ratio, OR: 0.58; 95% confidence interval, CI: 0.37–0.90) and with weekly hours spent by a clerk on the database per 100 patients on ART (OR: 0.95; 95% CI: 0.90–0.99). About 10 weekly hours per 100 patients on ART were required to reduce missing data for key variables to below 10%. The median percentage of patients lost to follow-up 1 year after starting ART was 8.5% (IQR: 4.2–19.7%).
Strategies to reduce loss to follow-up included outreach teams, community-based organizations and checking death registry data. Implementation of all three strategies substantially reduced losses to follow-up (OR: 0.17; 95% CI: 0.15-0.20). CONCLUSION: The quality of the data collected and the retention of patients in ART treatment programmes are unsatisfactory for many sites involved in the scale-up of ART in resource-limited settings, mainly because of insufficient staff trained to manage data and trace patients lost to follow-up.
Abstract:
Monte Carlo simulation was used to evaluate the properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². The marginal prior distributions on logit(p) and μ were independent normals with mean zero and standard deviations of 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with a sample of size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method-of-moments procedure. We examined the properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
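As context for the simulation design, data from the random-effects CJS model can be generated directly: per-interval survival probabilities have logits drawn from N(μ, σ²), with a constant detection probability p. The specific values of t, u, p, μ, and σ below are illustrative, not one of the study's actual design points:

```python
import numpy as np

rng = np.random.default_rng(1)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_cjs(t=7, u=100, p=0.6, mu=0.5, sigma=0.3):
    """Simulate single-group CJS capture histories under the paper's
    random-effects model: logit(S_j) ~ Normal(mu, sigma^2) for each
    of the t-1 intervals, constant detection probability p, and u new
    releases at each of occasions 1..t-1. Returns the history matrix
    (one row per animal) and the interval survival probabilities."""
    S = expit(rng.normal(mu, sigma, size=t - 1))  # interval survival
    histories = []
    for occ in range(t - 1):                      # release cohorts
        for _ in range(u):
            h = np.zeros(t, dtype=int)
            h[occ] = 1                            # marked and released
            alive = True
            for j in range(occ, t - 1):
                alive = alive and rng.random() < S[j]
                if alive and rng.random() < p:
                    h[j + 1] = 1                  # recaptured if alive
            histories.append(h)
    return np.array(histories), S

H, S = simulate_cjs()
print(H.shape)  # (600, 7): 6 cohorts of 100 releases over 7 occasions
```

Running a fitter over many such replicates, with σ varied across design points, is the kind of factorial evaluation the abstract describes.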
Abstract:
BACKGROUND In many resource-limited settings, monitoring of combination antiretroviral therapy (cART) is based on the current CD4 count, with limited access to HIV RNA tests or laboratory diagnostics. We examined whether the CD4 count slope over 6 months could provide additional prognostic information. METHODS We analyzed data from a large multicohort study in South Africa, where HIV RNA is routinely monitored. Adult HIV-positive patients initiating cART between 2003 and 2010 were included. Mortality was analyzed in Cox models; the CD4 count slope by HIV RNA level was assessed using linear mixed models. RESULTS A total of 44,829 patients (median age: 35 years, 58% female, median CD4 count at cART initiation: 116 cells/mm³) were followed up for a median of 1.9 years, with 3,706 deaths. Mean CD4 count slopes per week ranged from 1.4 [95% confidence interval (CI): 1.2 to 1.6] cells per cubic millimeter when HIV RNA was <400 copies per milliliter to -0.32 (95% CI: -0.47 to -0.18) cells per cubic millimeter when it was >100,000 copies per milliliter. The association of CD4 slope with mortality depended on the current CD4 count: the adjusted hazard ratio (aHR) comparing a >25% increase over 6 months with a >25% decrease was 0.68 (95% CI: 0.58 to 0.79) at <100 cells per cubic millimeter but 1.11 (95% CI: 0.78 to 1.58) at 201-350 cells per cubic millimeter. In contrast, the aHR for the current CD4 count, comparing >350 with <100 cells per cubic millimeter, was 0.10 (95% CI: 0.05 to 0.20). CONCLUSIONS The absolute CD4 count remains a strong risk factor for mortality, with a stable effect size over the first 4 years of cART. However, the CD4 count slope and HIV RNA provide additional, independent prognostic information.
Abstract:
A persistently low white blood cell count (WBC) and neutrophil count is a well-described phenomenon in persons of African ancestry, and its etiology remains unknown. We recently used admixture mapping to identify an approximately 1-megabase region on chromosome 1 where ancestry status (African or European) almost entirely accounted for the difference in WBC between African Americans and European Americans. To identify the specific genetic change responsible for this association, we analyzed genotype and phenotype data from 6,005 African Americans from the Jackson Heart Study (JHS), the Health, Aging and Body Composition (Health ABC) Study, and the Atherosclerosis Risk in Communities (ARIC) Study. We demonstrate that the causal variant must be at least 91% different in frequency between West Africans and European Americans. An excellent candidate is the Duffy Null polymorphism (SNP rs2814778 at chromosome 1q23.2), which is the only polymorphism in the region known to be so differentiated in frequency and is already known to protect against Plasmodium vivax malaria. We confirm that rs2814778 is predictive of WBC and neutrophil count in African Americans above and beyond the previously described admixture association (P = 3.8 x 10(-5)), establishing a novel phenotype for this genetic variant.
Abstract:
OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as those included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
Abstract:
Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.
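Both ranking signals discussed here are easy to state concretely: citation count is a column sum of the citation adjacency matrix, and PageRank is the stationary vector of a damped random walk over citations. A minimal power-iteration sketch on a toy citation graph (illustrative, not the authors' implementation):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Power-iteration PageRank on a citation adjacency matrix where
    adj[i, j] = 1 means article i cites article j. Dangling nodes
    (articles that cite nothing) distribute their rank uniformly."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; uniform rows for dangling nodes
    M = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (M.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy citation graph: articles 1, 2, 3 all cite article 0;
# article 0 cites nothing (dangling node).
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
r = pagerank(A)
print(r.argmax())  # article 0 receives the highest rank
```

Deleting citation edges from `A` before ranking is one simple way to probe the robustness-to-information-loss question the abstract raises.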
Abstract:
The interpretation of data on genetic variation with regard to the relative roles of the different evolutionary factors that produce and maintain genetic variation depends critically on our assumptions concerning effective population size and the level of migration between neighboring populations. In humans, recent population growth and the movements of specific ethnic groups across wide geographic areas mean that any theory based on assumptions of constant population size and absence of substructure is generally untenable. We examine the effects of population subdivision on the pattern of protein genetic variation in a total sample drawn from an artificial agglomerate of 12 tribal populations of Central and South America, analyzing the pooled sample as though it were a single population. Several striking findings emerge. (1) Mean heterozygosity is not sensitive to agglomeration, but the number of different alleles (the allele count) is inflated relative to the neutral mutation/drift equilibrium expectation. (2) The inflation is most serious for rare alleles, especially those which originally occurred as tribally restricted "private" polymorphisms. (3) The degree of inflation is an increasing function of both the number of populations encompassed by the sample and the genetic divergence among them. (4) Treating an agglomerated population as though it were a panmictic unit of long standing can lead to serious biases in estimates of mutation rates, selection pressures, and effective population sizes. Current DNA studies indicate the presence of numerous genetic variants in human populations. The findings and conclusions of this paper are fully applicable to the study of genetic variation at the DNA level as well.
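Findings (1) and (2) can be reproduced in a toy simulation: pooling samples drawn from sparse, tribe-specific allele-frequency vectors inflates the realized allele count, because each tribe contributes its own rare "private" alleles. The Dirichlet setup below is an illustrative assumption, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_tribe(n_alleles=20, n_draws=200):
    """One 'tribe': a sparse Dirichlet frequency vector, so most alleles
    are rare or absent and some act as tribally restricted private ones."""
    freqs = rng.dirichlet(np.full(n_alleles, 0.1))
    return rng.choice(n_alleles, size=n_draws, p=freqs)

def allele_count(sample):
    """Number of distinct alleles realized in the sample."""
    return len(np.unique(sample))

single = sample_tribe()
# Agglomerate 12 tribes into one pooled 'population' (including single)
pooled = np.concatenate([single] + [sample_tribe() for _ in range(11)])

# Pooling inflates the number of distinct alleles observed
print(allele_count(single), allele_count(pooled))
```

Comparing the pooled count against the count from any one tribe shows the inflation; repeating with more divergent frequency vectors (a smaller Dirichlet concentration) strengthens it, in line with finding (3).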