15 results for Editor of flow analysis methods
in DigitalCommons@The Texas Medical Center
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involve the selection of isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests plateaued below one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, leaving the marker uninformative for disequilibrium.

The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not subject to any increase in error rates.

For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans reduced costs even further. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
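For context, a minimal sketch of the classic biallelic TDT statistic that underlies the TDT-based tests discussed above; the transmission counts below are invented for illustration and are not taken from the thesis.

```python
# Hypothetical illustration of the classic biallelic TDT statistic: among
# heterozygous parents of affected offspring, b transmissions of allele M1
# and c transmissions of allele M2 give a McNemar-type chi-square.
from scipy.stats import chi2

def tdt_test(b, c):
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)  # p-value on 1 degree of freedom

stat, p = tdt_test(b=62, c=38)  # made-up transmission counts
print(f"TDT chi-square = {stat:.2f}, p = {p:.4f}")
```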
Abstract:
The pattern of births across the week has been reported by many studies; births occurring on weekends are consistently found to be fewer than births occurring on weekdays. This study employed two statistical methods, two-way ANOVA and two-way Friedman's test, to analyze the daily variation in the number of births among 222,735 births from 2005-2007 in Harris County, Texas. The two methods were compared on their assumptions, procedures, and results. Both tests gave a significant result, indicating that births are not uniformly distributed through the week. Multiple comparisons demonstrated that the number of births occurring on weekends was significantly different from the number occurring on weekdays, with the fewest births on Sundays.
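As a rough sketch of the nonparametric analysis described above, the snippet below arranges weekly counts as blocks (weeks) by treatments (days of the week) and applies Friedman's test; the counts are simulated, not the Harris County data.

```python
# Illustrative Friedman's test on weekly birth counts: rows are weeks (blocks),
# columns are days of the week (treatments). Counts are made up for the sketch.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
weeks = 52
# Simulate lower weekend counts (Sun..Sat), as reported in the literature.
daily_means = [80, 110, 112, 111, 110, 108, 82]
counts = rng.poisson(daily_means, size=(weeks, 7))

# friedmanchisquare takes one sample per day of the week.
stat, p = friedmanchisquare(*[counts[:, d] for d in range(7)])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3g}")
```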
Abstract:
Studies have shown that rare genetic variants can have stronger effects in predisposing to common diseases, and several statistical methods have been developed for association studies involving rare variants. In order to better understand how these statistical methods perform, we compared two recently developed rare variant statistical methods (VT and C-alpha) on 10,000 simulated re-sequencing data sets with disease status and the corresponding 10,000 simulated null data sets. The SLC1A1 gene has been suggested to be associated with diastolic blood pressure (DBP) in previous studies. In the current study, we applied the VT and C-alpha methods to empirical re-sequencing data for the SLC1A1 gene from 300 whites and 200 blacks. We found that the VT method obtains higher power and performs better than the C-alpha method on the simulated data we used. The type I errors were well controlled for both methods. In addition, both the VT and C-alpha methods suggested no statistical evidence for an association between the SLC1A1 gene and DBP. Overall, our findings provide a comparison of the two statistical methods for future reference, along with preliminary findings on the association between the SLC1A1 gene and blood pressure.
Abstract:
Objective. The goal of this study is to characterize the current workforce of Certified Industrial Hygienists (CIHs) and the lengths of the professional practice careers of past and current CIHs.

Methods. This is a secondary analysis of data compiled from all of the nearly 50 annual roster listings of the American Board of Industrial Hygiene (ABIH) for CIHs active in each year since 1960. Survival analysis was used to measure the primary outcome of interest, with the Kaplan-Meier method used to estimate the survival function.

Study subjects. The population studied is all Certified Industrial Hygienists (CIHs). A CIH is defined by the ABIH as an individual who has achieved the minimum requirements for education and working experience and, through examination, has demonstrated a minimum level of knowledge and competency in the prevention of occupational illnesses.

Results. A Cox proportional hazards model was fit for different start-time cohorts of CIHs, with cohort 1 as the reference cohort. The estimated relative risks of the event (defined as retirement, or absence from 5 consecutive years of listings) for cohorts 2, 3, 4, and 5 relative to cohort 1 were 0.385, 0.214, 0.234, and 0.299, respectively. The results show that cohort 2 (CIHs certified from 1970-1980) has the lowest hazard ratio, indicating the lowest retirement rate.

Conclusion. The number of CIHs still actively practicing up to the end of 2009 increased tremendously starting in 1980 and has plateaued in recent decades, suggesting that the supply of and demand for the profession may have reached equilibrium. More demographic information and additional variables are needed to predict the future number of CIHs required.
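A minimal sketch, assuming the lifelines Python package and a made-up roster table, of the Kaplan-Meier and Cox proportional hazards steps described above; it is not the ABIH analysis itself.

```python
# Synthetic CIH-style survival data: years on the roster, a retirement event
# indicator, and a certification cohort (cohort 1 used as the reference).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 200
cohort = rng.integers(1, 6, size=n)                     # cohorts 1-5 (made up)
years_listed = rng.exponential(scale=10 + 5 * cohort)   # illustrative durations
retired = rng.integers(0, 2, size=n)                    # 1 = retirement, 0 = censored
df = pd.DataFrame({"years_listed": years_listed, "retired": retired, "cohort": cohort})

kmf = KaplanMeierFitter()
kmf.fit(df["years_listed"], event_observed=df["retired"])
print("Median years on the roster:", kmf.median_survival_time_)

# Cohort indicators with cohort 1 as the reference category.
dummies = pd.get_dummies(df["cohort"], prefix="cohort", drop_first=True, dtype=int)
cox_df = pd.concat([df[["years_listed", "retired"]], dummies], axis=1)
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="years_listed", event_col="retired")
cph.print_summary()  # exp(coef) column gives hazard ratios relative to cohort 1
```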
Abstract:
An extension of k-ratio multiple comparison methods to rank-based analyses is described. The new method is analogous to the Duncan-Godbold approximate k-ratio procedure for unequal sample sizes or correlated means. The close parallel between the new methods and the Duncan-Godbold approach is shown by demonstrating that they are based upon different parameterizations as starting points.

A semi-parametric basis for the new methods is shown by starting from the Cox proportional hazards model and using Wald statistics. From there, the log-rank and Gehan-Breslow-Wilcoxon methods may be seen as score-statistic-based methods.

Simulations and the analysis of a published data set are used to show the performance of the new methods.
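As a hedged illustration of the rank-based statistics mentioned above, the snippet below runs a single pairwise log-rank comparison on synthetic survival times using lifelines; it is only one ingredient, not the k-ratio procedure itself.

```python
# Pairwise log-rank comparison of two synthetic survival samples.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
t_a = rng.exponential(scale=10, size=50)   # durations, group A
t_b = rng.exponential(scale=14, size=50)   # durations, group B
e_a = rng.integers(0, 2, size=50)          # 1 = event observed, 0 = censored
e_b = rng.integers(0, 2, size=50)

result = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print("log-rank statistic:", result.test_statistic, "p-value:", result.p_value)
```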
Abstract:
The role of clinical chemistry has traditionally been to evaluate acutely ill or hospitalized patients. Traditional statistical methods have serious drawbacks in that they rely on univariate techniques. To demonstrate alternative methodology, a multivariate analysis of covariance model was developed and applied to data from the Cooperative Study of Sickle Cell Disease (CSSCD).

The purpose of developing the model for the CSSCD laboratory data was to evaluate the comparability of results from the different clinics. Several variables were incorporated into the model to control for possible differences among the clinics that might confound any real laboratory differences.

Differences for LDH, alkaline phosphatase, and SGOT were identified that will necessitate adjustment by clinic whenever these data are used. In addition, aberrant clinic values for LDH, creatinine, and BUN were identified.

The use of any statistical technique, including multivariate analysis, without thoughtful consideration may lead to spurious conclusions that may not be corrected for some time, if ever. However, the advantages of multivariate analysis far outweigh its potential problems. If its use increases as it should, its applicability to the analysis of laboratory data in prospective patient monitoring, quality control programs, and the interpretation of data from cooperative studies could well have a major impact on the health and well-being of a large number of individuals.
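A minimal sketch, assuming synthetic data and the statsmodels MANOVA interface, of a multivariate analysis of covariance in the spirit of the model described above; this is not the CSSCD analysis, and the variable names are invented.

```python
# Several lab analytes modeled jointly, with clinic as the factor of interest
# and age as a covariate (a MANCOVA-style specification).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "clinic": rng.choice(["A", "B", "C"], size=n),
    "age": rng.uniform(5, 40, size=n),
    "ldh": rng.normal(400, 60, size=n),
    "alk_phos": rng.normal(120, 25, size=n),
    "sgot": rng.normal(35, 8, size=n),
})

# Multiple dependent variables on the left-hand side; clinic effects are
# tested while adjusting for age.
fit = MANOVA.from_formula("ldh + alk_phos + sgot ~ C(clinic) + age", data=df)
print(fit.mv_test())
```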
Abstract:
OBJECTIVE: To characterize PubMed usage over a typical day and compare it to previous studies of user behavior on Web search engines. DESIGN: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. MEASUREMENTS: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, and MeSH categories; used semantic measurements to group queries into sessions; and studied the addition and removal of terms from consecutive queries to gauge search strategies. RESULTS: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that a large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. CONCLUSION: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.
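A toy sketch of the kind of lexical tallies reported above, using invented queries rather than the actual PubMed query log.

```python
# Count terms per query, Boolean operator use, and the most common terms.
from collections import Counter

queries = [                       # invented example queries
    "breast cancer BRCA1",
    "hypertension AND diabetes",
    "p53 apoptosis review",
    "influenza vaccine efficacy",
]

terms_per_query = [len(q.split()) for q in queries]
boolean_use = sum(any(op in q.split() for op in ("AND", "OR", "NOT")) for q in queries)
common_terms = Counter(t.lower() for q in queries for t in q.split()
                       if t not in {"AND", "OR", "NOT"})

print("Mean terms per query:", sum(terms_per_query) / len(terms_per_query))
print("Queries using Boolean operators:", boolean_use)
print("Most common terms:", common_terms.most_common(3))
```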
Abstract:
Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects. If there is a natural hierarchy in the data, multilevel analysis is an appropriate tool for the analysis. Two examples are measurements on identical twins and studies of symmetrical organs or appendages, as in ophthalmic studies. Although this type of matching appears ideal for the purposes of comparison, analyzing the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results.

This paper will explore the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data will be generated using the beta-binomial method with varying degrees of correlation between the lower-level observations. The data will be analyzed using the multilevel software package MLwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the correlations estimated by multilevel analysis will be used to examine the accuracy of this technique for analyzing this type of data.
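A minimal sketch of the beta-binomial data-generation step described above (the MLwiN analysis itself is not reproduced here); for this construction the intra-cluster correlation is 1 / (a + b + 1).

```python
# Generate correlated binary data: each cluster draws p ~ Beta(a, b), then all
# observations in that cluster are Bernoulli(p), inducing within-cluster correlation.
import numpy as np

def simulate_clusters(n_clusters, cluster_size, mean_p, rho, seed=0):
    rng = np.random.default_rng(seed)
    ab_total = (1 - rho) / rho                 # a + b implied by the target correlation
    a, b = mean_p * ab_total, (1 - mean_p) * ab_total
    p = rng.beta(a, b, size=n_clusters)
    return rng.binomial(1, p[:, None], size=(n_clusters, cluster_size))

data = simulate_clusters(n_clusters=500, cluster_size=2, mean_p=0.3, rho=0.2)
print("overall proportion:", data.mean(), "shape:", data.shape)
```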
Abstract:
The difficulty of detecting differential gene expression in microarray data has existed for many years. Several correction procedures attempt to control the family-wise error rate in the multiple comparison process, including the Bonferroni and Sidak single-step p-value adjustments, Holm's step-down correction method, and Benjamini and Hochberg's false discovery rate (FDR) correction procedure. Each multiple comparison technique has its advantages and weaknesses. We studied each multiple comparison method through numerical studies (simulations) and applied the methods to real exploratory DNA microarray data used to detect molecular signatures in papillary thyroid cancer (PTC) patients. According to the results of our simulation studies, the Benjamini and Hochberg step-up FDR-controlling procedure performs best among these multiple comparison methods, and we identified 1,277 potential biomarkers among 54,675 probe sets after applying the Benjamini and Hochberg method to the PTC microarray data.
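A hedged sketch of the four corrections compared above, applied to simulated p-values rather than the PTC microarray data, using the multipletests function from statsmodels.

```python
# Compare Bonferroni, Sidak, Holm, and Benjamini-Hochberg FDR corrections on
# simulated p-values: mostly nulls plus a block of "differentially expressed" probes.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
pvals = np.concatenate([rng.uniform(size=9500),        # null p-values
                        rng.beta(0.1, 10, size=500)])  # enriched small p-values

for method in ["bonferroni", "sidak", "holm", "fdr_bh"]:
    reject, _, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:10s} rejections: {reject.sum()}")
```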
Abstract:
Developing countries are heavily burdened by limited access to safe drinking water and by the water-related diseases that follow. Numerous water treatment interventions combat this public health crisis, encompassing both traditional and less common methods. Among these, water disinfection serves as an important means of providing safe drinking water. The existing literature discusses a wide range of traditional treatment options and encourages the use of multi-barrier approaches including coagulation-flocculation, filtration, and disinfection. Most sources do not delve into approaches specifically appropriate for developing countries, nor do they exclusively examine water disinfection methods.

The objective of this review is to focus on an extensive range of chemical, physicochemical, and physical water disinfection techniques and to provide a compilation, description, and evaluation of the options available. This objective furthers the understanding and knowledge needed to inform water treatment interventions and explores alternative means of water disinfection appropriate for developing countries. Appropriateness for developing countries corresponds to the effectiveness of an available, easy-to-use disinfection technique at providing safe drinking water at low cost.

Among chemical disinfectants, SWS sodium hypochlorite solution is preferred over sodium hypochlorite bleach because of its consistent concentration. Tablet forms are highly recommended chemical disinfectants because they are effective, very easy to use, and stable. Examples include sodium dichloroisocyanurate, calcium hypochlorite, and chlorine dioxide, which vary in cost depending on location and availability. Among physicochemical options, electrolysis producing mixed oxidants (MIOX) provides a highly effective disinfection option with a higher upfront cost but a very low cost over the long term. Among physical options, solar disinfection (SODIS) applications are effective but treat only a fixed volume of water at a time; they come with higher initial costs but very low ongoing costs. Additional effective disinfection techniques may be suitable depending on location, availability, and cost.
Abstract:
The Advisory Committee on Immunization Practices (ACIP) develops written recommendations for the routine administration of vaccines to children and adults in the U.S. civilian population, and it is the only entity in the federal government that makes such recommendations. The ACIP elaborates on the selection of its members and addresses concerns regarding its integrity, but it fails to provide information about the importance of economic analysis in vaccine selection. ACIP recommendations can have large health and economic consequences, and the emphasis on economic evaluation in health is a likely response to severe pressures on federal and state health budgets. This study describes the economic aspects considered by the ACIP while sanctioning a vaccine and reviews the economic evaluations (our economic data) provided for vaccine deliberations. A five-year study period from 2004 to 2009 was adopted, using publicly available data from the ACIP web database. The checklist of Drummond et al. (2005) served as a guide for assessing the quality of the economic evaluations presented. Because the Drummond et al. checklist is comprehensive, it is unrealistic to expect every ACIP deliberation to meet all of its criteria; for practical purposes we selected seven criteria that we judged to be the most significant. Twenty-four data points (economic evaluations) were obtained over the five-year period. Of these twenty-four, only five received a score of six, meaning six of the seven items on the list were met; none received a perfect score of seven. Seven of the twenty-four data points received a score of five, and only one economic analysis received a score as low as two. The type of economic evaluation, the model criterion, and the ICER/QALY criterion were each met at a rate of 0.875 (87.5%), the highest rates among the seven criteria studied. The perspective criterion was met at 0.583 (58.3%), followed by the source and sensitivity analysis criteria, both at 0.541 (54.1%). The discount factor criterion was met at 0.250 (25.0%).

Economic analysis is not a novel concept to the ACIP; it has been practiced and presented at these meetings on a regular basis for more than five years. The ACIP's stated goal is to use good-quality epidemiologic, clinical, and economic analyses to help policy makers choose among the alternatives presented and thus reach better-informed decisions. As seen in our study, the economic analyses over the years are inconsistent, and the large variability coupled with the lack of a standardized format may compromise the utility of the economic information for decision-making. While making recommendations, the ACIP takes into account all available information about a vaccine, so it is vital that standardized, high-quality economic information be provided at ACIP meetings. Our study may serve as a call for the ACIP to further investigate deficiencies within the system and thereby improve the economic evaluation data presented.
Abstract:
Path analysis has been applied to components of the iron metabolic system with the intent of suggesting an integrated procedure for better evaluating iron nutritional status at the community level. The primary variables of interest in this study were (1) iron stores, (2) total iron-binding capacity, (3) serum ferritin, (4) serum iron, (5) transferrin saturation, and (6) hemoglobin concentration. Correlation coefficients for relationships among these variables were obtained from the published literature and postulated in a series of models using measures of those variables that are feasible to include in a community nutritional survey. Models were built upon known information about the metabolism of iron and were limited by what had been reported in the literature in terms of correlation coefficients or quantitative relationships. Data were pooled from various studies, and correlations for the same bivariate relationships were averaged after z-transformation; correlation matrices were then constructed by transforming the average values back into correlation coefficients. The results of the path analysis indicate that hemoglobin is not a good indicator of early iron deficiency; it does not account for variance in iron stores. On the other hand, 91% of the variance in iron stores is explained by serum ferritin and total iron-binding capacity. In addition, the magnitude of the path coefficient (.78) for the serum ferritin-iron stores relationship signifies that serum ferritin is the most important predictor of iron stores in the proposed model. Finally, drawing upon the known relations among variables and the amount of variance explained in the path models, it is suggested that the following blood measures be made in assessing community iron deficiency: (1) serum ferritin, (2) total iron-binding capacity, (3) serum iron, (4) transferrin saturation, and (5) hemoglobin concentration. These measures (with acceptable ranges and cut-off points) could make possible the complete evaluation of all three stages of iron deficiency in persons surveyed at the community level.
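A small sketch of the correlation-pooling step described above: averaging coefficients on Fisher's z scale and transforming back. The input correlations are invented examples, and simple unweighted averaging is assumed.

```python
# Pool correlation coefficients across studies via Fisher's z-transform.
import numpy as np

def pool_correlations(rs):
    """Average correlations on the z scale, then back-transform to r."""
    z = np.arctanh(np.asarray(rs))   # Fisher z-transform of each coefficient
    return np.tanh(z.mean())

# e.g., hypothetical serum ferritin vs. iron stores correlations from three studies
print(round(pool_correlations([0.72, 0.80, 0.76]), 3))
```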
Abstract:
Next-generation sequencing (NGS) technology has become a prominent tool in biological and biomedical research. However, NGS data analysis, such as de novo assembly, mapping, and variant detection, is far from mature, and the high sequencing error rate is one of the major problems. To minimize the impact of sequencing errors, we developed a highly robust and efficient method, MTM, to correct errors in NGS reads. We demonstrated the effectiveness of MTM on both single-cell data with highly non-uniform coverage and normal data with uniformly high coverage, showing that MTM's performance does not rely on the coverage of the sequencing reads. MTM was also compared with Hammer and Quake, the best methods for correcting non-uniform and uniform data, respectively. For non-uniform data, MTM outperformed both Hammer and Quake. For uniform data, MTM showed better performance than Quake and comparable results to Hammer. By making better error corrections with MTM, the quality of downstream analyses, such as mapping and SNP detection, was improved. SNP calling is a major application of NGS technologies. However, the existence of sequencing errors complicates this process, especially for the low coverage (