980 results for Databases


Relevance: 10.00%

Publisher:

Abstract:

OBJECTIVE: To investigate the effect of statin use after radical prostatectomy (RP) on biochemical recurrence (BCR) in patients with prostate cancer who never received statins before RP. PATIENTS AND METHODS: We conducted a retrospective analysis of 1146 RP patients within the Shared Equal Access Regional Cancer Hospital (SEARCH) database. Multivariable Cox proportional hazards analyses were used to examine differences in risk of BCR between post-RP statin users vs nonusers. To account for varying start dates and duration of statin use during follow-up, post-RP statin use was treated as a time-dependent variable. In a secondary analysis, models were stratified by race to examine the association of post-RP statin use with BCR among black and non-black men. RESULTS: After adjusting for clinical and pathological characteristics, post-RP statin use was significantly associated with 36% reduced risk of BCR (hazard ratio [HR] 0.64, 95% confidence interval [CI] 0.47-0.87; P = 0.004). Post-RP statin use remained associated with reduced risk of BCR after adjusting for preoperative serum cholesterol levels. In secondary analysis, after stratification by race, this protective association was significant in non-black (HR 0.49, 95% CI 0.32-0.75; P = 0.001) but not black men (HR 0.82, 95% CI 0.53-1.28; P = 0.384). CONCLUSION: In this retrospective cohort of men undergoing RP, post-RP statin use was significantly associated with reduced risk of BCR. Whether the association between post-RP statin use and BCR differs by race requires further study. Given these findings, coupled with other studies suggesting that statins may reduce risk of advanced prostate cancer, randomised controlled trials are warranted to formally test the hypothesis that statins slow prostate cancer progression.
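
As an illustration of the time-dependent Cox modelling described above, the sketch below fits a CoxTimeVaryingFitter from the lifelines library to its bundled Stanford heart-transplant data, where the time-varying "transplant" covariate plays the role that post-RP statin use plays in the SEARCH analysis. This is a generic sketch, not the SEARCH model or data.

```python
# Illustrative sketch of a Cox model with a time-dependent exposure, using
# lifelines' bundled Stanford heart-transplant data as a stand-in cohort.
# The "transplant" column switches from 0 to 1 when treatment starts, just as
# a post-RP statin indicator would in the analysis described above.
from lifelines import CoxTimeVaryingFitter
from lifelines.datasets import load_stanford_heart_transplants

df = load_stanford_heart_transplants()   # long format: one row per (start, stop] interval
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()                      # hazard ratios with 95% CIs for each covariate
```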

Relevance: 10.00%

Publisher:

Abstract:

Pharmacogenomics (PGx) offers the promise of utilizing genetic fingerprints to predict individual responses to drugs in terms of safety, efficacy and pharmacokinetics. Early-phase clinical trial PGx applications can identify human genome variations that are meaningful to study design, selection of participants, allocation of resources and clinical research ethics. Results can inform later-phase study design and pipeline developmental decisions. Nevertheless, our review of the clinicaltrials.gov database demonstrates that PGx is rarely used by drug developers. Of the total 323 trials that included PGx as an outcome, 80% have been conducted by academic institutions after initial regulatory approval. Barriers for the application of PGx are discussed. We propose a framework for the role of PGx in early-phase drug development and recommend PGx be universally considered in study design, result interpretation and hypothesis generation for later-phase studies, but PGx results from underpowered studies should not be used by themselves to terminate drug-development programs.
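
The registry tally described above (trials listing PGx among their outcomes, and the share run by academic sponsors) could be reproduced over a flat export of trial records roughly as follows. The file name, column names, and keyword list are assumptions for illustration, not the actual registry schema.

```python
# Hypothetical sketch of the kind of tally described above. The CSV layout
# ("outcomes", "sponsor_class", "phase") is assumed, not the registry's schema.
import pandas as pd

trials = pd.read_csv("trials_export.csv")   # one row per registered trial (hypothetical file)
pgx_terms = ["pharmacogenomic", "pharmacogenetic", "genotype-guided"]
has_pgx = trials["outcomes"].str.contains("|".join(pgx_terms), case=False, na=False)
pgx_trials = trials[has_pgx]

share_academic = (pgx_trials["sponsor_class"] == "ACADEMIC").mean()
print(f"{len(pgx_trials)} trials list a PGx outcome; "
      f"{share_academic:.0%} have an academic lead sponsor")
print(pgx_trials["phase"].value_counts())    # early- vs late-phase breakdown
```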

Relevance: 10.00%

Publisher:

Abstract:

Approximately 45,000 individuals are hospitalized annually for burn treatment. Rehabilitation after hospitalization can offer a significant improvement in functional outcomes. Very little is known nationally about rehabilitation for burns, and practices may vary substantially by region, based on observed Medicare post-hospitalization spending amounts. This study was designed to measure variation in rehabilitation utilization by state of hospitalization for patients hospitalized with burn injury. This retrospective cohort study used nationally collected data over a 10-year period (2001 to 2010) from the Healthcare Cost and Utilization Project (HCUP) State Inpatient Databases (SID). Patients hospitalized for burn injury (n = 57,968) were identified by ICD-9-CM codes and examined to determine whether they were discharged directly to inpatient rehabilitation after hospitalization (primary endpoint). Both unadjusted and adjusted likelihoods were calculated for each state, taking into account the effects of age, insurance status, hospitalization at a burn center, and extent of burn injury by total body surface area (TBSA). The relative risk of discharge to inpatient rehabilitation varied by as much as 6-fold among different states. Higher TBSA, having health insurance, higher age, and burn center hospitalization all increased the likelihood of discharge to inpatient rehabilitation following acute care hospitalization. There was significant variation between states in inpatient rehabilitation utilization after adjusting for variables known to affect each outcome. Future efforts should be focused on identifying the cause of this state-to-state variation, its relationship to patient outcome, and standardizing treatment across the United States.
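
The adjusted state-level likelihoods described above amount to a regression of discharge destination on state plus the listed covariates. A minimal sketch with statsmodels follows; the file and column names are assumptions, not the HCUP SID layout.

```python
# Sketch of the adjustment described above: model discharge to inpatient
# rehabilitation as a function of state and patient-level covariates.
# File and column names are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

burns = pd.read_csv("sid_burn_cohort.csv")   # hypothetical extract of burn admissions

model = smf.logit(
    "rehab_discharge ~ C(state) + age + C(insurance) + burn_center + tbsa",
    data=burns,
).fit()

# State terms (log-odds relative to the reference state) summarize the
# adjusted variation in rehabilitation utilization across states.
state_odds_ratios = np.exp(model.params.filter(like="C(state)"))
print(state_odds_ratios.round(2))
```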

Relevance: 10.00%

Publisher:

Abstract:

OBJECTIVE: To ascertain the degree of variation, by state of hospitalization, in outcomes associated with traumatic brain injury (TBI) in a pediatric population. DESIGN: A retrospective cohort study of pediatric patients admitted to a hospital with a TBI. SETTING: Hospitals from states in the United States that voluntarily participate in the Agency for Healthcare Research and Quality's Healthcare Cost and Utilization Project. PARTICIPANTS: Pediatric (age ≤ 19 y) patients hospitalized for TBI (N=71,476) in the United States during 2001, 2004, 2007, and 2010. INTERVENTIONS: None. MAIN OUTCOME MEASURES: Primary outcome was proportion of patients discharged to rehabilitation after an acute care hospitalization among alive discharges. The secondary outcome was inpatient mortality. RESULTS: The relative risk of discharge to inpatient rehabilitation varied by as much as 3-fold among the states, and the relative risk of inpatient mortality varied by as much as nearly 2-fold. In the United States, approximately 1981 patients could be discharged to inpatient rehabilitation care if the observed variation in outcomes was eliminated. CONCLUSIONS: There was significant variation between states in both rehabilitation discharge and inpatient mortality after adjusting for variables known to affect each outcome. Future efforts should be focused on identifying the cause of this state-to-state variation, its relationship to patient outcome, and standardizing treatment across the United States.
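
The figure of roughly 1981 additional rehabilitation discharges is the kind of number that falls out of an observed-versus-expected calculation across states. The sketch below shows that arithmetic on invented per-state counts and adjusted rates; it is not the method used in the study.

```python
# Generic observed-vs-expected sketch (not the study's method): how many more
# children would reach inpatient rehabilitation if every state matched a
# benchmark adjusted rate. All numbers are invented.
import pandas as pd

df = pd.DataFrame({
    "state":            ["A", "B", "C"],
    "alive_discharges": [9000, 12000, 7000],
    "adj_rehab_rate":   [0.08, 0.15, 0.05],   # adjusted probability of rehab discharge
})

benchmark = df["adj_rehab_rate"].max()                    # e.g. best-performing state
observed = (df["alive_discharges"] * df["adj_rehab_rate"]).sum()
expected = (df["alive_discharges"] * benchmark).sum()
print(f"~{expected - observed:.0f} additional rehabilitation discharges "
      f"if every state matched the benchmark rate")
```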

Relevance: 10.00%

Publisher:

Abstract:

Despite a large and multifaceted effort to understand the vast landscape of phenotypic data, their current form inhibits productive data analysis. The lack of a community-wide, consensus-based, human- and machine-interpretable language for describing phenotypes and their genomic and environmental contexts is perhaps the most pressing scientific bottleneck to integration across many key fields in biology, including genomics, systems biology, development, medicine, evolution, ecology, and systematics. Here we survey the current phenomics landscape, including data resources and handling, and the progress that has been made to accurately capture relevant data descriptions for phenotypes. We present an example of the kind of integration across domains that computable phenotypes would enable, and we call upon the broader biology community, publishers, and relevant funding agencies to support efforts to surmount today's data barriers and facilitate analytical reproducibility.

Relevance: 10.00%

Publisher:

Abstract:

BACKGROUND: The wealth of phenotypic descriptions documented in the published articles, monographs, and dissertations of phylogenetic systematics is traditionally reported in a free-text format, and it is therefore largely inaccessible for linkage to biological databases for genetics, development, and phenotypes, and difficult to manage for large-scale integrative work. The Phenoscape project aims to represent these complex and detailed descriptions with rich and formal semantics that are amenable to computation and integration with phenotype data from other fields of biology. This entails reconceptualizing the traditional free-text characters into the computable Entity-Quality (EQ) formalism using ontologies. METHODOLOGY/PRINCIPAL FINDINGS: We used ontologies and the EQ formalism to curate a collection of 47 phylogenetic studies on ostariophysan fishes (including catfishes, characins, minnows, knifefishes) and their relatives with the goal of integrating these complex phenotype descriptions with information from an existing model organism database (zebrafish, http://zfin.org). We developed a curation workflow for the collection of character, taxonomic and specimen data from these publications. A total of 4,617 phenotypic characters (10,512 states) for 3,449 taxa, primarily species, were curated into EQ formalism (for a total of 12,861 EQ statements) using anatomical and taxonomic terms from teleost-specific ontologies (Teleost Anatomy Ontology and Teleost Taxonomy Ontology) in combination with terms from a quality ontology (Phenotype and Trait Ontology). Standards and guidelines for consistently and accurately representing phenotypes were developed in response to the challenges that were evident from two annotation experiments and from feedback from curators. CONCLUSIONS/SIGNIFICANCE: The challenges we encountered and many of the curation standards and methods for improving consistency that we developed are generally applicable to any effort to represent phenotypes using ontologies. This is because an ontological representation of the detailed variations in phenotype, whether between mutant and wild type, among individual humans, or across the diversity of species, requires a process by which a precise combination of terms from domain ontologies is selected and organized according to logical relations. The efficiencies that we have developed in this process will be useful for any attempt to annotate complex phenotypic descriptions using ontologies. We also discuss some ramifications of EQ representation for the domain of systematics.
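
An EQ statement pairs an anatomical entity term with a quality term, anchored to a taxon and a source character. A minimal record structure might look like the sketch below; the term IDs are placeholders showing each ontology's ID format only, not curated mappings.

```python
# Minimal sketch of an Entity-Quality (EQ) annotation record. The IDs below
# are placeholders illustrating each ontology's ID format, not real mappings.
from dataclasses import dataclass

@dataclass(frozen=True)
class EQStatement:
    taxon: str        # Teleost Taxonomy Ontology (TTO) term
    entity: str       # Teleost Anatomy Ontology (TAO) term: the anatomical entity
    quality: str      # Phenotype and Trait Ontology (PATO) term
    source: str       # character and state in the original systematic study

annotation = EQStatement(
    taxon="TTO:0000000 (placeholder: a catfish species)",
    entity="TAO:0000000 (placeholder: dorsal-fin spine)",
    quality="PATO:0000000 (placeholder: absent)",
    source="study 12, character 45, state 1",
)
print(annotation)
```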

Relevance: 10.00%

Publisher:

Abstract:

The skeleton is of fundamental importance in research in comparative vertebrate morphology, paleontology, biomechanics, developmental biology, and systematics. Motivated by research questions that require computational access to and comparative reasoning across the diverse skeletal phenotypes of vertebrates, we developed a module of anatomical concepts for the skeletal system, the Vertebrate Skeletal Anatomy Ontology (VSAO), to accommodate and unify the existing skeletal terminologies for the species-specific (mouse, the frog Xenopus, zebrafish) and multispecies (teleost, amphibian) vertebrate anatomy ontologies. Previous differences between these terminologies prevented even simple queries across databases pertaining to vertebrate morphology. This module of upper-level and specific skeletal terms currently includes 223 defined terms and 179 synonyms that integrate skeletal cells, tissues, biological processes, organs (skeletal elements such as bones and cartilages), and subdivisions of the skeletal system. The VSAO is designed to integrate with other ontologies, including the Common Anatomy Reference Ontology (CARO), Gene Ontology (GO), Uberon, and Cell Ontology (CL), and it is freely available to the community to be updated with additional terms required for research. Its structure accommodates anatomical variation among vertebrate species in development, structure, and composition. Annotation of diverse vertebrate phenotypes with this ontology will enable novel inquiries across the full spectrum of phenotypic diversity.
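
Once distributed as an OBO file, a module like VSAO can be loaded and traversed with general-purpose tools. The sketch below uses obonet; the local file name is an assumption.

```python
# Sketch: load a skeletal anatomy ontology from an OBO file and walk the
# is_a hierarchy. "vsao.obo" is a hypothetical local copy of the ontology.
import networkx as nx
import obonet

graph = obonet.read_obo("vsao.obo")      # returns a networkx MultiDiGraph
print(len(graph), "terms loaded")

id_to_name = {id_: data.get("name") for id_, data in graph.nodes(data=True)}
term = next(iter(graph.nodes))           # any term, purely for illustration
superterms = nx.descendants(graph, term) # obonet edges point child -> parent
print(id_to_name[term], "->", [id_to_name.get(t) for t in list(superterms)[:5]])
```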

Relevance: 10.00%

Publisher:

Abstract:

The application of semantic technologies to the integration of biological data and the interoperability of bioinformatics analysis and visualization tools has been the common theme of a series of annual BioHackathons hosted in Japan for the past five years. Here we provide a review of the activities and outcomes from the BioHackathons held in 2011 in Kyoto and 2012 in Toyama. In order to efficiently implement semantic technologies in the life sciences, participants formed various sub-groups and worked on the following topics: Resource Description Framework (RDF) models for specific domains, text mining of the literature, ontology development, essential metadata for biological databases, platforms to enable efficient Semantic Web technology development and interoperability, and the development of applications for Semantic Web data. In this review, we briefly introduce the themes covered by these sub-groups. The observations made, conclusions drawn, and software development projects that emerged from these activities are discussed.
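
As a concrete illustration of the RDF modelling these hackathons focused on, the sketch below builds a tiny in-memory graph with rdflib and queries it with SPARQL. The namespace and predicate names are invented for the example.

```python
# Tiny RDF example in the spirit of the Semantic Web work described above.
# The namespace URI and predicates are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/bio/")
g = Graph()

gene = EX["gene/BRCA2"]
g.add((gene, RDF.type, EX.Gene))
g.add((gene, EX.symbol, Literal("BRCA2")))
g.add((gene, EX.annotatedIn, EX["db/SomePathwayDB"]))

# SPARQL query over the in-memory graph.
results = g.query("""
    PREFIX ex: <http://example.org/bio/>
    SELECT ?gene ?symbol WHERE {
        ?gene a ex:Gene ;
              ex:symbol ?symbol .
    }
""")
for row in results:
    print(row.gene, row.symbol)
```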

Relevance: 10.00%

Publisher:

Abstract:

The Feeding Experiments End-user Database (FEED) is a research tool developed by the Mammalian Feeding Working Group at the National Evolutionary Synthesis Center that permits synthetic, evolutionary analyses of the physiology of mammalian feeding. The tasks of the Working Group are to compile physiologic data sets into a uniform digital format stored at a central source, develop a standardized terminology for describing and organizing the data, and carry out a set of novel analyses using FEED. FEED contains raw physiologic data linked to extensive metadata. It serves as an archive for a large number of existing data sets and a repository for future data sets. The metadata are stored as text and images that describe experimental protocols, research subjects, and anatomical information. The metadata incorporate controlled vocabularies to allow consistent use of the terms used to describe and organize the physiologic data. The planned analyses address long-standing questions concerning the phylogenetic distribution of phenotypes involving muscle anatomy and feeding physiology among mammals, the presence and nature of motor pattern conservation in the mammalian feeding muscles, and the extent to which suckling constrains the evolution of feeding behavior in adult mammals. We expect FEED to be a growing digital archive that will facilitate new research into understanding the evolution of feeding anatomy.
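
The structure described (raw recordings linked to subject, protocol, and controlled-vocabulary muscle terms) can be sketched as a small relational schema. The table and column names below are assumptions, not the actual FEED schema.

```python
# Hypothetical relational sketch of the kind of structure FEED describes:
# recordings linked to subjects, protocols, and controlled-vocabulary terms.
# Table and column names are illustrative, not the real FEED schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE subject     (id INTEGER PRIMARY KEY, species TEXT, life_stage TEXT);
CREATE TABLE protocol    (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE muscle_term (id TEXT PRIMARY KEY, label TEXT);   -- controlled vocabulary
CREATE TABLE recording (
    id INTEGER PRIMARY KEY,
    subject_id  INTEGER REFERENCES subject(id),
    protocol_id INTEGER REFERENCES protocol(id),
    muscle_id   TEXT    REFERENCES muscle_term(id),
    behavior    TEXT,                  -- e.g. chewing, suckling
    signal      BLOB                   -- raw EMG trace
);
""")

con.execute("INSERT INTO subject VALUES (1, 'Didelphis virginiana', 'adult')")
con.execute("INSERT INTO muscle_term VALUES ('M:0001', 'masseter')")
# After loading recordings, comparative questions span species and muscles:
rows = con.execute("""
    SELECT s.species, m.label, COUNT(*) FROM recording r
    JOIN subject s ON s.id = r.subject_id
    JOIN muscle_term m ON m.id = r.muscle_id
    GROUP BY s.species, m.label
""").fetchall()
print(rows)   # empty until recordings are loaded
```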

Relevance: 10.00%

Publisher:

Abstract:

BACKGROUND: Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. METHODS AND FINDINGS: Studies were identified by searching electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which fewer than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl, while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification, resulting in overtreatment. Fewer than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. CONCLUSIONS: A wide range of bias and percent misclassification around treatment thresholds were reported for the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
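
The accuracy metrics summarized above (bias, Bland-Altman limits of agreement, and misclassification around a treatment threshold) take only a few lines to compute. The paired counts below are synthetic stand-ins, not data from any of the reviewed studies.

```python
# Sketch of the accuracy metrics described above, on synthetic paired counts:
# bias, 95% limits of agreement, and misclassification around 350 cells/ul.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.uniform(50, 800, size=200)                 # e.g. BD FACSCount
index_test = reference + rng.normal(-20, 40, size=200)     # candidate technology

diff = index_test - reference
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

threshold = 350
below = reference < threshold
upward = np.mean(index_test[below] >= threshold)    # eligible but classified above: under-treatment
downward = np.mean(index_test[~below] < threshold)  # above but classified below: over-treatment

print(f"bias {bias:.1f} cells/ul, limits of agreement {loa[0]:.1f} to {loa[1]:.1f}")
print(f"upward misclassification {upward:.1%}, downward {downward:.1%}")
```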

Relevance: 10.00%

Publisher:

Abstract:

Emergency departments are challenging research settings, where truly informed consent can be difficult to obtain. A deeper understanding of emergency medical patients' opinions about research is needed. We conducted a systematic review and meta-summary of quantitative and qualitative studies on which values, attitudes, or beliefs of emergent medical research participants influence research participation. We included studies of adults that investigated opinions toward emergency medicine research participation. We excluded studies focused on the association between demographics or consent document features and participation and those focused on non-emergency research. In August 2011, we searched the following databases: MEDLINE, EMBASE, Google Scholar, Scirus, PsycINFO, AgeLine and Global Health. Titles, abstracts and then full manuscripts were independently evaluated by two reviewers. Disagreements were resolved by consensus and adjudicated by a third author. Studies were evaluated for bias using standardised scores. We report themes associated with participation or refusal. Our initial search produced over 1800 articles. A total of 44 articles were extracted for full-manuscript analysis, and 14 were retained based on our eligibility criteria. Among factors favouring participation, altruism and personal health benefit had the highest frequency. Mistrust of researchers, feeling like a 'guinea pig' and risk were leading factors favouring refusal. Many studies noted limitations of informed consent processes in emergent conditions. We conclude that highlighting the benefits to the participant and society, mitigating risk and increasing public trust may increase research participation in emergency medical research. New methods for conducting informed consent in such studies are needed.

Relevance: 10.00%

Publisher:

Abstract:

Nutrient availability profoundly influences gene expression. Many animal genes encode multiple transcript isoforms, yet the effect of nutrient availability on transcript isoform expression has not been studied in genome-wide fashion. When Caenorhabditis elegans larvae hatch without food, they arrest development in the first larval stage (L1 arrest). Starved larvae can survive L1 arrest for weeks, but growth and post-embryonic development are rapidly initiated in response to feeding. We used RNA-seq to characterize the transcriptome during L1 arrest and over time after feeding. Twenty-seven percent of detectable protein-coding genes were differentially expressed during recovery from L1 arrest, with the majority of changes initiating within the first hour, demonstrating widespread, acute effects of nutrient availability on gene expression. We used two independent approaches to track expression of individual exons and mRNA isoforms, and we connected changes in expression to functional consequences by mining a variety of databases. These two approaches identified an overlapping set of genes with alternative isoform expression, and they converged on common functional patterns. Genes affecting mRNA splicing and translation are regulated by alternative isoform expression, revealing post-transcriptional consequences of nutrient availability on gene regulation. We also found that phosphorylation sites are often alternatively expressed, revealing a common mode by which alternative isoform expression modifies protein function and signal transduction. Our results detail rich changes in C. elegans gene expression as larvae initiate growth and post-embryonic development, and they provide an excellent resource for ongoing investigation of transcriptional regulation and developmental physiology.
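
As a generic stand-in for the exon-level comparisons mentioned above (not the approaches used in the study), differential exon usage between starved and fed samples can be tested by comparing an exon's share of its gene's reads across conditions. The counts below are invented.

```python
# Generic stand-in for exon-level differential expression between conditions
# (not the study's method): compare one exon's read counts with the rest of
# its gene in starved vs. fed samples using a Fisher exact test. Counts are invented.
from scipy.stats import fisher_exact

table = [[120, 2300],   # starved: reads on exon 5, reads on other exons
         [480, 2100]]   # fed:     reads on exon 5, reads on other exons
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.2e}")
# A shift in the exon's share between conditions suggests alternative isoform usage.
```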

Relevance: 10.00%

Publisher:

Abstract:

X-ray crystallography is the predominant method for obtaining atomic-scale information about biological macromolecules. Despite the success of the technique, obtaining well-diffracting crystals still critically limits going from protein to structure. In practice, the crystallization process proceeds through knowledge-informed empiricism. Better physico-chemical understanding remains elusive because of the large number of variables involved; hence, little guidance is available to systematically identify solution conditions that promote crystallization. To help determine relationships between macromolecular properties and their crystallization propensity, we have trained statistical models on samples for 182 proteins supplied by the Northeast Structural Genomics consortium. Gaussian processes, which capture trends beyond the reach of linear statistical models, distinguish between two main physico-chemical mechanisms driving crystallization. One is characterized by low levels of side-chain entropy and has been extensively reported in the literature. The other identifies specific electrostatic interactions not previously described in the crystallization context. Because evidence for two distinct mechanisms can be gleaned both from crystal contacts and from solution conditions leading to successful crystallization, the model offers future avenues for optimizing crystallization screens based on partial structural information. The availability of crystallization data coupled with structural outcomes analyzed through state-of-the-art statistical models may thus guide macromolecular crystallization toward a more rational basis.
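
A Gaussian-process model of crystallization propensity from physico-chemical features can be sketched with scikit-learn. The two features, the synthetic labels, and the data below are placeholders, not the NESG samples or the fitted models described above.

```python
# Sketch: Gaussian-process classifier relating physico-chemical features to a
# crystallization outcome. Features, labels, and data are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
n = 182
side_chain_entropy = rng.normal(0.0, 1.0, n)
surface_charge = rng.normal(0.0, 1.0, n)
X = np.column_stack([side_chain_entropy, surface_charge])

# Synthetic rule: low entropy or strong electrostatics favors crystallization.
p = 1 / (1 + np.exp(1.5 * side_chain_entropy - np.abs(surface_charge)))
y = (rng.random(n) < p).astype(int)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
print(f"training accuracy: {gpc.score(X, y):.2f}")
print(f"P(crystallizes) for a low-entropy, highly charged protein: "
      f"{gpc.predict_proba([[-1.5, 2.0]])[0, 1]:.2f}")
```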

Relevance: 10.00%

Publisher:

Abstract:

Association studies of quantitative traits have often relied on methods in which a normal distribution of the trait is assumed. However, quantitative phenotypes from complex human diseases are often censored, highly skewed, or contaminated with outlying values. We recently developed a rank-based association method that takes into account censoring and makes no distributional assumptions about the trait. In this study, we applied our new method to age-at-onset data on ALDX1 and ALDX2. Both traits are highly skewed (skewness > 1.9) and often censored. We performed a whole genome association study of age at onset of the ALDX1 trait using Illumina single-nucleotide polymorphisms. Only slightly more than 5% of markers were significant. However, we identified two regions on chromosomes 14 and 15, which each have at least four significant markers clustering together. These two regions may harbor genes that regulate age at onset of ALDX1 and ALDX2. Future fine mapping of these two regions with densely spaced markers is warranted.
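
The rank-based method itself is specific to that work, but the general idea of a distribution-free, censoring-aware test per marker can be illustrated with a log-rank test across genotype groups. The sketch below is a generic stand-in on simulated data, not the authors' method.

```python
# Generic stand-in (not the authors' method): treat age at onset as censored
# time-to-event data and test one SNP with a log-rank test across genotypes.
import numpy as np
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(3)
n = 600
genotype = rng.integers(0, 3, size=n)                        # 0/1/2 copies of the minor allele
age_at_onset = 15 + rng.gamma(4, 6, size=n) - 2 * genotype   # skewed; genotype shifts onset
observed = rng.random(n) < 0.7                               # ~30% censored (unaffected at last exam)

result = multivariate_logrank_test(age_at_onset, genotype, observed)
print(f"log-rank chi2 = {result.test_statistic:.2f}, p = {result.p_value:.3g}")
# In a genome scan this test would be repeated per marker, with multiple-testing control.
```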

Relevance: 10.00%

Publisher:

Abstract:

Cognitive neuroscience, as a discipline, links the biological systems studied by neuroscience to the processing constructs studied by psychology. By mapping these relations throughout the literature of cognitive neuroscience, we visualize the semantic structure of the discipline and point to directions for future research that will advance its integrative goal. For this purpose, network text analyses were applied to an exhaustive corpus of abstracts collected from five major journals over a 30-month period, including every study that used fMRI to investigate psychological processes. From this, we generate network maps that illustrate the relationships among psychological and anatomical terms, along with centrality statistics that guide inferences about network structure. Three terms--prefrontal cortex, amygdala, and anterior cingulate cortex--dominate the network structure with their high frequency in the literature and the density of their connections with other neuroanatomical terms. From network statistics, we identify terms that are understudied compared with their importance in the network (e.g., insula and thalamus), are underspecified in the language of the discipline (e.g., terms associated with executive function), or are imperfectly integrated with other concepts (e.g., subdisciplines like decision neuroscience that are disconnected from the main network). Taking these results as the basis for prescriptive recommendations, we conclude that semantic analyses provide useful guidance for cognitive neuroscience as a discipline, both by illustrating systematic biases in the conduct and presentation of research and by identifying directions that may be most productive for future research.
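
The mapping step described above, building a term co-occurrence network from abstracts and ranking terms by centrality, can be sketched with networkx. The abstracts and term list below are toy inputs, not the corpus analysed in the study.

```python
# Sketch of the co-occurrence mapping described above: link terms that appear
# in the same abstract, then rank them by centrality. Inputs are toy examples.
from itertools import combinations
import networkx as nx

terms = {"prefrontal cortex", "amygdala", "anterior cingulate cortex",
         "insula", "working memory", "fear"}
abstracts = [
    "prefrontal cortex activity during working memory ...",
    "amygdala and anterior cingulate cortex responses to fear ...",
    "insula and anterior cingulate cortex coupling with prefrontal cortex ...",
]

g = nx.Graph()
for text in abstracts:
    present = [t for t in terms if t in text.lower()]
    for a, b in combinations(sorted(present), 2):
        weight = g.get_edge_data(a, b, default={"weight": 0})["weight"]
        g.add_edge(a, b, weight=weight + 1)

centrality = nx.degree_centrality(g)
for term, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{term:28s} {score:.2f}")
```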