977 results for BIOMEDICAL ANALYSIS
Abstract:
Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in duplication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes productivity. Results: To illustrate the need for an online repository for medical research, three exemplary VSD projects are presented: (1) an international collaboration to improve cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application for evaluating and comparing segmentations of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration in the medical image community, with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, a resource for biomechanical population studies, and a means of enhancing segmentation algorithms.
Abstract:
OBJECTIVE: To characterize PubMed usage over a typical day and compare it to previous studies of user behavior on Web search engines. DESIGN: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. MEASUREMENTS: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, and MeSH categories; we used semantic measures to group queries into sessions, and we studied the addition and removal of terms across consecutive queries to gauge search strategies. RESULTS: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that one large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. CONCLUSION: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.
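As a schematic illustration of this kind of query-log analysis, the hedged sketch below computes terms per query, Boolean operator use, and per-user sessions from a toy log. The study grouped queries into sessions using semantic measures; here a simple inactivity-gap heuristic and invented log fields stand in for illustration only.

# Toy query-log analysis: terms per query, Boolean operators, sessions.
# The (user_id, timestamp, query) records are made up, not real PubMed data,
# and the 30-minute session gap is an assumed stand-in for the study's
# semantic session grouping.
from collections import Counter
from datetime import datetime, timedelta

log = [
    ("u1", "2008-01-01 09:00:03", "brca1 AND breast cancer"),
    ("u1", "2008-01-01 09:01:10", "brca1 AND breast cancer AND prognosis"),
    ("u2", "2008-01-01 09:00:45", "asthma children wheeze"),
]

GAP = timedelta(minutes=30)            # inactivity gap that closes a session
op_counts, terms_per_query = Counter(), []
sessions, last_seen = {}, {}

for user, ts, query in sorted(log):    # ISO timestamps sort lexicographically
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    tokens = query.split()
    terms_per_query.append(sum(w not in ("AND", "OR", "NOT") for w in tokens))
    op_counts.update(w for w in tokens if w in ("AND", "OR", "NOT"))
    if user not in last_seen or t - last_seen[user] > GAP:
        sessions[user] = sessions.get(user, 0) + 1   # new session starts
    last_seen[user] = t

print("mean terms/query:", sum(terms_per_query) / len(terms_per_query))
print("Boolean operators:", dict(op_counts), "sessions per user:", sessions)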
Abstract:
OBJECTIVE: Occupational low back pain (LBP) is considered to be the most expensive form of work disability, with the socioeconomic costs of persistent LBP far exceeding those of acute and subacute LBP. This makes the early identification of patients at risk of developing persistent LBP essential, especially in working populations. The aim of the study was to evaluate both risk factors (for the development of persistent LBP) and protective factors (preventing the development of persistent LBP) in the same cohort. PARTICIPANTS: An inception cohort of 315 patients with acute to subacute or with recurrent LBP was recruited from 14 health practitioners (twelve general practitioners and two physiotherapists) across New Zealand. METHODS: Patients with persistent LBP at six-month follow-up were compared to patients with non-persistent LBP with respect to occupational, psychological, biomedical and demographic/lifestyle predictors at baseline, using multiple logistic regression analyses. All significant variables from the different domains were then combined into a single predictive model. RESULTS: A final two-predictor model with an overall predictive value of 78% included social support at work (OR 0.67; 95% CI 0.45 to 0.99) and somatization (OR 1.08; 95% CI 1.01 to 1.15). CONCLUSIONS: Social support at work should be considered a resource preventing the development of persistent LBP, whereas somatization should be considered a risk factor for its development. Further studies are needed to determine whether addressing these factors in workplace interventions for patients suffering from acute, subacute or recurrent LBP prevents the subsequent development of persistent LBP.
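For readers unfamiliar with how such odds ratios are produced, the sketch below fits a multiple logistic regression and exponentiates the coefficients to obtain ORs with 95% CIs. The data are simulated; the variable names echo the abstract, but the values and effect sizes are invented, not the study's.

# Hedged sketch: logistic regression -> odds ratios, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 315                                   # cohort size as in the abstract
social_support = rng.normal(0, 1, n)      # standardized baseline score (invented)
somatization = rng.normal(20, 5, n)       # symptom scale score (invented)
lin = -2.0 - 0.4 * social_support + 0.08 * somatization
y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(float)  # persistent LBP yes/no

X = sm.add_constant(np.column_stack([social_support, somatization]))
fit = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(fit.params)          # OR per unit increase in predictor
ci = np.exp(fit.conf_int())               # 95% confidence intervals
print(np.round(np.column_stack([odds_ratios, ci]), 2))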
Abstract:
The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field-shaping devices with the patient must be repeated daily, up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), a patient's portal images can be visualized daily in real time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for the lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation, and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. Both the moments method and the cross-correlation technique were implemented within an experimental radiotherapy picture archival and communication system (RT-PACS) and were used clinically to evaluate the setup variability of two groups of cancer patients treated with and without an alpha-cradle immobilization aid. The tools developed in this project have proven to be very effective and have played an important role in detecting patient alignment errors and field-shape errors in treatment fields formed by a multileaf collimator (MLC).
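The two alignment ideas named above can be illustrated generically: image moments give a field's centroid and principal-axis angle, and FFT-based cross-correlation recovers a translational shift between two images. The sketch below is a simplified stand-in under those assumptions, not the dissertation's implementation.

# Generic illustration of moment-based pose and cross-correlation shift.
import numpy as np

def moments_pose(img):
    # Centroid and principal-axis angle from first/second image moments.
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m = img.sum()
    cx, cy = (x * img).sum() / m, (y * img).sum() / m
    mu20 = ((x - cx) ** 2 * img).sum() / m
    mu02 = ((y - cy) ** 2 * img).sum() / m
    mu11 = ((x - cx) * (y - cy) * img).sum() / m
    return cx, cy, np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

def xcorr_shift(ref, img):
    # Integer-pixel shift of img relative to ref via the correlation peak.
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # unwrap to signed shifts
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

ref = np.zeros((64, 64)); ref[20:40, 25:45] = 1.0   # toy treatment "field"
img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # shifted copy
print(moments_pose(ref), xcorr_shift(ref, img))     # shift -> (3, -2)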
Abstract:
Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood, and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing phenotypic variation of a disease, and b) to examine phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted) including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing are asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases. The model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and of other diseases which are difficult to classify.
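As a minimal sketch of the discrete-variability case above (a latent class model for binary symptom indicators), the code below fits a Bernoulli mixture by EM on toy data. This shows one piece of the approach only; the thesis's framework also covers factor and factor mixture models and model selection.

# Latent class analysis sketch: Bernoulli mixture fitted by EM on toy data.
import numpy as np
from scipy.special import logsumexp

def fit_latent_classes(X, K, iters=200, seed=0):
    # X: (n, d) binary matrix of symptoms; K: assumed number of classes.
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                          # class weights
    theta = rng.uniform(0.25, 0.75, (K, X.shape[1]))  # P(symptom | class)
    for _ in range(iters):
        # E-step: posterior class membership for each child
        log_r = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        r = np.exp(log_r - logsumexp(log_r, axis=1, keepdims=True))
        # M-step: re-estimate weights and symptom probabilities
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r

X = (np.random.default_rng(1).random((300, 6)) < 0.3).astype(float)  # toy data
pi, theta, r = fit_latent_classes(X, K=3)
print(np.round(pi, 2))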
Abstract:
We present the first analytical approach to demonstrate the in situ imaging of metabolites from formalin-fixed, paraffin-embedded (FFPE) human tissue samples. Using high-resolution matrix-assisted laser desorption/ionization Fourier-transform ion cyclotron resonance mass spectrometry imaging (MALDI-FT-ICR MSI), we conducted a proof-of-principle experiment comparing metabolite measurements from FFPE and fresh frozen tissue sections, and found an overlap of 72% among 1700 m/z species. In particular, we observed conservation of biomedically relevant information at the metabolite level in FFPE tissues. In biomedical applications, we analysed tissues from 350 different cancer patients and were able to discriminate between normal and tumour tissues and between different tumours from the same organ, and we identified an independent prognostic factor for patient survival. This study demonstrates the ability to measure metabolites in FFPE tissues using MALDI-FT-ICR MSI, which can then be assigned to histology and clinical parameters. Our approach is a major technical, histochemical, and clinicopathological advance that highlights the potential for investigating diseases in archived FFPE tissues.
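A peak-list comparison of the kind behind the reported 72% overlap can be sketched as matching m/z values from the two preparations within a ppm tolerance and reporting the shared fraction. The tolerance and peak values below are illustrative assumptions, not the study's parameters.

# Hedged sketch: fraction of m/z peaks shared between two measurements.
import numpy as np

def overlap_fraction(mz_a, mz_b, ppm=3.0):
    # Fraction of peaks in mz_a with a neighbor in mz_b within +/- ppm.
    mz_b = np.sort(mz_b)
    hits = 0
    for mz in mz_a:
        i = np.searchsorted(mz_b, mz)
        near = mz_b[max(i - 1, 0): i + 1]        # nearest candidates
        hits += np.min(np.abs(near - mz)) <= mz * ppm * 1e-6
    return hits / len(mz_a)

ffpe = np.array([104.1070, 146.1176, 175.1190, 203.2230])    # toy peaks
frozen = np.array([104.1071, 146.1180, 184.0733, 203.2228])  # toy peaks
print(f"overlap: {overlap_fraction(ffpe, frozen):.0%}")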
Abstract:
BACKGROUND: The robotics-assisted tilt table (RATT), which includes actuators for tilting and cyclical leg movement, is used for rehabilitation of severely disabled neurological patients. Following further engineering development of the system, i.e. the addition of force sensors and visual biofeedback, patients can actively participate in exercise testing and training on the device. Peak cardiopulmonary performance parameters were previously investigated, but it is also important to compare submaximal parameters with standard devices. The aim of this study was to evaluate the feasibility of the RATT for estimation of submaximal exercise thresholds by comparison with a cycle ergometer and a treadmill. METHODS: Seventeen healthy subjects performed, in randomized order, six maximal individualized incremental exercise tests, with two tests on each of the three exercise modalities. The ventilatory anaerobic threshold (VAT) and respiratory compensation point (RCP) were determined from breath-by-breath data. RESULTS: VAT and RCP on the RATT were lower than on the cycle ergometer and the treadmill: oxygen uptake (V'O2) at VAT was [mean (SD)] 1.2 (0.3), 1.5 (0.4) and 1.6 (0.5) L/min, respectively (p < 0.001); V'O2 at RCP was 1.7 (0.4), 2.3 (0.8) and 2.6 (0.9) L/min, respectively (p = 0.001). High correlations for VAT and RCP were found between the RATT and the cycle ergometer and between the RATT and the treadmill (R in the range 0.69 to 0.80). VAT and RCP demonstrated excellent test-retest reliability for all three devices (ICC from 0.81 to 0.98). Mean differences between the test and retest values on each device were close to zero. The ventilatory equivalent for O2 at VAT was similar for the RATT and the cycle ergometer, and both were higher than for the treadmill. The ventilatory equivalent for CO2 at RCP was similar for all devices. Ventilatory equivalent parameters demonstrated fair-to-excellent reliability and repeatability. CONCLUSIONS: It is feasible to use the RATT for estimation of submaximal exercise thresholds: VAT and RCP on the RATT were lower than on the cycle ergometer and the treadmill, but there were high correlations between the RATT and the cycle ergometer and between the RATT and the treadmill. Repeatability and test-retest reliability of all submaximal threshold parameters from the RATT were comparable to those of the standard devices.
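The abstract does not say how VAT was located in the breath-by-breath data; one standard technique is the V-slope method, sketched here under strong simplifications: the V'O2 breakpoint where the slope of V'CO2 versus V'O2 increases is found by minimizing the residual error of a two-segment linear fit. The data below are synthetic.

# Simplified V-slope sketch for VAT detection (assumed method, synthetic data).
import numpy as np

def vslope_breakpoint(vo2, vco2):
    # V'O2 value at the best two-segment linear-fit breakpoint.
    best_sse, best_vo2 = np.inf, None
    for i in range(3, len(vo2) - 3):           # keep >= 3 points per segment
        sse = 0.0
        for seg in (slice(None, i), slice(i, None)):
            coef = np.polyfit(vo2[seg], vco2[seg], 1)
            sse += np.sum((np.polyval(coef, vo2[seg]) - vco2[seg]) ** 2)
        if sse < best_sse:
            best_sse, best_vo2 = sse, vo2[i]
    return best_vo2

vo2 = np.linspace(0.5, 2.5, 60)                # L/min over an incremental ramp
vco2 = np.where(vo2 < 1.2, 0.9 * vo2, 0.9 * 1.2 + 1.4 * (vo2 - 1.2))
vco2 = vco2 + np.random.default_rng(0).normal(0, 0.02, vo2.size)
print(f"estimated VAT at V'O2 = {vslope_breakpoint(vo2, vco2):.2f} L/min")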
Abstract:
Arachidonic acid (5Z,8Z,11Z,14Z-eicosatetraenoic acid; C20:4) (arachidonate, AA) is a vital polyunsaturated omega-6 fatty acid (PUFA); without it, the mammalian brain, muscles, and possibly other organs cannot develop or function [1] and [2]. AA fulfils numerous known and possibly yet unknown functions as an integral part of mammalian phospholipid membranes, and free AA also acts as a precursor of a variety of biologically active lipid mediators generally referred to as eicosanoids (e.g., prostaglandins, leukotrienes). A more recent class of eicosanoids comprises the endogenous cannabinoids (endocannabinoids) 2-arachidonoyl glycerol (2-AG) and arachidonoyl ethanolamide (anandamide, AEA), which act on cannabinoid CB1 and CB2 receptors but also modulate ion channels and nuclear receptors [3] and [4]. In recent years, the role of endocannabinoids as prominent anti-inflammatory and neuromodulatory eicosanoids has been demonstrated by numerous studies [5].
Abstract:
Improvements in the analysis of microarray images are critical for accurately quantifying gene expression levels. The acquisition of accurate spot intensities directly influences the results and interpretation of statistical analyses. This dissertation discusses the implementation of a novel approach to the analysis of cDNA microarray images. We use a stellar photometric model, the Moffat function, to quantify microarray spots from nylon microarray images. The inherent flexibility of the Moffat shape model makes it ideal for quantifying microarray spots. We apply our novel approach to a Wilms' tumor microarray study and compare our results with a fixed-circle segmentation approach for spot quantification. Our results suggest that different spot feature extraction methods can have an impact on the ability of statistical methods to identify differentially expressed genes. We also used the Moffat function to simulate a series of microarray images under various experimental conditions. These simulations were used to validate the performance of various statistical methods for identifying differentially expressed genes. Our simulation results indicate that tests taking into account the dependency between mean spot intensity and variance estimation, such as the smoothed t-test, can better identify differentially expressed genes, especially when the number of replicates and the mean fold change are low. The analysis of the simulations also showed that, overall, a rank sum test (Mann-Whitney) performed well at identifying differentially expressed genes. Previous work has suggested the strengths of nonparametric approaches for identifying differentially expressed genes. We also show that multivariate approaches, such as hierarchical and k-means cluster analysis along with principal components analysis, are only effective at classifying samples when replicate numbers and mean fold change are high. Finally, we show how our stellar shape model approach can be extended to the analysis of 2D-gel images by adapting the Moffat function to take into account the elliptical nature of spots in such images. Our results indicate that stellar shape models offer a previously unexplored approach for the quantification of 2D-gel spots.
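To make the spot-quantification idea concrete, the sketch below fits a circular 2D Moffat profile to one synthetic spot by least squares; the dissertation's pipeline, and its elliptical 2D-gel extension, would add segmentation, background handling, and an ellipticity term.

# Hedged sketch: least-squares fit of a circular 2D Moffat profile to a spot.
import numpy as np
from scipy.optimize import curve_fit

def moffat2d(coords, amp, x0, y0, alpha, beta, bg):
    # amp * (1 + r^2/alpha^2)^(-beta) + constant background
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amp * (1.0 + r2 / alpha**2) ** (-beta) + bg

yy, xx = np.mgrid[0:21, 0:21]
coords = (xx.ravel(), yy.ravel())
truth = (500.0, 10.3, 9.8, 3.0, 2.5, 20.0)          # synthetic spot parameters
img = moffat2d(coords, *truth) + np.random.default_rng(2).normal(0, 2, xx.size)

p0 = (img.max() - img.min(), 10, 10, 2.0, 2.0, img.min())   # rough start values
popt, _ = curve_fit(moffat2d, coords, img, p0=p0)
print("amp, x0, y0, alpha, beta, bg:", np.round(popt, 2))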
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for the generation of technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g. triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of language and the grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
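A deliberately tiny illustration of the kind of transformation described, with an invented lexicon and a made-up output structure (the concept codes are placeholders in the style of UMLS identifiers): surface variants of chief-complaint text are mapped to normalized concepts in a way that does not depend on grammatical structure. The actual representation in this work is far richer.

# Toy chief-complaint normalization; lexicon and codes are invented examples.
import re

LEXICON = {  # surface-variant pattern -> (concept, placeholder code)
    r"\b(sob|short(ness)? of breath|dyspnea)\b": ("dyspnea", "C0013404"),
    r"\b(cp|chest pain)\b": ("chest pain", "C0008031"),
    r"\b(n/v|nausea( and| &)? vomiting)\b": ("nausea and vomiting", "C0027498"),
}

def normalize_chief_complaint(text):
    t = text.lower()
    concepts = [
        {"concept": concept, "code": code}
        for pattern, (concept, code) in LEXICON.items()
        if re.search(pattern, t)
    ]
    return {"raw": text, "concepts": concepts}

print(normalize_chief_complaint("c/o CP and SOB since am"))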
Abstract:
Next-generation sequencing (NGS) technology has become a prominent tool in biological and biomedical research. However, NGS data analysis, such as de novo assembly, mapping and variant detection, is far from mature, and the high sequencing error rate is one of the major problems. To minimize the impact of sequencing errors, we developed a highly robust and efficient method, MTM, to correct the errors in NGS reads. We demonstrated the effectiveness of MTM on both single-cell data with highly non-uniform coverage and normal data with uniformly high coverage, showing that MTM's performance does not rely on the coverage of the sequencing reads. MTM was also compared with Hammer and Quake, the best methods for correcting non-uniform and uniform data respectively. For non-uniform data, MTM outperformed both Hammer and Quake. For uniform data, MTM showed better performance than Quake and comparable results to Hammer. By making better error corrections with MTM, the quality of downstream analysis, such as mapping and SNP detection, was improved. SNP calling is a major application of NGS technologies. However, the existence of sequencing errors complicates this process, especially for low-coverage data.
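The abstract does not detail MTM's algorithm, so the sketch below shows the generic k-mer-spectrum idea that this family of correctors (including Hammer and Quake) builds on: k-mers observed frequently are trusted as 'solid', and a base covered only by weak k-mers is tried against each substitution that would make its covering k-mers solid.

# Generic k-mer-spectrum error correction sketch (not MTM's actual algorithm).
from collections import Counter

def kmer_counts(reads, k):
    c = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            c[r[i:i + k]] += 1
    return c

def covering_kmers(read, i, k):
    # All k-mers of `read` that cover position i.
    return ["".join(read[j:j + k])
            for j in range(max(0, i - k + 1), min(i, len(read) - k) + 1)]

def correct_read(read, counts, k, solid=3):
    read = list(read)
    for i in range(len(read)):
        if all(counts[km] < solid for km in covering_kmers(read, i, k)):
            for b in "ACGT":                   # try each substitution
                trial = read[:i] + [b] + read[i + 1:]
                if all(counts[km] >= solid for km in covering_kmers(trial, i, k)):
                    read[i] = b
                    break
    return "".join(read)

reads = ["ACGTACGTAC"] * 5 + ["ACGTTCGTAC"]    # one read carries an error
counts = kmer_counts(reads, k=4)
print(correct_read("ACGTTCGTAC", counts, k=4)) # -> ACGTACGTAC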
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two consecutive PCR cycles, we subtracted the fluorescence in the former cycle from that in the latter cycle, transforming the n-cycle raw data into n-1 cycle data. Then linear regression was applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method with three background corrections, namely the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, including threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and is thus theoretically more accurate and reliable. This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
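The taking-difference step can be made explicit with a little algebra: if F_n ≈ F0·E^n + B with unknown background B, then D_n = F_{n+1} - F_n = F0·E^n·(E-1) is background-free, and ln(D_n) = ln(F0·(E-1)) + n·ln(E) is linear in the cycle number n. The sketch below recovers E and F0 from simulated data; a fixed window of difference values stands in for the threshold-based point selection described above.

# Worked sketch of taking-difference linear regression on simulated qPCR data.
import numpy as np

rng = np.random.default_rng(3)
E_true, F0_true, bg = 1.9, 1e-4, 50.0          # assumed simulation values
cycles = np.arange(1, 41)
F = F0_true * E_true**cycles + bg + rng.normal(0, 0.05, cycles.size)

D = np.diff(F)                     # consecutive differences: background cancels
n = cycles[:-1]
window = (D > 1.0) & (D < 500.0)   # crude exponential-phase point selection
slope, intercept = np.polyfit(n[window], np.log(D[window]), 1)

E_hat = np.exp(slope)                          # amplification efficiency
F0_hat = np.exp(intercept) / (E_hat - 1.0)     # initial fluorescence/DNA proxy
print(f"E = {E_hat:.3f} (true {E_true}), F0 = {F0_hat:.1e} (true {F0_true:.1e})")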
Abstract:
At issue is whether or not isolated DNA is patent eligible under the U.S. Patent Law and the implications of that determination on public health. The U.S. Patent and Trademark Office has issued patents on DNA since the 1980s, and scientists and researchers have proceeded under that milieu since that time. Today, genetic research and testing related to the human breast cancer genes BRCA1 and BRCA2 is conducted within the framework of seven patents that were issued to Myriad Genetics and the University of Utah Research Foundation between 1997 and 2000. In 2009, suit was filed on behalf of multiple researchers, professional associations and others to invalidate fifteen of the claims underlying those patents. The Court of Appeals for the Federal Circuit, which hears patent cases, has invalidated claims for analyzing and comparing isolated DNA but has upheld claims to isolated DNA. The specific issue of whether isolated DNA is patent eligible is now before the Supreme Court, which is expected to decide the case by year's end. In this work, a systematic review was performed to determine the effects of DNA patents on various stakeholders and, ultimately, on public health; and to provide a legal analysis of the patent eligibility of isolated DNA and the likely outcome of the Supreme Court's decision. A literature review was conducted to: first, identify principle stakeholders with an interest in patent eligibility of the isolated DNA sequences BRCA1 and BRCA2; and second, determine the effect of the case on those stakeholders. Published reports that addressed gene patents, the Myriad litigation, and implications of gene patents on stakeholders were included. Next, an in-depth legal analysis of the patent eligibility of isolated DNA and methods for analyzing it was performed pursuant to accepted methods of legal research and analysis based on legal briefs, federal law and jurisprudence, scholarly works and standard practice legal analysis. Biotechnology, biomedical and clinical research, access to health care, and personalized medicine were identified as the principle stakeholders and interests herein. Many experts believe that the patent eligibility of isolated DNA will not greatly affect the biotechnology industry insofar as genetic testing is concerned; unlike for therapeutics, genetic testing does not require tremendous resources or lead time. The actual impact on biomedical researchers is uncertain, with greater impact expected for researchers whose work is intended for commercial purposes (versus basic science). The impact on access to health care has been surprisingly difficult to assess; while invalidating gene patents might be expected to decrease the cost of genetic testing and improve access to more laboratories and physicians' offices that provide the test, a 2010 study on the actual impact was inconclusive. As for personalized medicine, many experts believe that the availability of personalized medicine is ultimately a public policy issue for Congress, not the courts. Based on the legal analysis performed in this work, this writer believes the Supreme Court is likely to invalidate patents on isolated DNA whose sequences are found in nature, because these gene sequences are a basic tool of scientific and technologic work and patents on isolated DNA would unduly inhibit their future use. Patents on complementary DNA (cDNA) are expected to stand, however, based on the human intervention required to craft cDNA and the product's distinction from the DNA found in nature. In the end, the solution as to how to address gene patents may lie not in jurisprudence but in a fundamental change in business practices to provide expanded licenses to better address the interests of the several stakeholders.
Abstract:
Decorin, a dermatan/chondroitin sulfate proteoglycan, is ubiquitously distributed in the extracellular matrix (ECM) of mammals. Decorin belongs to the small leucine-rich proteoglycan (SLRP) family, a proteoglycan family characterized by a core protein dominated by leucine-rich repeat motifs. The decorin core protein appears to mediate the binding of decorin to ECM molecules, such as collagens and fibronectin. It is believed that the interactions of decorin with these ECM molecules contribute to the regulation of ECM assembly, cell adhesion, and cell proliferation. These basic biological processes play critical roles during embryonic development and wound healing and are altered in pathological conditions such as fibrosis and tumorigenesis. In this dissertation, we discover that the decorin core protein can bind Zn2+ ions with high affinity. Zinc is an essential trace element in mammals. Zn2+ ions play a catalytic role in the activation of many enzymes and a structural role in the stabilization of protein conformation. By examining purified recombinant decorin and its core protein fragments for Zn2+ binding activity using Zn2+-chelating column chromatography and Zn2+-equilibrium dialysis approaches, we have located the Zn2+ binding domain in the N-terminal sequence of the decorin core protein. The decorin N-terminal domain appears to contain two Zn2+ binding sites with similarly high binding affinity. The sequence of the decorin N-terminal domain does not resemble any other reported zinc-binding motifs and therefore represents a novel Zn2+ binding motif. By investigating the influence of Zn2+ ions on decorin binding interactions, we found a novel Zn2+-dependent interaction with fibrinogen, the major plasma protein in blood clots. Furthermore, a recombinant peptide (MD4) consisting of a 41-amino-acid sequence of the mouse decorin N-terminal domain can prolong thrombin-induced fibrinogen/fibrin clot formation. This suggests that in the presence of Zn2+ the decorin N-terminal domain has anticoagulation activity. The changed Zn2+-binding activities of the truncated MD4 peptides and of mutant peptides generated by site-directed mutagenesis revealed that the functional MD4 peptide might contain both a structural zinc-binding site in the cysteine cluster region and a catalytic zinc site that could be created by the flanking sequences of the cysteine cluster region. A model of a loop-like structure for the MD4 peptide is proposed.