6 results for Local Variation Method
at Duke University
Abstract:
A large increase in natural gas production occurred in western Colorado’s Piceance basin in the mid- to late 2000s, generating a surge in population, economic activity, and heavy truck traffic in this rural region. We describe the fiscal effects of this development on two county governments (Garfield and Rio Blanco) and two city governments (Grand Junction and Rifle). Counties maintain rural road networks in Colorado, and Garfield County’s ability to fashion agreements with operators to repair roads damaged during operations helped prevent the types of large new costs seen in Rio Blanco County, a neighboring county with less government capacity where such agreements were not made. Rifle and Grand Junction experienced substantial oil- and gas-driven population growth, with greater challenges in the smaller, more isolated, and less economically diverse city of Rifle. Lessons from this case study include the value of crafting road maintenance agreements, the fiscal risks for small and geographically isolated communities experiencing rapid population growth, the challenges associated with limited infrastructure, and the desirability of flexibility in the allocation of oil- and gas-related revenue.
Abstract:
Improvements in genomic technology, both in the increased speed and reduced cost of sequencing, have expanded the appreciation of the abundance of human genetic variation. However, the sheer amount of variation, as well as its varying types and genomic context, poses a challenge to understanding the clinical consequences of a single mutation. This work uses several methodologies to interpret the observed variation in the human genome and presents novel strategies for predicting allele pathogenicity.
Using the zebrafish model system as an in vivo assay of allele function, we identified a novel driver of Bardet-Biedl Syndrome (BBS) in CEP76. A combination of targeted sequencing of 785 cilia-associated genes in a cohort of BBS patients and subsequent in vivo functional assays recapitulating the human phenotype gave strong evidence for the role of CEP76 mutations in the pathology of an affected family. This portion of the work demonstrated the necessity of functional testing in validating disease-associated mutations, and added to the catalogue of known BBS disease genes.
Further study into the role of copy-number variations (CNVs) in a cohort of BBS patients showed the significant contribution of CNVs to disease pathology. Using high-density array comparative genomic hybridization (aCGH), we were able to identify pathogenic CNVs as small as several hundred bp. Dissection of constituent genes and in vivo experiments investigating epistatic interactions between affected genes allowed for an appreciation of several paradigms by which CNVs can contribute to disease. This study revealed that the contribution of CNVs to disease in BBS patients is much higher than previously expected, and demonstrated the necessity of considering CNV contribution in future (and retrospective) investigations of human genetic disease.
Finally, we used a combination of comparative genomics and in vivo complementation assays to identify second-site compensatory modification of pathogenic alleles. These pathogenic alleles, which are found compensated in other species (termed compensated pathogenic deviations [CPDs]), represent a significant fraction (from 3 to 10%) of human disease-associated alleles. In silico pathogenicity prediction algorithms, a valuable method of allele prioritization, often misrepresent these alleles as benign, leading to omission of possibly informative variants in studies of human genetic disease. We created a mathematical model that was able to predict CPDs and putative compensatory sites, and functionally showed in vivo that second-site mutation can mitigate the pathogenicity of disease alleles. Additionally, we made publicly available an in silico module for the prediction of CPDs and modifier sites.
These studies have advanced the ability to interpret the pathogenicity of multiple types of human variation and have made tools available for others to do the same.
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes containing a significant tumor cell burden while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI’s clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment, and is accordingly divided into two parts. The first part focuses on temporal resolution, one of the key technical factors of DCE-MRI, and proposes improvements to it; the second part explores the potential value of image heterogeneity analysis and of combining multiple PK models for therapeutic response assessment, developing several novel DCE-MRI data analysis methods.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel iterative MR image reconstruction algorithm, built on the recently developed compressed sensing (CS) theory, was studied for DCE-MRI reconstruction. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed iteratively under the regularization of a newly proposed total generalized variation (TGV) penalty term. In an IRB-approved retrospective study of DCE-MRI scans from brain radiosurgery patients, the clinically obtained image data were selected as reference data, and accelerated k-space acquisition was simulated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution to sample each slice of the full k-space; 2) a series of Cartesian random sampling grids with spatiotemporal constraints from adjacent frames to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled and the fully-sampled data, respectively. Multiple quantitative measurements and statistical tests were performed to evaluate the accuracy of the PK maps generated from the undersampled data against those generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully-sampled data sets. These results suggest that DCE-MRI acceleration using the investigated image reconstruction method is feasible and promising.
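The retrospective undersampling described above, in which accelerated acquisitions are simulated by masking the full k-space of clinically acquired images, can be illustrated with a minimal sketch. This toy example applies a random Cartesian line mask (one of the two strategies mentioned) and a simple zero-filled inverse FFT; the actual study used an iterative TGV-regularized reconstruction, which is not reproduced here, and all parameter values below are illustrative.

```python
import numpy as np

def undersample_kspace(image, keep_fraction=0.25, center_fraction=0.08, seed=0):
    """Retrospectively undersample an image's k-space with a random
    Cartesian phase-encode line mask, always keeping the low-frequency
    center. Returns a zero-filled reconstruction (no TGV regularization)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))       # centered k-space
    ny = image.shape[0]
    mask = rng.random(ny) < keep_fraction              # random line selection
    center = int(ny * center_fraction)
    mask[ny // 2 - center // 2: ny // 2 + center // 2] = True  # keep k-space center
    undersampled = kspace * mask[:, None]              # zero out unsampled lines
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return recon, mask

# toy example: a smooth 64x64 phantom at a simulated ~4x acceleration
phantom = np.outer(np.hanning(64), np.hanning(64))
recon, mask = undersample_kspace(phantom, keep_fraction=0.25)
```

In an iterative CS reconstruction, the zero-filled estimate above would serve only as the starting point, with data consistency on the sampled lines enforced at each iteration.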
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better accuracy and efficiency. The method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. It also incorporates a Kolmogorov-Zurbenko (KZ) filter to suppress noise in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and noise levels. Results showed that at both high temporal resolution (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolution, its calculation efficiency surpassed current methods by roughly two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
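The idea of recasting the Tofts model as a linear problem can be sketched generically. This example assumes the standard Tofts convolution form and a Murase-style linearization of its integrated ODE; it is not the dissertation's exact derivative-based formulation, and the arterial input function and parameter values are arbitrary.

```python
import numpy as np

def tofts_forward(t, Cp, Ktrans, kep):
    """Standard Tofts model, Ct(t) = Ktrans * int Cp(tau) exp(-kep(t-tau)) dtau,
    evaluated as a discrete convolution (t in minutes, uniform sampling)."""
    dt = t[1] - t[0]
    irf = np.exp(-kep * t)                         # impulse response
    return Ktrans * dt * np.convolve(Cp, irf)[: len(t)]

def tofts_linear_fit(t, Cp, Ct):
    """Linear least-squares fit: integrating dCt/dt = Ktrans*Cp - kep*Ct
    gives Ct(t) = Ktrans * int(Cp) - kep * int(Ct), which is linear in
    the unknowns (Ktrans, kep)."""
    dt = t[1] - t[0]
    A = np.column_stack([np.cumsum(Cp) * dt, -np.cumsum(Ct) * dt])
    Ktrans, kep = np.linalg.lstsq(A, Ct, rcond=None)[0]
    return Ktrans, kep

# simulate a 1 s resolution curve over 5 min and recover the parameters
t = np.arange(0, 5, 1 / 60)                        # time in minutes
Cp = 5.0 * t * np.exp(-t)                          # toy arterial input function
Ct = tofts_forward(t, Cp, Ktrans=0.25, kep=0.6)
K_est, kep_est = tofts_linear_fit(t, Cp, Ct)       # close to (0.25, 0.6)
```

Because the model is linear in the unknowns, the fit is a single least-squares solve per voxel rather than an iterative nonlinear optimization, which is the source of the efficiency gains at high temporal resolution.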
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodological developments along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity, motivated by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple treatment fractions with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map showed significant differences between the treatment and control groups; when the Rényi dimensions were used for treatment/control classification, the achieved accuracy was higher than that from conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed to address the lack of temporal information and the poor computational efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM outperformed the corresponding curves from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity on DCE images during contrast agent uptake.
In the small-animal experiment mentioned above, selected parameters from the dynamic FSD analysis showed significant treatment/control differences as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. Treatment/control classification after the first treatment fraction was also more accurate with dynamic FSD parameters than with conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
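For a flavor of the fractal machinery underlying the Rényi-dimension and FSD analyses, here is a minimal box-counting estimate of the q = 0 Rényi dimension of a binary map. This is a textbook illustration, not the dissertation's FSD method.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (Renyi q=0) dimension of a 2-D binary mask:
    count occupied boxes N(s) at each box size s, then fit
    log N(s) = -D log s + c by least squares and return D."""
    counts = []
    ny, nx = mask.shape
    for s in box_sizes:
        # partition the mask into s-by-s boxes; a box is occupied if any pixel is set
        trimmed = mask[: ny - ny % s, : nx - nx % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# sanity checks: a filled square has dimension 2, a straight line dimension 1
d_square = box_counting_dimension(np.ones((64, 64), dtype=bool))
d_line = box_counting_dimension(np.eye(64, dtype=bool))
```

The generalized (Rényi) dimensions replace the simple box count with moments of the box occupancy probabilities, so the same counting loop generalizes to any order q.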
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative versions are widely adopted for DCE-MRI analysis as the gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. Despite its richer biological assumptions, its application in therapeutic response assessment has been limited. It is intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to that from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, and of its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Aims: Measurement of glycated hemoglobin (HbA1c) is an important indicator of glucose control over time. Point-of-care (POC) devices allow for rapid and convenient measurement of HbA1c, greatly facilitating diabetes care. We assessed two POC analyzers in the Peruvian Amazon where laboratory-based HbA1c testing is not available.
Methods: Venous blood samples were collected from 203 individuals from six different Amazonian communities with a wide range of HbA1c, 4.4-9.0% (25-75 mmol/mol). The results of the Afinion AS100 and the DCA Vantage POC analyzers were compared to a central laboratory using the Premier Hb9210 high-performance liquid chromatography (HPLC) method. Imprecision was assessed by performing 14 successive tests of a single blood sample.
Results: The correlation coefficient r for POC and HPLC results was 0.92 for the Afinion and 0.93 for the DCA Vantage. The Afinion generated higher HbA1c results than the HPLC (mean difference = +0.56% [+6 mmol/mol]; p < 0.001), as did the DCA Vantage (mean difference = +0.32% [4 mmol/mol]). The bias observed between POC and HPLC did not vary by HbA1c level for the DCA Vantage (p = 0.190), but it did for the Afinion (p < 0.001). Imprecision results were: CV = 1.75% for the Afinion, CV = 4.01% for the DCA Vantage. Sensitivity was 100% for both devices, specificity was 48.3% for the Afinion and 85.1% for the DCA Vantage, positive predictive value (PPV) was 14.4% for the Afinion and 34.9% for the DCA Vantage, and negative predictive value (NPV) for both devices was 100%. The area under the receiver operating characteristic (ROC) curve was 0.966 for the Afinion and 0.982 for the DCA Vantage. Agreement between HPLC and POC in classifying diabetes and prediabetes status was slight for the Afinion (Kappa = 0.12) and significantly different (McNemar’s statistic = 89; p < 0.001), and moderate for the DCA Vantage (Kappa = 0.45) and significantly different (McNemar’s statistic = 28; p < 0.001).
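Agreement statistics such as Cohen's kappa follow directly from a classification confusion matrix. A minimal sketch, using an illustrative 2x2 table rather than the study's data:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (diagonal fraction) and p_e the agreement expected from the marginals."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_o = np.trace(confusion) / n
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2
    return (p_o - p_e) / (1 - p_e)

# hypothetical reference-method vs POC classification counts (not the study's data)
table = [[80, 20],
         [10, 90]]
kappa = cohens_kappa(table)   # 0.70 for this table
```

Kappa discounts the agreement expected by chance, which is why a device can show high raw concordance yet only slight agreement (e.g., 0.12) when one class dominates.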
Conclusions: Despite significant variation of HbA1c results between the Afinion and DCA Vantage analyzers compared to HPLC, we conclude that both analyzers should be considered in health clinics in the Peruvian Amazon for therapeutic adjustments if healthcare workers are aware of the differences relative to testing in a clinical laboratory. However, imprecision and bias were not low enough to recommend either device for screening purposes, and the local prevalence of anemia and malaria may interfere with diagnostic determinations for a substantial portion of the population.
Abstract:
Forests change with changes in their environment based on the physiological responses of individual trees. These short-term reactions have cumulative impacts on long-term demographic performance. For a tree in a forest community, success depends on biomass growth to capture above- and belowground resources and reproductive output to establish future generations. Here we examine aspects of how forests respond to changes in moisture and light availability and how these responses are related to tree demography and physiology.
First we address the long-term pattern of tree decline before death and its connection with drought. Increasing drought stress and chronic morbidity could have pervasive impacts on forest composition in many regions. We use long-term, whole-stand inventory data from southeastern U.S. forests to show that trees exposed to drought experience multiyear declines in growth prior to mortality. Following a severe, multiyear drought, 72% of trees that did not recover their pre-drought growth rates died within 10 years. This pattern was mediated by local moisture availability. As an index of morbidity prior to death, we calculated the difference in cumulative growth after drought relative to surviving conspecifics. The strength of drought-induced morbidity varied among species and was correlated with species drought tolerance.
Next, we investigate differences among tree species in reproductive output relative to biomass growth with changes in light availability. Previous studies reach conflicting conclusions about the constraints on reproductive allocation relative to growth and how they vary through time, across species, and between environments. We test the hypothesis that canopy exposure to light, a critical resource, limits reproductive allocation by comparing long-term relationships between reproduction and growth for trees from 21 species in forests throughout the southeastern U.S. We found that species had divergent responses to light availability, with shade-intolerant species experiencing an alleviation of trade-offs between growth and reproduction at high light. Shade-tolerant species showed no changes in reproductive output across light environments.
Given that the above patterns depend on the maintenance of transpiration, we next developed an approach for predicting whole-tree water use from sap flux observations. Accurately scaling these observations to tree- or stand-levels requires accounting for variation in sap flux between wood types and with depth into the tree. We compared different models with sap flux data to test the hypotheses that radial sap flux profiles differ by wood type and tree size. We show that radial variation in sap flux is dependent on wood type but independent of tree size for a range of temperate trees. The best-fitting model predicted out-of-sample sap flux observations and independent estimates of sapwood area with small errors, suggesting robustness in new settings. We outline a method for predicting whole-tree water use with this model and include computer code for simple implementation in other studies.
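Scaling a point measurement of sap flux density to whole-tree water use amounts to integrating a radial profile over sapwood annuli. A minimal sketch, assuming a Gaussian-decay profile with depth (a common choice in the sap flux literature, not necessarily the model selected in this work) and illustrative tree dimensions:

```python
import numpy as np

def whole_tree_water_use(J0, stem_radius, sapwood_depth, beta, n=500):
    """Scale outer-xylem sap flux density J0 to whole-tree flow by integrating
    a radial profile over sapwood annuli:
        Q = int J(x) * 2*pi*r(x) dx,   J(x) = J0 * exp(-beta * (x/d)^2),
    where x is depth below the cambium and d the sapwood depth. The
    Gaussian-decay form is an illustrative assumption."""
    x = np.linspace(0, sapwood_depth, n)                  # depth into sapwood (m)
    r = stem_radius - x                                   # annulus radius (m)
    J = J0 * np.exp(-beta * (x / sapwood_depth) ** 2)     # flux density profile
    return np.sum(J * 2 * np.pi * r) * (x[1] - x[0])      # kg/s if J in kg m^-2 s^-1

# outer-xylem flux of 20 g m^-2 s^-1, 15 cm stem radius, 4 cm sapwood depth
Q = whole_tree_water_use(J0=0.020, stem_radius=0.15, sapwood_depth=0.04, beta=2.0)
```

With beta = 0 the profile is uniform and Q reduces to J0 times the sapwood cross-sectional area, which provides a quick sanity check on any fitted profile.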
Finally, we estimated tree water balances during drought with a statistical time-series analysis. Moisture limitation in forest stands comes predominantly from water use by the trees themselves, a drought-stand feedback. We show that drought impacts on tree fitness and forest composition can be predicted by tracking the moisture reservoir available to each tree in a mass balance. We apply this model to multiple seasonal droughts in a temperate forest with measurements of tree water use to demonstrate how species and size differences modulate moisture availability across landscapes. As trees deplete their soil moisture reservoir during droughts, a transpiration deficit develops, leading to reduced biomass growth and reproductive output.
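The mass-balance idea can be illustrated with a minimal daily bucket model: rain refills a finite soil moisture reservoir, transpiration draws it down, and unmet demand accumulates as a transpiration deficit. This sketch is illustrative only and omits the statistical time-series machinery of the actual analysis.

```python
def simulate_tree_water_balance(rain, demand, capacity=100.0):
    """Minimal daily soil-moisture bucket model for one tree (all units mm).
    The reservoir fills with rain up to `capacity`; daily transpiration is
    the smaller of demand and available water, and the shortfall accumulates
    as a transpiration deficit."""
    reservoir = capacity
    deficit = 0.0
    for r, d in zip(rain, demand):
        reservoir = min(capacity, reservoir + r)     # refill, capped at capacity
        transpiration = min(d, reservoir)            # supply-limited water use
        reservoir -= transpiration
        deficit += d - transpiration                 # unmet demand accumulates
    return reservoir, deficit

# 30 rain-free days of 5 mm/day demand against a 100 mm reservoir:
# the reservoir empties after 20 days and a 50 mm deficit accumulates
res, deficit = simulate_tree_water_balance(rain=[0.0] * 30, demand=[5.0] * 30)
```

Running the same balance per tree, with species- and size-specific demand, is what lets the analysis map how the drought-stand feedback plays out across a landscape.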
This dissertation draws connections between the physiological condition of individual trees and their behavior in crowded, diverse, and continually changing forest stands. The analyses take advantage of growing data sets on both the physiology and demography of trees, as well as novel statistical techniques that allow us to link these observations to realistic quantitative models. The results can be used to scale up tree measurements to entire stands and to address questions about the future composition of forests and the land’s balance of water and carbon.